Hacker News | new | past | comments | ask | show | jobs | submit | best comments | login
Most-upvoted comments of the last 48 hours. You can change the number of hours like this: bestcomments?h=24.

I'm happy for the guy, but am I jealous as well? Well yes, and that's perfectly human.

We have someone who vibe coded software with major security vulnerabilities. This has been reported by many folks.

We also have someone who vibe coded without reading any of the code. This is self-admitted by this person.

We don't know how many of the GitHub stars were bought. We don't know how many Twitter followers/tweets were bought.

Then, after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any of the code that they developed? Well, that's what happened here.

In this timeline, I'm not sure I find anything inspiring here. It's telling me that I should focus instead on going viral or getting lucky for a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents. I shouldn't write secure software; instead I should write software that can go viral. Are companies hiring for virality or merit these days? What is even happening here?

So am I jealous? Yes, because this timeline makes no sense to a software engineer. But am I happy for the guy? Yeah, I also want to make lots of money someday.


> you get hired for your proven ability to (…)

No, you get hired for your perceived ability to (…)

The world is full of Juliuses, which is a big reason everything sucks.

https://ploum.net/2024-12-23-julius-en.html


It's funny to me how many still don't realize you don't get hired for the best positions by being a 10x programmer who excels at HackerRank; you get hired for your proven ability to deliver useful products. Creativity, drive, vision, whatever. Code is a means to an end. If you're the type of programmer who thinks of yourself as just a programmer, and takes pride in your secure code and your ability to optimize functions and algorithms, you're exactly the kind of programmer AI will replace.

Quality of code has never had anything to do with which products are successful. I bet both YouTube's and Facebook's codebases are a tangled mess.


>I know it seems hard, but just stop using Google, Amazon, Meta products.

I noticed your own app's website [0] hosts videos on YouTube [1] and uses Stripe as a payment processor [2], which is hosted on AWS. You also mentioned that your app is vibe coded [3]; the AI labs that facilitated your vibecoding likely built and run their models using Meta's PyTorch or Google's TensorFlow.

"Just stop using" makes for a catchy manifesto in Hacker News comments, but the reality is a lot more complicated than that.

[0] https://wordsunite.us/

[1] https://www.youtube.com/watch?v=AbCM99cz9W8

[2] https://wordsunite.us/terms

[3] https://news.ycombinator.com/item?id=45644698


I've used LLMs enough that I have a good sense of their _edges_ of intelligence. I had assumed that reasoning models should easily be able to answer this correctly.

And indeed, Sonnet and Opus 4.5 (medium reasoning) say the following:

Sonnet: Drive - you need to bring your car to the car wash to get it washed!

Opus: You'll need to drive — you have to bring the car to the car wash to get it washed!

Gemini 3 Pro (medium): You should drive.

--

But GPT 5.2 reasoning, even at high, told me to walk. My first instinct was that I had underspecified the location of the car; the model seems to assume from the wording that the car is already at the car wash. GPT 5.x series models behave a bit more on the spectrum, so you need to tell them the specifics. So I asked:

"I want to wash my car. My car is currently at home. The car wash is 50 meters away. Should I walk or drive?"

GPT 5.2 Reasoning (medium):

"Drive—your goal is to get the car to the car wash, and it’s only 50 meters, so a slow, careful drive is simplest.

One small optimization: if you’re worried about starting the car for such a short trip or you want to avoid idling in a line, you can walk over first to check if there’s a queue / if it’s open, then come back and drive the car over when it’s your turn."

Which seems to turn out as I expected.


> so you need to tell them the specifics

That is the entire point, right? We have to specify things that we would never specify when talking to a human. You would not start with "The car is functional. The tank is filled with gas. I have my keys." As soon as we are required to do that for the model, to any extent, that is a problem and not a detail (notwithstanding that those of us who are familiar with the matter do build separate mental models of the LLM and are able to work around it).

This is a neatly isolated toy case, which is interesting because we can assume similar issues arise in more complex cases; only then it's much harder to reason about why something fails when it does.


Here's how this law is actually going to work.

Instead of destroying the unsold clothes in Europe, manufacturers are going to sell them to "resale" companies in countries with little respect for the rule of law, mostly in Africa or Asia. Those companies will then destroy those clothes, reporting them as sold to consumers.

So instead of destroying those clothes in Europe, we'll just add an unnecessary shipping step to the process, producing tons of unnecessary CO2.

The disclosure paperwork and the s/contracts/bribes/ needed to do this will also serve as a nice deterrent for anybody trying to compete with H&M.


I'm reading the comments and I get confused. I think this is a good idea, and it's not like the government is purely making it a third-party problem. This might make production more complicated for a while, but nowadays it is much easier to predict demand and produce quickly in smaller batches. In the 90s you might have needed to change a whole factory setup for every single piece of fabric, but nowadays most production happens in small batches anyway.

Can anyone explain why it would not be a good idea? My country has measured an increase in microplastics from cloth fibers. We all know pollution is getting worse. Here, we don't have winter or fall anymore. The acid rain of the 90s destroyed most of the greenery in adjacent cities, and when it is hot it gets unbearably hot, and when it is cold it gets stupidly cold.

Food production decreased by 20% this year. I kid you not. Prices went up and most people can't afford beef anymore. Most people are living on pasta and eggs; occasionally they eat pork and chicken, but that's getting rare.


CSS in 2025: let's write inline styles in HTML as if it were 2005 and the separation of content and presentation had never been invented. I'm talking about Tailwind, of course.

It seems like the sole purpose of Palantir is to give the government data it wouldn't have access to without a warrant. So now everyone is just being warrantlessly surveilled? The difference between now and a few years ago seems to be that companies are assisting law enforcement with even more advanced data collection.

I really like this passage:

>It is always the case that there are benefits available from relinquishing core civil liberties: allowing infringements on free speech may reduce false claims and hateful ideas; allowing searches and seizures without warrants will likely help the police catch more criminals, and do so more quickly; giving up privacy may, in fact, enhance security.

> But the core premise of the West generally, and the U.S. in particular, is that those trade-offs are never worthwhile. Americans still all learn and are taught to admire the iconic (if not apocryphal) 1775 words of Patrick Henry, which came to define the core ethos of the Revolutionary War and American Founding: “Give me liberty or give me death.” It is hard to express in more definitive terms on which side of that liberty-versus-security trade-off the U.S. was intended to fall.


Something is either public record - in which case it should be on a government website for free, and the AI companies should be free to scrape to their hearts' content...

Or it should be sealed for X years and then public record. Where X might be 1 in cases where you don't want to hurt an ongoing investigation, or 100 if it's someone's private affairs.

Nothing that goes through the courts should be sealed forever.

We should give up on the idea of databases that are 'open' to the public but where you have to pay for access, reproduction isn't allowed, records cost pounds per page, and bulk scraping is denied. That isn't open.


I’m paying over 40 dollars a month for YouTube but it doesn’t allow me to choose almost anything of what I see, despite trying hard to fine-tune my recommendations.

I can’t permanently turn off shorts - and this I find personally insulting. It really feels like encountering a drug dealer outside my house every time I come home, always expecting me to cave and try some of that good smack.

But apart from ignoring me when I say I’m not interested in whole genres of ‘fun’ videos, it also resets the streaming quality to the lowest setting every single day and then hides the quality setting deep inside a menu with several fiddly clicks.

And this isn’t for my benefit of course: I can easily stream 4K video to my screens. It’s to shave a few cents off each stream and max the gouging.


I’m sure I’ve clicked “show fewer shorts” every single time it’s shown me shorts. It seems to make zero difference.

Dear news publications - if you aren't willing to accept an independent record of what you published, I can't accept your news. It's a critical piece of the framework that keeps you honest. I don't care if you allow AI scraping either way, but you have to facilitate archival of your content - independently, not under your own control.

It's very interesting to me how many people presume that if you don't learn how to vibecode now you'll never ever be able to catch up. If the models are constantly getting better, won't these tools be easier to use a year from now? Will model improvements not obviate all the byzantine prompting strategies we have to use today?

> "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety" -- Benjamin Franklin

The key phrase is "a little temporary safety". 250 years ago people understood that the "security" gains were small and fleeting, but the loss of liberty was massive and permanent.


We have a branch of government called Congress. Here are some laws it used to pass that made it a crime to read your mail or listen to your phone calls:

1. Postal Service Act of 1792

2. Electronic Communications Privacy Act (ECPA) of 1986

Anyway, Facebook can read your DMs, Google can read your email, Ring can take photos from your camera.

We can very easily make those things a crime, but we don't seem to want to do it.


I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

Right now I see the former as being hugely risky. Hallucinated bugs, coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level. (And astounding that CEOs haven't made that connection yet).

With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?

Additionally, I personally find my best ideas often happen when knee deep in some codebase, hitting some weird edge case that doesn't fit, that would probably never come up if I was just reviewing an already-completed PR.


Open to research yes.

Free to ingest and make someone's crimes a permanent part of AI datasets, resulting in forever-convictions? No thanks.

AI firms have shown themselves to be playing fast and loose with copyrighted works; a teenager shouldn't have their permanent AI profile become "shoplifter" because they committed a crime at 15 that would otherwise have been expunged after a few years.


I think the security/liberty tradeoff is actually often a false promise. You can end up trading away liberty for nothing at all. I don't like buying into this, even to say "liberty is better, we should do that instead" because it implicitly concedes that you would really get the security on the other side of the bargain.

And if you don't get the security you were promised, it's too late to do anything about it.


In my experience in other physical goods industries (not textiles specifically) there is a big difference between products that are good but aren’t ever sold for some reason and products that are deemed not sellable for some reason.

For example, if a customer returns a product that was opened but that they claim was never used (worn, in this case), you can't sell it to someone else as a new item. With physical products these go through refurbishing channels if there are enough units to warrant it.

What if a batch of products is determined to have some QA problems? You can't sell it as new, so it has to go somewhere. One challenge we discovered the hard way is that there are a lot of companies who will claim to recycle your products or donate them to good causes in other countries, but actually they'll just end up on eBay, or even in some cases be injected back into retail channels through some process we could never figure out. At least with hardware products we could track serial numbers to discover when this was happening.

It gets weirder when you have a warranty policy. You start getting warranty requests for serial numbers that were marked as destroyed or that never made it to the retail system. Returned serial numbers are somehow re-appearing as units sold as new. This is less of a problem now that Amazon has mechanisms to avoid inventory co-mingling (if you use them) but for a while we found ourselves honoring warranty claims for items that, ironically enough, had already been warrantied once and then “recycled” by our recycling service.

So whenever I see "unsold" I think the situation is probably more complicated than this overview suggests. It's generally a good thing to avoid destroying perfectly good inventory for no good reason, but inventory that gets disposed of isn't always perfectly good either. I assume companies will do something obvious to mark the units as not for normal sale, like punching holes in tags or marking them somewhere.


The images are neat, but I would rather throw my laptop in the ocean than read chat transcripts between a human and an AI.

(Science fiction novels excluded, of course.)


In addition to the Unhook addon that others also recommended (and which is great), I would suggest, as an alternative, setting a redirect rule from "www.youtube.com/shorts/XYWZ" to "www.youtube.com/watch?v=XYWZ". This will play the short in the classic landscape YouTube player, with no infinite scrolling, replay, or autoplay (assuming those are generally disabled), which takes away a big part of the addictive aspect of shorts.
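In a redirect extension, that rule boils down to a single regex substitution. A minimal sketch of the same rewrite in Python (the function name is illustrative, not part of any extension's API):

```python
import re

def deshort(url: str) -> str:
    """Rewrite a YouTube Shorts URL to the classic watch URL.

    URLs that don't contain a /shorts/ path pass through unchanged.
    """
    return re.sub(r"youtube\.com/shorts/([\w-]+)",
                  r"youtube.com/watch?v=\1", url)
```

The same pattern/substitution pair can typically be pasted into a rule in a redirect extension such as Redirector.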

There's clearly easy/irrational money distorting the markets here. Normally this wouldn't be a problem: prices would go up, supply would eventually increase and everybody would be okay. But with AI being massively subsidized by nation-states and investors, there's no price that is too high for these supplies.

Eventually the music will stop when the easy money runs out and we'll see how much people are truly willing to pay for AI.


"Discord Distances itself from Peter Thiel's Palantir Age Verification Firm" means jack shit if they're still doing business with them.

And Discord has approached this in such a monstrously awful way that I don't know what they could possibly say at this point to make me believe them.

I fully expect Discord will buddy right back up with some other Thiel-affiliated company if there is a separation, if they don't simply go back to Palantir once the heat dies down.


So instead of scraping IA once, the AI companies will use residential proxies and each scrape the site themselves, costing the news sites even more money. The only real loser is the common man who doesn't have the resources to scrape the entire web himself.

I've sometimes dreamed of a web where every resource is tied to a hash and can be rehosted by third parties, making archival transparent. This would also make it trivial to stand up a small website without worrying about it getting hug-of-deathed, since others could rehost your content for you. A shame IPFS never went anywhere.
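The core idea, as a toy sketch (function names are illustrative and this is a drastic simplification of what IPFS actually does): a resource's address is derived from its bytes, so a client can verify a copy served by any third-party mirror without trusting the mirror.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a stable address from the content itself."""
    return "sha256-" + hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    """Any mirror serving the exact same bytes passes; a tampered copy fails."""
    return content_address(data) == address
```

Because the address commits to the content, archival and rehosting become interchangeable: whoever holds the bytes can serve them, and readers can check them.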


While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it's tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it's selfish discomfort at the feeling of being left behind ("I still can't bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI"). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out.

Google should build Slack. It's a travesty how incredibly good their Google Workspace suite of tools is, and then Google Chat is what sits between it all. If it weren't for the fact that Google bungled an internal communication tool so badly, Slack wouldn't even have to exist.

For the life of me I cannot understand why, after a decade, they have let Slack and Teams become basically a duopoly in this space.

Source: I use Google Chat every day, so it's not just a "the UI looks ugly" thing. Literally nothing you think should work actually works. E.g.: inviting outside collaborators to a shared channel, converting a private DM group into a channel, having public channels for community and private channels for internal work. It goes on and on.


The deadest horse in web development is the myth of “separation of concerns”
