Hacker News | retsibsi's comments

Phrases like "actual understanding", "true intelligence" etc. are not conducive to productive discussion unless you take the trouble to define what you mean by them (which ~nobody ever does). They're highly ambiguous and it's never clear what specific claims they do or don't imply when used by any given person.

But I think this specific claim is clearly wrong, if taken at face value:

> They just regurgitate text compressed in their memory

They're clearly capable of producing novel utterances, so they can't just be doing that. (Unless we're dealing with a very loose definition of "regurgitate", in which case it's probably best to use a different word if we want to understand each other.)


That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.)

Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on, and so on.


> That was always kind of a cruel attitude, because real people's emotions were at stake.

I agree, but there was an implicit social agreement that most people understood. Everyone was anonymous, the internet wasn't real life, you could lie to people about who you were, and there were no consequences.

You're right about the blend. 10 years ago I would have argued that it's very much a choice for people to break the social paradigm and expose themselves enough to get hurt, but I'm guessing the share of people who are online in most first-world countries is 90% or more.

With Facebook and the like spending the last 20 years pushing to deanonymise people and normalise hooking their identity to their online activity, my view may be entirely outdated.

There is still - in my view - a key distinction between releasing something like this online and releasing it in the "real world". Were they punishable offences, I would argue the former should carry less consequence for that reason.


I think it is outdated, honestly. It's no longer a fringe activity to spend most of your socializing time on the internet/social media, especially for those in their mid 20s and under.

> 57% of Gen Zers want to be influencers

> ...

> Nearly half, 41% of adults overall would choose the career as well, according to a similar Morning Consult survey of 2,204 U.S. adults.

https://www.cnbc.com/2024/09/14/more-than-half-of-gen-z-want...


I had a guy who lived two hours from me threaten my life…over 30 years ago, on a MUD.

I don’t think there has been much of a firewall between the internet and “reality” for a very long time.


> I'm curious about what other conclusion you may have reached when reading "on Meta's platforms".

The claim was "79% of all child sex trafficking in 2020 occurred on Meta’s platforms", so they probably took it to mean that 79% of all sex trafficking in 2020 occurred on Meta's platforms.

I don't mean to be a smartarse (well maybe a little). But why wouldn't they interpret it that way, when that's exactly what it says? "X% of all Y happened on Z" doesn't implicitly mean "X% of online Y happened on Z" just because Z is an online platform.


> But why wouldn't they interpret it that way, when that's exactly what it says

I assumed online by default because honestly 79% of all trafficking happening on Meta's platform sounded so implausible I didn't even consider it.

I mean, if that number were really true then FB/Insta would rival the dark web or whatever it is called these days. I didn't think they had gone that far.


I wonder if they meant this HN post (the OP), rather than https://asteroidos.org/news/2-0-release/? The latter doesn't seem LLM-written to me, but the HN post does.

Correct.

Full disclosure: I spent ~3 hours crafting this post to hit the tone I wanted to convey, and I'm kind of proud of the wrist-size Linux banger I came up with. I'm usually not good with writing since it takes me ages, and I might be a bit oversensitive right now. All I wanted was to spare you all the rocky grammar as a cherry on top. To now find that the polished version triggers your "it's (completely?) AI written" sensor... Lesson learned, I guess.

I think the AI-coding skill that is likely to remain useful is the ability (and discipline) to review and genuinely understand the code produced by the AI before committing it.

I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)

If I could have the best of both worlds, that would be a genuine win, and I don't think it's impossible. It won't save as much time as pure vibe coding promises to, of course.


> I think the AI-coding skill that is likely to remain useful is the ability (and discipline) to review and genuinely understand the code produced by the AI before committing it.

> I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)

When I review code, I try to genuinely understand it, but it's a huge mental drain. It's just a slog, and I'm tired at the end. Very little flow state.

Writing code can get me into a flow state.

That's why I pretty much only use LLMs to vibecode one-off scripts and do code reviews (after my own manual review, to see if it can catch something I missed). Anything more would be too exhausting.


I've had reasonable results from using AI to analyse code ("convert this code into a method call graph in GraphML format" or similar). Apart from one hallucinated edge, this worked reasonably well: I could throw the result into yEd and get a view of the code.

An alternative that occurred to me the other day is, could a PR be broken down into separate changes? As in, break it into a) a commit renaming a variable b) another commit making the functional change c) ...

Feel like there are PR analysis tools out there already for this :)
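As a rough sketch of what that splitting could look like with plain git (the file name, variable names, and commit messages here are all hypothetical), the idea is to commit the mechanical rename first, with the old behaviour preserved, and then commit the functional change on top of it:

```shell
# Sketch: split one working-tree change into two reviewable commits,
# so the mechanical rename is separate from the functional change.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

# Starting point: a tracked file with the old variable name and value.
printf 'old_name = 1\n' > app.py
git add app.py
git commit -q -m "initial"

# Commit (a): the rename only, keeping the old value so behaviour is unchanged.
printf 'new_name = 1\n' > app.py
git commit -q -am "rename old_name to new_name (no functional change)"

# Commit (b): the functional change, applied on top of the rename.
printf 'new_name = 2\n' > app.py
git commit -q -am "change value to 2"

git log --oneline
```

In real use, where the rename and the functional change are already tangled together in the working tree, `git add -p` lets you stage only the rename hunks for commit (a) before committing the rest as (b).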


Don't you think automated evaluation and testing of code is likely to improve at an equally breakneck pace? It doesn't seem very far-fetched to soon have a simulated human that understands software from a user perspective.

I don't think people should be rude to you, but the comment was AI-generated, right? Lots of people dislike that as it feels kind of wasteful and disrespectful of our time; it can literally take you less time to generate the comment than for us to read it, and the only information you added is whatever was in the (presumably much shorter) prompt. If you'd written it yourself, it may or may not be interesting and correct, but I'd at least know that someone cared enough to write it and all of it made sense from that person's perspective. Sometimes I am interested in an LLM's take on a topic, but not when browsing a forum for humans.

I'm sorry, but if you're accusing some text on a website of being "disrespectful of our time", I don't know what to say to you.

I stand behind everything in my comment, and I have engaged in good faith with every single reply to it here (even though none of them address anything in the comment itself).

Go through my profile, see how I engage with people and tell me again I'm AI.

If you do not have anything to say to the subject matter of a comment and just have personal snide remarks, I do think it's a waste of your time but do not blame me for it or tell me to leave the platform.

Typing this comment right now is a waste of time for me but I do not feel the need to grandstand over it as if there's a massive opportunity cost to it. I'm a human writing/interacting in a "forum for humans."


I didn't say or think that your account was AI-run, and I didn't tell you to leave the platform. I just tried to explain why your comment might have annoyed people and triggered negative responses (while agreeing that the rude ones were inappropriate).

Sure, cheers then. I don't care about negative responses if they're negative just because they think it's AI-generated, without having to say anything substantial on the actual comment or the article. I have demonstrated my willingness to engage in good faith but those comments have not.

If negative responses have no substance behind them, it makes no sense to care about them or take them seriously.

Also, the fact that you assume it takes more time to read that comment than it took me to write it is pretty weird (I still don't get what was so wrong with the comment that simply reading it is a waste of people's time).


> the fact that you assume it takes more time reading that comment than it took for me to write is pretty weird

I didn't do that either! I had no idea whether you just fired off a quick prompt and pasted the result without even reading it, or spent ages crafting and rereading and revising it, or (most likely) something in between those extremes. I said generated comments can take less time to create than to read, and that's one reason people push back against them. There's a risk that the forum just gets buried in comments that take near-zero effort to 'write' but create non-trivial time/effort/annoyance for those of us wading through them in search of actual human perspectives. And even the relatively good ones will be little different from what we could all obtain from an LLM if we wanted it.

FWIW, I didn't even get to the substance, because I instinctively bounce off LLM-written content posted in human contexts without explanation. You're obviously free not to care about that, and I wouldn't have replied and got into this meta discussion if not for the back-and-forth you were already involved in.

edit: but if you do care about getting through to people like me, even a short manually-written introduction can make me significantly more likely to read the rest of the content. To me, pure LLM output is a pretty strong signal of a bot/low-effort human account. But if someone acknowledges that they're pasting in an AI response and bothers to explain why they think it's interesting and worthwhile, I'll look at it with more of an open mind.


I stand by the comment in its entirety. If formatting is an issue that makes it unreadable for you (to not even get to the substance), I can't help you. I do not care about "getting through" to anyone, I'm a human interacting on a human forum and I responded to the content of the article which was mostly BS about creating AI slop (on top of being a content marketing piece trying to sell people shit using deceptive claims).

But I will defend myself when I'm told obtuse things without any substance backing it.


I'm obviously just annoying you, which really wasn't my goal, so I'll stop here. But I want to note that if you think this all comes down to "formatting", you're still not hearing what I'm trying to say.

Sometimes, though, it's a question of retaining actual power vs. sending a message that won't be listened to by the people who need to hear it. Jan 6-7 2021 could have ended very differently if Mike Pence and the other relatively normal Republicans in Trump's first administration had resigned in protest at some earlier point and been replaced by loyalists.

They said "between", not "during"; I think the point was that they didn't spend a full weekend with Claude.

edit: anyone care to explain why this is a bad comment, rather than just downvoting? The GP comment says "Over a weekend, between board games and time with my kids,", and the parent comment lectures them based on an obvious strawman: "I can prompt AI while playing with the kids"


I'll bite.

"between time with the kids" is another person's "during time with the kids".

For me, "between time with the kids" means my kids are engaged in another activity that does not require my input until they are done with it. Whatever I am doing during this time also is typically very interruptible, so I am ready to help the kids along to their next "thing" (the joys of being a parent!). On a typical weekend (my oldest is 7), I'll get maybe 2 hours of this time during the time my kids are awake.


That all makes sense, but I don't see how it supports your uncharitable read of the original comment. Both because they could very easily have meant something a bit different (we have no idea how old the kids are or whether they were even around all weekend; maybe there were times when they were at friends' houses or similar), and because vibe coding could be the interruptible activity they do while the kids are busy. Maybe you feel strongly that it's not a legitimate use of that kind of semi-downtime, but we don't know if that's what they were talking about in the first place.

"Hours spent on the project" would be a much more useful metric, with no confounding variables. As it is, the mere mention of interleaving time with your kids and time engrossed in tech hits a nerve at a time when IMO too many parents are doing this already, and lends unwarranted validity to the idea.

> This dev is clearly writing his reply with Claude

> You can even see the emdash attempt (markdown renders two normal dashes as an emdash)

He says he wrote it all manually.[0] Obviously I can't know if that's true, but I do think your internal AI detector is at least overconfident. For example, some of us have been regularly using the double hyphen since long before the LLM era. (In Word, it auto-corrects to an en dash, or to an em dash if it's not surrounded by spaces. In plain text, it's the best looking easily-typable alternative to a dash. AFAICT, it's not actually used for dashes in CommonMark Markdown.)

The rest is more subjective, but there are some things Claude would be unlikely to write (like the parenthetical "(ie. progressive disclosure)" -- it would write "i.e." with both dots, and it would probably follow it with a comma). Of course those could all be intentional obfuscations or minimal human edits, but IMO you are conflating the corporate communications vibe with the LLM vibe.

[0] https://news.ycombinator.com/item?id=46982418


> `For example, some of us have been regularly using the double hyphen since long before the LLM era.

This "emdash" and "double dash" discussion is the first time I have heard of it or seen it discussed. I've never encountered it in the wild, nor seen it used in any meaningful way in all my time on the internet these last 27 years.

And yes - I've seen that special dash character in Word for many years. Not once has anyone said "oh hey, I type double dashes and Word converts them". No, it's always been "Word has this weird dash, and if you copy-paste it it's weird", and no one knows how it pops up in Word, etc.

And yes, I've seen the AI spit out the special dash many times. It's a telltale sign of using LLM generated text.

And now, magically, in this single thread, you can see a half-dozen different users all using this "--" as if it's normal. It's like upside-down world. Either everyone is now using this brand-new form of writing, or they're covering for this Claude Code developer.

So yeah, maybe I've been sticking my head in the sand for years now, or maybe I just blindly ignored double-dashes when reading text till now. But it sure seems fishy.


Sounds like you see me as an untrustworthy source, so all I can suggest is that you look into this yourself. Search for "--" in pre-LLM forum postings and see how many hits you get.

Here are my pre-2020 HN comments, with 3 double hyphens in 8 comments: https://hn.algolia.com/?dateEnd=1576108800&dateRange=custom&...

As I was in the process of typing the search term to get my comments (and had just typed 'author'), this happened to come up as the top search result for Comments by Date for Feb 1st 2000 > Dec 12th 2019: https://news.ycombinator.com/item?id=21768030

Note that I wasn't searching directly for the double hyphen, which doesn't seem to work -- the first result just happened to contain one. If I'm covering for the Anthropic guy, I could be lying about the process by which I found that comment, but I think you should at least see this as sufficient reason to question your assumptions and do some searches of your own.


I've just realised I messed up the search, and the algolia link is to my pre-2020 comments containing the word 'author'. But my full (far longer) list of pre-2020 comments also shows some pretty heavy double-hyphen use: 6 hits on page 1 of the results, 15 hits on page 2, and so on.

> Now, it is a reality.

What are the details of this? I'm not playing dumb, and of course I've noticed the decline, but I thought it was a combination of losing the battle with SEO shite and leaning further and further into a 'give the user what you think they want, rather than what they actually asked for' philosophy.



As recently as 15 years ago, Google _explicitly_ stated in their employee handbook that they would NOT, as a matter of principle, include ads in the search results. (Source: worked there at that time.)

Now, they do their best to deprioritize and hide non-ad results...

