Hacker News | new | past | comments | ask | show | jobs | submit | Maxatar's comments | login

There is a clear and sudden transition on this blog: prior to a certain date there are zero instances of the em-dash, and then suddenly it appears like crazy. Look at his archived posts from 2023, absolutely no em dashes... now look at every post from 2025 and almost every single one of them is littered with them.

I don't think it's a coincidence.


> Freestanding is not a niche technicality — it’s a foundational distinction in the C++ standard.

If this line wasn't written by an LLM, I'll eat my hat.

I swear being online these days feels like being one of the dog handlers from the Terminator.


The fact that the average person is seemingly incapable of detecting LLM text drives me insane. Every aspect of that article screams LLM. The tone, the punctuation, the sentence structure, the overall structure, it's so incredibly obvious. But the average person really is oblivious to it.

Why? Before the comments about LLMs, I didn't notice this. After comparing the pre-LLM and post-LLM posts, it does look like AI was used to write/edit this article. But why should that matter? Why does my ignorance of this fact drive you insane?

The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others, or that performing such deduction is not something intellectual, or that such deduction is strictly a consequence of existing intellectual categories.

>The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others

Not every skill gets a whole category of intelligence.

>that such deduction is strictly a consequence of existing intellectual categories

Yes.


>Yes.

The fact that you don't list these says a lot about how much you know on this topic.


A sample size of 198, as in this study, is more than sufficient to draw pretty strong conclusions.

The issue is not the sample size, it's that studies like these almost always involve a very homogeneous population of young college students.


You mean WEIRD.

(Western, Educated, Industrialized, Rich, Democratic)

But why does this matter? Is there a challenge in judging intelligence across cultures?


>But why does this matter? Is there a challenge in judging intelligence across cultures?

I don't know for sure, but my own anecdotal experience is that yes, there most certainly are challenges when a person from one culture assesses the intelligence of someone else from another culture.

It would be nice to know whether this is supported by scientific evidence, or whether this is simply my own personal bias at play.


I just looked into this a bit because I thought he still had some kind of role at Microsoft even after stepping down as CEO/chairman, but it turns out that in 2020 he left any and all positions at Microsoft while it was investigating him over inappropriate sexual relationships he had with Microsoft employees.

Before that he had a role as a technical advisor and sat on the board of directors.

I also found it interesting that Steve Ballmer owns considerably more of Microsoft than Bill Gates (4% for Steve Ballmer while Bill Gates owns less than 1%).


Without a significant amount of needed context that quote just sounds like some awkward rambling.

Also, almost every feature added to C++ adds a great deal of complexity: modules, concepts, ranges, coroutines... I mean, it's been 6 years since these were standardized and all the main compilers still have major bugs and quality-of-implementation issues.

I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, and significant compilation performance issues... singling out contracts is unusual, to say the least.


It doesn't sound that way to me, but there's a lot of context at https://youtu.be/tzXu5KZGMJk?t=3160

Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizarre attempt at a TikTok-like social networking site, but using AI-generated videos.

It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.


Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.


WSJ is reporting that they're entirely dropping their video gen features.

https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...

> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.




Smart people do stupid things all the time. Especially when they are moving fast and trying new things.

At least they were able to recognize their mistake and course correct.


Blacklisted usually means something is banned. OpenCode is not banned from using Anthropic's API.


Anthropic has no issue with OpenCode using Anthropic's API, which does charge per token.


I have not found this to be the case. My company has some proprietary DSLs, and we can provide the spec of the language with examples, and it manages to pick it up and use it in a very idiomatic manner. The total context needed is 41k tokens. That's not trivial, but it's also not that much, especially with ChatGPT Codex and Gemini now providing context lengths of 1 million tokens. Claude Code is very likely to soon offer 1 million tokens as well, and by this time next year I wouldn't be surprised if we reach context windows 2-4x that amount.

The vast majority of tokens are not used for documentation or reference material but rather for reasoning/thinking. Unless you somehow design a programming language that is just so drastically different from anything that currently exists, you can safely bet that LLMs will pick it up with relative ease.


> Claude Code is very likely to soon offer 1 million tokens as well

You can do it today if you are willing to pay (API or on top of your subscription) [0]

> The 1M context window is currently in beta. Features, pricing, and availability may change.

> Extended context is available for:

> API and pay-as-you-go users: full access to 1M context

> Pro, Max, Teams, and Enterprise subscribers: available with extra usage enabled

> Selecting a 1M model does not immediately change billing. Your session uses standard rates until it exceeds 200K tokens of context. Beyond 200K tokens, requests are charged at long-context pricing with dedicated rate limits. For subscribers, tokens beyond 200K are billed as extra usage rather than through the subscription.

[0] https://code.claude.com/docs/en/model-config#extended-contex...

