> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

Wait, what? Anthropic makes money by getting you to buy and expend tokens. The last thing they want is for you to get the right answer as fast as possible. They want you to sometimes get the right answer unpredictably, but with enough likelihood that this time will work that you keep hitting Enter.



Given that pre-paid plans are the most popular way to subscribe to Claude, it is quite plainly a "the fewer tokens you use, the more money Anthropic makes" kind of situation.

In an environment where providers are almost entirely interchangeable, and where the tiniest perceived edge can make or break user retention (because there's still no benchmark that unambiguously judges which model is "better"), I just don't see how it's not ludicrous on its face to claim that any LLM provider is incentivized to give unreliable answers at some high-enough probability.



