Hacker News
pkaye | 9 days ago | on: Claude Sonnet 4.6
Above a 200k-token context they charge a premium. I think it's $10/M input tokens.
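(A rough back-of-the-envelope sketch, taking the $10/M figure above at face value; the rate, and the assumption that the premium applies to the whole prompt, come from the comment rather than confirmed pricing.)

    # Back-of-the-envelope input cost, assuming (per the comment above) a flat
    # $10 per million input tokens once the prompt exceeds 200k tokens.
    # Rate and tier behaviour are assumptions, not confirmed pricing.
    def input_cost_usd(prompt_tokens: int, premium_rate_per_m: float = 10.0) -> float:
        return prompt_tokens / 1_000_000 * premium_rate_per_m

    print(f"${input_cost_usd(300_000):.2f}")  # a single 300k-token prompt: ~$3.00 of input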
_ink_ | 9 days ago
Interesting. Is it because they can, or is it really more expensive for them to process a bigger context?
cube2222 | 9 days ago
Attention is, at its core, quadratic wrt context length. So I'd believe that to be the case, yeah.
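(For intuition, a minimal NumPy sketch of single-head scaled dot-product attention, with toy shapes rather than any particular model's implementation: the score matrix is n×n in the number of tokens, so compute and memory for that step grow quadratically with context length.)

    import numpy as np

    def attention(Q, K, V):
        """Single-head scaled dot-product attention; Q, K, V are (n, d)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)           # (n, n): this is the quadratic part
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                      # (n, d)

    n, d = 8, 4
    Q = K = V = np.random.randn(n, d)
    print(attention(Q, K, V).shape)             # (8, 4)

    # Growth of the score matrix alone: 10x the context -> ~100x the scores.
    for n in (10_000, 100_000, 1_000_000):
        print(f"{n:>9} tokens -> {n * n:.1e} attention scores per head per layer")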
pkaye | 9 days ago
I've read that compute costs for LLMs go up O(n^2) with context window size. But I think it's also a combination of limited compute availability, users' preference for Anthropic models, and Anthropic planning an IPO.