I feel like Anthropic is going down a bad path here with billing things this way. Especially as local LLM continues to develop so fast.
I downgraded from my $200 a month plan to my $20 plan and hit limits constantly. I tried to use the API access I purchased separately, but it doesn't work with Claude Code (something about the 1 million token context requiring extra usage), so I have to use it with Continue. Then I get instantly rate limited when it's trying to read 1-2 files.
It just sucks. This whole landscape is still emerging, but if this is what it's like now, pre-enshittification, when these companies have shitloads of money - it's going to be so much worse when they start to tighten the screws.
Right now my own incentive is to stop being dependent on Claude for as much as I can as quickly as I can.
This is how free drink refills, airplane tickets, Internet service, unlimited data plans, insurance, flat rate shipping, monthly transit passes, Netflix, Apple Music, gym memberships, museum memberships, car wash plans, amusement park passes, all you can eat buffets, news subscriptions, and many more work.
Either you get a flat rate fee based on certain allowed usage patterns or everyone has to be billed à la carte.
This is a different case - those all have limitations based on human behavior (it's not necessary or possible to constantly wash your car the entire month when you pay for unlimited washes) - and that constraint doesn't exist here. The types of plans available should reflect that reality. If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.
Your comparisons are all also "unlimited" situations to Claude's very much limited situation. You can't buy a plan for Claude that is marketed as being unlimited. They're already selling people metered usage. They're just also adding restrictions on top of that.
They sell metered usage with the implied expectation that most users won't use it fully. Power users and users of tools like OpenClaw don't match that assumption.
So they further restricted the metered caps, which were priced on the assumption that few would ever reach them.
Because a big part of Anthropic's story is that they build based on how people actually use AI. Power users aren't just annoying edge cases, they're signal. Throttling them and calling it done is inconsistent with that.
> Power users aren't just annoying edge cases, they're signal.
Not all power users. Some reinvent the wheel and/or do things inefficiently, and in most cases there's no business incentive to adapt the service to fit the usage patterns of those users, or of other users who deviate from the norm with regard to resource usage.
Sorry to tell you, but generally any company's "story" is all marketing and PR. If it interferes with making money, which it does in this case, that company will not hesitate to leave it behind.
Oh, the billion-dollar, VC-backed, pre-IPO company's story was this? Omg, and they somehow are not delivering up to your standards? Damn, they better get their act together lest people like you whine on Twitter about them losing their way.
I didn't write anything about pricing. I just claim that people would love an offering without the discussed restriction, and because there is clear evidence of such a demand, it would make sense for Anthropic to prepare such an offering.
Yes, and that's exactly the problem I'm pointing at.
Your comment "that people would love an offering without the discussed restriction" ignores the pricing burden of such an offering; that burden is precisely why Anthropic doesn't just offer it.
"Unlimited" has always been a lie. There is no free lunch. There are always limits.
I've had to unwind "unlimited" within startups that oversold. I've been bit by ISPs, storage providers, music streamers, fuckin _Ubers_, now AI subscription services, that all dealt in "unlimited". None of them delivered in the long run.
I'd be mad at Anthropic if it weren't for the fact that my experience now lets me see this sort of thing from a mile away. There are a lot of folks, even on HN, who haven't been around for as long. I understand the outrage. I've been there. But these computers cost money to run, and companies don't operate at a loss in the fullness of time.
Once you know that unlimited trends towards limited, the real question is whether we're equipped as a society to deal with the fact that the capital-L Labor input to the economic equation is about to be replaced with a Capital input for which only a handful of companies have a non-zero value.
On your 1.5Mbps link, you could theoretically download 500GB per month. A huge amount, but I believe it was often genuinely allowed, because their uplinks could cope with it. Unlimited could genuinely be unlimited.
But now you might get things like “unlimited” 1Gbps… which reverts to 10Mbps (1% speed) or worse after 3.6TB (eight hours at full speed). And so your new theoretical maximum is about 6.8TB per month rather than 330TB.
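The arithmetic behind those figures checks out. A quick sketch, assuming a 30-day month and decimal units (1 Gbps = 125 MB/s):

```python
# Sanity-check the "unlimited" bandwidth figures above.
# Assumes a 30-day month and decimal (SI) units.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def monthly_cap_tb(rate_mbps: float) -> float:
    """Max transfer in TB at a constant line rate given in megabits/s."""
    return rate_mbps / 8 * 1e6 * SECONDS_PER_MONTH / 1e12

# Old 1.5 Mbps DSL link: ~0.49 TB, i.e. the ~500 GB/month quoted above.
dsl_cap = monthly_cap_tb(1.5)

# Uncapped 1 Gbps: ~324 TB/month (the "about 330TB" figure).
gig_cap = monthly_cap_tb(1000)

# Throttled plan: 3.6 TB at 1 Gbps, then 10 Mbps for the rest of the month.
full_speed_seconds = 3.6e12 / (1000 / 8 * 1e6)  # 28,800 s = 8 hours
throttled_tb = (SECONDS_PER_MONTH - full_speed_seconds) * (10 / 8 * 1e6) / 1e12
total_tb = 3.6 + throttled_tb  # ~6.8 TB total
```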
>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.
Not the best example. The upkeep cost of a gym is pretty flat regardless of how much people use the facilities. Two people can't use a single machine at the same time and make it wear out twice as fast. The price of memberships is not correlated to usage; it's inversely correlated to the number of memberships sold.
That two people can't use a machine at the same time is exactly the issue. If you have 50 machines and 200 customers, all of whom want to be in the gym 18 hours per day, that's quickly going to lead to cancelled subscriptions. Now you need more space and machines, or some other way to balance things.
Agreed, but it's an indirect causal link, not a direct one. If the demand far outstrips the possible supply, the demand will have to go down, and it can either go down by people accepting that they can't be in the gym as much as they would like, or as you say by memberships being cancelled (in which case the price may go up or something else might change).
>Two people can't use a single machine at the same time make it wear out twice as fast
The machine doesn't care about the number of people using it. If it's constantly being used, it will wear out faster. You are conflating "we price based on expected under-utilization" with "costs don't scale with usage." Those are different things.
The inverse correlation you talk about isn't relevant here. People buy gym memberships intending to go, feel good about the intention, and then don't follow through. The business model is built on that gap. That's pretty specific to fitness and a handful of similar industries where aspiration drives purchase.
Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.
> Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.
There is nothing anywhere hinting at that.
They don’t sell compute. They sell a subscription for LLM token budgets that they hope people don’t use because the compute is vastly more expensive than what they charge or what users are ever willing to pay.
Especially with enterprise subscription plans the idea is for customers to never utilize anywhere close to their limits.
>If it's constantly being used, it will wear out faster.
Yeah, but there's an absolute limit to that, beyond which the cost doesn't keep increasing. Beyond that point, the QoS goes down (queues).
>You are conflating "we price based on expected under-utilization" with "costs don't scale with usage."
I'm not conflating anything, I'm responding to what you said:
>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.
Why would a gym need to change how they bill things if all their customers were aiming for maximal utilization, when their costs would barely see any change? I doubt your typical gym operates on razor-thin margins.
Gym costs absolutely scale with usage. Equipment wears faster under heavier use. Cleaning and maintenance staff hours scale with how much the facility is used. Consumables like towels, soap, and chalk go faster. HVAC runs harder. The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.
Setting that aside, even if we accept your argument that gym costs barely scale with usage, then that makes gyms a bad comparison case for Anthropic, whose costs directly scale with usage. You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.
I'm arguing that both gyms and Anthropic have costs that scale with usage, but the gym business model assumes a large margin of under-utilization and a hard cap on what a "power user" can consume; I think neither of those extremes applies to Anthropic's situation. Under-utilizers aren't paying for AI, they have a free tier. And there's a natural ceiling on how much any one person can use a gym; there's no equivalent constraint on API usage.
> The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.
Yes. In fact I remember hearing about a gym which offered flat-rate pricing but explicitly excluded certain professions from it: police, bouncers, models, actors, and air stewardesses. They had a separate, more costly tier for those people. (And I think I heard about it from the indignation the deal had caused online.)
> Under-utilizers aren't paying for AI they have a free tier.
Sure they do. Free tiers suck. I may not always need to use AI, but when I need it, I don't want to immediately get hit by stupidly low quotas and rate limits, or get anything but SOTA models.
> I feel like Anthropic is going down a bad path here with billing things this way.
What do you expect them to do? You are looking at a business currently running at a loss and complaining about their billing, even though this is not a price rise.
Unrelated, is it still possible to use $10k/m worth of tokens on their $200/m plan?
> Anthropic entered 2025 with a run rate of $1 billion; the run rate for March 2026 is estimated at $19 billion.
I don't know what that means in this context.
> Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.
What does that have to do with them implementing restrictions on their plans because they are currently running at a loss?
Okay, let's say their internal projections[1] are accurate: were those made before or after OpenClaw released? Maybe their projections were made on the assumption that people would stop using $10k/m worth of tokens on a $200/m plan? Or that the users doing that would only be doing code? Or that plan users wouldn't be running requests at a rate of 5/minute, every minute of every hour of every day?
--------------------------------
[1] Where did you find those projections? I'm skeptical, at their current prices and current plans, that a break-even at any point in the future is possible unless they shut off or severely scale down training. Running at a per-unit loss means that the more you sell, the larger your loss - increasing your sales increases your loss.
Well, I reinstalled LM Studio today after some ~10 months since I last used it, just to test Gemma 4. On my PC with 32GB RAM and a 4070 Ti (12GB VRAM), it (Gemma 4 26B A4B Q4_K_M) loads and runs reasonably fast, with no manual parameter or configuration tuning - just out of the box, on a fresh install - and delivers usable results on the level I remember expecting from SOTA cloud models 12-16 months ago. And it handles image input, too. I'm quite impressed with it, TBH. It's something I can finally see myself using, and yay, it even leaves some RAM and VRAM left for doing other stuff.
Look at the current crop of local Mixture of Experts models, where it seems like they've made inroads on the O(n^2) context attention cost problem. Several folks have mentioned Qwen, but there are many more of that ilk. Several of them actually score really high on benchmarks. But when I mess with one of them locally myself (I have a 3090), it feels a bit like last year's Sonnet. They don't quite make the leaps of understanding you get from Opus.
You can run SOTA local MoE models very slowly by streaming the weights in from a fast PCIe 5 SSD. Kimi 2.5 (generally considered in the ballpark of current Sonnet, not Opus of course) has been measured at 2 tok/s on Apple M5 hardware, which is the best-case performance unless you have niche HEDT hardware with lots of PCIe lanes to attach storage to and figure out how to use that amount of parallel transfer throughput.
A ~$5000 USD Macbook can run open source models that are competitive with GPT 3.5 or Sonnet 3. So on nice consumer hardware you can have the original groundbreaking ChatGPT experience that runs locally.
We can hope that they optimize the models. I still think it's going to be very hard for them to charge $100 or $200 a month at scale from many people, especially with AI "taking jobs". To the extent that happens, most of those people won't find replacement income.