If I want to buy today a smartphone positioned on the market at the same level as what I was buying for around $500 seven or eight years ago, I now have to spend well over $1000, an increase of two to three times.
So your example is not well chosen.
Over the last decade, price increases have affected many computing and electronics devices, though for most of them the increases have been smaller than for smartphones.
With consumer phones you're not telling your customers "spend $200,000 with us to try and find holes before the bad guys do". Commercial SAST tools have been around for 20 years and the pricing hasn't moved in all that time. With AI tools you've got a combination of the perfect hostage situation ("pay for our stuff before others find bad things about your product") and a desperate need to create the illusion of some sort of revenue stream, so I doubt prices will be dropping any time soon.
Yeah and to give a more recent example, it's exactly like how RAM, storage, and other computer parts have gotten much cheaper over the last 3 years... oh wait.
Averages tell us the general availability of wealth. To give you some perspective, most European countries are poorer than the poorest U.S. state, which is Mississippi. There just aren't as many high-paying jobs. And these figures encompass everything, including what the government spends on health care and welfare programs and education. So Europeans as a whole get much less per capita from both government and private sector. When it comes to purchasing power parity, which takes into account cost-of-living differences, the gap isn't as big as above, but it's still pretty significant.
This observation makes sense, because all current models probably use some kind of sparse attention architecture.
So the closer the two related pieces of information are to each other in the input context, the larger the chance their relationship will be preserved.
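To make that concrete, here's a minimal sketch of one common sparse-attention scheme, a causal sliding-window mask (the function name and window size are illustrative, not from any particular model). Tokens can only attend to nearby tokens directly, so a relationship between two distant pieces of information has to survive several indirect hops:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Token i may attend only to tokens j with i - window < j <= i,
    # i.e. itself and the (window - 1) tokens immediately before it.
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo = max(0, i - window + 1)
        mask[i, lo:i + 1] = True
    return mask

mask = sliding_window_mask(8, 3)
# Token 7 can attend to tokens 5..7, but not directly to token 0:
print(mask[7, 0], mask[7, 5])  # False True
```

Information from token 0 can still reach token 7, but only by being relayed through intermediate positions across layers, which is exactly where the relationship between distant facts can get lost.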
DoD kept asking for contract changes where at least the legalese would be made somewhat more permissive, but Anthropic stood their ground.
Sam Altman probably let them do that, while using language like "we have technical means of oversight and the same red lines as Anthropic". But in reality they will allow DoD to do what Anthropic didn't.
> Very often, after a correction, it will focus a lot on the correction itself making for weird-sounding/confusing statements in commit messages and comments.
I've experienced that too. Usually when I request a correction, I add something like "Include only production-level comments (not changes)". Recently I also added a special instruction for this to CLAUDE.md.
For a while now, Claude Code's plan mode has also been writing the plan to a file that you can probably edit, etc. It's located in ~/.claude/plans/ for me; actually, there's a whole history of plans there.
I sometimes reference some of them to build context, e.g. after a few unsuccessful attempts to implement something, so that Claude doesn't try the same thing again.
See e.g. https://epoch.ai/data-insights/llm-inference-price-trends/