There is no way this is going to make it so that "engineers can focus on more interesting problems and engineering teams can strive for more ambitious goals."
Instead it will mean that bosses can fire 75-90% of the (very expensive) engineers, with the ones who remain left to prompt the AI and clean up any mistakes/misunderstandings.
I guess this is the future. We've coded ourselves out of a job. People are smiling and celebrating all this - personally I find it kinda sad that we've basically put an end to software engineering as a career and put loads of people out of work. It's not just SWEs - it's impacting a lot of careers... I hope these researchers can sleep well at night, because they're dooming huge swathes of people to unemployment.
Are we about to enter a software engineering winter? People will find new careers, no kids will learn to code since AI can do it all. We'll end up with a load of AI researchers being "the new SWEs", but relying on AI to implement everything? Maybe that will work and we'll have a virtuous circle of AIs making AI improvements and we'll never need engineers again? Or maybe we'll hit a wall and progress in comp sci will essentially stop?
>Instead it will mean that bosses can fire 75-90% of the (very expensive) engineers, with the ones who remain left to prompt the AI and clean up any mistakes/misunderstandings.
This is the same logic that has driven cheap off-shoring in non-technical companies.
For decades orgs have been able to buy "human-level" (i.e. humans) engineering for a tiny fraction of an engineer's salary, and there have been millions of eager salesmen for off-shore dev shops pushing them to do it too. After seeing the outcomes of this approach, I understand why well-paid engineers remain well paid. And why they'll remain well-paid after the LLM non-pocalypse.
If you think LLMs are so amazing, I would encourage you to see how much you can rely on them to replace human beings in real world scenarios. Not in contrived PR pieces and cherry picked examples but in situations where actual real people would otherwise be working together to deliver commercially valuable outcomes.
You believe you have domain specific insights that allow you to state, with confidence, that LLMs are able to replace a highly technical and well-compensated role at virtually no cost. If that's the case, you're sitting on a gold mine. If I believed that, I'd be starting a development agency tomorrow.
The "some work" phrase is doing a lot of work for you here. It could just as easily take them 100 years, and they'll go broke long before that.
I see nothing in the original article that doesn't strike me as the techno-optimism of the 1960s, when people made movies and books saying "It's the year 2003 and humanity is exploring the vast depths of the Universe".
So again, it's a very plain old boring techno-optimism.
I am sure they can automate some work (like scaffold a certain CRUD part of the app) but there are always nuances and specifics and the current generation AI has so far proven inadequate in catching those and taking proper care of them.
If it were actually working for anyone, they would be selling software engineering time at slightly below what existing software engineering time costs today, so they could capture those sweet margins.
This is a company spending investor money selling pickaxe hand grips during a gold rush.
For real evidence, look for companies selling engineering time far in excess of their total engineer headcount, with good customer retention across projects.
It's hilarious. If Devin were any good, they wouldn't be selling access to it to random SWEs, they would be replacing Microsoft, Apple, Google, etc for that sweet sweet trillions of dollars!
Where's the app they built in an afternoon using Devin? Where's the software product that Devin actually built a month ago and was being used by thousands of people?
Their actual business seems to be closer to "Lets milk some of that sweet sweet high income from SWEs with FOMO about AI"
> Instead it will mean that bosses can fire 75-90% of the (very expensive) engineers, with the ones who remain left to prompt the AI and clean up any mistakes/misunderstandings.
We continually hear anxiety about technology leading to mass unemployment and it keeps not happening. Instead, workers tend to have higher productivity, which drives higher wages.
Technological advancements transformed agriculture from being 75% of the workforce to being less than 5% of the workforce over the last 200 years and instead of mass unemployment, everyone found other ways to add value to society and it has been an absolute win, with higher standards of living.
It's absolutely a win for society (in the longer term), but in the meantime a lot of people's lives were upended, devastated, and even ended due to the upheaval. Maybe we can avoid that this time, but I doubt it. You can't tell someone whose children are starving that it's all worth it.
Were they? Or did their children move to the city and learn different skills, while they bought machines to replace the lost labor? Was the tractor actually bad for the farmer?
Where do you live? Are there other economic opportunities there for a smart, motivated ex-programmer?
Many programmers are good enough generalists to displace people in other jobs if the work dries up. Just like Teach for America college grads displace and outperform professional classroom teachers.
I'm in London, where I earn enough to rent a 2-bed 700sqft flat. Not exactly retraining for years money. But in any case, I think I'll be fine, it's others I'm worried about. I grew up in a poor rural area - I've seen what sudden unemployment does to people, even if they are smart and motivated.
You've got a good point here. I've wondered about this at times. Like, what if we can't find a way to advance these agents much beyond where Devin is now? What if they can't consume a code base and make meaningfully high quality additions to them reliably, so they're essentially stuck as novice/intermediate developers?
This seems nice(ish) for someone like me with a senior title and experience (~15 years), but totally knocks the bottom out of the industry. How do new people enter the industry and become programmers like I did? What's the new entry point into software, and what does that role look like?
I can't tell if it would actually be good to knock the bottom out. If seniors become sort of like engineers going into the machine to tune things and build the big new things, who replaces them eventually? Will software get worse because of this?
One thing I wonder is if this will birth a new generation of software architecture which is essentially a complete mess which AIs can easily and efficiently manage due to being machines, which will require businesses to "take the leap". Basically you tell the system what you want and it'll generate it on a bespoke system using custom infrastructure which the AI is optimized to implement solutions on. The results might be amazing, but if something goes wrong, humans would have a hell of a time figuring it out. That wouldn't be a nice career for someone like me. It might still leave a category of old fashioned human-engineered software, though.
One thing about agriculture worth noting is that we had a LOT of other things we could work on. Agriculture held us back, so to speak. What is software holding back right now? If I'm unemployed by this advance, what would I do instead? I'm not sure how transferable my skills are. It could actually make me quite a bit less productive, not more.
While it certainly makes you, the individual, less productive because you've been replaced by a machine, it still makes society more productive.
If replaced, you essentially were the misallocated capital: capital paying your salary was inefficient compared to that same salary paying a more productive AI.
This means you will either have to find a way to become more efficient to compete against AI for developer jobs, or you will have to reskill. This reskilling period is what you're referring to as being less productive, but in the long run you will find a job and therefore become productive again. We can't guarantee that job will pay as much as your former one, but costs in your former field fell greatly anyway, so it doesn't much matter.
In short: you will be reallocated and find your optimal productivity subject to your utility preferences through the reallocation.
This is objectively accurate, and I think you mean it to be nothing but. However, the mechanistic focus on efficiency and profit at the direct expense of human quality of life is downright ghoulish. We need political changes that ensure people who are “outmoded” by this tech can still have good lives.
The mechanism for this is charity. I understand people like to see social welfare programs, but I believe they are inferior to private-sector charity because they ignore moral hazard, tend to produce worse outcomes, and are less economically efficient. At the core, welfare steals from all to provide for some. This introduces deadweight loss.
I see a future where those displaced by automation are able to retrain with minimal cost or by charitable means. But this also means that current workers should be saving hard for a rainy day, as nobody knows when one may come.
"At the core, welfare steals from all to provide for some."
Yep, I want a system of progressive taxation that provides negative pressure on wealth inequality while uplifting the poorest. I like this idea because I like liberty, and a rich person loses almost no agency by having some portion of their income or wealth taxed while a small stipend can make a huge difference in the choices available to a poor person. There's also the utilitarian argument that disproportionately burdening a small-ish number of rich people for the benefit of the majority is a net good. If private sector charity could achieve either of these goals (alleviation of wealth inequality or uplifting of the poor) as well as social welfare can, I'd argue for it instead, since at the end of the day I prefer it be voluntary, but I don't think that it's up to the task, which we can observe in today's charities. Do you have any evidentiary basis for the counter? Why would the system of voluntary charity suddenly improve its outcomes or have more money to spend?
"they ignore moral hazard"
Sometimes economic risks are negative, like spending your paycheck on lottery tickets, but other times they're positive, like starting a risky but ambitious startup or getting an education. If social welfare causes a significant change in the average person's risk tolerance, a claim that definitely requires substantiation, then I still might argue this isn't a net negative for the economy or society at large. Even if the claim is true, is using homelessness and privation as a threat (a threat often realized upon the unfortunate) a worthwhile ethical sacrifice? How much does it cost us not to do this, because I think it's worth some money.
"tend to produce worse outcomes"
If this is true, which I also wouldn't take for granted, then you'd still need to factor in the asymmetry in access that private charities have vs a social welfare system. Perhaps a person who gets the aid of charity benefits more than one who gets welfare, but does the average person?
"are less economically efficient"
This is likely to be true, simply because government tends to be inefficient, but it can be worth it for the above reasons and can be assessed objectively. I'd be fine with spending some margin more than should be spent, say 50%, if it means that 95% rather than 10% of people have access to food or housing or whatever else.
Overall, I don't see how charity can even serve as a mechanism for what we're talking about. If we see unprecedented unemployment due to AI, how exactly do we expect voluntary charity to expand to meet demand? What can we do other than some form of wealth redistribution if we don't want lots of people to starve? Furthermore, why can't we do both? If charity will expand to meet demand, then let's use social welfare to fill in the gaps and prevent destitution.
Overall I think we just fundamentally disagree about government's role as well as for what to optimize: individual or society.
> I like this idea because I like liberty, and a rich person loses almost no agency by having some portion of their income or wealth taxed while a small stipend can make a huge difference in the choices available to a poor person.
I also like liberty, but to me the opposite of liberty is achieved when you redistribute for non-public goods. Although you may have elevated the poorest, you've dampened the purchasing power of the richest, however marginally. It is not Pareto optimal, and to me the atomic unit is the individual, not the collective society.
> Do you have any evidentiary basis for the counter? Why would the system of voluntary charity suddenly improve its outcomes or have more money to spend?
"Crowd-out was small as a share of total New Deal spending (3%), but large as a share of church spending: our estimates suggest that church spending fell by 30% in response to the New Deal, and that government relief spending can explain virtually all of the decline in charitable church activity observed between 1933 and 1939."
This is an interesting problem due to its nature: good evidence can really only be collected once policies are in effect. Regression discontinuities around the New Deal are likely good candidates to study. The above paper estimates a 30% drop in religious charitable giving. I didn't look to see whether they can discern if this is due to the perception that the poor now get money from the government so don't need extra, or whether the income tax cut disposable income and thus the charitable-contribution budget.
Another instance is the Texas Seed Bill, where Grover Cleveland vetoed the disaster relief bill after finding no power enumerated to the federal government to provide aid. Private donations exceeded the Congressionally approved sum (or so I've heard but cannot find a source atm).
The logical basis is the following: by taxing someone you remove their purchasing power and thus naturally cut their ability to make charitable contributions. Keep in mind the level of welfare we are already providing far exceeds what the wealthiest provide at their 50%+ tax rates. A lot of the burden falls on "well-to-do but vulnerable to economic shocks" folk, for whom taxation rates have material impacts.
> If social welfare causes a significant change in the average person's risk tolerance, a claim that definitely requires substantiation, then I still might argue this isn't a net negative for the economy or society at large.
This is exactly why I mention moral hazard. We have to ask ourselves what the unintended consequences of the policies are, and how they may be abused.
Risk tolerance shifts due to free money will broadly impact the economy: you will necessarily increase demand as those with jobs suddenly find the opportunity cost of not working preferable. So you either end up with more unemployment or needing to tax at higher rates to meet the new demand for welfare. This might lead to increased economic output from the poorest, but it may also lead to less output from the richest. One thing is certain: risk is adjusted due to government subsidies/theft.
> Perhaps a person who gets the aid of charity benefits more than one who gets welfare, but does the average person?
How will you measure this? Are you quantifying strictly by the total dollars received by the needy? What if you flipped the script and asked whether the outcome matches the expectations of the givers? E.g., do dollars given charitably provide a better outcome for the givers than dollars extracted from the charitable via taxation?
The benefit of the charitable approach is that there can be conditionals: "Do drugs and the money stops" kind of stuff. Society tends to think it is immoral for the government to do that, but arguably the outcomes for the needy would be better if conditions were allowed. Charity also has the added benefit of flowing to local needs rather than from the top down.
> If we see unprecedented unemployment due to AI, how exactly do we expect voluntary charity to expand to meet demand?
What is AI doing that will completely eradicate human work? I struggle to understand. Let's say you're displaced from programming. Could you become a farmer? Would machines undercut your price? Probably. But what if you're good at metalwork and Joe is good at farming, and both of you have lost your jobs to AI? Maybe you decide to consume only from human, non-AI-based businesses. Suddenly, costs aside, people create an underlying economic network of non-AI businesses and start to thrive again. This is essentially what we see with people trying to buy only from [insert preferences here] businesses.
I don't think AI is the scare people make it out to be. Ultimately, it will be a fun thing to watch play out, so long as regulation is minimized.
The key phrase is "over the last 200 years". 200 years ago there were just 1 billion people. People had decades to reskill to a new profession. Their offspring picked a different profession if their parents didn't have good prospects. Changing profession also didn't require half a decade of learning.
Now imagine that AI makes 20% of people redundant over the next 5 years - that's ~1.6 billion people.
In the short to medium term a bunch of people will be out of a job/career.
In the long term society may benefit overall.
On the other hand I’m not convinced humans are evolving fast enough to keep up with modern society. There are increasing rates of anxiety, depression, ADHD, etc, especially in young people. https://www.thecut.com/2016/03/for-80-years-young-americans-...
It doesn't need to for productivity gains to be good for workers. Employment and real wages both are up. The fact that profits are also up doesn't change that.
There's one notable exception! Tech work in the Silicon Valley area. All other jobs should be paid like that to keep up with productivity and inflation since the 70's.
The only way that happens is if millions of people are significantly worse off. Most people can find work in the big economy, that doesn’t mean replacing a 100k/year job with a 25k/year job is equivalent.
Across a long enough time horizon technology tends to make most people better off, but in the short term it can seriously fuck people over.
A 5% decrease is "significantly worse off"? And it's still higher than 2018--it's not really reasonable to expect an increase every single year, you should look at the longer-term trend.
The median isn’t a simple average. The minimum difference is 5%; people who lost 50+% only count as one more person below the old median. So that 5% represents a great deal of pain for millions, not simply a slight haircut.
And sure, the long-term trends reverse things, as I mentioned, but it took 15 years to recover to where it was in 2019. Most Americans are simply cut off from wider economic growth and recovery.
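The median-vs-individual point is easy to see with toy numbers (entirely invented for illustration):

```python
from statistics import median

# Hypothetical incomes (in $k): one person loses 50%, the others ~5%
before = [40, 50, 60, 70, 80]
after = [20, 48, 57, 66, 76]

drop = 1 - median(after) / median(before)
print(f"median drop: {drop:.0%}")  # median drop: 5%
```

The median falls only 5%, yet one person in five here lost half their income. The summary statistic hides the worst-hit.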
Things could be different this time. We've pretty much automated our physical labor, what happens when you automate general thinking? Any new job or field will also be able to be done by AI.
This of course conflates two entirely different things.
1) technology development has tended to hugely improve society-wide productivity and be a general (though not unmitigated) good.
2) technology development has been absolutely shit for many individuals as their careers disappear.
people should be way more worried about good national governance and safety nets to deal with the terrible consequences of 2 while we reap the benefits of 1.
> Instead, workers tend to have higher productivity, which drives higher wages.
workers are capturing less and less of that, especially over the last twenty years.
...for instance, by a larger shift towards service-sector jobs (e.g. janitorial work, dining and entertainment, and retail sales).
It is the case that productivity has grown with automation, but at the same time median wages have stagnated, as the number of high-paying jobs has steadily shrunk. Considering inflation, median income has been stagnant since about 1965. But productivity is at an all-time high. All the wealth this generates is not going out into the world; it's being concentrated.
I'm in automation with physical machines, and there's a part of me that sincerely hopes that the continuing automation of various jobs leads to a golden age where society's basic needs are always met by robots and we're free to pursue our passions. But I'm honestly not optimistic that will happen without a series of (likely bloody) revolutions and counter-revolutions, until either the species is extinguished or our social system finally achieves a new equilibrium. AI can definitely be understood as an invasive species or a natural disaster in terms of the impact it has on our social ecosystem.
You are vastly overselling current-generation AI here. It can do some things -- GitHub Copilot has been useful for reducing boilerplate, for example -- but at actual programming, which 98% of the time is maintenance (fixing bugs, debugging, adding tests, refactoring, adding features), it performs mostly badly. It's only good at generating code and maybe "understanding" some of it. Prove me wrong with links and the AI equivalent of CodePen (something that's a very glaring omission in the area).
Secondly, bosses try to fire the most expensive programmers ever since expensive programmers became a thing. All the outsourcing to wherever the salaries are much smaller is being attempted even as I type this now, by probably no less than 1000 companies, in this very second. Why hasn't the area in general still gotten rid of their expensive programmers?
Easy -- the outsourced "talent" produces crap that then costs more to fix and repair than if you had hired proper programmers in the first place. Of course, that requires some forward thinking that isn't limited to the short-sighted "we are about to save money muahahaha" mindset, and as we know, many businessmen are incapable of looking forward -- hence this mistake is made 24/7.
It's 99% the same with these AI coding bots.
Wishing cost savings into existence so far hasn't worked.
Will some interns, and smart people who coast on much-higher-than-local wages because they wrote some Python to import their boss's Excel spreadsheets and automatically generate other stuff, lose their jobs? Yes, that's very likely.
Will the senior programmer be replaced? It's not a zero chance, surely, and we already saw significant layoffs in most big US companies, but (1) most of the world didn't over-hire during COVID, (2) it's unclear whether senior devs were fired or the cuts were actual redundancies, and (3) well, most of the world isn't the USA.
So again, you are vastly overselling the current generation AI.
Whether or not the tech can actually live up to the hype doesn't mean execs/VCs/etc won't try to get there anyway. That will, at best, result in a ton of volatility, as those trying to utilize AI figure out that it actually can't do what they want, and then have to hire engineers again, etc...
> Many of us would be out of work, but if we can redistribute the AI-generated surplus, this will still be a net-win for us all.
This won't happen. It does NOT trickle down, Mr. Reagan. The past 40+ years, every single graph you pull up shows the widening gulf between the wealthy and the not.
Arguably we haven't seen redistribution of wealth due to past automation advances, so it seems unreasonable to believe it will happen now. As automation has improved in the last decades especially, wealth has disproportionately moved upwards.
>if we can redistribute the AI-generated surplus, this will still be a net-win for us all.
which historical technological development that hugely improved productivity do you feel has led to an improved universal social safety net / public investment in whatever country you're from?
Yes, the complete impotence of anti-trust and big-business regulation over the last 50-60 years.
And any political willpower that exists is constantly used for smearing campaigns for either side of the political spectrum (e.g. look at them, they are the bad guys).
Automation tax is also the worst possible outcome, since it both stifles innovation AND doesn't address the redistribution problem.
> but if we can redistribute the AI-generated surplus
History suggests that we cannot.
This is very much like saying "but if we can redistribute the wealth of all the billionaires". Like, uh huh, but that won't happen because, short of fear of murder, people winning a class war have no incentive to stop.
Millions of people being put out of work might change the political landscape quite a bit. We’re talking about a technological revolution, so I don’t think it’s too crazy to consider an economic or political revolution to go with it.
The (first) French Revolution, the period in history to which all revolution is compared, did not distribute equality to the masses. It distributed The Terror, and then it distributed Emperor Napoleon, and then it distributed the Bourbon monarchy right back into power.
Then the (second) French Revolution changed one king for another.
Then the (third) French Revolution distributed another fucking Emperor Napoleon.
Revolutions aren't as equality-building as people want to believe.
> Millions of people being put out of work might change the political landscape quite a bit.
No, it won't. Even during the vid, when people couldn't work, the politicians found a way to enrich the wealthy while only doing the bare minimum for those who lost their jobs.
A lot of these AI companies are promising the world and of course no one can deliver on that.
I think it's more likely we'll enter another AI winter first after all these impressive party tricks get boring and investors realize that just because a product uses AI doesn't mean it provides any value.
Failing to meet investor hype would be the biggest reason in my book.
Also it seems like training and running these systems is incredibly costly and the prices AI companies charge for their products are being subsidized by investor money, which won't last forever.
Good point. I've wondered about costs becoming prohibitive. However, I've seen impressive optimization of some models where compute requirements are reduced by orders of magnitude. I'm not sure whether those gains will transfer to all expensive AI use cases, though.
If we were to pay for the actual costs at this point, I do wonder how many people would consider it worth the expense. But I wonder, how much should ChatGPT cost, for example?
The printing press replaced the need for scribes but introduced the need for typesetters.
Talented scribes did cool things with illustrations and flourishes that were lost in the transition, but ultimately spent quite a lot of their time stroking the letter "e" or "i" or whatever.
Meanwhile, typesetters found themselves in a whole new creative domain which was a different one than the scribes had been working in while also being informed by it and related to it. With their different workflow and this different domain, they were able to innovate in many ways unique to typesetting but also in ways that would circulate back to calligraphers and other inheritors of the hand-crafted letterwork tradition.
I'm personally not sold that we're soon to see LLM-based code generators replace software engineers anyway, but things are not so black and white as you suggest even were that to happen.
That's fine, but once one assumes the impact of this technology is wholly without precedent then they're left speculating about a future informed only by their own imagination.
They'll always be able to re-affirm their own preconceptions (fears) because it's the "just so" fantasy future of their own making.
I don't see the point of coming to HN to trade those invented stories, as people here traditionally push to stay within the engineer's realm of how real things work, how they fit into the history of innovation, and what people might practically build with those things based on how they work.
There are countless speculative fiction communities better suited to idle "yeah, but what if the sky was purple tomorrow?" discussions.
Eventually we will reach the limits of what can be discovered with physics. The same applies here: eventually, the limit of what a human can improve on through a job will be reached. Is this that point? Idk, but at some point "new jobs" won't be made anymore.
This is like saying that because the possible books that can be written are finite (assuming some max length), eventually every book will be written and writers will have nothing to do. While mathematically true, the word "eventually" is doing a lot of work.
Your reference would only be true if we could actually catalog, index, reference, retain, and absorb all there is to know about the physical world into a model so simple we can still comprehend it. That's... unlikely.
More likely, progress, discovery, and improvement behave like dispersing bits of fog in an intractably large cloud that's always creeping back in on the clearings you made previously. You can sustain positive progress, but it's asymptotic at best, and there are always regressions eating away at what you've done in the past.
So don't worry, there's always going to be more kinds of work to do, just like there's always going to be more physics to study.
Kind of like how art was supposed to be what humans would be doing while AI does the jobs we don't want, but looks like that will be the first thing to fall to the machines, while humans fight for carpet installation and plumbing jobs (for a while).
AI still can't do art. Tacky AI generated imagery is mid-2020s clip art, already recognizable to consumers and signalling negative brand associations like "cheap", "scam", "low quality."
Have you ever seen people dreaming of becoming a soulless cog in the machine creating forgettable 3D assets for mobile video games at a no-name studio once we achieve "full automation"?
Me neither.
What people dream about is being able to go to a field and paint or sing at their leisure, not engaging in the soul crushing rat race that is artistic entrepreneurship.
AI is replacing the jobs that make money but are most certainly creatively bankrupt.
The difference with the self-driving-car hype is that cars need to be 99.999% good -- pretty much perfect -- to be useful on the road and incorporated into the mainstream. AI doing some tasks 90% as well as a human is good enough. Self-driving cars improved massively in the last 15 years. I remember 15 years ago when DARPA ran their first self-driving challenges, and the current tech we have is like magic compared to what we had back then.
But with software doesn't technical debt accumulate over time when low quality engineers keep working on it?
That's why starting projects with something like Cursor makes you seem superhuman, but as the project grows the AI is more likely to get stuck because of earlier low-quality choices. Just like with self-driving cars, it seems you need strong supervision (at least for the foreseeable future).
> Are we about to enter a software engineering winter? People will find new careers, no kids will learn to code since AI can do it all.
It's important to remember that the reason AI can do it at all is that millions of software engineers wrote decent/good code and made it available on the internet. The reason it's going to be a winter is that going forward, there's going to be a huge reluctance to share code by people.
And that AI will never write code better than humans. It will write it 90% as well, still requiring a human to fix the last 10%.
It will further create a bimodal distribution of wages in the software industry: those who know how to clean up the last 10%, and those who don't (AI prompt monkeys). The rift between these two categories will keep widening.
I would have dismissed this thought a year ago, but seeing how fast OpenAI is moving, in five years those AI assistants will be what human junior/intermediate devs are today.
That's exactly what people were saying about self-driving cars 15 years ago. "We're so close, within 5 years we'll have full self-driving, and in 10 nobody will need a driver's licence!"
The fact that these agents don't sleep is what will really kill human developers.
Even with my nearly 15 years of experience, I'm not sure I see companies justifying the cost of employing me soon when someone half as "capable"* as me can work relentlessly and tirelessly at churning out half-baked features.
*I doubt AI agents will be able to use bigger picture foresight and reasoning (especially reasoning as software pertains to human user experiences) to architect sane applications (as we understand them today, at least) but this likely won't matter in a vast majority of cases.
If this is the case, it would be more in line with how other complex professions work.
Entry level positions in fields like medicine, law, the sciences, architecture, engineering etc can require years of intensive training before you're skilled enough to take on the role even at entry level.
Software engineering still needs a human in the loop, just like art still needs to be prompted and tweaked by a human that can do composition, or writing that needs to be edited and refined, and so on.
AI can't do 100% of the jobs, but it seems like we're somewhere past the 100x capabilities point. AI should be able to make a good employee able to do 100x the output at the same level of quality, and AI only gets more efficient and capable from here on out.
Lawyers doing highly specialized work like doc review can use million token context lengths to achieve days or weeks of work in minutes or hours. Doctors can review huge quantities of literature in search of information relevant to their patients, maximizing the value of their time. Any knowledge work, anything that has repeatable processes, anything not requiring physical work in the real world, will be subject to increasing, accelerating, and unstoppable automation. Market forces will reward efficiency, and eliminating human overhead with relative cost reductions approaching 99.99% is a victory for companies nimble enough to pull it off.
The next wave of AI will just take one high-level goal, and prompt you to reduce ambiguity. You’re already seeing it with this Devin thing, but it’ll get way more effective.
Lawyers will usually bill at a 1 hour minimum, occasionally 30 minutes.
Lawyer A cares about productivity and is paid by the hour. He charges 1 hour of doc review at $4000, using the latest and greatest technology, and is able to bill that hour to 100 clients. He begins charging $3500/hr to undercut the competition, and intersperses each week of work with training his paralegals and associates to amplify their capabilities.
Law Firm B doesn't care about productivity and bills 1 hour of doc review at $4000 to 100 clients, but has to pay 100 lawyers in turn. Several weeks later, their clients have left for the cheaper, higher-quality competition.
Being paid hourly doesn't mean you benefit from inefficiency. Small hungry shops who can rapidly embrace and integrate new technology are going to eat giant firms alive.
When you can set up agents to replace employees, consultants, contractors, and so on, even if the AI is only 90% as good as the average human, that's a staggering advantage for fast movers.
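The doc-review comparison above is easy to put in back-of-envelope numbers. The billing rates and client counts come from the example itself; the $200/hr fully loaded cost per lawyer is a hypothetical assumption for illustration:

```python
# Back-of-envelope sketch of the Lawyer A vs. Law Firm B example above.
# The $3500/$4000 rates and 100 clients come from the example;
# the $200/hr fully loaded cost per lawyer is an assumed figure.

CLIENTS = 100
COST_PER_LAWYER_HOUR = 200  # hypothetical assumption

# Lawyer A: one lawyer plus AI tooling bills one hour of doc review
# to all 100 clients at the discounted $3500/hr rate.
margin_a = 3_500 * CLIENTS - COST_PER_LAWYER_HOUR

# Law Firm B: 100 lawyers each bill one client at $4000/hr.
margin_b = 4_000 * CLIENTS - COST_PER_LAWYER_HOUR * 100

print(margin_a)        # margin per hour, earned by a single lawyer
print(margin_b / 100)  # margin per lawyer-hour across 100 lawyers
```

Under these (assumed) costs, the solo AI-augmented lawyer clears roughly 90x the margin per lawyer-hour, which is the "staggering advantage for fast movers" in concrete terms.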
> Being paid hourly doesn't mean you benefit from inefficiency.
That is exactly what it means and what happened.
Lawyers are among the last professions to move to the digital world.
For ages, work was still done on paper, and even now most law firms don't use any content search, autocomplete, or content organisation (even pre-LLM tools) that could be considered "modern".
We’re still not quite there, but you’re correct.
This tech could free up software engineers to focus on more interesting things. But that’s also true every time there are layoffs. Those engineers they got rid of were free to focus on more interesting things, had the company wanted to utilize them that way. Instead, of course, they reduced headcount to reduce cost.
This is correct but somewhat unfair, given that it applies to any technological project: the purpose of a technological project is to improve efficiency in some workflow, and improved efficiency means less demand for worker time.
I don't know which part of technology you work in, but I can probably spin it as making it possible for some manager to reduce staff.
They're 100% shooting for being able to fire most engineers. That's the dream that allows raising a bunch of $$$. Just know there's a reason their product is a closed demo. Any working "agent" will require intervention. If a product requires intervention, you still need someone managing the AI and they just become software devs using a powerful tool.
How many coders do you know that can make AI improvements? That ability is already reserved for the top humans. AI doesn’t even need to reach this bar to be better than the average coder.
I don't really think it is the "top humans" who are doing AI, it is just people whose skillsets and interests mesh with what is used in AI now. I will say that I am doing AI work and I am certainly not a top human when it comes to intellect.
> "Devin correctly resolves 13.86%* of the issues end-to-end... previous state-of-the-art of 1.96%."
While the cliff everybody will be shoved off is drawing closer, engineers still seem to have plenty of room to do their job. Whether the next shove comes in months or years is still too early to tell.
Or maybe the best engineers will be able to shine, since they combine the best AI-management ability with real coding knowledge.
Let’s give all the shovelers spoons and live in the past where we can ignore the benefits technology can bring because we’re so afraid of what it means about the way we run society.
Because the human talent will not have many alternatives left in this scenario. So far they could stay in the technical lane, now they will be forced into AI-management.
If AI gets good enough to wholesale replace developers, that is amazing news for the world. It's basically AGI. Productivity and GDP growth would skyrocket, tax receipts will explode, gov debts will be paid off, etc.
At first it would be a big win for companies. But as jobs get absorbed, the productivity boost will become pointless. The pool of potential buyers will shrink because of unemployment, and we will end up with the opposite of what you are describing.
I'm not saying that's what will happen, nor your version. Things are never that simple.
But shouldn't we be more cautious, and take time to understand the shift and prepare as a society for this big change? Why rush into something that could negatively affect millions of people's lives in the hope of a productivity boost?
The outcome is not all good, not all bad. We need careful thinking and planning.
Something I kind of tell myself is that if AI effectively becomes a drop-in replacement for software engineers (or is at least 95% as good as the median one) it's going to suck because I lost my job and we've killed one of the few good careers that exist out there, but on the other hand, look at the big picture. I'm not even sure the "software" industry will exist.
I mean, think about it. Rationally speaking, if people can hire AI engineers at a fraction of the cost of a minimum-wage employee, then the price pressure on software will be "significant", to put it mildly. The more complex a piece of software is, the more incentive there is for open-source developers to collaborate and pool their resources to make a cheaper alternative using AI agents. This logic could even extend to the AI agents themselves.
I know my prediction is deeply flawed, but basically, if we create an AI that's a drop-in replacement for (most) software engineers, we're probably going to have a massive deflationary crash not just from the software sector, but in the economy in general considering that many white-collar workers have also probably been automated, with the consequences that follow.
I might lose my job, but I'm going to take so much more down with me. My loss is their catastrophe. Sounds terrifying, maybe terrifying enough for our political representatives to address the elephant in the room, that an AI just took out their tax base, the people who still have a job probably make so little that their tax contributions are cancelled out by government benefits (not just welfare mind you, but the simple fact that they drive on free roads), and taxing corporations won't fix this when corporate revenue is probably going to also decrease.
Otherwise though, we're notoriously bad at predicting the future and estimating the general difficulty of something as ambitious as trying to replace certain careers with AI or robots. Any argument you could make for why software engineers (or white-collar work in general) will be different this time was probably made earlier by someone in reference to something like self-driving cars or physical labor, or possibly in reference to a prior attempt at automating white-collar work using a more primitive form of AI.
Best course of action, individually, is probably to skate to where the puck is heading and make an earnest effort at improving your productivity with software copilots, but otherwise I think the least likely outcome is that automating the software engineering profession is as easy as the e/acc crowd on Twitter would have you believe.
Also, something else to consider, I kind of consider the automation of writing code not all that different from the automation of architectural and engineering drawings. No doubt some engineers were distraught when computers took the fun out of drawing, and professional draftsmen were devastated, but otherwise engineering is still viable enough as a career for parents to pressure their kids into studying it.
Perhaps a more realistic prediction of the future is that we start to identify a little more as designers, concerned more with the design of software and how it functions, rather than engineering concerns like making code performant and efficient.
I felt that software engineering as a career had a short shelf-life around 2020, but I wasn't too worried about it, as I figured I'd probably just transition to design as a career, I kind of think graphic design is cooler anyway. I began to have doubts about this plan once DALL-E caused an existential panic among illustrators, but otherwise I think it still has legs.
Those who adapt will probably do so by essentially becoming UI/UX designers who happen to also know how to code well, and probably know some other more general design skills just to round things out a little more.
The answer is not hating AI, it’s implementing UBI when AI produces 10x more wealth and resources for humanity. That’s a people problem, not an AI problem.
And furthermore, we’re not even remotely close to that. I guarantee this product actually sucks and is totally useless in practice. No offense, the tech is just not there yet. Saying this as someone at the cutting edge of AI and software development.
If you are so sure this is happening you can just buy some stock or calls on these AI companies and you will become a millionaire, no need to worry about your job.
How is this any different from the industrial revolution ushering in a new age, or for that matter any technology that creates huge efficiencies in labor? The truth is that the future is still very uncertain, and while the easiest thing to do is yell "they took er jerbs" at the top of your lungs, maybe think about how to effectively move forward into the future.
There is a book called "Who Moved My Cheese?" and I wouldn't say it's an amazing book, but the takeaway is the concept that not everything lasts forever, especially in relation to job security.
You seem to be really dumbing down AGI, considering AGI will likely do everything a human can do, as well as many things a human cannot do, and all of those things will be done vastly faster and better.
Your statement augments my point. All human beings should be concerned about their place in a system that doesn't need them, to the extent it's constructive to think about.
> How is this any different than the industrial revolution ushering in a new age, or for that matter any technology that creates huge efficiencies in labor?
Indeed it isn't. It's just that at each technological breakthrough, a number of (vocal) people think it's a new type of revolution, ignoring those of the previous centuries. /shrug
The "goal" of AGI seems to be a pretty subjective thing, doesn't it? Idealistically, I could believe that the goal of AGI is to help humans build the future faster, and expand our footprint beyond what we currently believe is possible. Your pessimistic view of AGI is definitely valid, but it doesn't make it anything more than opinion, just like my view.
AGI is being developed by companies which have billions to fund it, such as Google, Meta, and Microsoft. Their goals are not subjective. These are ruthless for-profit entities that exist to increase shareholder returns.