I'm curious, what can I, as a full-stack developer, do to prepare for things like GPT-X eventually making a lot of the work I do obsolete in the next 10-20 years?
Seeing all these demonstrations is starting to make me a little bit nervous and I feel it is time for a long term plan.
The parts of programming that are going to get automated are going to be the parts that require little skill, take a long time, and are boring as hell: writing boilerplate CRUD code, wiring up buttons to actions, etc.
Automating the harder and more interesting parts of programming is many orders of magnitude more difficult. This requires a true understanding of the problem domain and the ability to "think." GPT-3 and similar are just really good prediction engines that can extrapolate based on training data of what's already been done.
The answer therefore is the same as "how do I stay competitive vs. lower skill offshore labor?" You need to level up and become skilled in higher-order thinking and problem solving, not just grinding out glue code and grunt work.
Ruby on Rails scaffolding didn't make backend developers obsolete. I know you said GPT-X, but GPT-3 is already at the frontier of this technology; the jump to GPT-4 will either take much longer or be much less impressive than the jump from GPT-2 to GPT-3. I would say your job is safe from automation by GPT. But the technology that might put you out of a job, which I personally think will not be something like a neural network, might appear as suddenly in the next 10-20 years as smartphones did. To answer your question: be a human; be adaptable, be useful.
On the other hand, when it’s good enough to replace us, it’s also good enough to replace basically any job where you transform a written request into some written output, e.g. law, politics, pharmacology, hedge fund management, and writing books.
I have no idea how to prepare, only that I should.
(Edit: what makes us redundant may well not be in the GPT family, but I do expect some form of AGI to be good-enough in 20 years).
There’s a good book called “Rebooting AI” that does some fundamental analysis about current state of deep learning and its applications.
The biggest problem with GPT or any massive neural net is explainability. When it doesn’t do the correct thing, no one quite knows why. GPT makes all sorts of silly mistakes.
The human brain, although itself a form of neural net, can do some very deep symbolic reasoning about things. Artificial neural nets just don't do that (yet). We haven't figured out how, nor have I seen a system that comes close. We don't even have generic neural nets that can perform arithmetic to arbitrary precision. For computers to learn language properly, they may have to embed themselves in the world for years like children do and learn the relationships between objects in it.
So if I ran a fake-comment farm, I'd worry about GPT. Not so much if I were a programmer or a lawyer. We do some very deep symbolic thinking to produce our work. If computers are able to replace us, they can probably replace a large part of humanity, at which point we have way bigger problems to worry about.
Symbolic reasoning is a very hard problem to crack. Consider something like: "how old was Obama's 2nd child when the US hit 4 digit deaths due to covid-19?" Answering that not only requires context, like knowing that "4 digit" means at least 1,000; it requires a series of lookups and the ability to break a big problem into smaller ones.
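To make the decomposition concrete, here's what the chain of sub-problems looks like written out by hand. Each constant is one "lookup" a system would have to perform; the dates are best-effort illustrations of those lookups, not authoritative answers:

```python
from datetime import date

# Lookup 1: birth date of Obama's 2nd child (Sasha) -- illustrative value
second_child_birthday = date(2001, 6, 10)
# Lookup 2: first day US covid-19 deaths reached 4 digits -- illustrative value
us_hits_4_digit_deaths = date(2020, 3, 26)

# Sub-problem: age on a given date. Subtract a year if the birthday
# hadn't yet occurred in the event's year.
age = us_hits_4_digit_deaths.year - second_child_birthday.year
if (us_hits_4_digit_deaths.month, us_hits_4_digit_deaths.day) < (
    second_child_birthday.month,
    second_child_birthday.day,
):
    age -= 1

print(age)  # 18, under the dates assumed above
```

The arithmetic is trivial; the hard part is that nothing in the question tells you this is the program you need to assemble.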
Siri/Ok Google/Alexa/Cortana/GPT3 - all of them fail.
They can’t even answer “Find fast food restaurants around me that aren’t mcdonalds”.
I'm actually looking forward to more code generation tools. Things like wiring up a button aren't stimulating and I wouldn't mind that level of programming becoming automated.
That’s what I loved about Visual Basic. You could just draw your user interface and specify actions and then just fill in the one or two lines of code that need to run when that button is pressed.
I’m surprised React doesn’t have something like that. At least not that I’m aware of. Is there a GUI interface builder for React?
I am as well, especially ever since I saw [1]. It's a small test that someone tried with GPT-3 that translates natural language descriptions and phrases into shell commands.
Some of the examples from the tweets:
> Q: find occurrences of the string "pepsi" in every file in the current directory recursively
> A: grep -r "pepsi"*
> Q: run prettier against every file in this directory recursively, rewriting the files
> A: prettier --write "*.js"
It seems to work the other way as well: you can give it a shell command and have it write a plain-English description of it.
Granted, sometimes the results are wrong; in one video I saw of someone playing with it, maybe 1 in 10 of the commands were subtly wrong or lacked enough context to generate what was really meant. But as a starting point it seems like such a powerful tool!
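In fact, the grep suggestion quoted above looks like one of those subtly-wrong outputs: in `grep -r "pepsi"*` the missing space makes the shell treat `pepsi*` as a filename glob instead of passing a search path. A corrected version, run here against a small throwaway demo tree so it has something to find:

```shell
# Build a demo tree so the command has something to match
mkdir -p demo/sub
echo "pepsi cola" > demo/sub/a.txt

# Recursive grep takes the pattern, then a path; note the space
grep -r "pepsi" demo

# The prettier example has the same flavor of problem: "*.js" only
# matches the top level. A quoted recursive glob would be:
#   prettier --write "**/*.js"
```

Which is exactly the point about it being a starting point to tweak rather than something to run blindly.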
I personally spend a lot of time looking up shell command flags, thinking of ways to combine tools to get the data I want out of a log or something, or running help commands to figure out the kubectl incantation that will just let me force a deployment to redeploy with the latest image.
Imagine having a VS-Code-style command palette where I can just type a plain-English description of what I'm trying to do, and have it generate a command that I can tweak or just run. It would turn a 10-minute process of recalling esoteric flags or finding documentation into 10 seconds of typing.
If it's really as good as it seems, imagine being able to type stuff like "setup test scaffolding for the LoginPage component" and having it just generate a "close enough" starting point!
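A minimal sketch of what that palette flow might look like, with the model call stubbed out by a lookup table. Everything here is hypothetical (the function names, the deployment name `my-app`); the one real detail is that `kubectl rollout restart` is the actual incantation for forcing a redeploy:

```python
# Hypothetical command palette: plain-English request -> shell command.
# translate() stands in for a call to a code-generation model and is
# stubbed with canned answers purely for illustration.
def translate(request: str) -> str:
    canned = {
        "find pepsi in every file recursively": 'grep -r "pepsi" .',
        "force a deployment to redeploy": "kubectl rollout restart deployment/my-app",
    }
    return canned.get(request.lower(), "# no suggestion")

def palette(request: str) -> str:
    # Return the command for the user to review and tweak rather than
    # executing it blindly, since outputs can be subtly wrong.
    return translate(request)

print(palette("force a deployment to redeploy"))
```

The design point is the review step: the tool proposes, the human disposes.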
Get good at specifying and documenting product requirements apparently.
Also remember that ultimately even if GPT-X is successful at transforming text into working code, all that's done is essentially define a new programming language. Instead of writing Python, you'll write GPT-X-code at a higher level.
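In that framing, the English spec is the source code and the generated Python is the build artifact. A toy illustration with the generator stubbed out (nothing here is a real GPT API; the canned output is just what such a model might plausibly emit):

```python
# Treating the prompt as the program: the spec is the "source", the
# generated Python is the "compiled" output. generate() is a stand-in
# for GPT-X, hard-coded for this one spec.
SPEC = "a function fib(n) that returns the n-th Fibonacci number"

def generate(spec: str) -> str:
    return (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
    )

namespace = {}
exec(generate(SPEC), namespace)
print(namespace["fib"](10))  # 55
```

The programmer's job shifts from writing `fib` to writing, and debugging, the spec.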
GPT-X will be able to perform most copy-and-paste operations soon enough, so those are the kinds of jobs it would make obsolete first. Low-code and point-and-click jobs are the ones that will follow. At first it will be "aiding" developers by suggesting code, and then GPT's successors will finally deliver the "no code, only a business description" promise that has been hanging over the industry for decades.
Of course GPT-3 is not there yet, but it's only a matter of time: the capabilities are there. You are already thinking in decades, which is the right mindset. Fortunately, tech is never "done", so there will always be opportunities, just not in the fields we are looking at right now. Digital products like web or mobile apps will be about as exciting as a custom invoicing Windows app in a matter of years, but then you have IoT, autonomous vehicles, blockchain, and whatnot. Stay ahead of the ball as an engineer.
Of course you can also move up the food chain and become a manager or technical architect or lead.
Being worried about new potentially disruptive tech is legitimate, it's hard to see our place in an environment we can't predict.
However, particularly as a full stack dev, I think it will create more job opportunities than competition. You mention 10-20 years ahead; if you look at that same horizon in the past, it seems (I wasn't working then) that the job also changed significantly, without making devs obsolete.
AGI might happen in our lifetime (I hope so), but I'm dubious that it will happen through a singularity [1]. Therefore, I'm not worried that as tech experts we won't have time to adapt.
It's not clear how much the demos have been gamed for presentation, and it seems more of an opportunity than a threat - it will still need devs to put stuff together, and (assuming it is as impressive as demoed) will take a lot of the donkey work away.
A significant chunk of what devs are paid for is the donkey work. Making every dev significantly more productive increases the supply of dev power relative to the demand, dropping the price.
It may be that deep learning as we know it (TF, PyTorch) is going to be replaced by prompting large models, thus making most applications straightforward for anyone to use.
Your main value add as a developer is understanding the problem domain. Machines won't be able to do this in your lifetime, or your children's, outside some important but very constrained niches.