A week ago my brother sent a message to our family group:
“My team at work launched something! It’s called ChatGPT. Give it a go: https://chat.openai.com”
I talked to ChatGPT for ten minutes and then had a crisis of meaning for a few days. I eventually texted my brother back to say well done, because family will still be important, whatever happens next.
At first I thought this was the end of the world. ChatGPT is nowhere near an Artificial General Intelligence (AGI): an AI capable of performing most tasks that a human can. But until last week I thought that even ChatGPT’s level of abstract reasoning was impossible. It can already – to an extent – code, rhyme, correct, advise, and tell stories. How fast is it going to improve? When’s it going to stop? I know that GPT is just a pile of floating point numbers predicting the next token in an output sequence, but perhaps that’s all you need in order to be human enough. I suddenly thought that AGI was inevitable, and I’d never given this possibility much credit before. I found that it made me very unhappy. This is a post about feelings, not analysis.
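(An aside for the curious: when I say “predicting the next token”, I mean something like the toy sketch below. The VOCAB and toy_model here are made up purely for illustration – the real model is a transformer over billions of learned weights – but the outer generation loop it caricatures really is this simple.)

```python
# A toy version of next-token prediction. This is NOT how GPT works
# internally – the real model is a transformer over billions of learned
# weights – but the loop at generation time really is this simple:
# score every possible next token, pick one, append it, repeat.

import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_model(context):
    """A made-up stand-in for the neural network: returns a probability
    for each token in VOCAB given the context so far."""
    random.seed(" ".join(context))  # fake "learned" scores, deterministic per context
    scores = [random.random() for _ in VOCAB]
    total = sum(scores)
    return [s / total for s in scores]

def generate(prompt, n_tokens=5):
    context = prompt.split()
    for _ in range(n_tokens):
        probs = toy_model(context)
        # Greedy decoding: always take the single most likely token.
        best = max(range(len(VOCAB)), key=lambda i: probs[i])
        context.append(VOCAB[best])
    return " ".join(context)

print(generate("the cat"))
```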
I texted everyone I could to warn them what was coming. I sounded like an uncritical AI futurist. But all the real futurists seemed excited, or at least frantic. I just felt glum. Our parents had got both the property boom and the last shreds of meaning.
I cycled to a friend’s house to watch England play in the World Cup. I thought of Richard Feynman and how he felt after the Manhattan Project. He said that he saw people building a bridge in New York and thought that they were being absurd, that they didn’t understand. There were atomic bombs now, it was senseless to make anything. It would all be destroyed soon.
As I pedalled I watched people through their office windows, composing unclear emails and editing buggy spreadsheets. I hadn’t played any part in GPT, but I felt like Feynman. Why were these people wasting their time? Didn’t they read the news? Human striving was over, this was all going to be annihilated. Just go home and wait.
This was an overreaction. ChatGPT is impressive, but it’s not an AGI or even proof that AGI is possible. It makes more accessible some skills that I’ve worked hard to cultivate, such as writing clear sentences and decent programs. This is somewhat good for the world and, to a first approximation, somewhat bad for me. But I can still write and code better than GPT.
On the other hand, whilst I can peacefully coexist with GPT as it currently stands, it won’t be standing there for long. Perhaps I should calm down; disruption is everywhere and necessary. Artists should be afraid of Stable Diffusion; master weavers were anxious about mechanised looms. But now machines are encroaching on things I care about, so now everyone needs to pay attention.
Even if AGI isn’t imminent, I suspect that big ideas will become more important and their implementation will become more automated. But I don’t have big ideas. If I did then I’d be the kind of person who sees AI as an opportunity, not a threat. Recently there’s been a lot of VC money sloshing around Silicon Valley and not enough programmers. This has made it possible for the programmers who are there to do well in cash and cachet just by being competent implementers. We can even get another rung or two higher by lightly exaggerating the impact of our work during performance reviews. This has allowed me to have a type of success without much brilliance.
It’s been a good deal, but it’s made me professionally complacent. Today’s system works for me; why would it ever change? I assumed there could never be another tech crash. I found AI think-pieces boring and didn’t have time or expertise for the maths. I ignored the boosters, the blowhards, and apparently the experts. Now I’d guess that AI is going to change everything, somehow or another. Luckily I’m allowed to be complacent. I don’t have to be right about the future; I’m not responsible for a company or a product that needs to see this coming. I’ve probably been fortunate enough in the old world that I’m more worried about what AI means for my chances of self-actualisation than for my stability.
I think that AI will make programmers – and almost all other workers – much more productive, but what this means for the industry will depend on the size of the productivity gains. A 50% increase in output per programmer would be incredible (and perhaps even a wild overestimate), but I think it could be absorbed by something that looks like the current tech market. A 500% increase couldn’t.
Some grunt work will get eaten by AI, probably at first as an augmentation to human programmers. What counts as grunt work will depend on how much of GPT’s code we trust to run in production. I don’t know where the people currently doing the digested tasks will end up. Perhaps this will be the dawn of a new golden age of software, with more companies, more jobs, and more pie. Perhaps they’ll have to find something else to do. If there is any kind of divide and cull then I assume – perhaps again complacently – that I’ll be on the right side of it, at least for now. But even if I am, will I enjoy my new job? I already regret not knowing much about operating systems. I’m not sure I can handle another layer of abstraction. Present-day GPT isn’t that much more than a personalised Stack Overflow, but what about in five years?
Shit, shit, shit, what if I’ve been wrong about cryptocurrency too?
Perhaps my distress isn’t even about the practical implications of AI. In the last week I’ve discovered that I care more about status than I thought. Status doesn’t have to mean razzle dazzle; I ride a pedal bike and my hoodies have holes in the elbows. But until now I’ve always felt like part of the main event. I already lost a sort of status when I left San Francisco. In London my industry isn’t the centre of attention. I miss the billboards advertising support desk software and telephony APIs. In SF when I told people that I work at Stripe they nodded approvingly. In London they ask what Stripe is.
I’ve still always been in the growth sector, playing a small part in automating other people. I’m not on the cutting edge of technology; I plumb together libraries I didn’t write on top of AWS just like almost everyone else. But I’ve been on the cutting edge of industry, bolting together pipes that move billions of dollars a day. Now I feel like I might be part of a legacy system, being hauled into the future by AI. Perhaps I only care because it’s my own brother with his hands on my lapels. And what a privilege to be fussing about status on the morning of what I’m claiming might be the apocalypse.
Pause; breathe. This might not even be the apocalypse. ChatGPT confidently hallucinates nonsense, and it can’t absorb enough context to do anything all that useful. The commentators focussing on these shortcomings are either missing or nailing the point; I’m not sure which.
On the one hand, the models are going to get better. Lots of AI labs are working on them, and many are at least aiming for full AGI. Even if today’s obstacles look intractable, how many other seemingly intractable obstacles have been overcome in order to get this far? I know that most knowledge work is design, coordination, and maintenance, not turning well-specified paragraphs into short scripts or emails. But couldn’t the next version of GPT eat your company’s wiki and chat logs and take over operations from there? Obviously it couldn’t, because the docs are incomplete and out of date and no performance review ever incentivised anyone to fix them, but you get the idea.
On the other hand, maybe the impossible problems of the future really will be impossible. From the outside it’s easy to underestimate the size of a field’s remaining challenges and the degree of reliability required in order to be transformational. Five years ago self-driving cars were just around the bend; now most companies seem to be giving up. There’s presumably a hard theoretical limit on the power of Large Language Models like GPT, and I’d guess that this boundary is well short of AGI. Perhaps the next leap is still several lifetimes away, like I used to assume.
On balance, taking both sides into account, I have no idea.
So what should I do? I could try to get into AI myself. I’m sure I could help build some training tools. AI infrastructure will go the way of all other programming jobs, wherever that turns out to be, but at least I’d feel like I was on the inside again. I just checked and there are plenty of AI companies in London, if necessary.
For now I’ll wait and see. Until last week I had vague plans to one day write books about teaching programming, maybe a novel, maybe work on some music, spend more time with the kids. I thought I had a plausible path to a highly circumscribed form of greatness. But I suspect that ChatGPT is in many ways already a better teacher than me; certainly it’s more patient and available. I don’t know how long it will be until AI can write novels and synthwave, but it could be soon. That just leaves the kids. I might have to get comfortable with the idea that I have inherent value as a human beyond what I produce.
That’s melodramatic, I’m sorry. But here’s something concrete – I do think I’m going to have to rely less on my blog for self-worth. I mostly write accessible explanations of complex technical topics, like Tor and Off-The-Record Messaging. These essays don’t require novel ideas; just time and interest and some facility with words. ChatGPT can’t yet write extended prose or explain fine details as well as I can, but it will one day, and it will answer follow-up questions too. Even if it turns out that I have an inimitable stylistic flair that people appreciate and GPT can’t reproduce (a fanciful hope), I’m not interested in editing for hours and hours just for that. I’m not going to stop writing yet, but I expect to need an alternative sideline before too long.
I know I have to listen to the techno-optimists as well as the techno-pessimists. Economic progress requires productivity gains, as melancholy as they can be for the people on the wrong end of them. I haven’t even considered the good things that will come of AI or the presumably invigorating work that will be required to deploy it. I find that much harder to visualise. I’m sure it will be begrudgingly magnificent.
A possible rule of thumb until things become clearer: before getting too deep into a new field, consider whether you’d be OK if it became an old-fashioned hobby that you only do for yourself.