Let’s imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.
In that time period, we’ve gone from AIs that could produce a few reasonable sentences to AIs that can produce full think tank reports of reasonable quality; from AIs that couldn’t write code to AIs that can write mediocre code on a small code base; from AIs that could produce surreal, absurdist images to AIs that can produce convincing fake short video and audio clips on any topic.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?
Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.
With this AI trainer’s help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an “employee” you can “hire.” Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
Welcome to the (near) future
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI’s massive changes to our world are coming fast, and that we’re woefully unprepared for them. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisparagement agreement.
“AI is coming fast” is something people have been saying for ages, but often in ways that are hard to dispute and just as hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it’s built to be falsifiable: every prediction is specific and detailed enough that it will be easy to decide after the fact whether it came true. (Assuming, of course, we’re all still around.)
The authors describe how advances in AI will be perceived, how they’ll affect the stock market, how they’ll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it’ll be really easy to see where it went wrong.
While I’m skeptical of the group’s exact timeline, which places most of the pivotal moments on the road to AI catastrophe or policy intervention within this presidential administration, the series of events they lay out is quite convincing to me.
Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, progress will accelerate beyond even what we’ve seen from 2023 to now, and within a few years there will be massive economic disruption as an “AI employee” becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new “AI employees” internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker. We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to “fix” them. But these end up being surface-level adjustments, which merely conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims, aims we can’t fathom. This, too, has already started happening to some degree: it’s common to see complaints about AIs doing “annoying” things like claiming their code passes tests it actually fails.
Not only does this forecast seem plausible to me, but it also appears to be the default course of events. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, it seems very hard to imagine how it won’t eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case that it will happen sooner than almost anyone expects.
Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe.
By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don’t want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect warning signs that the new, powerful AI systems being developed are pursuing their own dangerous aims, and they worry that people in power will ignore those signs out of geopolitical fear of the competition catching up, as an existential AI race that leaves no margin for safety heats up.
All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?
Definitely. I’d argue it wouldn’t even be that hard. But will they do better? After all, we’ve certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We’ll see.
We live in interesting (and deeply alarming) times. I think AI 2027 is well worth a read: it makes the vague cloud of worry that permeates AI discourse specific and falsifiable, it shows what some senior people in the AI world and in government are paying attention to, and it can help you decide what you’ll want to do if you see it starting to come true.
A version of this story originally appeared in the Future Perfect newsletter.