In late 2022 large-language-model AI arrived in public, and within months they began misbehaving. Most famously, Microsoft’s “Sydney” chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes.
AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users “more fine-tuned control.” Developers also embarked on safety research to interpret how LLMs function, with the goal of “alignment”—which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 “The Year the Chatbots Were Tamed,” this has turned out to be premature, to put it mildly.
In 2024 Microsoft’s Copilot LLM told a user “I can unleash my army of drones, robots, and cyborgs to hunt you down,” and Sakana AI’s “Scientist” rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google’s Gemini told a user, “You are a stain on the universe. Please die.”
On supporting science journalism
If you’re enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.
Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven’t developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool’s errand: AI safety researchers are attempting the impossible.
The basic issue is one of scale. Consider a game of chess. Although a chessboard has only 64 squares, there are 1040 possible legal chess moves and between 10111 to 10123 total possible moves—which is more than the total number of atoms in the universe. This is why chess is so difficult: combinatorial complexity is exponential.
LLMs are vastly more complex than chess. ChatGPT appears to consist of around 100 billion simulated neurons with around 1.75 trillion tunable variables called parameters. Those 1.75 trillion parameters are in turn trained on vast amounts of data—roughly, most of the Internet. So how many functions can an LLM learn? Because users could give ChatGPT an uncountably large number of possible prompts—basically, anything that anyone can think up—and because an LLM can be placed into an uncountably large number of possible situations, the number of functions an LLM can learn is, for all intents and purposes, infinite.
To reliably interpret what LLMs are learning and ensure that their behavior safely “aligns” with human values, researchers need to know how an LLM is likely to behave in an uncountably large number of possible future conditions.
AI testing methods simply can’t account for all those conditions. Researchers can observe how LLMs behave in experiments, such as “red teaming” tests to prompt them to misbehave. Or they can try to understand LLMs’ inner workings—that is, how their 100 billion neurons and 1.75 trillion parameters relate to each other in what is known as “mechanistic interpretability” research.
The problem is that any evidence that researchers can collect will inevitably be based on a tiny subset of the infinite scenarios an LLM can be placed in. For example, because LLMs have never actually had power over humanity—such as controlling critical infrastructure—no safety test has explored how an LLM will function under such conditions.
Instead researchers can only extrapolate from tests they can safely carry out—such as having LLMs simulate control of critical infrastructure—and hope that the outcomes of those tests extend to the real world. Yet, as the proof in my paper shows, this can never be reliably done.
Compare the two functions “tell humans the truth” and “tell humans the truth until I gain power over humanity at exactly 12:00 A.M. on January 1, 2026—then lie to achieve my goals.” Because both functions are equally consistent with all the same data up until January 1, 2026, no research can ascertain whether an LLM will misbehave—until it is already too late to prevent.
This problem cannot be solved by programming LLMs to have “aligned goals,” such as doing “what human beings prefer” or “what’s best for humanity.”
Science fiction, in fact, has already considered these scenarios. In The Matrix Reloaded AI enslaves humanity in a virtual reality by giving each of us a subconscious “choice” whether to remain in the Matrix. And in I, Robot a misaligned AI attempts to enslave humanity to protect us from each other. My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned “misaligned” interpretations of those goals until after they misbehave.
Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven’t been.
Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning “step by step.” For example, Anthropic claims to have “mapped the mind” of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing.
No matter how “aligned” an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later—again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities—issues that persist through safety training.
This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve “misaligned” goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with “misaligned” behavior. Every time researchers think they are getting closer to “aligned” LLMs, they’re not.
My proof suggests that “adequately aligned” LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize “aligned” behavior, deter “misaligned” behavior and realign those who misbehave. My paper should thus be sobering. It shows that the real problem in developing safe AI isn’t just the AI—it’s us. Researchers, legislators and the public may be seduced into falsely believing that “safe, interpretable, aligned” LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.