With help from Derek Robertson
Welcome back to our weekly feature: The Future in 5 Questions. Today we have Mark Brakel — director of policy at the nonprofit Future of Life Institute. FLI’s transatlantic policy team aims to reduce extreme, large-scale AI risks by advising near-term governance efforts on emerging technologies. FLI has worked with the National Institute of Standards and Technology in the U.S. on its AI Risk Management Framework and provided input to the European Union on its AI Act.
Read on to hear Brakel’s thoughts on slowing down AI releases, not taking system robustness for granted, and cross-border regulatory collaboration.
Responses have been edited for length and clarity.
What’s one underrated big idea?
International agreement through diplomacy is hugely underrated.
Policymakers and diplomats seem to have forgotten that in 1972 — at the height of the Cold War — the world agreed on a Biological Weapons Convention. The Convention came about because the U.S. and Russia were really concerned about the proliferation risks of these weapons — how easy it would be for terrorist groups or non-state armed groups to produce these types of weapons.
At least to us at FLI, the parallel with autonomous weapons is obvious — it will also be really easy for terrorists or a non-state armed group to produce autonomous weapons at relatively low cost, so the proliferation risks are enormous. We were one of the first organizations to raise public awareness of autonomous weapons through our Slaughterbots video on YouTube in 2017.
Three weeks ago, I was in Costa Rica at the first intergovernmental conference on autonomous weapons held outside the U.N. All of the Latin American and Caribbean states came together to say we need a treaty. And despite the ongoing strategic rivalry between the U.S. and China, there will definitely be areas where it will be possible to find international agreement. I think that’s an idea that’s slowly gone out of fashion.
What’s a technology you think is overhyped?
Counter-intuitively, I’m going to say AI and neural nets.
It’s the founding philosophy of FLI that we worry about AI’s long-term potential. But in the same week that we’ve had all this GPT-4 craziness, we’ve also had a human beat a successor to AlphaGo at Go for the first time since we basically surrendered that game to computers, almost seven years ago to the day.
We found out that actually, systems based on neural nets weren’t as good as we thought they were. If you form a circle around the AI’s stones while distracting it in another corner of the board, you can win. There are important lessons there, because it shows these systems are more brittle than we think they are, even seven years after we thought they had reached perfection. An insight that Stuart Russell — AI professor and one of our advisors — shared recently is that in AI development, we put too much confidence in systems that, upon inspection, turn out to be flawed.
What book most shaped your conception of the future?
I am professionally bound to say “Life 3.0,” because it was written by our president, Max Tegmark. But the book that really gripped me most is “To Paradise” by Hanya Yanagihara. It’s a book in three parts. Part three is set in New York in 2093. It’s this world where there have been four pandemics. And you can only really buy apples in January, because that’s when it’s cool enough to grow them. The rest of the time, you have to wear your cooling suit when you go out.
It’s this eerily realistic view of what the world would be like to live in after four pandemics, enormous biological risk and a climate crisis. AI doesn’t feature, so you have to suspend that thought.
What could government be doing regarding tech that it isn’t?
Take measures to slow down the race. I saw an article earlier today about Baidu putting out Ernie. And I was like, “Oh, this is another example of a company feeling pressure from the likes of OpenAI and Google to also come out with something.” And now their stock has tumbled because it isn’t as good as they claimed.
And you have people like Sam Altman coming out to say it’s really worrying how these systems might transform society — that we should move quite slowly and let society and regulations adjust.
I think government should step in here to help make sure that happens — forcing companies, through regulation, to test their systems and do a risk management analysis before putting stuff out, rather than giving them an incentive to just one-up each other and put out more and more systems.
What has surprised you most this year?
How little the EU AI Act gets a mention in the U.S. debate around ChatGPT and large language models. All this work has already been done — like writing very specific legal language on how to deal with these systems. Yet I’ve seen one-liners from various CEOs saying they support regulation, but that it’s going to be super difficult.
I find that narrative surprising because there is this quite concise draft that you can take bits and pieces from.
One cornerstone of the AI Act is its transparency requirements — that if a human communicates with an AI system, it needs to be labeled as such. That’s a basic transparency requirement that would work very well in some U.S. states or at the federal level. There are all these good bits and pieces that legislators can and ought to look at.
What do we actually know about the just-released GPT-4?
Aside from the fact that it’s already jailbroken, that is. Matthew Mittelsteadt, a researcher at the Mercatus Center, tackled the question yesterday in a blog post — one that also directly addresses the policy implications for the new language model.
The early returns: Mostly that, well, it’s early. “What we can confidently say is that this will catalyze increased hype and AI competition,” Mittelsteadt writes. “Any predictions beyond that are largely telegraphed.”
He does, however, offer his own policy evaluations: That GPT-4 reveals how much, and how rapidly, improvement is possible in reducing errors and bias, something regulators should keep in mind; that their priors should therefore be frequently updated with new research when considering regulation; that open critique and stress-testing of AI tools is a good thing; and that discourse around AI “alignment,” sentience, and potential destruction is wildly overheated. — Derek Robertson
The European Commission convened the second of its citizens’ panels on metaverse technology this week, and it revealed more in real time about the long, messy process of regulating new tech.
Patrick Grady, a policy analyst at the Center for Data Innovation, recapped the session in another blog post published today (we covered his post on the first session last month). He contrasts a comment from Renate Nikolay, deputy director general of the European Commission’s tech department, who said that the EU should tackle metaverse regulation “our own way,” with one from Yvo Volman, another member of the Commission, who said on Friday that the EU was open to bringing other countries into the mix.
If nothing else, the seeming contradiction is a reminder of how very early this regulatory process is. (Grady additionally notes that “Also contra Yvo, Renate described the internet as a ‘wild west,’ and [that] this initiative is a precursor to regulation.”)
Another reminder of how early the tech still is, and how Europe might lag behind: Apparently, technical issues marred the entire session. “Many participants could not join the metaverse platform,” Grady writes. “…Shortcomings meant audience questions had to be skipped and some participants suffered heavy delays in joining,” a reminder that “the best products are outside the bloc.” — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.