Hello, and welcome to this week’s installment of The Future in Five Questions. This week we interviewed Tom Lue, general counsel and head of governance at Google DeepMind (which this week launched the AI model Gemini). Lue, a recent guest on the POLITICO Tech podcast, is a veteran of the intersection of emerging tech, policy, and the law. He served as deputy general counsel at former President Barack Obama’s Office of Management and Budget, with earlier professional experience as a congressional staffer and Supreme Court clerk. We discussed the need for AI talent and knowledge in government, the surprising pace of regulatory efforts around AI, and the sci-fi of Kazuo Ishiguro. This conversation was edited and condensed for length and clarity:
What’s one underrated big idea?
Using AI to time travel! Not literally, of course, at least not yet. But AI has already been a powerful catalyst for speeding up innovation and scientific discovery.
Revolutionary discoveries historically have taken years of trial and error and often cost (many) millions of dollars. The Human Genome Project took more than a decade, thousands of researchers, and roughly $3 billion, but its achievements kicked off the modern study of genomics. By automating expensive and painstaking processes, AI can leapfrog many of the barriers limiting researchers today.
For example, scientists for decades tried to find a method to reliably determine a protein’s structure to uncover how it works and help find new medicines. It would take a student an entire PhD to decode just one protein structure. But using our AI system AlphaFold, we were able to map the entire known protein universe — over 200 million proteins — and we made it freely available to researchers around the world, saving countless dollars and years in the lab and dramatically accelerating the pace of progress in structural biology.
This is also true when we look at enabling new technologies. Modern tech, from computer chips to solar panels, relies on stable inorganic crystals, and each new stable crystal can take months of painstaking experimentation to find. Just last month we published GNoME, our AI tool, which discovered 2.2 million new crystals, including 380,000 stable materials that could power future technologies.
And this is just the beginning.
What’s a technology that you think is overhyped?
Overhyped isn’t the right word here, but I think polarizing discourse around AI and existential risk can sometimes generate headlines and overshadow the need to address the more immediate risks that exist in AI systems today. Whether we’re developing AI systems or putting in place the guardrails to keep them safe, it’s critical to consider the entire spectrum of risks, both near term and longer term. That means bringing together the different communities of experts, industry, and civil society that each have important perspectives to share on how AI can be developed and governed most effectively.
What book most shaped your conception of the future?
Kazuo Ishiguro’s “Klara and the Sun” is a beautiful, thought-provoking, and moving story about what a world with digital companions (what the book terms “Artificial Friends”) could look like. All of the book’s themes — around love, creativity, loss, physical and emotional connection, and how AI might powerfully shape our most important life experiences — really stuck with me. And that’s why it’s so important to take a human-centered approach to the development of AI systems, and to make sure they reflect and respect the broad diversity of human experiences and values.
What could the government be doing regarding technology that it isn’t?
We need more AI expertise and talent in government. Having spent nearly a decade in the U.S. government across all three branches earlier in my career, I’ve seen how difficult it can be for policymakers and regulators to keep up with advances in industry — let alone an industry moving at the breakneck pace we’ve seen in AI over the past couple of years.
The good news is that we’re seeing positive signs that governments are ramping up their efforts to recruit top AI talent. In the U.K., for example, PhD-level expertise among senior government officials working on AI safety has grown significantly this year alone, and the new AI Safety Institutes recently announced by the U.S. and U.K. governments are a promising development that should play a key role in continuing to build up that in-house expertise. We also very much welcomed the commitment in President Biden’s Executive Order to accelerate the hiring of AI professionals as part of a government-wide AI talent surge.
What has surprised you the most this year?
The momentum around global coordination on AI. Before this year, there wasn’t much international coordination on AI governance. But 2023 brought a wave of collaboration, with efforts such as the White House AI commitments, the G7 Code of Conduct, the creation of the UN’s High-Level Advisory Body on AI, and the first global AI Safety Summit, hosted by the U.K. government.
These milestones are laying the foundation for established norms for how this technology should be governed. This is crucial as AI models continue to advance at a rapid pace. Indeed, this week Google introduced Gemini to the world — the most capable and general model we’ve ever built. Gemini was built from the ground up to be multimodal, which means it can seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video. Crucially, it also has the most comprehensive safety evaluations of any Google AI model to date.
We’re heading into the new year with models more capable than ever before, with more robust safety testing and international coordination on AI clearly prioritized across the ecosystem.
A New York nonprofit is hoping AI can become a mom’s best friend.
POLITICO’s Sophie Gardner reported today for the Women Rule newsletter on PaidLeave.ai, a language model meant to help New Yorkers figure out how much paid maternity leave they’re entitled to and how to access it. The model is run by Moms First, a nonprofit advocacy group. CEO Reshma Saujani said she connected early in the development process with OpenAI co-founder Sam Altman, who provided guidance from his development team.
The software’s secret sauce: Instead of being trained on the contents of the entire internet, like ChatGPT, it’s trained only on official documents related to paid leave from New York state. By targeting its data set so narrowly, the software vastly reduces the “hallucinations” and irrelevant information ChatGPT is prone to cough up.
Saujani told Sophie the software would “help more women get access to benefits, which is going to boost women’s economic empowerment,” and that she’s currently talking to governors in 13 other states with paid family leave.
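The narrow-corpus idea behind tools like this can be sketched in a few lines: retrieve a passage from a small, curated document set and answer only from that passage. The sketch below is purely illustrative (it is not PaidLeave.ai’s actual code), and the document IDs, passages, and naive keyword-overlap retriever are hypothetical stand-ins for whatever the real system uses:

```python
# Illustrative sketch of corpus-restricted question answering.
# Hypothetical corpus: a couple of short passages standing in for
# official New York state paid-leave documents.
CORPUS = {
    "pfl-overview": "New York Paid Family Leave provides up to 12 weeks "
                    "of job-protected, paid time off to bond with a new child.",
    "pfl-benefit": "In 2023 the benefit is 67 percent of your average weekly "
                   "wage, capped at a share of the statewide average weekly wage.",
}

def retrieve(question: str) -> str:
    """Return the corpus passage sharing the most words with the question.

    Real systems typically use embedding-based similarity; plain word
    overlap is used here only to keep the sketch self-contained.
    """
    q_words = set(question.lower().split())

    def overlap(doc_id: str) -> int:
        return len(q_words & set(CORPUS[doc_id].lower().split()))

    return CORPUS[max(CORPUS, key=overlap)]

# A downstream language model would then be instructed to answer
# only from the retrieved passage, which limits hallucination.
passage = retrieve("How many weeks of paid leave can I take?")
```

The design principle is the point: because every answer is grounded in a hand-picked document set rather than the open web, the model has far less room to invent plausible-sounding but wrong benefits information.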
European legislators tacked on another day of negotiation over the AI Act, after a marathon 22-hour session Thursday failed to reach a settlement.
POLITICO’s Gian Volpicelli reported that negotiators could not agree on AI use by security agencies and law enforcement, including a proposed ban on facial recognition technology. The rift was largely between member governments, which want access to those tools, and the European Parliament, which wants to ban them.
Civil rights groups are not happy: “Drawing out the trilogue is a tactic to force sleepy negotiators to accept a weaker deal,” Sarah Chander, a senior policy advisor at European Digital Rights, told Gian. “There’s a major concern that some in the Parliament will accept a disastrous deal when it comes to facial recognition and predictive policing.”
Negotiations, as of this writing, are still ongoing.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.