Former Google chief executive Eric Schmidt is launching a $125mn philanthropic project to fund artificial intelligence research that solves “hard problems” in the field, including issues of bias, harm and misuse, geopolitical conflict, and scientific limitations of the technology.
The fund, known as AI2050, will be paid out over five years to individual academics. It will be co-chaired by Schmidt and James Manyika, Google’s new head of technology and society, both acting in a “personal capacity”.
The initiative launches at a time when corporations, governments and civil society are widely debating the societal impacts of artificial intelligence, including how to neutralise its toxic consequences, and its effect on jobs and the economy. Issues undermining trust in AI include the poisoning of public discourse by social networking algorithms, and deliberate weaponisation of AI technologies such as deepfakes.
“The hard problems list was inspired by the Hilbert math problems from 100 years ago . . . A lot of people have expressed concerns [about AI] but very few people are working on solutions to them,” Schmidt told the Financial Times. “If we can find the next generation of researchers who are perfectly timed to make discoveries in these areas, that’s a great outcome.”
It comes as Schmidt continues to be a key AI powerbroker. He chaired the US’s National Security Commission on Artificial Intelligence until October last year, when it warned China was on track to surpass the US as an AI superpower and urged co-operation between the US, Japan, South Korea and Europe to counter Chinese capabilities.
The remit of AI2050 is broad: to create AI technology that everyone can generally agree is beneficial to society by 2050. Projects being funded include how AI can help measure and mitigate socio-economic inequality, and developing new, powerful algorithms called liquid neural networks, inspired by the human brain.
Tech companies that are leading research into AI in the US and China, including Google, Amazon, Facebook’s parent Meta and ByteDance, have been under fire for their lack of ethical approaches in building AI systems, including using them for surveillance purposes, and algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.
Schmidt said he wants to avoid making the same mistakes that were made with some of the technologies we use widely today.
“I don’t think we understood the impact of society from social media — both the positive and the negative. And AI has the potential to have both a greater positive and negative impact, because of its ability to understand and target and change people’s behaviour, belief systems,” he said.
Schmidt said the money would be ringfenced for philanthropic purposes, and would therefore not be awarded to those working for corporations, although he pointed out that AI academia and research tend to be porous, with researchers moving frequently between university and company labs such as DeepMind and OpenAI.
While Google is not involved with this fund, Schmidt said: “This money comes from Google wealth, so there is a certain recycling, Google creates the wealth and that is recycled into these broader societal purposes.”
The first six fellows awarded grants by Schmidt Futures, the philanthropic initiative of Schmidt and his wife Wendy, include University of California, Berkeley academics Stuart Russell and Rediet Abebe, a co-founder of Black in AI whose work focuses on AI inequality and distributive justice.
According to Manyika, even established academics, let alone junior researchers, have been unable to work on intractable AI problems because they struggle to secure funding. The competition for funds, even at pre-eminent universities such as Stanford and Oxford, arises because AI investment is concentrated in corporations rather than research organisations, he added.
“They’d say, it’s hard to get these big, ambiguous questions funded . . . typically grants are very narrow,” said Manyika. “Most of the money going into AI is focused on building commercial applications. It’s what you’d expect because these technologies will be incredibly commercially useful. But money going towards these difficult problems, it’s much harder.”