Hello and welcome to Eye on AI. In this edition…Trump restricts Nvidia’s H20 exports, but will the policy actually hobble China?…OpenAI contemplates a social network…faster RAG…and dolphin chat.
Last week, Nvidia told investors it was taking a $5.5 billion charge to account for the impact of the Trump Administration’s decision to restrict the sale of the company’s H20 chips to China. Nvidia had created the H20 specifically for the Chinese market to get around export controls on Nvidia’s Hopper H100s and H200 chips and its newest Blackwell B100 and B200 chips. (The H20 is slower than an H100 for training an AI model, but is actually faster at running a model that has already been trained—what’s known as inference. And it turned out that Chinese AI companies had success training competitive AI models on H20s too.)
Given geopolitics and the number of China hawks in President Donald Trump’s circle, Nvidia must have known restrictions were probable, but the timing of the announcement seems to have caught the company off guard. Nvidia CEO Jensen Huang attended a $1 million-a-head dinner with Trump at Mar-a-Lago a week earlier and had just announced plans to move some of the manufacturing and assembly of its Blackwell chips from Taiwan to the U.S., moves the company had hoped might win it a reprieve from both export restrictions and the impact of Trump’s tariff policies, as my colleague Sharon Goldman reported. And Reuters reported that Nvidia’s Chinese sales teams had not informed customers there about the H20 restrictions in advance, another sign Nvidia thought it could persuade Trump to back off.
As Sharon noted in her story, there are plenty of reasons to question the coherence of Trump’s AI policies so far. It certainly won’t be easy—at least in the near term—to square his goals of both promoting U.S. technology companies, like Nvidia, and wanting to see U.S. tech widely adopted by other nations, while also imposing high tariffs on U.S. imports and trying to reshore the production of semiconductors. Add to that Trump’s predilection for cutting deals that may let certain companies or countries escape tariffs and export restrictions, and it’s a recipe for what international relations experts would call “suboptimal” policy outcomes.
For instance, earlier this week, Peng Xiao, the CEO of G42, the United Arab Emirates’ leading AI company, said he was optimistic the country would be allowed access to Nvidia’s top-of-the-line chips again soon, after the country pledged to invest $1.4 trillion in the U.S. over the next decade. In the past, the U.S. has raised concerns about whether G42 had too many ties to China, and concerns remain more generally about Chinese AI companies accessing high-end computing clusters in data centers located outside the U.S. The data center sector still lacks know-your-customer rules, and there is no obvious way to enforce such rules.
This past week also brought two further pieces of news that will complicate Trump’s efforts to stay ahead of China in AI by restricting access to advanced chips. First, Chinese electronics giant Huawei announced it had trained a large AI model called Pangu Ultra. What’s notable is that it did so using its own Ascend NPUs (NPU is short for neural-network processing unit), a computer chip Huawei designed to rival Nvidia’s. Pangu Ultra is similar to Meta’s Llama 3 series of AI models in size and architecture and, according to benchmark tests Huawei performed, does decently at a number of English, coding, and math tasks, although it is still not as good as the latest generation of U.S. AI models or the models that Chinese company DeepSeek debuted in December and January.
It’s unclear if any deficiencies in Pangu Ultra can be blamed on the inferiority of the Ascend chips, although the Ascends are believed to be less capable—lagging the performance of Nvidia’s H100s by about 20% and its Blackwells by at least 66%. But the fact that Chinese companies are creeping up on the performance of Western models using homegrown chips means that restricting the export of Nvidia and other top AI chips may not be enough to ensure the U.S. stays ahead. No doubt Nvidia might look at these results and argue that there’s no reason to restrict its exports at all—since all the export controls are doing is encouraging China to develop its own technology. But, as some AI policy thinkers, including former OpenAI policy researcher Miles Brundage, have argued, all things being equal, it probably still makes sense for the U.S. to make it more difficult for China to train its AI models, and that means the export controls are a useful policy tool, even if they are not a silver bullet.
The other development last week that will pose a growing challenge to bottling up access to advanced AI is news that a startup called Prime Intellect managed to train the largest AI model yet built in a completely distributed way (pooling GPUs from 18 different locations worldwide, rather than in a central data center). It was once thought that training capable large language models this way would be impossible. But in the past year, researchers have made rapid progress. And Intellect-2, with 32 billion parameters (or tunable nodes in its neural network), is the same size as the small-ish, yet highly capable models many AI companies are now releasing. Distributed training may soon frustrate any AI policy built around the idea that access to advanced AI models can be tightly controlled.
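The basic idea behind distributed training is simple to illustrate, even if the engineering is not: each site computes a gradient on its own shard of the data, the gradients are averaged across sites, and every site applies the same update to a shared copy of the model. The toy sketch below shows that loop for a one-parameter linear model; the function names, data, and learning rate are all illustrative assumptions, and real systems (including, presumably, Prime Intellect's) layer on gradient compression, fault tolerance, and asynchronous communication to cope with slow and unreliable links between sites.

```python
# Toy sketch of data-parallel distributed training: each "site" holds a
# shard of the data, computes a local gradient, and the gradients are
# averaged (an "all-reduce" in practice) before a shared update.

def local_gradient(w, shard):
    # Gradient of mean squared error for a 1-D linear model y = w * x
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def distributed_step(w, shards, lr):
    # Every site contributes one gradient; the average drives the update
    grads = [local_gradient(w, s) for s in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Toy data generated from y = 3x, split across three hypothetical sites
shards = [[(x, 3.0 * x) for x in range(i, i + 3)] for i in (1, 4, 7)]
w = 0.0
for _ in range(200):
    w = distributed_step(w, shards, lr=0.005)
print(round(w, 2))  # converges toward 3.0
```

Because averaging the per-shard gradients recovers (approximately) the gradient over the full dataset, the sites collectively train one model without any of them ever holding all the data, which is exactly why this approach is hard to police with chip-export rules.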
With that, here’s the rest of this week’s AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news, if you’re interested in learning more about how AI will impact your business, the economy, and our societies (and given that you’re reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Hugging Face cofounder and chief scientist Thomas Wolf, Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I’ll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
This story was originally featured on Fortune.com