President-elect Donald Trump has telegraphed big changes to the nation's all-important AI strategy, many of which are expected to be implemented immediately after his inauguration in January.
But while some of Trump's plans are predictable, as part of an effort to make the U.S. the world's leader in the fast-emerging technology, others are still a mystery, experts told Fortune.
Part of the reason is that AI policy is complex. And because AI is such a new technology, officials are still trying to figure it out. "Nobody has clearly laid out a perfect AI regulation strategy, because, frankly, there probably isn't one; we're still so early in this innovation cycle," said Aaron Levie, CEO of cloud storage company Box.
Another wildcard is the chorus of voices advising Trump on technology and AI policy, including billionaire Elon Musk, who campaigned for Trump and contributed over $100 million to a pro-Trump political action committee. Who Trump will ultimately choose to listen to, among the conflicting agendas, is unknown. "Given that there are so many voices in that room and so many powerful men with egos, how is that going to work out?" said Chloe Autio, an AI policy consultant who works with AI companies and government.
Still, Trump has sent some very clear signals about what he'll do about AI. The most obvious, experts agree, is that he'll make good on his promise to repeal President Joe Biden's year-old executive order aimed at making AI safe and secure.
The order sets safety and privacy standards for AI, and promotes its ethical use. But the 2024 Republican platform called the order "dangerous," saying that it "hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology."
In general, Trump will likely pick up where his first administration left off in January 2020, when it issued guidance to federal agencies about AI. The memo called on the government to reduce barriers to AI development and adoption and avoid regulations that hamper innovation and growth, said Adam Thierer, a senior fellow at the R Street Institute, a center-right think tank in Washington, D.C.
AI safety on the chopping block?
One thing that may be on the chopping block is the AI Safety Institute (AISI). The executive order directed the Department of Commerce to create the institute, housed within the National Institute of Standards and Technology, to evaluate the risks that the most advanced artificial intelligence poses to national security, public safety, and individual rights.
Adam Aft, the lead attorney in Baker McKenzie's North America technology transactions group, with a focus on AI, said the safety institute is among the elements of Biden's executive order most likely to be killed. And since Trump has said he would repeal the order, doing so would likely be one of his first and easiest changes.
However, there are many supporters inside and outside government who don't want the AISI to vanish, said Thierer. A group of tech industry players and think tanks have been pushing Congress to make the AISI permanent before the end of the year, and before Trump takes office.
If the AISI survives, Trump could appoint new leaders to it who, in a twist, could be among those who fear AI poses a long-term risk to humanity. Among those who have talked about the dangers is Musk, who is now in a position to influence Trump's AI policies and his picks for the AISI's leadership. "Trump could turn to Musk and say, 'Who do you want to bring in?'" said Thierer. "And that's going to be a really interesting moment."
Open source AI: Friend or foe?
Another big question is Trump's position on open source AI, or AI tools and models available for anyone to use, modify, and distribute. Supporters of open source AI, which includes models from Meta, Mistral, and Musk's xAI, describe it as a counterbalance to AI from Big Tech companies like OpenAI, Anthropic, and Google, which typically keep their AI models closed and proprietary.
But there is also a strong push, driven by national security concerns, to block unfriendly nations from getting access to advanced AI by regulating AI exports and limiting the cybersecurity gains adversaries could make with it. For example, Chinese researchers reportedly developed an AI model for military use by building on Meta's open source model, Llama.
"That is going to be a high-level cat fight all the way up," Thierer said about the coming debate within the Trump administration over how to regulate open source AI.
Autio pointed out that JD Vance, Trump's vice president-elect, has previously supported open source AI development. "How do we reconcile that? I think it will be a big question like who will be the loudest voice in [Trump's] ear when it comes to figuring out some of these very deeply substantive, thorny issues," she said.
AI is also an indirect consideration in Trump's plan to increase tariffs on products imported from countries like China. The tariff plan was a core part of his campaign, intended to encourage U.S. manufacturing.
But tariffs could increase costs for hardware that is critical for AI, such as chips, many of which are manufactured abroad. They may also disrupt the supply chains of tech companies and put U.S. businesses at a competitive disadvantage to companies in Asia and Europe, due to higher component costs, retaliatory tariffs, or foreign firms that can undercut on price. "We're hearing from people, across the board, the possibly unintended impacts that might have on research and development in this space," said Danielle Benecke, global head of law firm Baker McKenzie's machine learning practice.
You can also expect pushback on so-called woke AI, Thierer said, using a term for AI that is considered too left-leaning. Trump could use an executive order to pressure tech companies to disclose or revise algorithms deemed politically biased, or to establish guidelines or oversight that reviews algorithms for bias, ensuring they do not favor one political viewpoint over another.
Previously, Musk has attacked OpenAI and Google, claiming they are influenced by a "woke mind virus." For example, in February, when Google's Gemini chatbot generated historically inaccurate images, such as Black Nazis and Vikings, Musk cited it as evidence of Google's AI promoting what he viewed as an excessively "woke" perspective.
"Conservatives, since the time Trump left office and his de-platforming on X, have been very fired up about what they regard as algorithmic bias or discrimination," Thierer said. "I've pushed back myself kind of aggressively against that, but the bottom line is they feel it's very real, and it made for a strong shift by MAGA conservatives against so-called woke tech issues."
Any efforts by Trump to regulate or censor what AI produces, however, could face legal challenges under the First Amendment, which guarantees free speech. But such efforts could still have a chilling effect on AI research or adoption, as businesses pull back on developing or deploying AI systems if they face unpredictable legal consequences based on perceived social or political bias.
Division in Silicon Valley
Much of what Trump ultimately does will depend on who advises him on AI. In addition to Musk, there's venture capitalist Marc Andreessen, investor and podcaster David Sacks, and Sequoia Capital's Shaun Maguire. Jacob Helberg, a senior adviser at software company Palantir, is another who may have Trump's ear.
Trump's tech supporters are willing to work closely with the government on national security issues to counter China, Thierer said. It's a big change from recent years, when Big Tech largely balked at allying with Washington. "This is a very different voice from Silicon Valley than in the past," Thierer said.
The U.S. political divide also risks playing out among career government employees working on AI technology or policy issues. Some may decide to quit if they disagree with Trump's policies, while recruiting replacements may become more difficult, said Dr. Rumman Chowdhury, a member of the U.S. Department of Homeland Security's AI Safety and Security Board as well as a U.S. Science Envoy for AI for the State Department. "There are thoughtful, hardworking and kind people in government who are about to be in a difficult situation, and I have every sympathy for the tough decisions they are going to have to make," she said.
No matter what happens, Box's Levie, for one, said he's more optimistic about Trump's future AI policy than he would have been during Trump 1.0. That optimism boils down to what he considers a more knowledgeable group of people in Trump's orbit now. "Trump is surrounded by more tech-centric folks, like Elon, that I think are directionally aligned with where I see a lot of the most important technology innovations going, whether that's AI, EVs or energy production."