US President Donald Trump displays a signed executive order at an AI summit on 23 July 2025 in Washington, DC
Chip Somodevilla/Getty Images
President Donald Trump wants to ensure the US government only gives federal contracts to artificial intelligence developers whose systems are “free from ideological bias”. But the new requirements could allow his administration to impose its own worldview on tech companies’ AI models – and companies may face significant challenges and risks in trying to modify their models to comply.
“The suggestion that government contracts should be structured to ensure AI systems are ‘objective’ and ‘free from top-down ideological bias’ prompts the question: objective according to whom?” says Becca Branum at the Center for Democracy & Technology, a public policy non-profit in Washington DC.
The Trump White House’s AI Action Plan, released on 23 July, recommends updating federal guidelines “to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias”. Trump signed a related executive order titled “Preventing Woke AI in the Federal Government” on the same day.
The AI Action Plan also recommends that the US National Institute of Standards and Technology revise its AI risk management framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”. The Trump administration has already defunded research studying misinformation and shut down DEI initiatives. It has also dismissed researchers working on the US National Climate Assessment report and cut clean energy spending in a bill backed by the Republican-dominated Congress.
“AI systems cannot be considered ‘free from top-down bias’ if the government itself is imposing its worldview on developers and users of these systems,” says Branum. “These impossibly vague standards are ripe for abuse.”
Now AI developers holding or seeking federal contracts face the prospect of having to comply with the Trump administration’s push for AI models free from “ideological bias”. Amazon, Google and Microsoft have held federal contracts supplying AI-powered and cloud computing services to various government agencies, while Meta has made its Llama AI models available to US government agencies working on defence and national security applications.
In July 2025, the US Department of Defense’s Chief Digital and Artificial Intelligence Office announced it had awarded new contracts worth up to $200 million each to Anthropic, Google, OpenAI and Elon Musk’s xAI. The inclusion of xAI was notable given Musk’s recent role leading President Trump’s DOGE task force, which has fired thousands of government employees – not to mention xAI’s chatbot Grok recently making headlines for expressing racist and antisemitic views while describing itself as “MechaHitler”. None of the companies responded to questions from New Scientist, although a few pointed to general statements by their executives praising Trump’s AI Action Plan.
It could prove difficult in any case for tech companies to ensure their AI models always align with the Trump administration’s preferred worldview, says Paul Röttger at Bocconi University in Italy. That is because large language models – the technology powering popular AI chatbots such as OpenAI’s ChatGPT – have certain tendencies or biases instilled in them by the swathes of internet data they were originally trained on.
When used for writing assistance tasks, some popular AI chatbots from both US and Chinese developers express surprisingly similar views, aligning more closely with the stances of US liberal voters on many political issues – such as gender pay equality and transgender women’s participation in women’s sports – according to research by Röttger and his colleagues. It is unclear why this trend exists, but the team speculated it could be a consequence of training AI models to follow more general principles, such as incentivising truthfulness, fairness and kindness, rather than of developers specifically aligning models with liberal stances.
AI developers can still “steer the model to write very specific things about specific issues” by refining AI responses to certain user prompts, but that won’t comprehensively change a model’s default stance and implicit biases, says Röttger. This approach could also clash with general AI training goals, such as prioritising truthfulness, he says.
US tech companies could also potentially alienate many of their customers worldwide if they try to align their commercial AI models with the Trump administration’s worldview. “I’m interested to see how this will pan out if the US now tries to impose a specific ideology on a model with a global userbase,” says Röttger. “I think that could get very messy.”
AI models could attempt to approximate political neutrality if their developers share more information publicly about each model’s biases, or build a collection of “deliberately diverse models with differing ideological leanings”, says Jillian Fisher at the University of Washington. But “as of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems”, she says.