The promise of artificial intelligence has been a staple of government technology roadmaps for decades, yet too often AI has remained an aspirational concept: more a collection of PowerPoint slides than a tangible operational capability. Recent developments, however, suggest that the gap between AI's theoretical potential and its practical application is finally narrowing.
Pockets of genuine progress are emerging. In satellite imagery analysis, machine learning algorithms have become indispensable, sifting through terabytes of Earth observation data to identify patterns and anomalies at a scale that would be impossible for human analysts. Once considered experimental, these AI-driven capabilities are now essential tools for intelligence gathering, climate monitoring and disaster response.
Yet in other applications, skepticism persists. At a recent Booz Allen conference, executives warned that AI adoption is not moving as fast as many had hoped.
A trust deficit
“It boils down to trust,” said Brien Flewelling, director of strategic program development at ExoAnalytic Solutions, which tracks space objects and uses AI for predictive analysis. “It wasn’t that we bought some AI one day, and it just does all the projects, and no one needs to do any checks and balances against it.”
Nate Hamet, CEO of Quindar, a startup specializing in AI-driven satellite operations, pointed to the growing congestion in space due to proliferating megaconstellations as a catalyst for AI adoption.
“These are the discussions people are having on console: how can AI fast forward decisions accurately enough … versus taking a day to do all that analysis?” he said.
But even as AI becomes integral to managing space assets, government agencies remain cautious. Patrick Biltgen, a Booz Allen principal focused on AI, noted that intelligence analysts often resist fully automated decision-making.
“There are tremendous human, mechanized workflows throughout the federal government,” he said. “Some are not yet comfortable taking an answer from AI, but things are slowly changing.”
One sign of change: agencies are beginning to procure “analytics as a service” rather than just raw data. Yet, transparency remains a sticking point. “Government customers still want to see ‘inside the data,’” Biltgen said. “They’re saying: ‘We want to see how AI made that decision. We still don’t trust it.’”
The ultimate test: Golden Dome
A proving ground for AI in government may be the Pentagon’s ambitious Golden Dome missile defense program. A multi-layer network of sensors and weapons designed to detect and neutralize hypersonic and other advanced threats, Golden Dome will require unprecedented levels of data integration and autonomous decision-making.
“The magic of Golden Dome, in my mind, is going to be the integration of capabilities that were never meant to be networked or integrated before,” said Gen. Michael Guetlein, vice chief of space operations. “That really starts getting into artificial intelligence, machine learning, data orchestration across all domains.”
Biltgen sees Golden Dome as a critical opportunity for AI adoption. "This is about shortening the timeline between detecting a threat and shooting it down," he said. "That's going to drive us to automation — not just sprinkling AI on top, but truly leveraging it for data fusion, multi-sensor orchestration, and decision-making."
Doug Philippone, co-founder of Snowpoint Ventures and a former Palantir executive, has witnessed multiple AI hype cycles. “I worried about the technology not being ready for prime time,” he said. “But now, there are finally real use cases and real companies doing the work.”
As an investor, he sees AI as key to increasing autonomy in military operations and accelerating workflows. Yet, he remains wary of exaggerated claims. “Every week, I get pitches from AI companies saying, ‘We have some AI stuff. It’s so amazing. We’re going to change the world.’”
The key challenge, he explained, isn’t just developing AI but ensuring that it integrates seamlessly with an organization’s expertise and resources.
“If the job is to intercept a missile — if my data is going to support Golden Dome — then it’d better be ready,” he said.
Biltgen believes that, ultimately, trust in AI will increase as younger, AI-native professionals assume decision-making roles. “As a society, we’re going to have to get used to some things being done in a completely automated way,” he said.
The key message from experts is that the technology is advancing, but wider adoption depends on transparency, integration with human expertise, and — above all — trust. As Philippone cautions: “There’s a difference between what you can do in a lab with curated data and actually using it in the real world.”
This article first appeared in the April 2025 issue of SpaceNews Magazine with the title “It boils down to trust.”