When national security officials said a statement had a 90% chance of being accurate, the statement turned out to be true only about 60% of the time. That jarring disconnect forms the heart of new research examining how NATO’s military and civilian leaders assess uncertainty, and the findings suggest a cognitive blind spot that could shape everything from battlefield decisions to diplomatic negotiations.
Jeffrey Friedman, a Dartmouth government professor, surveyed nearly 1,900 national security officials from over 40 NATO countries and partner nations, collecting more than 60,000 assessments of uncertainty. The officials came from elite military colleges including the U.S. National War College, Canadian Forces College, NATO Defense College, and Norwegian Defense Intelligence School. These are institutions where colonels and their civilian counterparts earn advanced degrees as part of their professional education.
The Overconfidence Pattern Spans Borders and Ranks
The research revealed what Friedman calls an “overwhelming” pattern of overconfidence that appeared consistently across all demographics tested. Military officers and civilian officials showed the same bias. Men and women displayed it equally. American and non-American participants demonstrated identical tendencies. The cognitive flaw transcended institutional culture and national boundaries.
National security officials, it turns out, are much like the rest of us: they tend to think they know more than they really do. In that respect, they are just as consistently overconfident as members of the general public.
The study also uncovered a troubling asymmetry in how officials process information. When researchers flipped question wording, asking half the participants whether ISIS had killed more civilians than Boko Haram and the other half whether Boko Haram had killed more than ISIS, the probability estimates consistently added up to more than 100%. That is mathematically incoherent: since at most one of the two statements can be true, the estimates should sum to no more than 100%. The excess reveals what Friedman describes as a bias toward false positives, a tendency to confirm rather than refute the possibilities presented to them.
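To make the arithmetic concrete, here is a minimal sketch of that coherence check in Python; the estimates are hypothetical illustrations, not data from the study.

```python
# Hypothetical estimates (not data from the study) from two groups who saw the
# same comparison worded in opposite directions.
forward = [0.80, 0.70, 0.65, 0.75]   # "ISIS killed more civilians than Boko Haram"
reverse = [0.60, 0.55, 0.70, 0.65]   # "Boko Haram killed more civilians than ISIS"

mean_forward = sum(forward) / len(forward)
mean_reverse = sum(reverse) / len(reverse)

print(f"forward wording: {mean_forward:.0%}")
print(f"reverse wording: {mean_reverse:.0%}")

# At most one statement can be true, so coherent groups should sum to about 100%.
# A sum well above that signals a tendency to confirm whichever claim was presented.
print(f"sum of means: {mean_forward + mean_reverse:.0%}")
```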
For organizations built on gathering intelligence and making calculated risk assessments, these results present an uncomfortable reality. Even when participants expressed complete certainty about their judgments, assigning either 0% or 100% probability to statements, they were wrong more than a quarter of the time. About 96% of participants would have achieved better accuracy scores if they had simply expressed less confidence in every single answer they provided.
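To see how expressing less confidence can improve an accuracy score, consider a minimal sketch. The study’s exact scoring rule is not described here, so the example assumes a standard Brier score, and the forecasts and outcomes are hypothetical.

```python
def brier(probs, outcomes):
    """Mean squared distance between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical forecaster: ten claims each rated 90% likely, of which only six
# were true, mirroring the calibration gap described in the article.
stated = [0.90] * 10
truth = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

# Uniformly expressing less confidence: pull every estimate halfway toward 50%.
shrunk = [0.5 + 0.5 * (p - 0.5) for p in stated]

print(f"score as stated:            {brier(stated, truth):.3f}")
print(f"score with less confidence: {brier(shrunk, truth):.3f}")  # lower, i.e. better
```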
Two Minutes of Training Made a Measurable Difference
The research does offer a counterintuitive bright spot. Before some participants took the survey, researchers showed them data on how previous cohorts had performed, explaining the overconfidence patterns and illustrating them with graphs. The intervention lasted roughly two minutes, yet those who received this brief warning made significantly more accurate assessments than the control group. In other words, the bias, however pervasive, can be substantially mitigated with just two minutes of training.
The improvement came almost entirely from participants becoming more cautious in their estimates, attaching less certainty to their judgments. The finding suggests that overconfidence among national security officials, while pervasive, may be more tractable than deeply ingrained. It appears to be a learned habit rather than an immutable cognitive feature.
Friedman conducted the research by partnering with military education institutions, which agreed to administer online surveys as part of their core curricula. Participation rates exceeded 90% for most groups. The surveys posed questions about international military, political, and economic affairs, with more than 250 unique questions rotated across survey waves. Some asked about current facts that could be verified immediately, while others required forecasts that could only be evaluated months or years later.
The overconfidence appeared equally strong whether officials were assessing current situations or predicting future events. It persisted whether they expressed uncertainty using precise percentages or qualitative terms like “likely” and “almost certain.” When participants said something was “almost certain” to be true, those statements turned out to be false 32% of the time.
The research contributes to ongoing debates about whether elite decision makers think differently than ordinary people. Some scholars argue that high-stakes environments and professional training should produce more rational assessments. Others contend that cognitive biases are universal features of human thinking that persist even among experts. This study’s data suggest that national security officials share the same overconfidence bias found in the general population; in fact, they were more overconfident than participants in the Good Judgment Project, a large-scale forecasting study that recruited ordinary citizens.
The implications extend beyond individual judgment to questions of institutional design. If intuitive assessments are this flawed, Friedman argues, then rational decision making must rely on organizational procedures that can counteract cognitive biases. Some group processes might amplify individual errors through groupthink, while others might mitigate them by exposing people to diverse perspectives. The study suggests that national security bureaucracies would benefit from systematically gathering data on how officials assess uncertainty, then providing quantitative feedback to help them calibrate their judgments.
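As a rough illustration of that kind of quantitative feedback, the sketch below bins hypothetical past assessments by stated probability and reports how often each bin turned out to be true; it is not drawn from the study’s released code.

```python
from collections import defaultdict

# Hypothetical (stated probability, resolved true?) records for one official.
records = [
    (0.90, True), (0.90, False), (0.90, True), (0.90, False),
    (0.70, True), (0.70, True), (0.70, False),
    (0.50, True), (0.50, False),
]

# Group past assessments by the probability the official attached to them.
by_stated = defaultdict(list)
for stated, resolved_true in records:
    by_stated[stated].append(resolved_true)

# Feedback: how often claims at each confidence level actually turned out true.
for stated in sorted(by_stated, reverse=True):
    outcomes = by_stated[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> true {observed:.0%} of the time (n={len(outcomes)})")
```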
The four military institutions that participated deserve credit for allowing researchers to document these patterns, Friedman notes. The initial partnership with the National War College led to invitations from other institutions after participants found the feedback valuable. Several schools have since built the training into their core curricula.
For practitioners, Friedman offers straightforward advice rooted in the data: assume the world is more uncertain than you think, remember that your judgments lean toward false positives, and seek quantitative feedback on your assessments whenever possible. The research code and training materials are posted online for any organization to adapt.
Source: Texas National Security Review, DOI: 10.1353/tns.00010