According to its publishers, a dataset called EM-DAT (which stands for Emergency Events Database, so it’s not even an acronym) lists “data on the occurrence and impacts of over 26,000 mass disasters worldwide from 1900 to the present day.”
Which makes it perfect for studying long-term trends. And what’s even better, for the climate change crowd anyway, is that, as the authors of a 2024 study noted, “There are very strong upward trends in the number of reported disasters.”
But as the same authors noted in the very next sentence, “However, we show that these trends are strongly biased by progressively improving reporting.” Simply put, before 2000, reporting of small disasters that caused fewer than 100 deaths was hit-and-miss.
So, historically, the record of giant disasters that killed hundreds of people or more is reasonably complete, but the record of small ones is not.
And the authors of the recent study argue that once they adjust for the effect of underreporting, the trends in disaster-related mortality go away.
The paper, “Incompleteness of natural disaster data and its implications on the interpretation of trends,” by a group of scientists in Sweden, begins by noting that its authors are not the first to point out the problem.
The weird thing is that many authors who have pointed out this massive flaw have then gone ahead and used the data anyway, as though the flaw did not exist, or they had not noticed it:
“Various authors (Field et al., 2012; Gall et al., 2009; Hoeppe, 2016; Pielke, 2021) have noted that there are reporting deficiencies in the EM-DAT data that may affect trends then proceeded to present trend analyses based on it without correction. Even the EM-DAT operators themselves discourage using EM-DAT data from before 2000 for trend analysis (Guha-Sapir (2023)). Yet recently, Jones et al. (2022) investigated the 20 most cited empirical studies utilising EM-DAT as the primary or secondary data source, and found that with only one exception the mention of data incompleteness was limited to a couple of sentences, if mentioned at all.”
Having made that point, their study then digs into the records and shows that in the post-2000 period, there is a steady pattern relating the frequency of events to the number of fatalities (F) per event.
It follows something that statisticians call “power-law behaviour,” in which the more extreme an outcome, the rarer it is: the frequency of events falls off as a power of the fatality count, so that on a log-log plot the relationship traces a straight line, and disasters with large numbers of fatalities are vastly rarer than small ones in a predictable way. (For instance, in boating accidents, there are tens of thousands of individuals falling out and drowning for every Titanic.)
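To make that concrete, here’s a minimal sketch of our own, not the paper’s, showing how power-law frequencies fall off; the exponent is an illustrative assumption, not a value from the study:

```python
# A minimal sketch of power-law behaviour (our illustration, not the paper's).
# We draw synthetic "fatality counts" from a Pareto distribution, for which
# P(F >= f) = f**(-alpha), and check the fall-off empirically. The exponent
# alpha = 1.5 is an assumed, illustrative value, not one from the study.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                                    # assumed tail exponent
fatalities = 1 + rng.pareto(alpha, 1_000_000)  # synthetic events, minimum F = 1

for f in [1, 10, 100, 1000]:
    frac = np.mean(fatalities >= f)            # empirical P(F >= f)
    print(f"P(F >= {f:>4}) = {frac:.6f}")
# Each tenfold increase in f cuts the frequency by 10**1.5 (about 32x),
# which plots as a straight line on log-log axes.
```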
Hydrological, meteorological, and geophysical disasters all follow power-law behaviour in recent decades. But in earlier decades the relationship appears to break down, and it does so because low-fatality disasters are missing from the data, not because the power law didn’t hold back then.
As the authors argue, it’s not that the events didn’t happen; it’s that they weren’t recorded.
“[In] many cases, e.g. for earthquake hazard, it is very well established that a flattening off of such curves at smaller F is usually simply the result of underreporting of a proportion of the smaller events, i.e. that in reality, the near power-law behaviour continues for smaller events, but that some of these are not ‘detected’ and included in the data set.”
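Here’s a toy simulation of that “flattening off,” our construction rather than the authors’ method: every disaster above 100 deaths gets recorded, while smaller ones are detected with a probability that shrinks with size (both the threshold and the detection curve are assumptions for illustration):

```python
# A toy model of underreporting (our construction, not the authors' method).
# True events follow a power law; each is "reported" with probability
# min(1, (F/100)**0.7), so everything over 100 deaths is recorded while
# smaller events are increasingly missed. Both numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_events = 1 + rng.pareto(1.5, 1_000_000)

p_detect = np.minimum(1.0, (true_events / 100) ** 0.7)
reported = true_events[rng.random(true_events.size) < p_detect]

for f in [1, 3, 10, 30, 100, 300]:
    true_frac = np.mean(true_events >= f)
    rep_frac = np.sum(reported >= f) / true_events.size
    print(f"F >= {f:>3}: true {true_frac:.5f}, reported {rep_frac:.5f}")
# At and above F = 100 the two columns match; below it the reported curve
# falls away from the power law -- exactly the flattening the quote describes.
```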
Looking at the full EM-DAT record, the authors also note a strange and revealing regularity.
Moving toward the present, you see an upward trend in fatalities associated with small disasters, but the larger the disaster, the less apparent the trend. That pattern strongly suggests that the further back you go, the less likely small disaster events were to make it into the dataset.
(They also note that the small upward trend in fatalities associated with larger events reflects not an increase in such events but the growth in population as conditions have improved; when converted to per-capita terms, the lines slope down.)
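The per-capita arithmetic is simple enough to sketch with made-up death tolls (the population figures are real, the fatality figures are ours):

```python
# Hypothetical annual death tolls paired with actual world population figures,
# showing how an absolute rise can still be a per-capita fall.
deaths_1950, pop_1950 = 40_000, 2.5e9   # assumed deaths; ~2.5B people in 1950
deaths_2020, pop_2020 = 55_000, 7.8e9   # assumed deaths; ~7.8B people in 2020

rate_1950 = deaths_1950 / pop_1950 * 1e6   # deaths per million people
rate_2020 = deaths_2020 / pop_2020 * 1e6

print(f"1950: {deaths_1950:,} deaths = {rate_1950:.1f} per million")
print(f"2020: {deaths_2020:,} deaths = {rate_2020:.1f} per million")
# Deaths rose about 38% in absolute terms, yet the rate fell from 16.0 to
# 7.1 per million: the line slopes down once you divide by population.
```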
It’s not just fun and games. Exaggerated trends in EM-DAT have real-world consequences, including on insurance rates. As the authors note:
“This underreporting in EM-DAT appears to be of major practical significance because various studies have interpreted the very large observed increase (factor 4, 5 or even 10) in the reported global number of weather-related natural disasters since 1960 to be real and a consequence of climate change. This includes studies from MunichRe (Hoeppe, 2016), the Asian Development Bank (Thomas & López, 2015) and the WMO (World Meteorological Organization, 2021) (the latter two used the same EM-DAT that we have used).”
MunichRe is one of the world’s largest insurance firms. The World Meteorological Organization (WMO) is a UN agency that, among other things, advises governments around the world about climate trends and co-sponsors the Intergovernmental Panel on Climate Change (IPCC).
So when these and other groups ignore the biases in EM-DAT and report exaggerated disaster trends, there’s a direct line between that and you not being allowed nice things anymore.
The authors of the Swedish study don’t show what they consider to be the true graph of historical disaster trends. Instead, they simply recommend that no one use the pre-2000 data for the purpose of measuring disaster trends.
Which, to be fair, is what the EM-DAT publishers also recommend.
Not as much fun for the alarmists but more accurate as far as the science is concerned, which in our books still counts for something.
Of course, we’d prefer to have more information. But if it’s not there, it’s important to recognize it. After all, as Donald Rumsfeld said, it’s the things you don’t know that you don’t know that really mess you up.