
What was the death toll from Chernobyl and Fukushima?

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Energy Production and Changing Energy Sources.

When it comes to the safety of nuclear energy, discussion often quickly turns to the nuclear accidents at Chernobyl in Ukraine (1986) and Fukushima in Japan (2011). These two events were by far the largest nuclear incidents in history, and the only disasters to receive a level 7 (the maximum classification) on the International Nuclear Event Scale.

How many deaths did each of these events cause?

When it comes to nuclear accidents there are two fatal impacts to consider: the direct deaths which occur either at the time of the incident or in the days which follow (the acute impacts), and the long-term (chronic) impacts of radiation exposure, which has known links to the incidence of several forms of cancer.

Deaths from Chernobyl

In the case of Chernobyl, 31 people died as a direct result of the accident: two died from blast effects and a further 29 firemen died from acute radiation exposure (where acute refers to a high dose received over a short period of time) in the days which followed. The number of people affected by long-term radiation exposure is more difficult to discern and remains highly contested. The difficulty lies in the fact that the relationship between low-level radiation exposure and cancer incidence is hard to decouple from other environmental and lifestyle factors; cancers are in many cases multi-causal, and cannot be attributed solely to radiation exposure.

In an assessment titled ‘Chernobyl’s Legacy: Health, Environmental and Socio-Economic Impacts’, the World Health Organization (WHO) estimated that the eventual death toll among the populations in close proximity to Chernobyl will be around 4,000. Estimates of the number of people affected by radiation across the rest of Europe are more contested. A study in the International Journal of Cancer by Cardis et al. (2006) estimates another 16,000 deaths across the former USSR. Extending to total attributed deaths across Europe, radiation scientists Fairlie and Sumner suggest the total rises to between 30,000 and 60,000 deaths. The highest estimate to date comes from a 2006 Greenpeace report, which projects an eventual death toll of 93,000. Although uncertain, the total number of deaths attributed to the Chernobyl disaster is likely to be in the range of tens of thousands. The upper and lower estimates are shown in the chart below.

Deaths from Fukushima

In the case of Fukushima, although 40-50 people experienced physical injury or radiation burns at the nuclear facility, the number of direct deaths from the incident is quoted as zero. However, mortality from radiation exposure was not the only threat to human health: it’s estimated that around 1,600 people died as a result of evacuation procedures and stress-induced factors. These deaths occurred mostly among older people: more than 90 percent were of individuals over the age of 66.

How many are projected to suffer in the long term from low-level radiation exposure? In its Health Risk Assessment of the nuclear disaster, the World Health Organization (WHO) notes that for the national population, exposure levels were too low to affect human health, with the exception of a few communities in closest proximity to the plant. In these localities, those who were infants at the time of exposure are at greatest risk of cancer: at the two closest sites, the incidence of cancer in this demographic is projected to be 4 to 7 percent higher than baseline cancer rates for both males and females (with the exception of thyroid cancer in females, which is projected to be 70 percent higher). The WHO projects the number of deaths from low-level exposure to be close to zero, and up to 400 in upper estimates (as represented in the chart below).

In more recent evaluations of rates of perinatal mortality (that is, stillbirths or deaths within the first week of life) in areas closest to the Fukushima site, there were no statistical indications of increased incidence. In fact, rates of perinatal mortality showed an overall decline with time—the general trend we see through improved healthcare and healthier lifestyles.

The death toll of the Fukushima nuclear accident dominated headlines for weeks after the event and overshadowed the much larger tragedy that happened at the same time and place: the tsunami killed 15,893 people, around eight times the death toll of the nuclear accident, even if we assume the upper estimate of its long-term toll.
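As a back-of-the-envelope check of that multiple, here is a minimal sketch using only the figures quoted above; the 2,000-death total is our own sum of the evacuation and upper radiation estimates:

    # Rough check of the "around eight times" figure, using the
    # estimates quoted earlier in this post.
    evacuation_deaths = 1600   # deaths from evacuation and stress-induced factors
    radiation_upper = 400      # WHO upper estimate for low-level radiation exposure
    tsunami_deaths = 15893

    nuclear_upper_total = evacuation_deaths + radiation_upper  # 2,000
    print(tsunami_deaths / nuclear_upper_total)                # ~7.9, i.e. around eight times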

Why was the death toll from Chernobyl so much higher than Fukushima?

Chernobyl and Fukushima are the only two disasters to receive a level 7 (the maximum classification) on the International Nuclear Event Scale. But why is the upper estimate of deaths from Chernobyl almost fifty times higher than that of Fukushima?

There are a couple of factors which are likely to have played a key role here. The first concerns the technical functionality and safety measures of the respective nuclear facilities. Chernobyl occurred 25 years prior to Fukushima; it was the first instance of a nuclear accident at this scale. From a technical perspective, the nuclear reactors at Chernobyl were poorly designed to deal with such a scenario. Its RBMK reactor had no containment structure, allowing radioactive material to spill into the atmosphere (in contrast, Fukushima’s reactors had steel-and-concrete containment structures, although it’s likely that at least one of these was also breached). Crucially, the cooling systems of the two plants worked very differently: at Chernobyl, the loss of cooling water as steam actually served to accelerate reactivity levels in the reactor core, creating a positive feedback loop towards the fatal explosion (the opposite is true of Fukushima, where reactivity reduces as temperatures rise, effectively operating as a self-shutdown measure).

These technical differences undoubtedly played a role in the relative levels of exposure from the two events. However, the governmental response to each event is also likely to have played a crucial role in the number of people who were exposed to high levels of radiation in the days which followed. In the case of Fukushima, the Japanese government responded quickly to the crisis, with evacuation efforts extending rapidly from a 3 kilometre (km), to a 10 km, to a 20 km radius whilst the incident at the site continued to unfold. In comparison, the response in the former Soviet Union was one of denial and secrecy.

It’s reported that in the days which followed the Chernobyl disaster, residents in surrounding areas were not informed of the radioactive material in the air around them. In fact, it took at least three days for the Soviet Union to admit an accident had taken place, and it did so only after radiation sensors at a Swedish plant were triggered by the dispersing radionuclides. It’s estimated that the Soviet government’s delayed reaction and the poor precautionary steps taken (people continued to drink locally-produced, contaminated milk, for example) led to at least 6,000 thyroid cancer cases in exposed children.

Whilst prevention, and ultimately containment (which are predominantly technical issues), are crucial to the safety of nuclear energy production, these two events also highlight the importance of political governance and response in the aftermath of such disasters.

Risk of nuclear in context

The potential risks of nuclear energy are real: in both Chernobyl and Fukushima, deaths occurred as a result of direct nuclear impacts, radiation exposure and psychological stress. Nonetheless, of the two largest nuclear disasters, the death toll was of the order of tens of thousands in the first and thousands in the second. Arguably still too many, but far fewer than the millions who die every year from the impacts of other conventional energy sources.

As covered in a separate blog post on the relative safety of energy sources, the comparatively low death toll from nuclear energy (442 times fewer deaths than brown coal per unit of energy, even with deaths from radioactive exposure included) is largely at odds with public perceptions; public support for nuclear energy is often low as a result of safety concerns. The key distinction here is that nuclear risk is generally concentrated in low-probability, high-impact single events, in contrast to air pollution, which poses a persistent background health risk.

It goes completely against what most believe, but out of all major energy sources, nuclear is the safest

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Energy Production and Changing Energy Sources.

Energy production and consumption is a fundamental component of economic development, poverty alleviation, improvements in living standards, and ultimately health outcomes. We show this link between energy production and prosperity here, where we see a distinct relationship between energy use and gross domestic product per capita.
The unintended consequences of energy production can, however, also include negative health outcomes. Both mortality (deaths) and morbidity (severe illness) cases can be attributed to each stage of the energy production process: accidents in the mining of the raw material, the processing and production phases, and pollution-related impacts. We have recently explored this trade-off with respect to development and air pollution.

If we want to produce energy with the lowest negative health impacts, which source of energy should we choose? Here we limit our comparison to the dominant energy sources—brown coal, coal, oil, gas, biomass and nuclear energy; in 2014 these sources accounted for about 96% of global energy production. While the negative health impacts of modern renewable energy technologies are so far thought to be small, they have been less fully explored.

There are two key timeframes to consider when attempting to quantify potential fatalities from energy production. The first is the short or generational timespan, which covers deaths related to accidents in the mining, processing or production phase of energy sources as well as the outdoor air pollution impacts from the production, transport and combustion of fuels. The second is the long-term or intergenerational impacts (and resultant deaths) from climate change.

Deaths from accidents and air pollution

In the chart below we see the results of the analysis by Markandya and Wilkinson (2007) published in the medical journal The Lancet. This is the rate of short-term deaths from accidents and air pollution related to energy production. Since we want to compare the relative safety of producing energy from various sources, the data has been standardised to deaths per terawatt-hour (TWh) of energy produced. One terawatt-hour is roughly equivalent to the annual energy consumption of 12,400 US citizens. Although deaths from accidents and air pollution have been combined, it’s important to note that air pollution-related deaths dominate: in the case of brown coal, coal, oil and gas they account for more than 99% of deaths, and they also account for around 70% of nuclear-related deaths and all biomass-related deaths.
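To put the unit in perspective, here is a quick sketch of what that equivalence implies per person (our own arithmetic from the figures above, not a number taken from the paper):

    # What "1 TWh per 12,400 US citizens" implies per person.
    twh_in_kwh = 1e9               # 1 TWh = 1,000,000,000 kWh
    citizens_per_twh = 12_400      # equivalence quoted above

    kwh_per_person_per_year = twh_in_kwh / citizens_per_twh
    print(round(kwh_per_person_per_year))  # ~80,645 kWh of energy per person per year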

We can see that brown coal and coal rate the worst when it comes to energy-related fatalities. Coal-fired power plants are a major source of sulphur dioxide and nitrogen oxides, key precursors to ozone and particulate matter (PM) pollution, which can harm human health even at low concentrations.

At the other end of the scale, the safest source of energy is nuclear, resulting in 442 times fewer deaths than brown coal per unit of energy. Note that these figures also account for cancer-related deaths resulting from radioactive exposure in nuclear energy production.

In the second chart below we have estimated the hypothetical number of global deaths which would have occurred if all energy had been produced from a given source, by multiplying the respective death rates (per TWh) by the IEA estimate of global energy production in 2014 of 159,000 TWh. If global energy demand in 2014 had been met solely through brown coal, we estimate that energy production would have resulted in more than five million deaths. In contrast, if global energy demand had been met through nuclear sources, the number of deaths would have been only 11,800 (442 times fewer).
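The arithmetic behind these hypothetical totals is straightforward. The sketch below infers the per-TWh rates from the figures quoted above rather than copying them from the original paper, so treat them as approximate:

    # Hypothetical global deaths if all 2014 energy came from a single source.
    # Death rates (per TWh) are inferred from the figures quoted above.
    global_energy_twh = 159_000                 # IEA estimate for 2014

    nuclear_rate = 11_800 / global_energy_twh   # ~0.074 deaths per TWh
    brown_coal_rate = 442 * nuclear_rate        # ~32.8 deaths per TWh (442x nuclear)

    print(round(brown_coal_rate * global_energy_twh))  # 5,215,600 -> more than five million
    print(round(nuclear_rate * global_energy_twh))     # 11,800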

What would our worst-case probabilities suggest about risk?

It’s important to note that the death rates and hypothetical numbers of deaths from nuclear power quoted above may be considered a worst-case projection of risk and mortality. The figures for death rates per TWh from Markandya and Wilkinson (2007) are calculated on a theoretical basis using a method called the ‘linear, no-threshold model’. This model assumes that the number of deaths is directly and linearly proportional to the dose of radiation; additionally, it assumes there is no lower limit or “safe” level of exposure, meaning individuals are at risk even at very low doses.
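To illustrate how a linear, no-threshold calculation works, here is a minimal sketch. The 5% fatal cancers per person-sievert risk coefficient is an ICRP-style figure we assume for illustration, not a number taken from Markandya and Wilkinson:

    # Minimal sketch of a linear, no-threshold (LNT) mortality estimate.
    # Expected deaths scale linearly with collective dose, with no "safe"
    # threshold below which the risk is assumed to be zero.
    RISK_PER_PERSON_SV = 0.05   # assumed: ~5% fatal cancers per person-sievert

    def lnt_expected_deaths(population, avg_dose_sv):
        """Expected long-term deaths from a collective dose, under LNT."""
        collective_dose = population * avg_dose_sv   # person-sieverts
        return RISK_PER_PERSON_SV * collective_dose

    # Under LNT, a million people each receiving a low 10 mSv dose yields
    # the same expected deaths as 10,000 people each receiving 1 Sv:
    print(lnt_expected_deaths(1_000_000, 0.010))  # 500.0
    print(lnt_expected_deaths(10_000, 1.0))       # 500.0

It is precisely this property (that many tiny doses across a large population sum to the same expected toll as a few large doses) that critics of the model dispute.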

However, this model for estimating mortality risk from radiation exposure is particularly controversial, with suggestions that it leads to an overestimation of risk. Furthermore, as James Hansen argues in his 2011 paper, the empirical evidence of mortality risk based on historical nuclear events (of which there have been only three large-scale incidents: Three Mile Island, Chernobyl, and Fukushima) suggests mortality several orders of magnitude lower than theoretical linear, no-threshold models would predict. As a result, we may consider these models (and the quoted figures used in the charts above) to be an upper, worst-case estimate of risk rather than one based on historical evidence (which may, in turn, fail to accurately reflect worst-case conditions).

Managing nuclear waste

An additional concern for nuclear energy, beyond deaths directly attributed to accidents, is the challenge of radioactive waste management. Waste produced from the nuclear fission process (and facility) varies in its level of radioactivity, as well as in the period of time for which it poses a risk to human health; this period of concern extends anywhere from 10,000 to one million years. Waste is therefore segregated into three categories: low-, intermediate-, and high-level waste. Methods for dealing with low- and intermediate-level waste (LLW and ILW) are well-established: LLW can be compacted, incinerated and buried safely at a shallow depth; ILW, containing higher amounts of radioactivity, needs to be shielded in concrete or bitumen before disposal.

Dealing with high-level waste (HLW) is more challenging. The long lifetime and high radioactivity of spent nuclear fuel mean the waste must not only be appropriately shielded, but also kept in a stable environment for up to a million years. To achieve this, most proposals favour storage in deep geological sites. The difficulty therefore lies in ensuring that chosen sites will remain geologically stable (including against temperature and water fluctuations) over this period of time. To date, the majority of HLW is stored in multi-barrier repositories at surface sites. However, to deal with it appropriately, long-term deep geological solutions must be developed. Sweden and Finland are arguably the furthest forward in the development of long-term storage solutions.

Deaths from climate change

Energy production not only has short-term health impacts related to accidents and air pollution; it also contributes to the long-term impacts of global warming (e.g. extreme weather, sea level rise, reduced freshwater resources and crop yields, heatstroke), which are likely to be fatal for some. It’s particularly challenging to predict how many climate change-related deaths we might experience decades from now, and how much we could attribute to a specific energy source. This makes it difficult to compare specific figures for long-term deaths.

We can, however, use a proxy (a related or substitute indicator) to compare the potential contribution of energy sources to climate change. For this, we use the carbon intensity of energy, which measures the grams of carbon-dioxide equivalents (CO2e) emitted in the production of one kilowatt-hour of energy (gCO2e per kWh). Using this proxy, we can assume that energy sources with a higher carbon intensity would have a larger impact on mortality from climate change for a given level of energy production. In the chart below we see both measures of fatality: on the y-axis we have the death rates (per TWh) from accidents and air pollution, as discussed in the previous chart; on the x-axis we have each energy source’s carbon intensity, measured in gCO2e per kWh.

We see a strong correlation between the two measures: energy sources that are unhealthy in the short term are also unhealthy in the long term, and those that are safer for the current generation are safer for future generations.
Coal (and especially brown coal) rates poorly on both variables, with a high death rate from local air pollution as well as a high carbon intensity. Oil is also associated with high short- and long-term health impacts. At the other end of the scale, both nuclear and biomass have a low carbon intensity (with nuclear lowest at 12 gCO2e/kWh and biomass at 18 gCO2e/kWh). This is 83 and 55 times lower than coal, respectively.
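Those multiples imply a carbon intensity for coal of roughly 1,000 gCO2e per kWh, which we can sanity-check using only the figures quoted above (a quick sketch of our own arithmetic):

    # Sanity check: what do the "83 and 55 times lower" multiples imply for coal?
    nuclear_gco2e_per_kwh = 12
    biomass_gco2e_per_kwh = 18

    print(nuclear_gco2e_per_kwh * 83)  # 996 -> coal at roughly 1,000 gCO2e/kWh
    print(biomass_gco2e_per_kwh * 55)  # 990 -> consistent with the same coal figure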

Nuclear energy therefore scores lowest on both short- and long-term mortality related to energy production. It’s estimated that up to 1.8 million air pollution-related deaths were avoided between 1971 and 2009 as a result of producing energy with nuclear power plants rather than the available alternatives.

Conclusions on energy safety

Discussions of energy safety often raise the question: how many died from the nuclear incidents at Chernobyl and Fukushima? We addressed this question in a separate blog post. In summary: estimates vary, but the death toll from Chernobyl is likely to be of the order of tens of thousands. For Fukushima, the majority of deaths are expected to be related to stress induced by the evacuation process (around 1,600 deaths) rather than direct radiation exposure.

As stand-alone events these impacts are large. However, even as isolated, large-impact events, the death toll stands several orders of magnitude lower than the deaths attributed to air pollution from other traditional energy sources: the World Health Organization estimates that 3 million people die every year from ambient air pollution, and 4.3 million from indoor air pollution. As is so often the case, single events that make headlines overshadow permanent risks that result in silent tragedies.

Based on historical and current figures of deaths related to energy production, nuclear appears to have caused by far the least harm of the current major energy sources. This empirical reality is largely at odds with public perceptions, where public support for nuclear energy is often low as a result of safety concerns. This is shown in the chart below, which measures the share of survey respondents in a given country who are opposed to nuclear energy as a means of electricity production. At a global level, opposition to nuclear energy stood at 62 percent in 2011.

Public support for renewable energy production is much stronger than for nuclear or fossil fuels. Why, then, are we concerned with the comparison between the latter two? Whilst the share of energy production from renewable technologies is slowly growing, 96 percent of global energy still comes from fossil fuels, nuclear and traditional biomass sources. Our global transition to renewable energy systems will take time, and over that extensive period we must make important choices about bridging sources of energy production. The safety of our energy sources should be an important consideration in designing the transitional pathways we want to take.

Not all deaths are equal: How many deaths make a natural disaster newsworthy?

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Natural Catastrophes.

How many deaths does it take for a natural disaster to be newsworthy? This is a question researchers Thomas Eisensee and David Strömberg asked in a 2007 study. The two authors found that for every person killed by a volcano, nearly 40,000 people have to die of a food shortage to get the same probability of coverage in US televised news. In other words, the type of disaster matters to how newsworthy networks find it to be. The visualizations below show the extent of this observed “news effect”. The first chart shows the proportion of each type of disaster that receives news coverage, and the second shows the “casualties ratio”, which tells us—all else equal—how many casualties would make media coverage equally likely for each type of disaster.

The study, which primarily set out to examine mass media’s influence on US natural disaster response, considered over 5,000 natural disasters and 700,000 news stories from the major US national broadcast networks (ABC, CBS, NBC, and CNN) between 1968 and 2002. The findings tell us, among other important things, that networks tend to be selective in their coverage in a way that does not adequately account for the severity of a natural disaster or the number of people killed or affected by it.

Instead of considering the objective damage caused by natural disasters, networks tend to look for disasters that are “rife with drama”, as one New York Times article put it: hurricanes, tornadoes, forest fires and earthquakes all make for splashy headlines and captivating visuals. Because of this selectivity, less “spectacular” but often more deadly natural disasters tend to get passed over. Food shortages, for example, result in the most casualties and affect the most people per incident, but their onset is more gradual than that of a volcanic explosion or sudden earthquake. As a result, food shortages are covered only 3% of the time, while a comparatively indulgent 30% of earthquakes and volcanic events get their time in the spotlight. Additionally, when the researchers “hold all else equal” by controlling for factors such as yearly trends in news intensity and the number of people killed and affected, the difference in coverage is even more pronounced. This bias for the spectacular is not only unfair and misleading, but also has the potential to misallocate attention and aid. Disasters that happen in an instant leave little time for preventative intervention, whereas the gradual disasters that tend to affect more lives build up slowly, allowing more time for preventative measures to be taken. However, in a Catch-22 situation, the gradual nature of these calamities is also what prevents them from garnering the media attention they deserve.

There are other biases, too. Eisensee and Strömberg found that while television networks cover more than 15% of the disasters in Europe and South and Central America, they show less than 5% of the disasters in Africa and the Pacific. Disasters in Africa tend to get less coverage than ones in Asia because they are less “spectacular”, with more droughts and food shortages occurring there relative to Asia. However, after controlling for disaster type, along with other factors such as the number killed and the timing of the news, there is no significant difference between coverage of African and Asian disasters. Instead, a huge difference emerges between coverage of Africa, Asia, and the Pacific on the one hand, and Europe and South and Central America on the other. According to the researchers’ estimates, 45 times as many people would have to die in an African disaster for it to garner the same media attention as a European one. The visualizations below illustrate this bias.

ABC News’s slogan is “See the whole picture” and CNN’s is “Go there”, but good follow-up questions might be: what exactly, and where?

What the history of London’s air pollution can tell us about the future of today’s growing megacities

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Air Pollution.

Cities in most high-income countries have relatively low levels of local air pollution compared to cities in more rapidly developing ‘emerging economies’. This, however, hasn’t always been the case.

National air pollution trends often follow the environmental Kuznets curve (EKC). The EKC is a hypothesis about the link between environmental degradation and economic development: in this case, air pollution initially worsens with the onset of industrial growth, peaks at a certain stage of economic development, and from then on declines as development continues. Many high-income nations are now at the late stage of this curve, with comparably low pollution levels, while developing nations span various stages of the growth-to-peak phase. I have previously written about this phenomenon in relation to sulphur dioxide (SO2) emissions here on Our World in Data.
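A common reduced-form way to express the EKC is pollution as a quadratic in log income, which produces the inverted-U shape. The sketch below uses purely illustrative coefficients (our assumption, not fitted to any data) to show how the pollution peak falls out of such a specification:

    # Illustrative Environmental Kuznets Curve: pollution modelled as a
    # quadratic in log income. Coefficients are made up for illustration.
    import math

    b0, b1, b2 = -120.0, 36.0, -2.0   # assumed; b1 > 0 and b2 < 0 give an inverted U

    def pollution(gdp_per_capita):
        x = math.log(gdp_per_capita)
        return b0 + b1 * x + b2 * x ** 2

    # Pollution peaks where the derivative is zero: b1 + 2*b2*x = 0,
    # i.e. at log income x* = -b1 / (2 * b2).
    peak_income = math.exp(-b1 / (2 * b2))
    print(round(peak_income))            # ~8,103: income level at which pollution peaks here
    print(round(pollution(peak_income))) # 42: the (illustrative) pollution level at that peak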

If we take a historical look at pollution levels in London, for example, we see this EKC clearly. In the graph below, we have plotted the average levels of suspended particulate matter (SPM) in London’s air from 1700 to 2016. Suspended particulate matter refers to fine solid or liquid particles suspended in Earth’s atmosphere (such as soot, smoke, dust and pollen). Exposure to SPM (especially very small particles, which can more easily infiltrate the respiratory system) has been strongly linked to negative cardiorespiratory health impacts, and even premature death. As we see, from 1700 onwards, London experienced a worsening of air pollution decade after decade; over the course of two centuries, the suspended particulate matter in London’s air doubled. But at the very end of the 19th century the concentration reached a peak and then began a steep decline, so that today’s levels are almost 40 times lower than at that peak.

The data presented below has been kindly provided by Roger Fouquet, who has studied the topic of environmental quality, energy costs and economic development in great detail.

What explains this worsening and the subsequent improvement of London’s air quality?

The dominant contributor to London’s historic air pollution was coal burning. Throughout the 18th and 19th centuries, the coal industry in Great Britain expanded rapidly, driven not only by economic growth but also by an expanding labour force and improved distribution networks (such as railways and waterways). Increasing demand and falling coal prices (prices nearly halved between 1820 and 1850) led to a rapid increase in national coal consumption, rising from 20 million tonnes in 1820 to 160 million tonnes in 1900 (an eight-fold increase).

The decline in air pollution can be attributed to a complex mix of factors, including economic restructuring away from heavy industry, switching energy sources, and increased environmental regulation. There are thought to be three primary developments behind this decline. Firstly, by the late 1800s, improved connectivity and commuter links allowed London’s population to spread further into surrounding suburban areas, leading to an overall reduction in population density. Even if such changes did not reduce total emissions of pollutants, the dispersal of these population centres could have alleviated concentrations in prime pollution hotspots.

Secondly, the United Kingdom introduced its Public Health Act for London in 1891. Under this new regulation, businesses in London which produced excessive smoke ran the risk of financial penalties if they did not adopt cleaner and more efficient energy practices, such as switching to less polluting (but more expensive) coal sources, and ensuring fires were adequately stoked. This put increasing pressure on businesses to shift towards better and cleaner industry practice.

The third potential source of this decline was a notable shift in heating and cooking sources from coal towards gas. Uptake of gas cookers rose sharply in Great Britain during the late 1800s and early 1900s. The Gas Light & Coke Company—which was the leading London supplier at the time—noted that in 1892 only 2 percent of residents had a gas cooker; by 1911, this had increased to 69 percent. In terms of air pollution impacts, gas is a much cleaner fuel than coal, so such a large shift in heating and cooking sources may well have contributed to the declining trend.

It’s difficult to fully capture just how polluted London’s air was throughout the 19th century. Throughout this period, London experienced frequent and severe fogs. Such fogs were often so dense that they halted railway journeys, interrupted general economic activities, and even contributed to London becoming a breeding ground for crime (crime rates rose sharply during these fog periods). London averaged 80 dense fog days per year, with some areas recording up to 180 in 1885.

Not only did air pollution exact a severe economic price, it also resulted in significant health costs. Deaths from air pollution rose steeply throughout this period; in London, mortality from bronchitis increased from 25 deaths per 100,000 inhabitants in 1840 to 300 deaths per 100,000 in 1890. At its peak, roughly 1-in-350 people died from bronchitis each year.

Although London was arguably one of the worst polluted cities during this time (and often referred to as the “Big Smoke”), many other industrial cities across Great Britain (and indeed across other nations) experienced similar air pollution problems. In the photograph below, we see pollution in Widnes, an industrial town close to Liverpool, in the late 19th century.

Air pollution in Widnes, late 19th century

London vs. today’s developing cities

From our first chart, we see that concentrations of suspended particulate matter (SPM) reached up to 623 micrograms per cubic metre. This figure will be meaningless to most without proper context. Let’s therefore compare historic London concentrations to those experienced in recent years in New Delhi—one of the world’s most polluted cities today.

In the first chart above, we can add SPM trends for Delhi from the late 1990s to 2010 using the ‘add city’ button.

What we see is that concentrations in Delhi range from around 450 to 500 micrograms per cubic metre. This is undoubtedly high, but remains lower than peak concentrations in London during its rapid industrialization. It is therefore wrong to assume that today’s major developing cities—such as Delhi, Beijing, Jakarta and Karachi—are experiencing unprecedented levels of air pollution; many of today’s high-income cities have likely gone through periods of similar (or higher) pollution levels. Perhaps what differentiates today’s transitioning cities is their sheer population size: exposure of so many people to such pollution inevitably leads to high mortality figures in absolute terms.

If we see air pollution as an unfortunate by-product of economic and industrial development, an appropriate comparison would be based on levels of prosperity rather than time. In the chart below we have plotted the same trends in SPM (on the y-axis) for London and Delhi, but now map these levels against gross domestic product (GDP) per capita (on the x-axis). These GDP per capita figures are adjusted for inflation and expressed in international dollars to reflect differences in living costs.

Interestingly, if we observe the evolution of these trends over time, we see that at a given level of GDP per capita, Delhi’s air pollution levels have followed, and continue to follow, a similar pathway to that of London in the 19th century.

But the often-forgotten history of air pollution in today’s rich countries offers an important lesson about what is possible for world regions with lower levels of prosperity today: after air pollution worsens at the initial stages of development, it declines at later stages and can reach historically low levels.

The key for Delhi and other transitioning cities will therefore be to continue shifting rightwards (increasing GDP per capita), but to try to peak below London’s 19th century pathway. If they can achieve this, then they will have succeeded in developing in a cleaner way than today’s high income cities.

There is a ‘happiness gap’ between East and West Germany

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Happiness and Life Satisfaction.

In global surveys of happiness and life satisfaction, Germany usually ranks high. However, these national averages mask large inequalities. In the map below we focus on regional inequalities—specifically the gap in life satisfaction between West and East Germany.

The map below plots self-reported life satisfaction in Germany (using the 0-10 Cantril Ladder question), aggregating average scores at the level of federal states. The first thing that stands out is a clear divide between the East and the West, along the political division that existed before the reunification of Germany in 1990.
For example, the difference in levels between neighboring Schleswig-Holstein (in West Germany) and Mecklenburg-Vorpommern (in East Germany) is similar to the difference between Sweden and the US – a considerable contrast in self-reported life satisfaction.


Several academic studies have looked more closely at this ‘happiness gap’ in Germany using data from more detailed surveys, such as the German Socio-Economic Panel (e.g. Petrunyk and Pfeifer 2016). These studies provide two main insights:

First, the gap is partly driven by differences in household income and employment. But this is not the only aspect; even after controlling for socioeconomic and demographic differences, the East-West gap remains significant.

And second, the gap has been narrowing in recent years. This narrowing holds both for the raw average differences and for the ‘conditional differences’ (i.e. the differences estimated after controlling for socioeconomic and demographic characteristics). The following chart shows this.

Trends in life satisfaction for East and West Germany, 1992-2013

The observation that socioeconomic and demographic differences do not fully explain the observed East-West differences in self-reported happiness is related to a broader empirical phenomenon: culture and history matter for self-reported life satisfaction—and in particular, ex-communist countries tend to report lower subjective well-being than other countries at comparable levels of economic development. (More on this in our entry on Life Satisfaction.)