Income inequality and happiness inequality: a tale of two trends

Our World in Data presents the empirical evidence on global development in entries dedicated to specific topics.
This blog post draws on data and research discussed in our entry on Happiness and Life Satisfaction.

The General Social Survey (GSS) in the US has been administered to a nationally representative sample of about 1,500 respondents each year since 1972, and is an important source of information on long-run trends in self-reported life satisfaction in the country. As such, it is a key indicator of happiness in the US.

Using this source, Stevenson and Wolfers (2008) show that while average happiness in the US has remained broadly constant, inequality in happiness has fallen substantially in recent decades.

The authors further note that this is true both when we think about inequality in terms of the dispersion of answers, and when we think about it in terms of gaps between demographic groups. They note that two-thirds of the black-white happiness gap has been eroded (although white Americans remain happier on average today, even after controlling for differences in education and income), and that the gender happiness gap has disappeared entirely (women used to be slightly happier than men, but they have become less happy over time, and today there is no statistical difference once we control for other characteristics).

Curiously, these reductions in ‘happiness inequality’ have taken place alongside growing income inequality. As the following chart shows, income inequality in the US is exceptionally high and has been on the rise in the last four decades, with incomes for the median household growing much more slowly than incomes for the top 10%. (More on this data in our entry on incomes across the income distribution.)

Income inequality (Gini coefficient) and growth of living standards across the income distribution (by decile) in the USA, 1974-2013 – Brian Nolan, Stefan Thewissen, and Max Roser

The results from Stevenson and Wolfers for the US are consistent with other studies looking at changes in happiness inequality (or life satisfaction inequality) in different countries. In particular, researchers have noted that there is a correlation between economic growth and reductions in happiness inequality—even when income inequality is increasing at the same time. The visualization below, from Clark, Fleche and Senik (2015), shows this. It plots the evolution of happiness inequality within a selection of rich countries that experienced uninterrupted GDP growth.

In this chart, happiness inequality is measured by the dispersion—specifically the standard deviation—of answers in the World Values Survey. As we can see, there is a broad negative trend; happiness inequality is falling in countries where GDP per capita is rising. And the opposite is also true: in their paper the authors show that happiness inequality increases in those countries and periods in which GDP is falling.
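To make this measure concrete, here is a minimal Python sketch (with made-up answers on a four-category happiness scale like the one used in the World Values Survey, where 4 means 'very happy') showing how the standard deviation captures dispersion in reported happiness:

```python
from statistics import mean, pstdev

# Hypothetical answers on a 1-4 happiness scale (4 = very happy).
# A tighter distribution of answers means lower 'happiness inequality'.
wave_1 = [1, 2, 2, 3, 4, 4, 1, 3]   # answers are widely dispersed
wave_2 = [2, 3, 3, 3, 2, 3, 3, 2]   # answers cluster around the middle

print(mean(wave_1), pstdev(wave_1))
print(mean(wave_2), pstdev(wave_2))
# The second wave has a smaller standard deviation: by this measure,
# happiness inequality has fallen even though the average barely moved.
```

A falling standard deviation across survey waves, with a roughly unchanged mean, is exactly the pattern described above: stable average happiness alongside falling happiness inequality.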

Evolution of happiness inequality within countries with uninterrupted GDP growth – Clark, Fleche and Senik (2015)

So: Why could it be that happiness inequality falls with rising income inequality?

Clark, Fleche, and Senik argue that part of the reason is that the growth of national income allows for the greater provision of public goods, which in turn tightens the distribution of subjective well-being. This can still be consistent with growing income inequality, since public goods such as better health affect incomes and well-being differently.

Another possibility is that economic growth and social change in rich countries has translated into a more diverse society in terms of cultural expressions (e.g. through the emergence of alternative lifestyles), which has allowed people to converge in happiness even if they diverge in incomes, tastes and consumption. Steven Quartz and Annette Asp explain this hypothesis in a New York Times article, discussing evidence from experimental psychology.

Collective pessimism and our inability to guess the happiness of others

This blog post draws on data and research discussed in our entry on Happiness and Life Satisfaction.

We tend to underestimate the average happiness of people around us. The following visualization shows this for countries around the world, using data from Ipsos’ Perils of Perception—a cross-country survey asking people to guess what others in their country have answered to the happiness question in the World Values Survey.

The horizontal axis in the chart below shows the actual share of people who said they are ‘Very Happy’ or ‘Rather Happy’ in the World Values Survey; the vertical axis shows the average guess of the same number (i.e. the average guess that respondents made of the share of people reporting to be ‘Very Happy’ or ‘Rather Happy’ in their country).

If respondents had guessed the correct share, all observations would fall on the red 45-degree line. But as we can see, all countries are far below the 45-degree line. In other words, people in every country underestimated the self-reported happiness of others. The most extreme deviations are in Asia—South Koreans think that 24% of people report being happy, when in reality 90% do.

The highest guesses in this sample (Canada and Norway) are 60%—this is lower than the lowest actual value of self-reported happiness in any country in the sample (corresponding to Hungary at 69%).

Why do people get their guesses so wrong? It’s not as simple as brushing aside these numbers by saying they reflect differences in ‘actual’ vs. reported happiness.

One possible explanation is that people tend to misreport their own happiness, in which case the average guesses might be a correct indicator of true life satisfaction (and an incorrect indicator of reported life satisfaction). For this to be true, however, people would have to commonly misreport their own happiness while assuming that others do not misreport theirs.

And people are not bad at judging the well-being of other people whom they know: there is substantial evidence showing that ratings of one’s happiness made by friends correlate with one’s happiness, and that people are generally good at evaluating emotions from simply watching facial expressions.

An alternative explanation is that this mismatch is grounded in the well-established fact that people tend to be positive about themselves, but negative about other people they don’t know. It has been observed in other contexts that people can be optimistic about their own future, while at the same time being deeply pessimistic about the future of their nation or the world. We discuss this phenomenon in more detail in our entry on optimism and pessimism, specifically in a section dedicated to individual optimism and social pessimism.

“Life Expectancy” – What does this actually mean?

This blog post draws on data and research discussed in our entry on Life Expectancy.

The interactive chart below shows that life expectancy has increased substantially around the world in the last couple of centuries. As a matter of fact, the data tells us that in the long run life expectancy has increased in all countries around the world.

Life expectancy is one of the key measures of a population’s health, and an indicator used widely by policymakers and researchers to complement economic measures of prosperity, such as GDP per capita. It is easy to see that the trends in the chart below are a fantastic achievement reflecting widespread improvements in global health.

However, despite its importance and prominence in research and policy, it is surprisingly difficult to find a simple yet detailed description of what “life expectancy” actually means. In this blog post, we try to fill this gap.

What is life expectancy and how is it interpreted?

The term “life expectancy” refers to the number of years a person can expect to live. By definition, life expectancy is based on an estimate of the average age that members of a particular population group will be when they die.

In theory, estimating age-at-death is a simple exercise. Suppose we could track a group of people born in a given year, many decades ago, and observe the exact date on which each one of them died. Then we could estimate this cohort’s life expectancy by simply calculating the average of the ages of all members when they died.
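As a minimal sketch of this idealized calculation, with entirely hypothetical ages at death for a fully observed birth cohort:

```python
from statistics import mean

# Hypothetical ages at death for every member of a birth cohort
# that we could track from birth to death.
ages_at_death = [2, 45, 67, 71, 74, 80, 83, 88, 90, 95]

# The cohort's life expectancy is simply the average age at death.
cohort_life_expectancy = mean(ages_at_death)
print(cohort_life_expectancy)  # 69.5
```

Note how a single early death pulls the average well below the typical adult age at death, a point that becomes important when interpreting life expectancy figures for populations with high child mortality.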

In practice, however, things are often more complicated because record-keeping is insufficient, and because we are interested in making inferences about how long a group of people can expect to live in the future. Hence, estimating life expectancy typically requires making assumptions.

One common approach is to track members of a particular cohort (i.e. a group of individuals born in a given year) and predict the average age-at-death for them using a combination of observed mortality rates for past years and projections about mortality rates for future years. This approach leads to what is known as ‘cohort life expectancy’. By definition, the cohort life expectancy takes into account observed and projected improvements in mortality for the cohort throughout its lifetime.

An alternative approach consists in estimating the average length of life for a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed in a given year. This approach leads to what is known as ‘period life expectancy’ and is the definition used by most international organizations, including the UN and the World Bank, when reporting ‘life expectancy’ figures. Period life expectancy estimates do not take into account how mortality rates are changing and instead only look at the mortality pattern at one point in time. Because of this, period life expectancy figures are usually different from cohort life expectancy figures.

Since period life expectancy estimates are ubiquitous in research and public debate, it is helpful to use an example to flesh out the concept. Let’s consider the visualization below, which maps life expectancy—specifically period life expectancy—at birth, country by country. You can hover the mouse over a country to display the corresponding estimate.

For the US, we can see that life expectancy in 2005 was 77.6 years. This means that the cohort of infants born in the US in 2005 could expect to live 77.6 years, under the assumption that mortality patterns observed in 2005 remain constant throughout their lifetime. This is clearly a strong assumption—if you move the slider forward in the chart below, you’ll see that in 2010 the period life expectancy in the US was 78.8 years, which means that US mortality patterns did improve in the period 2005-2010.

In general, the commonly-used period life expectancies tend to be lower than the cohort life expectancies, because they do not include any assumptions about future improvements in mortality rates.

An important point to bear in mind when interpreting life expectancy estimates is that very few people will die at precisely the age indicated by life expectancy, even if mortality patterns stay constant.

For example, very few of the infants born in South Africa in 2009 will die at 52.2 years of age, as per the figures in the map above. Most will die much earlier or much later, since the risk of death is not uniform across the lifetime. Life expectancy is the average.

In societies with high infant mortality rates many people die in the first few years of life; but once they survive childhood, people often live much longer. Indeed, this is a common source of confusion in the interpretation of life expectancy figures: It is perfectly possible that a given population has a low life expectancy at birth, and yet has a large proportion of old people.

Given that life expectancy at birth is highly sensitive to the rate of death in the first few years of life, it is common to report life expectancy figures at different ages, both under the period and cohort approaches. For example, the UN estimates that the (period) global life expectancy at age 10 in 2005 was 63.6 years. This means that the group of 10-year-old children alive around the world in 2005 could expect to live another 63.6 years (i.e. until the age of 73.6), provided that mortality patterns observed in 2005 remained constant throughout their lifetime.

Finally, another point to bear in mind is that period and cohort life expectancy estimates are statistical measures, and they do not take into account any person-specific factors such as lifestyle choices. Clearly, the length of life for an average person is not very informative about the predicted length of life for a person living a particularly unhealthy lifestyle.

How is life expectancy calculated?

In practical terms, estimating life expectancy entails predicting the probability of surviving successive years of life, based on observed age-specific mortality rates. How is this actually done?

Age-specific mortality rates are usually estimated by counting (or projecting) the number of age-specific deaths in a time interval (e.g. the number of people aged 10-15 who died in the year 2005), and dividing by the total observed (or projected) population alive at a given point within that interval (e.g. the number of people aged 10-15 alive on 1 July 2005).

To ensure that the resulting estimates of the probabilities of death within each age interval are smooth across the lifetime, it is common to use mathematical formulas to model how the force of mortality changes within and across age intervals. Specifically, it is often assumed that the proportion of people dying in the age interval starting at age x and ending at age x+n corresponds to q(n,x) = 1 - e^{-n * m(n,x)}, where m(n,x) is the age-specific mortality rate as measured in the middle of that interval (a term often referred to as the ‘central death rate’ for the age interval).
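Both steps, estimating the central death rate and converting it into a probability of dying, can be sketched in a few lines of Python. All figures below are hypothetical; only the conversion formula is taken from the text:

```python
import math

# Hypothetical figures for the age interval 10-15 (width n = 5 years).
deaths = 1_200                  # deaths among 10-15 year olds in the year
midyear_population = 4_000_000  # people aged 10-15 alive at mid-year

# Central death rate m(n, x) for the interval.
m = deaths / midyear_population

# Probability of dying within the interval: q(n, x) = 1 - e^(-n * m(n, x)).
n = 5
q = 1 - math.exp(-n * m)
print(round(q, 6))  # a small probability, close to n * m for low rates
```

For low mortality rates, q is approximately n times m, but the exponential form keeps q between 0 and 1 even for very high central death rates.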

Once we have estimates of the fraction of people dying across age intervals, it is simple to calculate a ‘life table’ showing the evolving probabilities of survival and the corresponding life expectancies by age. Here is an example of a life table from the US, and this tutorial from MEASURE Evaluation explains how life tables are constructed, step by step (see Section 3.2 ‘The Fergany Method’).
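To make the mechanics concrete, here is a toy life-table calculation in the spirit of the Fergany method described in that tutorial. The central death rates below are invented, and the handling of the open-ended 85+ group (assuming survivors live 1/m more years on average) is one common simplification among several:

```python
import math

# A minimal 'period life table' sketch using hypothetical central death
# rates. Each tuple is (starting age x, interval width n, central rate m).
intervals = [
    (0, 1, 0.0070), (1, 4, 0.0005), (5, 10, 0.0002), (15, 10, 0.0008),
    (25, 10, 0.0010), (35, 10, 0.0020), (45, 10, 0.0050),
    (55, 10, 0.0120), (65, 10, 0.0300), (75, 10, 0.0800),
]
m_open = 0.15  # assumed constant death rate for the open-ended 85+ group

radix = 100_000        # starting cohort size, l(0)
survivors = radix
person_years = 0.0     # total person-years lived by the cohort

for x, n, m in intervals:
    q = 1 - math.exp(-n * m)   # probability of dying in the interval
    deaths = survivors * q
    # Assume deaths are spread evenly, i.e. occur at the midpoint on average.
    person_years += n * (survivors - deaths / 2)
    survivors -= deaths

# Survivors to age 85 live, on average, another 1/m_open years.
person_years += survivors / m_open

life_expectancy_at_birth = person_years / radix
print(round(life_expectancy_at_birth, 1))  # roughly 75 years here
```

The survivors column traced by the loop is the l(x) series of a life table; dividing the remaining person-years at any age by the survivors at that age would give the life expectancy at that age.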

Period life expectancy figures can be obtained from ‘period life tables’ (i.e. life tables that rely on age-specific mortality rates observed from deaths among individuals of different age groups at a fixed point in time). And similarly, cohort life expectancy figures can be obtained from ‘cohort life tables’ (i.e. life tables that rely on age-specific mortality rates observed from tracking and forecasting the death and survival of a group of people as they become older).

For some countries and for some time intervals, it is only possible to reconstruct life tables from either period or cohort mortality data. As a consequence, in some instances—for example in obtaining historical estimates of life expectancy across world regions—it is necessary to combine period and cohort data. In these cases, the resulting life expectancy estimates cannot be simply classified into the ‘period’ or ‘cohort’ categories.

What else can we learn from ‘life tables’?

Life tables are not just instrumental to the production of life expectancy figures (as noted above); they also provide many other perspectives on the mortality of a population. For example, they allow for the production of ‘population survival curves’, which show the share of people who are expected to survive to various successive ages. The chart below provides an example, plotting survival curves for individuals born at different points in time, using cohort life tables from England and Wales.

At any age on the horizontal axis, the curves in the chart below mark the estimated proportion of individuals expected to survive past that age. As we can see, less than half of the people born in 1851 in England and Wales made it past their 50th birthday. In contrast, more than 95% of the people born in England and Wales today can expect to live longer than 50 years.

Since life expectancy estimates only describe averages, these indicators are complementary, and help us understand how health is distributed across time and space. In our entry on Life Expectancy you can read more about related complementary indicators, such as the median age of a population.

Yields vs. Land Use: How the Green Revolution enabled us to feed a growing population

This blog post draws on data and research discussed in our entry on Food per Person, and Yields and Land Use in Agriculture.

Over the last 50 years, the global population has more than doubled. This growth has inevitably reduced the amount of land available per person on which to live and grow food. How have we managed to feed a rapidly growing population with ever-shrinking land resources?

There are two key variables we can change to produce more food crops. We can opt for:

  • Expansion: increase the area of land over which we grow our food
  • Intensification: increase the yield output (i.e. kilograms of crop produced per unit area of land). This is typically achieved through a combination of chemical inputs (such as fertilizer, pesticides and herbicides); improved water use (e.g. irrigation); mechanization and improved farming practices; and the use of higher-yielding crop strains or seeds

Global cereal production

At the global level, how has crop production changed over the last fifty years? Here, we focus on cereal production: cereals form the base component of energy intake in most diets, comprising more than half of total caloric intake in many countries, and they also dominate global arable land use by area. In the chart below we have plotted four variables: total cereal production; average cereal yield; land area used for cereal production; and total population. These are measured as an index relative to their respective values in 1961 (i.e. 1961 is equal to 100).

From 1961 to 2014, global cereal production increased by 280 percent. If we compare this increase to that of the total population (which increased by only 136 percent over the same period), we see that global cereal production has grown at a much faster rate than the population: cereal production per person has increased despite a growing population.
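The per-person claim follows directly from the two index numbers quoted above. A quick sketch of the arithmetic (both series indexed to 1961 = 100):

```python
# A 280% increase means an index of 380; a 136% increase means 236.
production_index_2014 = 100 + 280   # global cereal production
population_index_2014 = 100 + 136   # global population

# Per capita production is the ratio of the two indices.
per_capita_ratio = production_index_2014 / population_index_2014
print(round(per_capita_ratio, 2))  # 1.61: output per person rose ~61%
```

Since both indices share the same 1961 base, their ratio directly gives the change in output per person.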

Have we achieved this through land expansion or improved yields? As we can see in the chart, expansion played a very small role: over the last few decades land use for cereal production has increased only marginally. In 2014, we used 16% more land for cereal production than we did in 1961 (approximately equivalent to double the area of Germany). Overall, this means we use less land per person than we did fifty years ago.

Most of our improvements in cereal production have arisen from improvements in yield. The average cereal yield has increased by 175 percent since 1961. Today, the world can produce almost three times as much cereal from a given area of land as it did in 1961. As we will explain below, this increase has been even more dramatic in particular regions.

Although food is globally traded, the relative distribution of food production is crucial to food security. The evolution of these trends at national and regional levels is therefore critical. Below we have explored a number of varied and interesting examples of these trends across the world. The change in cereal production, population growth, and the relative contribution of yield gains and land expansion are different in each.

The Green Revolutionaries: Mexico, India, China, and Brazil

One of the most critical turning points for global agriculture was the so-called ‘Green Revolution’, which began in the mid-20th century. The Green Revolution is used to describe the large-scale transfer and adoption of new technologies in the agricultural sector, particularly in the developing world. These technologies included chemical inputs (such as fertilizers and pesticides), irrigation technologies, farm mechanization (such as tractors), and high-yielding rice, wheat and maize seed varieties. Overall, this led to a significant shift in agriculture from ‘traditional’ to ‘industrial’ practices across the developing world. However, progress and adoption has not been equal across countries.


In the chart below we see cereal production, land use and yield trends for Mexico over the period 1961 to 2014. Mexico was one of the first countries to be linked to the Green Revolution of the 20th century. As we can see, the increase in cereal production in Mexico has greatly exceeded population growth. And while agricultural expansion has played some role in this (land use for cereal production increased by 32 percent), the adoption of advanced agricultural technologies and practices has led to impressive yield gains of 224 percent. Because Mexico’s total cereal output increased faster than its population, the availability of cereals per capita increased over this period.


In the chart below we see the evolution of India’s agricultural system since 1961. In the 1960s and the decades which followed, India also underwent a Green Revolution in agricultural production and reform. In order to address recurring cycles of severe famine under British rule in the 18th, 19th and first half of the 20th century, independent India made large economic, technological and social investments in improved agricultural practices—particularly in wheat and rice crops. The adoption of higher-yielding cereal varieties, subsidization of fertilizer and irrigation inputs, and investments in agricultural research and development all led to rapid gains in cereal yield.

Shown below, we see that since around 1970, cereal output has grown at a faster rate than India’s population (increasing by 238 and 182 percent, respectively). In contrast to Mexico, this increase in production has been achieved almost entirely through agricultural intensification and yield improvements. Throughout this period, the total area of land used for cereal production barely changed. This reduction in area needed per person is particularly important for countries like India, where the total population continues to increase: over the next 50 years, it is projected that India will have around 400-450 million extra mouths to feed.


China’s transition—shown in the chart below—has several parallels to that of India. China has managed to achieve an impressive increase in cereal output (increasing 420 percent) with almost no expansion of area used for cereal production. As a result, the share of China’s population who are undernourished has fallen from 24 percent to less than 10 percent since 1990 alone.

This growth is almost entirely attributable to improvements in productivity and yield. China has, however, opened a much wider gap than India between cereal output and population growth (its increases in per capita cereal production and caloric supply have been much more dramatic). What is this additional cereal crop used for? Per capita caloric supply in China has more than doubled over the last 50 years; however, only 44 percent of cereal production is used domestically for food. Nearly the same share—40 percent—was used for animal feed in 2013, with smaller quantities diverted to industrial uses such as biofuel production.


Brazil’s cereal production has increased by a remarkable 574 percent since 1961—well above its population increase of 175 percent.

Unlike India and China, Brazil has achieved this through a combination of both yield improvements and land use expansion, and it thus provides a clear example of the trade-off between extensification and intensification. Over the 1960s and 1970s, Brazil’s land area under cereal production almost doubled—this coincides with a 20-year period of almost stagnant cereal yields. This expansion has been of particular concern from an environmental perspective, with the expansion of agriculture often happening at the expense of forested areas and ecologically important regions such as the Amazon.

From 1980 onwards, however, we see the inverse of this relationship. Since 1980, cereal yields have increased almost three-fold, allowing the land under cereal production to remain almost unchanged. This development aptly highlights the transition from extensification (low yields, large area expansion) in the 1960s and 1970s to intensification (increasing yields, constant land use) from 1980 onwards.

Sub-Saharan Africa

The adoption and success of the Green Revolution has not been consistent across the developing world. Sub-Saharan Africa (SSA) has been a region of particular concern in terms of food security. Despite making significant progress in reducing hunger in recent decades, undernourishment in Sub-Saharan Africa remains the highest in the world (with almost one-in-five people living there defined as undernourished).

In the chart below we see that SSA’s cereal production has been unable to keep pace with population growth. Despite an increase in cereal output of around 300 percent, per capita output has been declining. Overall, we see much greater emphasis on agricultural expansion in SSA, increasing by 120 percent since 1961—approximately equivalent to the total area of Kenya. Relative to Asia and Latin America, SSA’s improvements in yield have been much more modest (increasing by only 80 percent).

If we look at trends on an individual country basis (by selecting ‘Zimbabwe’, for example, using the “change country” wheel on the chart below) we see that the agricultural systems of many countries across SSA suffer from large volatility in production, yield and land allocation. In the years to come, it will be crucial for SSA to effectively adopt new technologies and practices to ensure steady and consistent improvements in yield at a faster rate than it has achieved to date.

United States

Since 1961, the US population has grown modestly, while cereal production has grown well in excess of this rate. Overall, land allocated to cereal production in the United States has actually declined by around ten percent, which means that yields have grown at a faster rate than total cereal output. However, since US yields were already high in 1961 relative to countries such as Mexico, India, and China, the overall improvements have been slightly more modest.

Like China, while some of this additional per capita output is consumed by humans (per capita consumption increased over this period), much—indeed, in this case the overwhelming majority—of cereals produced in the United States are diverted to other uses. Based on data from the UN FAOstat database, in 2013 only 8 percent of cereal crop grown in the USA was eaten by US citizens; 32 percent was fed to animals; and 32 percent was allocated for industrial uses, such as biofuel production.


Germany

Here we use Germany as representative of a high-income country with a stable population size—in this regard, its trends approximate those of many countries across Europe. Below we see that despite a roughly constant population size, Germany’s cereal output has continued to increase. As in the United States, the total land used for cereal production has marginally declined over this period. Increases in cereal output have therefore been achieved through yield improvements alone.

Again, some of this increase in per capita cereal output is reflected in the higher consumption levels of the population. However, as in the US and China, the majority of domestic cereal production is allocated to uses other than domestic food consumption. In 2013, only 19 percent of Germany’s cereal production was eaten by German citizens, in contrast to the 56 percent which was fed to animals. Cereal exports from Germany are also high, accounting for around 35 percent of production in 2013.

How much land has been spared as a result of gains in cereal yields?

Although there are a few exceptions—notably across Sub-Saharan Africa—the continued increase in cereal yields across the world has been the major driver of total cereal production. This has inevitably allowed us to ‘spare’ land we would have otherwise had to convert for cereal production.

In the chart below we see that the global area under cereal production (in blue) increased from 625 to 721 million hectares between 1961 and 2014. For context, this difference is approximately equal to one-tenth of the area of the United States. If global average cereal yields had remained at their 1961 levels, the red series shows the amount of additional land we would have had to convert to arable land to achieve the same levels of cereal production. This ‘spared’ land amounts to 1.26 billion hectares in 2014—close to the area of the United States and India combined.
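The ‘spared land’ figure can be reproduced from the numbers above: the land actually used in 2014 and the factor by which average yields improved since 1961 (a 175 percent gain, i.e. 2.75 times the 1961 level):

```python
# Figures from the text (land in million hectares).
land_used_2014 = 721   # area actually under cereal production in 2014
yield_ratio = 2.75     # 2014 average yield relative to 1961

# At 1961 yields, the 2014 harvest would have required:
land_needed_at_1961_yields = land_used_2014 * yield_ratio

spared_land = land_needed_at_1961_yields - land_used_2014
print(round(spared_land))  # 1262, i.e. roughly 1.26 billion hectares
```

This back-of-the-envelope calculation matches the 1.26 billion hectare figure cited above, since total production equals land area multiplied by average yield.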

We currently use approximately 50 percent of global habitable land for agriculture. Without yield increases, this may have risen to 62 percent. Such agricultural expansion would likely have crept into fertile forested land. The UN FAO estimates that since around 1960 we have lost just over 400 million hectares of forest. If land for cereals had replaced forested land, we would therefore have lost four times more forest than we actually have.

When did the provision of healthcare first become a public policy priority?

This blog post draws on data and research discussed in our entry on Financing Healthcare.

Today, healthcare is commonly considered a ‘merit good’—a commodity an individual should have on the basis of need rather than ability and willingness to pay. This view, mainly grounded in the recognized positive externalities of healthcare, is reflected in the fact that access to healthcare is currently a constitutional right in many countries.

However, a couple of centuries ago, the situation was very different. In fact, during the Middle Ages, personal health was considered a matter of destiny across most of Western Europe.

So, when was it that the provision of healthcare became a public policy priority? To answer this question, we piece together a new dataset and give you a behind-the-scenes look at our approach.

As we discuss below in more detail, the data shows that it was only recently that public healthcare became a policy priority. Up until 1930, public spending on healthcare was less than 1% of national incomes everywhere in the world. For context, the countries with the highest public healthcare spending levels today devote close to 10% of their national incomes to it.

Available sources

Reliable historical data on healthcare expenditure is scant. To our knowledge, the longest series on healthcare expenditure is available from Tanzi and Schuknecht (2000).

Tanzi and Schuknecht compile estimates from various sources, covering the period 1880-1994.

Because the most recent observations in Tanzi and Schuknecht come from the World Bank’s World Development Indicators (WDI), we turn to the same source to try to extend the series forward from 1995 to 2014. Unfortunately, a comparison of the WDI series and the series in Tanzi and Schuknecht reveals that this is not possible since there is a clear discrepancy in levels—the WDI observations for 1995 imply a marked jump with respect to the 1994 observations reported by Tanzi and Schuknecht. This discrepancy arises from the fact that the World Health Organization—which is the underlying source of the WDI series—has revised its methodology and estimates. Hence, the estimates compiled by Tanzi and Schuknecht are not comparable to more recent estimates from the same underlying source.

An alternative source of up-to-date data on healthcare spending is the OECD, via the OECD.stat portal. This source publishes estimates for the period 1970-2016. However, once again, this series is consistent neither with the World Bank series nor with the estimates compiled by Tanzi and Schuknecht.

The discrepancies between these sources are mainly due to differences in definitions regarding the types of expenditure that are accounted for. In particular, the figures published by OECD.stat do not include capital expenditures as part of total spending on healthcare, public or private.

Whether reporting agencies include capital expenditure as part of healthcare expenditure depends on the ‘System of Health Accounts’ (SHA) that is in place. The OECD figures use the SHA 2011, which is the latest revision. The WHO figures use a previous version (SHA 1.0).

While it is widely accepted that the SHA 2011 is superior to previous versions, many countries still use previous versions. This means there is a trade-off between coverage across countries, and data quality. In the interest of coverage, we have chosen to rely on WHO definitions.

Piecing together a new series

Given these difficulties, we decided to piece together a new series using information from various sources based on a number of assumptions.

Our approach was to take the latest release of the data from the World Health Organization (WHO) and extrapolate backwards using the rates of change implied by the sources underlying the estimates from Tanzi and Schuknecht and others.

In the end, our dataset was constructed by combining four sources. These are Lindert (1994), OECD (1993), OECD.stat, and the WHO Global Health Expenditure Database (WHO GHED).

These four sources were combined as follows.

  • For the period 1995-2015, we report the figures as published in WHO GHED.
  • For the period 1970-1994, we extend the recent figures (from WHO GHED) backwards using the rates of change reported in OECD.stat.
  • For the period 1960-1969, we continue extending backwards using the rates of change reported in OECD (1993).
  • For the period 1880-1930, we report the observations from Lindert (1994).

To be precise, the process of extrapolation consisted of taking the earliest available observation from WHO GHED and then successively extending the series backwards: first using the year-by-year rate of change implied by the estimates in OECD.stat, for the period 1970-1994; and then using the year-by-year rate of change in OECD (1993), for the period 1960-1969.
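The backward-splicing step described above can be sketched as follows. This is an illustrative reconstruction, not OWID's actual code: it assumes annual observations and that the older source covers consecutive years up to (and including) the anchor year.

```python
def splice_backwards(anchor_year, anchor_value, older_series):
    """Extend a series backwards from an anchor observation.

    anchor_year / anchor_value: the earliest observation from the
    newer source (e.g. WHO GHED).
    older_series: dict mapping year -> value from an older source
    (e.g. OECD.stat), covering consecutive years up to anchor_year.
    Only the *rates of change* of the older source are used; its
    levels are discarded.
    """
    spliced = {anchor_year: anchor_value}
    # Walk backwards, one year at a time.
    for year in sorted((y for y in older_series if y < anchor_year),
                       reverse=True):
        # Year-on-year rate of change implied by the older source.
        rate = older_series[year + 1] / older_series[year]
        spliced[year] = spliced[year + 1] / rate
    return spliced
```

For example, if the WHO GHED series starts in 1995, the 1994 value is the 1995 WHO figure scaled down by the 1994-to-1995 growth rate in OECD.stat, and so on back to 1970. This preserves the trend of the older sources while matching the level of the newest one.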

The implicit assumption in our constructed dataset is that the estimates from these three series have different levels, but common trends. Empirically, the validity of this assumption is supported by the trends in the overlapping years.

The following chart plots, country by country, the underlying sources. If you want to compare our constructed dataset against the underlying sources for a specific country, all you need to do is select the series labelled ‘OWID extrapolated series’ at the top of the chart.

The rise of public healthcare

The visualization below plots our new dataset of healthcare estimates for a selection of today’s rich countries.

Public expenditure on healthcare in all of these countries followed roughly similar paths—and this is despite early differences in their healthcare regimes (for a detailed account of the institutional evolution of healthcare regimes in these countries see the report prepared by CESifo DICE).

As we can see, before World War I, public spending on healthcare was negligible, accounting for less than 1% of national incomes in all countries. Yet in the second half of the twentieth century things started changing quickly, and today public healthcare spending is between 5% and 10% of national incomes.

The same data from the chart above is mapped in the visualization below. As we can see, after half a century of expanding their healthcare systems, OECD countries today spend much more on healthcare than lower-income countries, where the expansion of healthcare protection started decades later.

And the differences in healthcare spending between countries today are larger still if we compare per capita figures rather than shares of GDP. As we show in this interactive chart, per capita healthcare spending in the US is 387 times higher than in the Central African Republic; which means that, on average, Americans spend more on healthcare per day than people in the Central African Republic spend per year.
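The day-versus-year comparison follows directly from the arithmetic: a ratio of 387 exceeds the 365 days in a year. A minimal check, using the 387x figure from the text:

```python
# Measure everything in units of CAR annual per capita spending (= 1).
car_per_year = 1.0
us_per_year = 387 * car_per_year  # US spends 387x as much per capita
us_per_day = us_per_year / 365    # US annual spending spread over one year
print(us_per_day > car_per_year)  # True: one US day exceeds one CAR year
```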