
AI timelines: What do experts in artificial intelligence expect for the future?

Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

Artificial intelligence (AI) that surpasses our own intelligence sounds like the stuff of science fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously?

A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is “able to learn to do anything that a human can do”, as Russell and Norvig put it in their textbook on AI.1

It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. It would be able to do the work of a translator, a doctor, an illustrator, a teacher, a therapist, a driver, or the work of an investor.

In recent years, several research teams contacted AI experts and asked them about their expectations for the future of machine intelligence. Such expert surveys are one of the pieces of information that we can rely on to form an idea of what the future of AI might look like.

The chart shows the answers of 352 experts. This is from the most recent study by Katja Grace and her colleagues, conducted in the summer of 2022.2

Experts were asked when they believe there is a 50% chance that human-level AI exists.3 Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers. More information about the study can be found in the fold-out box at the end of this text.4

Each vertical line in this chart represents the answer of one expert. The fact that there are such large differences in answers makes it clear that experts do not agree on how long it will take until such a system might be developed. A few believe that this level of technology will never be developed. Some think that it’s possible, but it will take a long time. And many believe that it will be developed within the next few decades.

As highlighted in the annotations, half of the experts gave a date before 2061, and 90% gave a date within the next 100 years.

Other surveys of AI experts come to similar conclusions. In the following visualization, I have added the timelines from two earlier surveys conducted in 2018 and 2019. It is helpful to look at different surveys, as they differ in how they asked the question and how they defined human-level AI. You can find more details about these studies at the end of this text.

In all three surveys, we see large disagreement among experts, and they also express large uncertainty about their own individual forecasts.5

What should we make of the timelines of AI experts?

Expert surveys are one piece of information to consider when we think about the future of AI, but we should not overstate the results of these surveys. Experts in a particular technology are not necessarily experts in making predictions about the future of that technology.

Experts in many fields do not have a good track record in making forecasts about their own field, as researchers including Barbara Mellers, Phil Tetlock, and others have shown.6 The history of flight includes a striking example of such a failure. Wilbur Wright is quoted as saying, "I confess that in 1901, I said to my brother Orville that man would not fly for 50 years." Two years later, ‘man’ was not only flying, but it was these very men who achieved the feat.7

Additionally, these studies often find large ‘framing effects’: two logically identical questions get answered in very different ways depending on how exactly they are worded.8

What I do take away from these surveys however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy.

The large majority thinks that in the coming decades there is an even chance that we will see AI technology with a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys, more than half think that there is a 50% chance that a human-level AI will be developed before some point in the 2060s, a time well within the lifetime of today’s young people.

The forecast of the Metaculus community

In the big visualization on AI timelines below, I have included the forecast by the Metaculus forecaster community.

The forecasters on the online platform Metaculus.com are not experts in AI but people who dedicate their energy to making good forecasts. Research on forecasting has documented that groups of people can assign surprisingly accurate probabilities to future events when given the right incentives and good feedback.9 To receive this feedback, the online community at Metaculus tracks how well they perform in their forecasts.

What does this group of forecasters expect for the future of AI?

At the time of writing, in November 2022, the forecasters believe that there is a 50/50 chance of an ‘Artificial General Intelligence’ being ‘devised, tested, and publicly announced’ by the year 2040, less than 20 years from now.

On their page about this specific question, you can find the precise definition of the AI system in question, how the timeline of their forecasts has changed, and the arguments of individual forecasters for how they arrived at their predictions.10

The timelines of the Metaculus community have become much shorter recently: they shortened by about a decade in the spring of 2022, when several impressive AI breakthroughs happened faster than many had anticipated.11

The forecast by Ajeya Cotra

The last forecast shown stems from research by Ajeya Cotra, who works for the nonprofit Open Philanthropy.12 In 2020 she published a detailed and influential study asking when the world will see transformative AI. Her timeline is not based on surveys, but on the study of long-term trends in the computation used to train AI systems. I present and discuss these long-run trends in training computation in this companion article.

Cotra estimated that there is a 50% chance that a transformative AI system will become possible and affordable by the year 2050. This is her central estimate in her “median scenario.” Cotra emphasizes that there are substantial uncertainties around this median scenario, and also explored two other, more extreme, scenarios. The timelines for these two scenarios – her “most aggressive plausible” scenario and her “most conservative plausible” scenario – are also shown in the visualization. The span from 2040 to 2090 in Cotra’s “plausible” forecasts highlights that she believes that the uncertainty is large.

The visualization also shows that Cotra updated her forecast two years after its initial publication. In 2022 Cotra published an update in which she shortened her median timeline by a full ten years.13

It is important to note that the definitions of the AI systems in question differ very much across these various studies. For example, the system that Cotra speaks about would have a much more transformative impact on the world than the system that the Metaculus forecasters focus on. More details can be found in the appendix and within the respective studies.

What can we learn from the forecasts?

The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.

There are two big takeaways from these forecasts on AI timelines:

  1. There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months. There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own estimates. As always when the uncertainty is high, it is important to stress that it cuts both ways: it might be a very long time until we see human-level AI, but it also means that we might have very little time to prepare.
  2. At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.

The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.

We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.


Acknowledgements: I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Bastian Herre, Edouard Mathieu, Esteban Ortiz-Ospina and Hannah Ritchie for their helpful comments on drafts of this essay.

And I would like to thank my colleague Charlie Giattino who calculated the timelines for individual experts based on the data from the three survey studies and supported the work on this essay. Charlie is also one of the authors of the cited study by Zhang et al. on timelines of AI experts.


More information about the studies and forecasts discussed in this essay

The three cited AI expert surveys are:

  • Grace et al., published in 2022
  • Zhang et al., published in 2022
  • Gruetzemacher et al., published in 2019

The surveys were conducted during the following times:

  • Grace et al. was completed between 12 June and 3 August 2022.
  • Zhang et al. was completed mainly between 16 September and 13 October 2019; due to an error, some experts completed the survey between 10 and 14 March 2020.
  • Gruetzemacher et al. was completed in the “summer of 2018”.

The surveys differ in how the question was asked and how the AI system in question was defined. In the following sections we discuss this in detail for all cited studies.

The study by Grace et al. published in 2022

Survey respondents were given the following text regarding the definition of high-level machine intelligence:

“The following questions ask about ‘high-level machine intelligence’ (HLMI). Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. For the purposes of this question, assume that human scientific activity continues without major negative disruption.”

Each respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.”

Those in the fixed-probability framing were asked, “How many years until you expect: A 10% probability of HLMI existing? A 50% probability of HLMI existing? A 90% probability of HLMI existing?” They responded by giving a number of years from the day they took the survey.

Those in the fixed-years framing were asked, “How likely is it that HLMI exists: In 10 years? In 20 years? In 40 years?” They responded by giving a probability of that happening.
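To turn such answers into the per-expert timelines shown in the chart, one needs to find where a respondent’s stated probabilities cross 50%. A minimal sketch of one plausible approach, linear interpolation between a respondent’s answers, is below; the function name and the example probabilities are hypothetical illustrations, not taken from the survey data or the study’s actual code.

```python
# Hypothetical sketch: converting one respondent's fixed-years answers
# into an estimated calendar year for a 50% chance of HLMI,
# via linear interpolation. The answers below are made up.

def year_for_probability(survey_year, answers, target=0.50):
    """answers: list of (years_from_now, probability), sorted by years.
    Returns the calendar year at which the interpolated probability
    crosses `target`, or None if it never does within the given points."""
    for (y0, p0), (y1, p1) in zip(answers, answers[1:]):
        if p0 <= target <= p1:
            # Linear interpolation between the two surrounding points
            frac = (target - p0) / (p1 - p0)
            return survey_year + y0 + frac * (y1 - y0)
    return None

# A respondent under the fixed-years framing: 10% in 10 years,
# 45% in 20 years, 80% in 40 years (hypothetical answers).
answers = [(10, 0.10), (20, 0.45), (40, 0.80)]
print(year_for_probability(2022, answers))  # ~2044.9 for these made-up answers
```

Real aggregation across respondents is more involved (e.g., fitting a distribution to each respondent’s three points before averaging), but the basic step of reading off the 50% crossing point is the same.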

Several studies have shown that the framing affects respondents’ timelines, with the fixed-years framing leading to longer timelines (i.e., that HLMI is further in the future). For example, in the previous edition of this survey (which asked identical questions), respondents who got the fixed-years framing gave a 50% chance of HLMI by 2068; those who got fixed-probability gave the year 2054.14 The framing results from the 2022 edition of the survey have not yet been published.

In addition to this framing effect, there is a larger effect driven by how the concept of HLMI is defined. We can see this in the results from the previous edition of this survey (the result from the 2022 survey hasn’t yet been published). For respondents who were given the HLMI definition above, the average forecast for a 50% chance of HLMI was 2061. A small subset of respondents was instead given another, logically similar question that asked about the full automation of labor; their average forecast for a 50% probability was 2138, a full 77 years later than the first group.

The full automation of labor group was asked: “Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.” This question was asked under both the fixed-probability and fixed-years framings.

The study by Zhang et al. published in 2022

Survey respondents were given the following definition of human-level machine intelligence: “Human-level machine intelligence (HLMI) is reached when machines are collectively able to perform almost all tasks (>90% of all tasks) that are economically relevant better than the median human paid to do that task in 2019. You should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury.”

“Economically relevant” tasks were defined as those included in the Occupational Information Network (O*NET) database. O*NET is a widely used dataset of tasks carried out across a wide range of occupations.

As in Grace et al. 2022, each survey respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.” As was found before, the fixed-years framing resulted in longer timelines on average: the year 2070 for a 50% chance of HLMI, compared to 2050 under the fixed-probability framing.

The study by Gruetzemacher et al. published in 2019

Survey respondents were asked the following: “These questions will ask your opinion of future AI progress with regard to human tasks. We define human tasks as all unique tasks that humans are currently paid to do. We consider human tasks as different from jobs in that an algorithm may be able to replace humans at some portion of tasks a job requires while not being able to replace humans for all of the job requirements. For example, an AI system(s) may not replace a lawyer entirely but may be able to accomplish 50% of the tasks a lawyer typically performs. In how many years do you expect AI systems to collectively be able to accomplish 99% of human tasks at or above the level of a typical human? Think feasibility.”

We show the results using this definition of AI in the chart, as we judged this definition to be most comparable to the other studies included in the chart.

In addition to this definition, respondents were asked about AI systems that are able to collectively accomplish 50% and 90% of human tasks, as well as “broadly capable AI systems” that are able to accomplish 90% and 99% of human tasks.

All respondents in this survey received a fixed-probability framing.

The study by Ajeya Cotra published in 2020

Cotra’s overall aim was to estimate when we might expect “transformative artificial intelligence” (TAI), defined as “ ‘software’... that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.”

Cotra focused on “a relatively concrete and easy-to-picture way that TAI could manifest: as a single computer program which performs a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution.”

One intuitive example of such a program is the ‘virtual professional’, “a model that can do roughly everything economically productive that an intelligent and educated human could do remotely from a computer connected to the internet at a hundred-fold speedup, for costs similar to or lower than the costs of employing such a human.”

When might we expect something like a virtual professional to exist?

To answer this, Cotra first estimated the amount of computation that would be required to train such a system using the machine learning architectures and algorithms available to researchers in 2020. She then estimated when that amount of computation would be available at a low enough cost based on extrapolating past trends.

The estimate of training computation relies on an estimate of the amount of computation performed by the human brain each second, combined with different hypotheses for how much training would be required to reach a high enough level of capability.

For example, the “lifetime anchor” hypothesis estimates the total computation performed by the human brain up to age ~32.
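The basic arithmetic behind this kind of estimate can be sketched as follows. Every number here is an illustrative placeholder chosen for the example, not one of Cotra’s actual figures, and her study works with full probability distributions over these quantities rather than single point values.

```python
import math

# Illustrative sketch of the compute-extrapolation arithmetic behind
# timelines like Cotra's. All numbers are placeholder assumptions.

brain_flop_per_s = 1e15        # assumed brain computation per second
seconds_per_year = 3.15e7
lifetime_years = 32            # the "lifetime anchor" horizon
inefficiency_factor = 1e6      # assumed overhead of ML training relative
                               # to a single human lifetime of experience

# Training compute required under this toy version of the hypothesis:
required_flop = (brain_flop_per_s * seconds_per_year
                 * lifetime_years * inefficiency_factor)

affordable_flop_2020 = 1e24    # assumed largest affordable training run in 2020
annual_growth = 2.0            # assumed yearly growth of affordable compute

# Years until affordable compute reaches the requirement:
years = math.log(required_flop / affordable_flop_2020, annual_growth)
print(2020 + math.ceil(years))  # -> 2040 with these placeholder numbers
```

The point of the sketch is the structure of the argument: an estimate of required training computation, an estimate of currently affordable computation, and an extrapolated growth rate together imply a crossing year.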

Each aspect of these estimates comes with a very high degree of uncertainty. Cotra writes: “The question of whether there is a sensible notion of ‘brain computation’ that can be measured in FLOP/s—and if so, what range of numerical estimates for brain FLOP/s would be reasonable—is conceptually fraught and empirically murky.”

For anyone who is interested in the question of future AI, the study of Cotra is very much worth reading in detail. She lays out good and transparent reasons for her estimates and communicates her reasoning in great detail.

Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – Draft report on AI timelines. As far as I know the report itself always remained a ‘draft report’ and was published here on Google Docs (it is not uncommon in the field of AI research that articles get published in non-standard ways). In 2022 Ajeya Cotra published a Two-year update on my personal AI timelines.

Other studies

A very different kind of forecast that is also relevant here is the work of David Roodman. In his article Modeling the Human Trajectory he studies the history of global economic output to think about the future. He asks whether it is plausible to see economic growth that could be considered ‘transformative’ – an annual growth rate of the world economy higher than 30% – within this century. One of his conclusions is that "if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI.”

And another very different kind of forecast is Tom Davidson’s Report on Semi-informative Priors published in 2021.

Endnotes

  1. Stuart Russell and Peter Norvig (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

  2. A total of 4,271 AI experts were contacted; 738 responded (a 17% response rate), of which 352 provided complete answers to the human-level AI question. It’s possible that the respondents were not representative of all the AI experts contacted – that is, that there was “sample bias.” There is not enough data to rule out all potential sources of sample bias. After all, we don’t know what the people who didn’t respond to the survey, or others who weren’t even contacted, believe about AI. However, there is evidence from similar surveys to suggest that at least some potential sources of bias are minimal.

    In similar surveys (e.g., Zhang et al. 2022; Grace et al. 2018), the researchers compared the group of respondents with a randomly selected, similarly sized group of non-respondents to see if they differed on measurable demographic characteristics, such as where they were educated, their gender, how many citations they had, years in the field, etc.

    In these similar surveys, the researchers found some differences between the respondents and non-respondents, but they were small. So while other, unmeasured sources of sample bias couldn’t be ruled out, large bias due to the demographic characteristics that were measured could be ruled out.

  3. Much of the literature on AI timelines focuses on the 50% probability threshold. I think it would be valuable if this literature additionally focused on higher thresholds, say an 80% probability for the development of a particular technology. In future updates of this article we will aim to broaden the focus and include such higher thresholds.

  4. A discussion of the two most widely used concepts for thinking about the future of powerful AI systems – human-level AI and transformative AI – can be found in this companion article.

  5. The visualization shows when individual experts gave a 50% chance of human-level machine intelligence. The surveys also include data on when these experts gave much lower chances (~10%) as well as much higher ones (~90%), and the spread between the respective dates is often considerable, reflecting the range of each expert’s individual uncertainty. For example, averaged across individual experts, the Zhang et al. study found a 10% chance of human-level machine intelligence by 2035, a 50% chance by 2060, and a 90% chance by 2105.

  6. Mellers, B., Tetlock, P., & Arkes, H. R. (2019). Forecasting tournaments, epistemic humility and attitude depolarization. Cognition, 188, 19-26.

    Tetlock, P. (2005) – Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press

    Philip E. Tetlock and Dan Gardner (2015) – Superforecasting: The Art and Science of Prediction.

  7. Another example is Ernest Rutherford, father of nuclear physics, calling the possibility of harnessing nuclear energy "moonshine." The research paper by John Jenkin discusses why. John G. Jenkin (2011) – Atomic Energy is ‘‘Moonshine’’: What did Rutherford Really Mean?. Published in Physics in Perspective. DOI 10.1007/s00016-010-0038-1

  8. This is discussed in some more detail for the study by Grace et al. in the Appendix.

  9. See the previously cited literature on forecasting by Barbara Mellers, Phil Tetlock, and others.

  10. There are two other relevant questions on Metaculus. The first asks for the date when weakly General AI will be publicly known. The second asks for the probability of ‘human/machine intelligence parity’ by 2040.

  11. Metaculus’s community prediction fell from the year 2058 in March 2022 to the year 2040 in July 2022.

  12. Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – Draft report on AI timelines. As far as I know the report itself always remained a ‘draft report’ and was published here on Google Docs.

    In 2022 Ajeya Cotra published a Two-year update on my personal AI timelines.

  13. Ajeya Cotra’s Two-year update on my personal AI timelines.

  14. Grace et al. (2018) – Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research. We read both of these numbers off the chart in this publication; the years are not directly reported.

Cite this work

Our articles and data visualizations rely on work from many different people and organizations. When citing this article, please also cite the underlying data sources. This article can be cited as:

Max Roser (2023) - “AI timelines: What do experts in artificial intelligence expect for the future?” Published online at OurWorldInData.org. Retrieved from: 'https://ourworldindata.org/ai-timelines' [Online Resource]

BibTeX citation

@article{owid-ai-timelines,
    author = {Max Roser},
    title = {AI timelines: What do experts in artificial intelligence expect for the future?},
    journal = {Our World in Data},
    year = {2023},
    note = {https://ourworldindata.org/ai-timelines}
}

Reuse this work freely

All visualizations, data, and code produced by Our World in Data are completely open access under the Creative Commons BY license. You have the permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.

The data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our documentation, so you should always check the license of any such third-party data before use and redistribution.

All of our charts can be embedded in any site.