When will we realise artificial general intelligence?

The realisation of artificial general intelligence (AGI), that is, human-built systems able to accomplish most human tasks, would be an event of tremendous significance for the future of humanity and all earthly life. AGI systems might possess capabilities vastly exceeding our own and goals radically different from ours, destabilise international power structures, and alter society fundamentally by populating the world with digital minds. If AGI is technologically feasible, there is arguably a good chance that it will be realised within the next 50 years, potentially making this the most important century in the lifespan of humanity.

Some researchers see the human brain as a physical ‘proof-of-concept’ that AGI is possible. The Church-Turing thesis posits that any computable function can be computed by a Turing machine, an extremely simple computer, given sufficient time and memory. The human brain outwardly appears to operate via physical laws that are, in principle, computable. Modern neural network architectures are Turing-complete and can thus reproduce any computable function, even if doing so may require an astronomical amount of computational power, memory and training. Many complex cognitive tasks that were once thought impossible for computers have recently been accomplished by AIs built from deep learning neural networks, such as mastering the game of Go, generating fluent natural language and predicting the structure of proteins.

If we accept the computational theory of mind, the question of whether neural networks can accomplish a given human task seems to reduce to whether we can accomplish it with networks small enough to train at realistic computational cost.

A baseline estimate of the nearness of AGI can be determined from expert consensus. When asked in 2016 what year an AI would first outperform humans at all tasks, 300 top machine learning researchers estimated on average a 50% probability of this occurring by 2060 and a 70% probability by 2100. At the time of writing, the median prediction on the online crowdsourced forecasting platform Metaculus for the date of the first AGI is 2054. Of course, expert predictions about the nearness of AGI have been famously wrong before. Therefore, we should also consider AGI forecasts grounded in empirical models.

The amount of computation necessary to build a human-level AI system using modern machine learning architectures might be estimable from such biological anchors as: the size of the human genome; the computational power of the human brain; the total computation performed over the life of a 30-year-old human; or the total computation performed over evolution to produce humans. Assuming realistic progress on training algorithms and computational budgets, and judging computational requirements from probabilistically weighted biological anchors, the development of AI that can do most human tasks has been estimated as 50% likely by 2055 and 80% likely by 2100. Assigning a range of ‘reasonable’ probabilities to each of the biological anchor hypotheses bounds the 50% likelihood estimate between 2040 and 2080. It is possible that the computation required for AGI could be significantly higher than that indicated by any of the biological anchors if, for example, human intelligence depends strongly on the brain’s particular neural architecture; but this seems implausible given recent AI capabilities progress using current neural network architectures.
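To make the shape of this argument concrete, here is a minimal sketch in Python of how probability-weighted anchors might be combined with an assumed growth rate in affordable training compute. Every number below (the anchor FLOP requirements, their weights, the 2020 baseline and the growth rate) is an illustrative assumption of mine, not a figure from the biological-anchors report, which also models algorithmic progress, hardware prices and willingness to spend.

```python
# Minimal sketch of the biological-anchors idea: weight several hypotheses
# about the training compute (in FLOP) needed for AGI, assume affordable
# training compute grows at a fixed rate, and read off when each anchor's
# requirement is first met. All numbers are illustrative assumptions.

import math

# Hypothetical anchors: (required training FLOP, subjective weight).
anchors = {
    "human lifetime compute": (1e27, 0.2),
    "genome-sized model": (1e30, 0.2),
    "neural-network extrapolation": (1e33, 0.5),
    "evolutionary compute": (1e41, 0.1),
}

LARGEST_RUN_2020 = 1e24      # assumed largest training run in 2020, in FLOP
ANNUAL_GROWTH = 10 ** 0.2    # assumed ~1.6x yearly growth in affordable compute

def year_met(required_flop: float) -> float:
    """Year when affordable training compute first reaches the requirement."""
    years_needed = math.log(required_flop / LARGEST_RUN_2020, ANNUAL_GROWTH)
    return 2020 + max(years_needed, 0.0)

def weighted_prob_by(year: int) -> float:
    """Total weight of anchors whose compute requirement is affordable by `year`."""
    return sum(w for flop, w in anchors.values() if year_met(flop) <= year)

for y in (2040, 2060, 2080, 2100):
    print(y, f"{weighted_prob_by(y):.2f}")
```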

Rather than estimating the likelihood of AGI per year on the basis of expert opinion or technological requirements, we can make a naive estimate based on the number of years researchers have failed to realise AGI. By choosing a ‘first-trial probability’ of realising AGI in 1957, when the field of AI began, and then decreasing this probability with each ‘trial’ in which AGI is not realised, we can obtain an estimate of the probability that AGI will be realised by a given year, conditional on our past failures. Using this method of ‘semi-informative priors’, the likelihood of realising AGI has been estimated as 8% (1-18%) by 2036 and 20% (5-35%) by 2100. This method is highly sensitive to the choice of first-trial probability and to the definition of a ‘trial’ (trial length depends on researcher count and computation growth), and it does not otherwise incorporate any technical aspects of AGI development; but it does provide evidence against the claim, ‘If AGI were possible, it would already have been realised.’
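The following toy calculation illustrates the flavour of this method: treat each calendar year since 1957 as one ‘trial’, condition on every trial so far having failed, and update an assumed first-trial probability with a Laplace-style rule of succession. The first-trial probability and the one-trial-per-year assumption are purely illustrative; the actual report weighs several trial definitions and priors.

```python
# Toy version of the 'semi-informative priors' idea. One trial per calendar
# year since 1957; the first-trial probability of 1% is an arbitrary
# illustration, not the report's preferred value.

def prob_agi_by(target_year: int, first_trial_prob: float = 0.01,
                start_year: int = 1957, current_year: int = 2021) -> float:
    """P(AGI by target_year | no AGI up to current_year), one trial per year."""
    failures = current_year - start_year      # failed trials observed so far
    prob_no_success = 1.0
    for _ in range(current_year + 1, target_year + 1):
        # Generalised rule of succession: after n failures, the chance that
        # the next trial succeeds is 1 / (n + 1/p_first).
        p_next = 1.0 / (failures + 1.0 / first_trial_prob)
        prob_no_success *= 1.0 - p_next
        failures += 1
    return 1.0 - prob_no_success

for year in (2036, 2060, 2100):
    print(year, f"{prob_agi_by(year):.2f}")
```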

Gross world product (GWP), the total income of all countries, has historically grown super-exponentially. Extrapolating GWP with a model that accounts for random fluctuations predicts infinite GWP by 2047, with a 95% confidence range of 2031 to 2063. Infinite GWP is clearly impossible, but the model indicates that explosive growth is not ruled out by macroeconomic trends. AGI is arguably a plausible cause of near-term explosive economic growth, and so the realisation of AGI this century is consistent with the historical GWP trajectory. Macroeconomic history therefore does not impose a strong burden of proof against the realisation of AGI within the next 50 years.
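A brief sketch shows why super-exponential growth implies a finite-time ‘singularity’. If GWP Y grows hyperbolically as dY/dt = a * Y^(1+b) with b > 0, the solution diverges after a finite time 1 / (a * b * Y0^b). The cited extrapolation uses a stochastic model fitted to long-run data; the functional form and parameters below are my own illustrative assumptions, not the fitted values behind the 2047 estimate.

```python
# Deterministic sketch of why hyperbolic growth blows up in finite time.
# If dY/dt = a * Y**(1 + b) with b > 0, then
#   Y(t) = Y0 / (1 - a*b*Y0**b * t)**(1/b),
# which diverges at t* = 1 / (a * b * Y0**b). All parameters are illustrative.

def years_to_singularity(y0: float, a: float, b: float) -> float:
    """Time until the hyperbolic-growth solution diverges."""
    return 1.0 / (a * b * y0 ** b)

# Illustrative calibration: GWP of roughly $100 trillion, growth exponent
# b = 0.5, and `a` chosen so today's instantaneous growth a * y0**b is ~3%/yr.
y0 = 100.0                  # GWP in trillions of dollars (assumed)
b = 0.5                     # degree of super-exponentiality (assumed)
a = 0.03 / y0 ** b          # calibrated to ~3% growth today (assumed)

print(f"Singularity in roughly {years_to_singularity(y0, a, b):.0f} years")
```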

The AGI forecasts discussed above generally predict more than a 50% probability of AGI within the next 50 years. Even if we are highly pessimistic about the validity of each forecasting approach, the enormous ramifications of realising AGI should inspire conservative caution when making important decisions that are contingent on long AGI timelines. However inexact the forecasts, the reasonable possibility of AGI this century warrants significant research into ensuring that AGI is aligned with the goals of humanity, developing economic contingencies for explosive growth and establishing robust governance mechanisms for AI technology. Passing through this century may well be the greatest trial humanity will ever face, and we owe it to our descendants to exercise caution with the most powerful of technologies: intelligence itself.


This article was written for the final week project of Effective Altruism Cambridge’s AGI Safety Fundamentals course.

Written on September 26, 2021