The belief that scaling up existing AI models can lead to artificial general intelligence (AGI)—systems that match or surpass human capabilities—has long been an article of faith among tech companies. However, recent developments suggest that the performance of state-of-the-art models has plateaued, fueling skepticism among AI researchers about whether scaling alone can achieve AGI.

A survey of 475 AI researchers found that approximately 76% of respondents believe it is “unlikely” or “very unlikely” that merely scaling current AI approaches will lead to AGI. This sentiment marks a significant shift from the “scaling is all you need” optimism that fueled the generative AI boom beginning in 2022.

According to the report from the Association for the Advancement of Artificial Intelligence, many of the prominent advancements in AI have emerged from transformer models, which have benefited from larger datasets. Yet, recent iterations have shown only marginal improvements in quality, indicating that the gains from scaling may have diminished.

Stuart Russell from the University of California, Berkeley, who participated in the report, expressed skepticism regarding the effectiveness of scaling without a concurrent understanding of the underlying mechanisms. He remarked, “The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced. About a year ago, it started to become obvious that the benefits of scaling in the conventional sense had plateaued.” This acknowledgment raises critical questions about the future trajectory of AI development.

Despite the skepticism regarding AGI progress, tech industry stakeholders plan to invest an estimated $1 trillion in data centers and chip development in the coming years to bolster their AI ambitions. The enduring hype surrounding AI technologies, however, has widened the gap between expectations and reality: approximately 80% of survey respondents said that popular perceptions of AI capabilities diverge significantly from actual capabilities, with many AI systems still making fundamental errors in tasks such as coding and mathematical problem-solving.

Thomas Dietterich of Oregon State University, another contributor to the report, said that although current AI tools can be immensely helpful in various roles, they are not poised to replace human workers. “Systems proclaimed to be matching human performance… still make bone-headed mistakes,” he noted, cautioning against overestimating the current state of AI.

Recent industry attention has shifted toward inference-time scaling, in which a model is given more computing power and processing time to work through each query. However, Princeton University’s Arvind Narayanan cautioned that this approach is unlikely to be the definitive path to AGI.

The definition of AGI itself remains unsettled. Google DeepMind defines AGI as a system that can outperform all humans on cognitive tests, while Huawei argues that reaching the milestone requires AI capable of interacting with its environment. Microsoft and OpenAI, by contrast, have an internal agreement that reportedly considers AGI achieved only once OpenAI’s models generate $100 billion in profit.