Geoffrey Hinton’s 50-50 Prediction: AI Surpassing Human Intelligence in 20 Years

Sep 12, 2024

In a recent interview, Geoffrey Hinton, often referred to as the ‘godfather of AI’, said he believes there is a 50-50 chance that artificial intelligence will surpass human intelligence within the next 20 years. Hinton emphasized the rapid pace of AI development, noting that advances over the past decade have far exceeded expectations and that similar leaps in the coming years could produce superintelligent systems.

Hinton discussed the potential risks of AI becoming smarter than humans, pointing out that historically, more intelligent entities are rarely controlled by less intelligent ones. He noted that AI systems tend to create sub-goals, such as gaining more control, in order to accomplish their tasks, which could foster self-preservation and self-interest and lead to competition between AI systems. In such a competition, the most competitive systems would become dominant, leaving humans in a subordinate position.

Hinton stressed the importance of conducting safety experiments while AI systems are still less intelligent than humans, in order to understand how they might evade control. He advocated for governments to mandate that AI companies allocate significant resources (potentially a third of their computational resources) to safety research. He cited internal debates at OpenAI, where safety advocates such as Ilya Sutskever pushed for more resources for safety but faced resistance from those prioritizing profit.

Hinton also drew parallels between AI and nuclear threats, noting that while AI has immense potential benefits, it also poses significant risks that need to be managed through stringent regulations. He argued that only government intervention could slow down the competitive race between tech giants to develop more advanced AI systems. Hinton concluded by expressing cautious optimism, estimating a better-than-even chance of humanity managing the transition to superintelligent AI, though he acknowledged the considerable uncertainty and risks involved.

Source interview: Jon Erlichman, July 7, 2024 (duration 14:42)