Geoffrey Hinton, widely known as the ‘Godfather of AI’, has recently raised his estimate of the existential threat posed by artificial intelligence. During an appearance on BBC Radio 4’s Today programme, Hinton put the chance that AI could lead to human extinction within the next 30 years at 10% to 20%, citing the swift pace of technological advancement as a significant factor.

Hinton had previously put the risk at around 10%, and his revised figure reflects growing apprehension that AI systems could surpass human intelligence. Asked on the programme whether his assessment had changed, he replied, “Not really, 10% to 20%,” a sober acknowledgment of the implications involved.

This rising concern stems from the prospect that humans may soon be outmatched by their own creations. Hinton described the relationship between humans and a potentially superintelligent AI as akin to that between a toddler and an adult, suggesting that people would be like “three-year-olds” beside a machine intelligence that far exceeds our cognitive capabilities.

Hinton’s warnings have drawn particular attention since he resigned from Google in order to speak more freely about responsible AI development, emphasizing the risk that “bad actors” might exploit such technologies. His concerns align with a broader movement for AI safety, especially around the development of artificial general intelligence: systems that could outsmart their creators and evade human control.

Reflecting on his early career, Hinton noted that he had not expected the technology to reach its current stage so quickly. His statements mark a critical juncture in the debate about AI’s role in society and underscore the need for stringent safety measures as this powerful technology evolves.