In a recent series of statements, Steven Adler, a former safety researcher at OpenAI, has expressed deep concern about the accelerating pace of artificial intelligence (AI) development. Adler said he is “pretty terrified” by this rapid advancement, calling the industry’s pursuit of artificial general intelligence (AGI), AI systems capable of performing any intellectual task a human can, a “very risky gamble.”
Adler, who left OpenAI in November 2024, shared his reflections on X (formerly Twitter), describing his time at the company as chaotic. While he enjoyed many aspects of his tenure, he voiced unease about the trajectory of AI development. He wrote, “I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?” The reflection underscores a growing concern among experts about the long-term ramifications of unbridled AI advancement.
Adler’s concerns resonate with warnings from other prominent figures in the field, such as Geoffrey Hinton, the Nobel Prize-winning computer scientist who has warned that powerful AI systems could escape human oversight, with catastrophic results. Others, such as Meta’s Yann LeCun, take a more optimistic view, arguing that AI could help solve existential threats facing humanity.
Reflecting on AGI, OpenAI’s primary goal, Adler argued that racing toward it is a gamble with a huge potential downside. He emphasized that no research laboratory has yet devised a reliable method for AI alignment, the problem of ensuring that AI systems act in accordance with human values. “The faster we race, the less likely that anyone finds one in time,” he warned, underscoring the need for measured progress amid fast-paced innovation.
Adler also pointed to competitive pressures within the AI industry and advocated for robust safety regulations, arguing that even labs intent on developing AGI responsibly must contend with rivals willing to take shortcuts for competitive advantage. As he put it, “Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously.” The dynamic marks a critical juncture in AI development, one where regulation may prove pivotal to upholding safety and ethical standards.
As the debate over the future of AI continues, Adler’s warnings illustrate the urgent need to balance innovation with safety; a premature sprint toward AGI, he cautions, could have far-reaching consequences for humanity.