In a recent interview, former Google CEO Eric Schmidt delivered a stark message about the future of artificial intelligence (AI) and the implications of self-improving systems. As AI advances and begins to operate autonomously, Schmidt argued, society will need a mechanism to deactivate systems that become too powerful. “When the system can self-improve, we need to seriously think about unplugging it,” he warned, signaling a pivotal shift in how society may need to manage AI capabilities.
Schmidt noted that AI is shifting from task-specific tools, such as Microsoft Copilot, toward more autonomous decision-making systems. He worried about reaching a point where humans must intervene before an AI can counteract shutdown commands, arguing for the foresight to keep someone, metaphorically, close to the plug. That prospect raises hard questions about maintaining control over advanced AI systems.
The apprehensions Schmidt articulated are echoed by other prominent figures in the field. Geoffrey Hinton, often called the “Godfather of AI,” has said he sees no guarantee of safety once AI can think independently. OpenAI CEO Sam Altman has voiced a similarly dire outlook, suggesting that the emergence of artificial general intelligence could pose an extinction-level risk. Elon Musk, a co-founder of OpenAI who has since left the organization, has likewise highlighted the technology’s dangers, asserting that while it holds immense promise, there is a significant chance it could lead to catastrophic outcomes.
Despite these warnings, Schmidt also pointed to AI’s potential benefits, saying it could give individuals intelligence akin to that of historical polymaths, putting profound knowledge at everyone’s fingertips. To harness those benefits safely, however, he insisted that governments must regulate AI more effectively. He cited discussions with the late Henry Kissinger, who strongly believed that decisions about AI’s future should not be left solely in the hands of technologists.
So far, the United States has made little progress on federal AI regulation. At the state level, California has introduced various bills addressing AI’s impact on industries such as film and curtailing deepfake technology. Even so, a sweeping bill proposing comprehensive AI rules was vetoed, underscoring how difficult it is to regulate a fast-moving technology.