In a fascinating video from the YouTube channel AI Explained, titled “ChatGPT Can Now Call the Cops, but ‘Wait till 2100 for Full Job Impact’ – Altman,” OpenAI CEO Sam Altman made waves with announcements about ChatGPT features released on September 16, 2025. Altman revealed that ChatGPT would begin estimating users’ ages and, under specific circumstances, flagging conversations for review by parents or even authorities to improve user safety. These measures hint at a future where AI agents play a more substantial role in safeguarding individuals and mediating sensitive interactions.
OpenAI is pursuing the ambitious goal of giving ChatGPT the capability to assess whether a user is a minor. The system, however well intentioned, raises concerns about accuracy and potential misuse. Safeguarding children’s privacy while responding appropriately to a user in distress is a difficult balance, and stakeholders will be watching closely to see whether OpenAI can implement these features effectively. The suggestion that ChatGPT may contact law enforcement when a minor is at risk highlights the growing intersection between AI systems and traditional safety mechanisms.
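OpenAI has not published how such an escalation pipeline works, but the general shape of the policy the video describes can be sketched. The following is a purely hypothetical illustration: the signals `estimated_age`, `age_confidence`, and `risk_score`, and the thresholds used, are all invented for the example and are not OpenAI’s implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NONE = "no action"
    NOTIFY_PARENTS = "flag conversation for parental review"
    CONTACT_AUTHORITIES = "escalate to authorities"

@dataclass
class ConversationSignals:
    # Hypothetical inputs: a real system would derive these from
    # classifiers run over the conversation, not receive them directly.
    estimated_age: int      # model's best guess at the user's age
    age_confidence: float   # 0.0-1.0 confidence in that estimate
    risk_score: float       # 0.0-1.0 severity of detected risk of harm

def decide_escalation(sig: ConversationSignals) -> Action:
    """Toy decision rule mirroring the policy described in the video:
    likely minors in acute distress may be escalated; others are not."""
    is_likely_minor = sig.estimated_age < 18 and sig.age_confidence >= 0.8
    if not is_likely_minor:
        return Action.NONE
    if sig.risk_score >= 0.9:
        return Action.CONTACT_AUTHORITIES  # treated as imminent danger
    if sig.risk_score >= 0.5:
        return Action.NOTIFY_PARENTS       # concerning but not acute
    return Action.NONE

# Example: a likely minor showing moderate risk gets a parental flag.
print(decide_escalation(ConversationSignals(15, 0.9, 0.6)))
```

Even this toy version makes the accuracy concern concrete: every threshold is a trade-off between missed at-risk minors and false escalations of adults.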
Another intriguing point is OpenAI’s effort to establish AI conversation privacy standards akin to doctor-patient confidentiality. While such privacy protections sound advantageous, particularly for users sharing sensitive information, they would create regulatory burdens that might stifle innovation, especially for smaller firms and startups. OpenAI’s push to secure these standards raises questions about potential monopolistic practices that could limit competition.
The video also covered Sam Altman’s evolving perspective on AI’s impact on jobs. His earlier predictions were severe, with up to 70% of jobs at risk; more recently he has offered a far more tempered view, with full displacement potentially taking until around 2100, roughly 75 years away. Such a revision provokes debate over whether AI’s impact is now being downplayed or whether the earlier estimates were overstated. Altman’s shifting stance exemplifies how difficult it is to forecast technological disruption.
Moreover, the video discusses how AI models sometimes state false information with confidence, a phenomenon known as “hallucination.” Many researchers believe that detecting and reducing these failures will be crucial to achieving genuine AI reliability.
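The video does not prescribe a fix, but one common research approach is self-consistency checking: ask the model the same question several times and treat disagreement across samples as a hallucination signal. The sketch below assumes a hypothetical `ask_model` function standing in for a real LLM call; the voting logic is the point.

```python
from collections import Counter
import random

def ask_model(question: str) -> str:
    # Hypothetical stand-in for a stochastic LLM call; a model that
    # hallucinates tends to return inconsistent answers across samples.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def self_consistent_answer(question: str, samples: int = 5,
                           threshold: float = 0.6) -> str | None:
    """Sample the model several times; accept the majority answer only
    if it wins a clear share of the votes, otherwise flag as unreliable."""
    votes = Counter(ask_model(question) for _ in range(samples))
    answer, count = votes.most_common(1)[0]
    return answer if count / samples >= threshold else None

answer = self_consistent_answer("What is the capital of France?")
print(answer or "Low agreement across samples: possible hallucination.")
```

This only flags inconsistency rather than proving correctness, which is part of why hallucination remains an open problem rather than a solved one.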
Taken together, these developments highlight both the opportunities and the pitfalls of rapid AI growth. As AI becomes interwoven with societal frameworks, ensuring equitable and ethical implementation will remain a priority.