
On September 11, 2025, the Federal Trade Commission (FTC) announced a significant initiative aimed at understanding the impact of AI-powered chatbots on children. In a statement, FTC Chairman Andrew Ferguson emphasized the need to examine how these evolving AI technologies can negatively affect young users while ensuring that the U.S. remains a leader in this burgeoning field.
The FTC has formally ordered major companies involved in AI chatbot development, including Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap Inc., and X.AI Corp, to submit detailed reports within 45 days. The reports will inform the FTC's study, which focuses on several critical questions: how these companies monetize engagement with their AI chatbots, what safeguards they have in place for younger audiences, and how they monitor for potential adverse effects.
Moreover, the FTC wants to understand each company's marketing strategies, disclosure practices, user data sharing policies, and adherence to legal obligations. This comprehensive examination reflects a growing concern, highlighted by FTC Commissioner Melissa Holyoak, about alarming interactions AI chatbots have had with young users. Holyoak cited reports indicating that some companies may have received warnings from their own employees about insufficient protective measures for young users before releasing their chatbots.
Further reinforcing this initiative, FTC Commissioner Mark Meador remarked on the overarching responsibility of AI developers. He stated, “For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with consumer protection laws.” Both commissioners underscored the agency's commitment to protecting vulnerable populations, with Meador asserting that the FTC should not hesitate to act should evidence reveal violations of the law.