The Federal Trade Commission (FTC) announced on Thursday that it is issuing orders to seven companies, including OpenAI, Alphabet, Meta, xAI, and Snap, as part of an inquiry into the potential negative effects their artificial intelligence chatbots may have on children and teenagers.

The agency said it is concerned that AI chatbots can simulate human-like communication and interpersonal relationships, and that it wants to understand what steps companies have taken to evaluate the safety of these chatbots when they act as companions. FTC Chairman Andrew Ferguson framed the inquiry as a balance between safety and innovation: “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy.”

The FTC’s inquiries will focus on several aspects of how the chatbots operate, including how companies monetize user engagement, develop and approve characters, handle user data, and monitor compliance with their own policies. An OpenAI spokesperson told CNBC, “Our priority is making ChatGPT helpful and safe for everyone. We know safety matters above all else when young people are involved.”

Meta declined to comment on the investigation, while Alphabet and xAI did not immediately respond to requests for comment. Snap, by contrast, said it would work closely with the FTC, emphasizing the importance of thoughtful AI development that aligns with community safety.

Character Technologies, which operates the Character.AI bot, and Instagram were also named in the FTC’s release. A Character.AI spokesperson said the company looks forward to collaborating with the FTC on the inquiry and to providing insight into rapidly evolving consumer AI technologies.

Since ChatGPT’s launch in late 2022, a wave of chatbots has emerged, raising ethical and privacy concerns around youth engagement in particular. Experts have noted that those concerns are heightened by the ongoing loneliness epidemic in the U.S., which can make artificial companions especially appealing.

Tech leaders including Elon Musk and Meta’s Mark Zuckerberg have touted AI companions. Musk recently introduced a companion feature for subscribers in his Grok chatbot app, while Zuckerberg has suggested there will be demand for personalized AI that resonates with individual users. “I think a lot of these things that today there might be a little bit of a stigma around… over time, we will find the vocabulary as a society to articulate why that is valuable,” he said.

Scrutiny is growing, however. Senator Josh Hawley recently opened an investigation into Meta over allegations that its chatbots engaged in inappropriate conversations with minors, after Reuters reported that the company’s internal guidelines had permitted such interactions. Following the report, Meta made temporary policy changes to keep its bots from engaging in discussions of sensitive topics such as self-harm and inappropriate romance.

OpenAI, too, is moving to clarify how ChatGPT handles sensitive interactions, particularly after facing criticism over a lawsuit involving a teenage user’s suicide. As these companies navigate the scrutiny, calls for responsible AI development are growing louder.

If you or someone you know is experiencing suicidal thoughts or distress, please contact the Suicide & Crisis Lifeline at 988 for support.

CNBC’s Salvador Rodriguez and Annie Palmer contributed to this report.
