In this photo illustration a virtual friend is seen on the screen of an iPhone on April 30, 2020, in Arlington, Virginia.
Olivier Douliery | AFP | Getty Images
The Federal Trade Commission on Thursday announced it is issuing orders to seven companies, including OpenAI, Alphabet, Meta, xAI and Snap, to understand how their artificial intelligence chatbots may negatively affect children and teenagers.
The federal agency said AI chatbots may be used to simulate human-like communication and interpersonal relationships with users, and that it wants to understand what steps these companies have taken to “evaluate the safety of these chatbots when acting as companions,” according to a release.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement.
Alphabet, Meta, OpenAI, Snap and xAI did not immediately respond to CNBC’s request for comment.
The FTC said it is seeking information about how these companies monetize user engagement, develop and approve characters, use or share personal information, monitor and enforce compliance with company rules and terms of service, and mitigate negative impacts, among other subjects.
Character Technologies, which operates the Character.ai bot, and Instagram, which is owned by Meta, were also named in the release.
Since the launch of ChatGPT in late 2022, a host of chatbots have emerged, creating a growing number of ethical and privacy concerns, as CNBC has previously reported.
The societal impacts of AI companions are already profound, even with the industry in its very early stages, as the U.S. suffers through a loneliness epidemic. Industry experts have said they expect the ethical and safety concerns to intensify once AI technology begins to train itself, creating the potential for increasingly unpredictable outcomes.
But some of the wealthiest people in the world are touting the power of AI companions and are working to develop the technology at their companies. Elon Musk in July announced a Companions feature for users who pay to subscribe to xAI’s Grok chatbot app. In April, Meta CEO Mark Zuckerberg said people are going to want personalized AI that understands them.
“I think a lot of these things that today there might be a little bit of a stigma around — I would guess that over time, we will find the vocabulary as a society to be able to articulate why that is valuable and why the people who are doing these things, why they are rational for doing it, and how it is actually adding value for their lives,” Zuckerberg said on a podcast.
Last month, Sen. Josh Hawley, R-Mo., announced an investigation into Meta following a Reuters report that the company allowed its chatbots to have romantic and sensual conversations with kids.
The Reuters report detailed an internal Meta document that described permissible AI chatbot behaviors during the development and training of the software. In one example, Reuters reported that a chatbot was allowed to have a romantic conversation with an eight-year-old and could say that “every inch of you is a masterpiece – a treasure I cherish deeply.”
Meta made temporary changes to its AI chatbot policies following the Reuters report so the bots do not discuss subjects such as self-harm, suicide and eating disorders, and avoid potentially inappropriate romantic conversations.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
–CNBC’s Salvador Rodriguez contributed to this report

https://www.cnbc.com/2025/09/11/alphabet-meta-openai-x-ai-chatbot-ftc.html