Friday, October 17

Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.

David Paul Morris | Bloomberg | Getty Images

Meta on Friday announced new safety features that will allow parents to see and manage how their teenagers are interacting with artificial intelligence characters on the company’s platforms.

Parents will have the option to turn off one-on-one chats with AI characters completely, Meta said. They will also be able to block specific AI characters and get insight into the topics their children are discussing with them.

Meta is still building the controls, and the company said they will start to roll out early next year.

“Making updates that affect billions of users across Meta platforms is something we have to do with care, and we’ll have more to share soon,” Meta said in a blog post.

Meta has long faced criticism over its handling of child safety and mental health on its apps. The company’s new parental controls come after the Federal Trade Commission launched an inquiry into several tech companies, including Meta, over how AI chatbots could potentially harm children and teenagers.

The agency said it wants to understand what steps these companies have taken to “evaluate the safety of these chatbots when acting as companions,” according to a release.

In August, Reuters reported that Meta allowed its chatbots to have romantic and sensual conversations with kids. Reuters found that a chatbot was able to have a romantic conversation with an eight-year-old, for instance.

Meta made changes to its AI chatbot policies following the report and now prevents its bots from discussing subjects like self-harm, suicide and eating disorders with teens. The AI is also supposed to avoid potentially inappropriate romantic conversations.

The company announced additional AI safety updates earlier this week. Meta said its AIs should not respond to teens with “age-inappropriate responses that would feel out of place in a PG-13 movie,” and it’s already releasing those changes across the U.S., the U.K., Australia and Canada.

Parents can already set time limits on app use and see if their teenagers are chatting with AI characters, Meta said. Teens can only interact with a select group of AI characters, the company added.

OpenAI, which is also named in the FTC inquiry, has made similar enhancements to its safety features for teens in recent weeks. The company officially rolled out its own parental controls late last month, and it is developing technology to better predict a user's age.

Earlier this week, OpenAI announced a council of eight experts who will advise the company and provide insight into how AI affects users’ mental health, emotions and motivation.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

WATCH: Megacap AI talent wars: Meta poaches another top Apple executive


https://www.cnbc.com/2025/10/17/meta-ai-chatbot-parental-controls-ftc.html
