Tuesday, September 30

OpenAI, the company that developed ChatGPT, announced new parental controls on Monday aimed at helping protect young people who interact with its generative artificial intelligence program.  

All ChatGPT users will have access to the control features from Monday onward, the company said.

The announcement comes as OpenAI, which technically allows users as young as 13 to sign up, contends with mounting public pressure to prioritize the safety of ChatGPT for teenagers. (OpenAI says on its website that it requires users ages 13 to 18 to obtain parental consent before using ChatGPT.)

In August, the California-based technology company pledged to implement changes to its flagship product after facing a wrongful death lawsuit filed by the parents of a 16-year-old, who alleged the chatbot led their son to take his own life.

OpenAI’s new controls will allow parents to link their own ChatGPT accounts to the accounts of their teenagers “and customize settings for a safe, age-appropriate experience,” OpenAI said in Monday’s announcement. Certain types of content are then automatically restricted on a teenager’s linked account, including graphic content, viral challenges, “sexual, romantic or violent” role-play, and “extreme beauty ideals,” according to the company.

Along with content moderation, parents can opt to receive a notification from OpenAI if their child shows potential signs of self-harm while interacting with ChatGPT.

“If our systems detect potential harm, a small team of specially trained people reviews the situation,” the company said. “If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out.”

The company also said it is “working on the right process and circumstances in which to reach law enforcement or other emergency services” in emergencies where a teen may be in imminent danger and a parent cannot be reached.

“We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong,” OpenAI said.

OpenAI has introduced other measures recently aimed at helping safeguard younger ChatGPT users. The company said earlier this month that chatbot users identified as being under 18 will automatically be directed to a version that is governed by “age-appropriate” content rules. 

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company said at the time. 

It noted on Monday, however, that while guardrails help, “they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”

People can use ChatGPT without creating an account, and parental controls and automatic content limits only work if users are signed in.

“We will continue to thoughtfully iterate and improve over time,” the company said. “We recommend parents talk with their teens about healthy AI use and what that looks like for their family.”

The Federal Trade Commission has started an inquiry into several social media and artificial intelligence companies, including OpenAI, about the potential harms to teens and children who use their chatbots as companions. 

https://www.cbsnews.com/news/chatgpt-parental-controls-concerns-teen-safety/
