This photo, taken on February 2, 2024, shows Lu Yu, head of Product Management and Operations of Wantalk, an artificial intelligence chatbot created by Chinese tech company Baidu, displaying a virtual girlfriend profile on her phone at the Baidu headquarters in Beijing.
Jade Gao | AFP | Getty Images
BEIJING — China plans to bar artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules released Saturday.
The proposed regulations from the Cyberspace Administration of China target what the agency calls “human-like interactive AI services,” according to a CNBC translation of the Chinese-language document.
The measures, once finalized, will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends Jan. 25.
Beijing’s planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics, said Winston Ma, adjunct professor at NYU School of Law. The latest proposals come as Chinese companies have rapidly developed AI companions and digital celebrities.
Ma said that, compared with China’s 2023 generative AI regulation, this version “highlights a leap from content safety to emotional safety.”
The draft rules propose that:
- AI chatbots cannot generate content that encourages suicide or self-harm, or engage in verbal violence or emotional manipulation that damages users’ mental health.
- If a user explicitly expresses suicidal intent, providers must have a human take over the conversation and immediately contact the user’s guardian or a designated individual.
- AI chatbots must not generate gambling-related, obscene or violent content.
- Minors must have guardian consent to use AI for emotional companionship, with time limits on usage.
- Platforms should be able to determine whether a user is a minor even if the user does not disclose their age and, in cases of doubt, apply minors’ settings while allowing users to appeal.

Additional provisions would require tech providers to remind users after two hours of continuous AI interaction and mandate security assessments for AI chatbots with more than 1 million registered users or over 100,000 monthly active users.
The document also encourages the use of human-like AI in “cultural dissemination and elderly companionship.”
Chinese AI chatbot IPOs
The proposal comes shortly after two leading Chinese AI chatbot startups, Z.ai and Minimax, filed for initial public offerings in Hong Kong this month.
Minimax is best known internationally for its Talkie AI app, which allows users to chat with virtual characters. The app and its domestic Chinese version, Xingye, accounted for more than a third of the company’s revenue in the first three quarters of the year, with an average of over 20 million monthly active users during that time.
Z.ai, also known as Zhipu, filed under the name “Knowledge Atlas Technology.” While the company did not disclose monthly active users, it noted its technology “empowered” around 80 million devices, including smartphones, personal computers and smart vehicles.
Neither company responded to CNBC’s request for comment on how the proposed rules could affect their IPO plans.
https://www.cnbc.com/2025/12/29/china-ai-chatbot-rules-emotional-influence-suicide-gambling-zai-minimax-talkie-xingye-zhipu.html