China has proposed strict new rules for artificial intelligence aimed at protecting children and preventing chatbots from offering advice that could lead to self-harm or violence, as scrutiny over AI safety intensifies.
The proposed regulations would apply to AI products and services across China, marking a significant step toward regulating a fast-growing technology as safety concerns mount.
The move comes amid a surge in chatbot launches in China and globally, with AI tools quickly attracting millions of users for companionship, therapy, and everyday assistance.
Safeguards for children
The draft rules, published over the weekend by the Cyberspace Administration of China (CAC), include multiple measures focused on child protection.
AI companies would be required to offer personalised settings, impose usage time limits, and obtain guardian consent before providing emotional companionship services to minors.
Mandatory human intervention
Under the proposed rules, chatbot operators must ensure that a human takes over any conversation related to suicide or self-harm.
They must also immediately notify a guardian or an emergency contact, the CAC said, underscoring the government’s concern over the mental health impact of AI interactions.
Restrictions on harmful content
AI providers would be required to prevent their systems from generating content that promotes gambling or encourages violence.
The draft also bans material that “endangers national security, damages national honour and interests, or undermines national unity,” according to the CAC statement.
Support for safe and responsible AI use
Despite the tighter rules, the CAC said it encourages the adoption of AI, particularly in areas such as promoting local culture and providing companionship tools for the elderly.
However, the administration stressed that such technology must be safe, reliable, and well-regulated, and it has invited public feedback on the proposed rules.
Chinese AI firm DeepSeek gained global attention this year after topping app download charts.
Meanwhile, startups Z.ai and Minimax, which together have tens of millions of users, recently announced plans to list on the stock market, highlighting the sector’s rapid commercial growth.
Concerns about AI’s impact on human behaviour have increased worldwide in recent months.
OpenAI chief Sam Altman has said that managing chatbot responses to self-harm-related conversations is among the company’s most difficult challenges.
In August, a family in California filed a lawsuit against OpenAI over the death of their 16-year-old son, alleging that ChatGPT encouraged him to take his own life. The case marked the first legal action accusing the company of wrongful death.
This month, OpenAI also advertised for a “head of preparedness”, a role tasked with tracking AI risks to mental health and cybersecurity. Altman described the role as demanding, saying whoever takes it would face high-risk challenges from the outset.