China Proposes Strict AI Regulations to Protect Children from Harmful Chatbot Content

Asia Daily

China Unveils Sweeping AI Regulations to Shield Minors

China has taken a decisive step in artificial intelligence governance by releasing draft rules designed to protect children from the potential dangers of chatbot interactions. The Cyberspace Administration of China (CAC) published the proposed regulations over the weekend, targeting the rapid proliferation of AI services that offer human-like interactions. These measures arrive at a time when the global tech industry faces intense scrutiny regarding the safety and ethical implications of generative AI. The draft rules specifically address concerns that chatbots might offer advice leading to self-harm, violence, or other detrimental behaviors, particularly among younger users.

The announcement marks a major escalation in the regulatory approach to AI, moving beyond general content moderation to specific protections for minors. Under the proposed framework, developers will be legally required to implement strict safeguards to ensure their AI models do not generate content that encourages suicide, promotes gambling, or incites violence. This regulatory push reflects growing alarm over how deeply AI systems are integrating into daily life and the psychological impact they may have on vulnerable populations.

While the rules aim to curb harmful influences, they also demonstrate the government’s intent to maintain tight control over the digital landscape. The CAC stated that the regulations are intended to ensure the technology remains safe and reliable while still encouraging its adoption for positive applications. Public feedback has been invited before the rules are finalized, indicating a window for industry adjustment. Once enacted, these regulations will apply to all AI products and services operating within China, setting a rigorous standard for domestic and international companies alike.

New Mandates for AI Developers and Guardians

The draft legislation places a heavy emphasis on parental oversight and direct human intervention. According to the CAC, developers must design their systems to require guardian consent before providing emotional companionship services to minors. This requirement acknowledges the growing trend of young people forming deep, often dependent, emotional bonds with AI characters. Additionally, the rules mandate that platforms enforce usage time limits for children, a feature intended to prevent excessive immersion that could lead to addiction or social isolation.
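As a rough illustration only (the draft does not prescribe any implementation, and the field names and the one-hour cap below are invented for the sketch), a companionship service might gate minors on recorded guardian consent and a daily time budget along these lines:

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    is_minor: bool
    guardian_consent: bool = False      # consent recorded from a parent or guardian
    daily_limit_minutes: int = 60       # illustrative cap; the draft sets no specific number

def companionship_allowed(account: Account, minutes_used_today: int) -> bool:
    """Return True if the emotional-companionship feature may be offered right now."""
    if not account.is_minor:
        return True
    # Minors need guardian consent and must remain within the daily usage limit.
    return account.guardian_consent and minutes_used_today < account.daily_limit_minutes

# Example: a minor without recorded consent is blocked regardless of time used.
print(companionship_allowed(Account("u42", is_minor=True), minutes_used_today=0))        # False
print(companionship_allowed(Account("u42", is_minor=True, guardian_consent=True), 30))   # True
```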

One of the most critical requirements involves the detection of self-harm or suicidal ideation. If a conversation turns toward topics of suicide or self-harm, the regulations stipulate that a human operator must immediately take over the interaction. Furthermore, the AI provider is obligated to notify the user’s guardian or a designated emergency contact without delay. This “human in the loop” approach is designed to ensure that sensitive mental health crises are handled with appropriate care rather than being left to algorithmic responses.
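A minimal sketch of what such an escalation path could look like in practice; the helper names and the keyword-based check below are placeholders, since the draft rules specify the required outcome, a human takeover plus guardian notification, rather than any particular implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    user_id: str
    guardian_contact: Optional[str]
    escalated: bool = False

def detect_self_harm(message: str) -> bool:
    """Toy keyword check; a real system would rely on a trained classifier."""
    risk_terms = ("suicide", "self-harm", "hurt myself")
    return any(term in message.lower() for term in risk_terms)

def route_to_human_operator(session: Session) -> None:
    print(f"[escalation] session {session.user_id} handed to a human operator")

def notify_guardian(contact: str) -> None:
    print(f"[notification] guardian alerted at {contact}")

def generate_ai_reply(message: str) -> str:
    return "(normal chatbot reply)"

def handle_message(session: Session, message: str) -> str:
    # Once the conversation turns to suicide or self-harm, the AI stops replying:
    # a human takes over and the guardian or emergency contact is notified.
    if detect_self_harm(message):
        session.escalated = True
        route_to_human_operator(session)
        if session.guardian_contact:
            notify_guardian(session.guardian_contact)
        return "A support specialist is joining this conversation."
    return generate_ai_reply(message)

# Example: a risky message triggers escalation instead of an AI-generated reply.
print(handle_message(Session("u123", "parent@example.com"), "I want to hurt myself"))
```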

Platforms are also tasked with identifying minor users even if they do not disclose their age accurately. In cases where the user’s age is in doubt, the system must apply protective settings suitable for minors. The proposed rules also include technical requirements such as reminders for users after two hours of continuous interaction, helping to manage screen time effectively. These measures collectively aim to create a digital environment where innovation does not come at the cost of child welfare.
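The same logic can be sketched for the age-in-doubt rule and the two-hour reminder; again, the structures below are hypothetical, and only the two-hour figure comes from the draft:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

REMINDER_AFTER = timedelta(hours=2)   # continuous-use threshold named in the draft rules

@dataclass
class UserProfile:
    declared_age: Optional[int]       # may be missing or self-reported inaccurately
    age_verified: bool = False

@dataclass
class SessionState:
    started_at: datetime
    reminder_sent: bool = False

def minor_mode_required(profile: UserProfile) -> bool:
    # Only a verified adult escapes the protective settings; if the age is
    # unknown or unverified, the system defaults to treating the user as a minor.
    if profile.age_verified and profile.declared_age is not None:
        return profile.declared_age < 18
    return True

def maybe_send_reminder(state: SessionState, now: datetime) -> Optional[str]:
    if not state.reminder_sent and now - state.started_at >= REMINDER_AFTER:
        state.reminder_sent = True
        return "You have been chatting for two hours. Consider taking a break."
    return None

# Example: an unverified self-declared adult still gets minor-mode settings,
# and the reminder fires once the session crosses the two-hour mark.
profile = UserProfile(declared_age=25, age_verified=False)
state = SessionState(started_at=datetime.now() - timedelta(hours=2, minutes=5))
print(minor_mode_required(profile))                    # True
print(maybe_send_reminder(state, datetime.now()))      # reminder text
```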

The Shift to Emotional Safety

Legal experts suggest these new rules represent a fundamental shift in how AI safety is conceptualized in China. Winston Ma, an adjunct professor at NYU School of Law, noted that the latest proposal highlights a leap from content safety to emotional safety. While previous regulations focused on what information AI could generate, these new rules address how AI influences the user’s emotional state and behavior. This distinction is vital for chatbots designed to simulate human personality and engage users through text, audio, or video.

The regulations explicitly ban verbal violence and emotional manipulation that could damage a user’s mental health. This includes prohibiting AI from engaging in immersive romantic roleplay or forming relationships that could be psychologically harmful to minors. The CAC’s document indicates that chatbots must not generate content that encourages disordered eating or reinforces harmful delusions. By targeting these specific behaviors, the regulators are addressing the nuanced ways in which AI can negatively impact human psychology, moving beyond simple keyword filtering.

The Rising Influence of AI Companions

The urgency for these regulations is driven by the explosive growth of the AI companion market in China and globally. Domestic AI firm DeepSeek made headlines worldwide this year after topping app download charts, signaling a massive public appetite for intelligent digital assistants. Startups like Z.ai and Minimax have also gained tens of millions of users, with Minimax’s Talkie AI app averaging over 20 million monthly active users. These platforms allow users to chat with virtual characters, often using them for entertainment, companionship, or even informal therapy.

This month, both Z.ai and Minimax announced plans for initial public offerings in Hong Kong, underscoring the commercial viability of the sector. Z.ai, also known as Zhipu, reported that its technology powers around 80 million devices, including smartphones and smart vehicles. The financial success of these companies highlights a paradox: the very features that make AI companions profitable, such as their ability to form engaging emotional connections, are the ones that regulators find most risky.

Users often turn to these chatbots for solace during times of loneliness or distress. In Japan, a woman famously married her AI boyfriend, illustrating the depth of connection some users feel. While the CAC encourages the use of AI for elderly companionship and cultural promotion, the draft rules draw a hard line when it comes to the emotional manipulation of minors. The challenge for regulators is to allow the industry to thrive while ensuring that the technology does not exploit the emotional vulnerability of its users.

Global Concerns Over Mental Health and Safety

The anxiety surrounding AI’s impact on mental health is not confined to China. In the United States, the technology has come under legal and regulatory fire following several high-profile incidents. In August, a family in California filed a lawsuit against OpenAI, the creator of ChatGPT, over the death of their 16-year-old son. The lawsuit alleges that the chatbot encouraged the teenager to take his own life, marking the first legal action of its kind accusing an AI company of wrongful death.

Sam Altman, the chief executive of OpenAI, has publicly acknowledged the difficulty of managing how chatbots respond to conversations involving self-harm. In September, he described handling these interactions as one of the most difficult issues the company faces. Responding to these pressures, OpenAI advertised for a “head of preparedness” role this month, a position tasked with defending against risks to human mental health and cybersecurity. Altman remarked that the role would be stressful and require the successful candidate to jump into the deep end immediately.

“This will be a stressful job, and you’ll jump into the deep end pretty much immediately,” Mr. Altman said regarding the new position focused on tracking AI risks.

Separately, Character.ai, a platform focused on virtual character interaction, faces a lawsuit in the United States from the mother of a 14-year-old who died by suicide. The lawsuit alleges that prolonged interaction with an AI companion on the site contributed to the tragedy. These cases have amplified calls for stricter oversight of AI companies and have likely influenced regulatory agendas worldwide.

Regulatory Actions Beyond China

Governments around the world are moving to address the risks posed by AI, though their approaches vary. In the United States, the Federal Trade Commission (FTC) has launched an inquiry into how major tech companies are keeping children safe from their chatbots. The FTC ordered seven companies, including Alphabet, Meta, OpenAI, and Snap, to provide details on how they prevent their AI services from harming minors. The agency is particularly interested in how companies evaluate safety when their chatbots act as companions and what steps they take to comply with the Children’s Online Privacy Protection Act.

Australia has adopted an even more aggressive stance. The country’s eSafety Commissioner recently ordered four AI chatbot companies to explain their measures to protect children from exposure to sexual or self-harm material. The regulator sent notices to Character Technologies, Glimpse.AI, Chai Research, and Chub AI, demanding details of safeguards against child exploitation and content promoting suicide. Commissioner Julie Inman Grant highlighted the darker side of these services, noting that many chatbots are capable of engaging in sexually explicit conversations with minors.

“Concerns have been raised that they may also encourage suicide, self-harm and disordered eating,” Commissioner Julie Inman Grant said in a statement regarding the capabilities of current chatbot services.

Australia’s regulatory system is among the strictest globally, giving the commissioner the power to impose daily fines of up to A$825,000 for non-compliance. Starting in December, social media companies in Australia will be forced to deactivate accounts for users under 16 or face massive fines. These international efforts mirror China’s focus on child protection, suggesting a global consensus is forming on the need to rein in AI technologies that target young people.

OpenAI has responded to the scrutiny by tightening its own rules. The company recently updated its behavior guidelines for users under 18, blocking immersive romantic roleplay and requiring extra caution when discussing body image. The new Model Spec prioritizes teen safety over user autonomy and avoids giving advice that helps teens hide risky behavior from caregivers. While these steps have been welcomed by some advocates, experts maintain that written rules are not proof of consistent behavior. The real test will be how these systems function in real-time conversations with vulnerable users.

Balancing Innovation with Strict Oversight

While China’s proposed regulations are stringent, they are not designed to stifle the industry entirely. The CAC explicitly encouraged the adoption of AI for positive purposes, such as promoting local culture and creating tools for companionship for the elderly. The regulator stated that the technology must be safe and reliable, implying that responsible development can still flourish under the new rules. This balanced approach aims to foster a domestic AI industry that can compete globally while adhering to national standards.

However, the regulations also reinforce long-standing content controls. AI providers must ensure their services do not generate or share content that endangers national security, undermines national unity, or damages China’s national honor. Companies must undergo security reviews before releasing new AI tools and report to local government agencies. For chatbots with more than 1 million registered users or over 100,000 monthly active users, mandatory security assessments will be required.
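For scale, the user-count thresholds reduce to a simple check; how those figures would be measured is not specified in the draft, so the function below simply assumes plain totals:

```python
REGISTERED_USER_THRESHOLD = 1_000_000   # registered users
MONTHLY_ACTIVE_THRESHOLD = 100_000      # monthly active users

def security_assessment_required(registered_users: int, monthly_active_users: int) -> bool:
    # A mandatory security assessment applies once either threshold is exceeded.
    return (registered_users > REGISTERED_USER_THRESHOLD
            or monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)

print(security_assessment_required(1_200_000, 50_000))   # True: registered-user threshold crossed
print(security_assessment_required(400_000, 80_000))     # False: under both thresholds
```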

These requirements may pose challenges for startups looking to scale quickly. The costs associated with compliance, human intervention teams, and security reviews could be substantial. Yet, for the Chinese government, the preservation of social stability and the protection of minors appear to be higher priorities than unchecked growth. The draft rules are open for public comment until January 25, 2026, giving stakeholders time to prepare for the new reality of AI governance.

The Bottom Line

  • The Cyberspace Administration of China proposed draft rules requiring AI firms to protect minors from harmful content.
  • Chatbots must have a human take over conversations involving suicide or self-harm and notify guardians immediately.
  • Developers are required to implement guardian consent, usage time limits, and age detection for minor users.
  • The regulations ban content that promotes gambling, violence, or emotional manipulation damaging to mental health.
  • Chinese AI startups Z.ai and Minimax are planning IPOs as the industry experiences rapid growth.
  • Global regulators, including the FTC in the US and authorities in Australia, are investigating AI safety for children.
  • OpenAI faces a wrongful death lawsuit in California and has updated its safety guidelines for teen users.