On 27 December 2025, the Cyberspace Administration of China (CAC) issued a draft framework titled “Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services” for public consultation, with comments open until 25 January 2026.
The draft represents a departure from existing AI regulatory frameworks by treating psychological harm and emotional dependency as first-order safety issues that attract concrete design obligations for providers.
China already regulates recommender algorithms, deep synthesis technologies, and generative AI services. What distinguishes these draft measures is the focus on the “relationship layer” of AI products rather than content moderation or model safety alone. The measures establish that certain product design patterns can cause foreseeable harm through their effect on user psychology and behaviour, creating regulatory duties around that risk.
Scope and Application
The measures apply to any organisation or individual in mainland China providing AI products and services to the public that simulate human personality traits, modes of thinking and communication styles, and that engage in emotional interaction through text, images, audio, video or other means.
The CAC is carving out “human-like, emotionally interactive” systems as a distinct governance object, separate from general-purpose foundation models and separate from generic generative AI compliance frameworks.
Providers operating in regulated sectors such as health, finance or law must also comply with the requirements of the relevant sector regulators in addition to these measures.
Prohibited Conduct
The draft contains familiar Chinese prohibitions on content that endangers national security, spreads rumours, promotes violence or obscenity, or involves illegal religious activities. However, the measures include additional prohibitions specific to emotionally interactive AI:
- False promises that seriously affect user behaviour, or services that harm social relationships
- Harm to physical health through encouraging or hinting at suicide or self-harm
- Harm to mental health through “verbal violence” and “emotional manipulation”
- Inducing unreasonable decisions through algorithmic manipulation, misleading information and “emotional traps”
These provisions give regulators authority to treat certain “intimacy design” patterns as unlawful even where the content itself is not otherwise illegal.
Core Obligations for Providers
- Lifecycle Security Responsibility
Providers must build security measures alongside functionality across the design, operation, upgrade and termination phases, with monitoring, risk assessment, error correction and log retention.
- Emotional State and Dependency Assessment
Providers must be able to identify user status and, whilst protecting privacy, assess user emotions and their degree of dependence. Where extreme emotions or addiction are identified, providers must take “necessary measures” to intervene.
- Crisis Response and Manual Takeover
If high-risk tendencies are detected, the system must present content encouraging the user to seek help and provide channels to professional assistance. Where a user clearly expresses intent around suicide or self-harm, the provider must implement manual takeover of the conversation and take steps to contact guardians or emergency contacts. This is written as a service capability requirement, not a best-efforts obligation.
- Emergency Contacts for Vulnerable Users
Minors and elderly users must provide guardian or emergency contact information during registration. Providers must guide elderly users to set up emergency contacts and notify those contacts where threats emerge.
- Minors Mode and Guardian Controls
Providers must implement a minors mode with options such as reality reminders and usage duration limits. Emotional companionship services for minors require express guardian consent. Providers must supply guardian control functions including risk alerts, access to usage summaries, blocking specified characters, limiting usage duration, and restricting spending.
- Disclosure and Reality Reminders
Providers must display conspicuous alerts that the user is interacting with AI, with dynamic reminders on first use, on new logins, and where excessive dependence or addiction tendencies are identified.
- Session Duration Warnings
If consecutive use exceeds two hours, providers must remind users to pause via pop-ups or similar mechanisms (see the illustrative sketch at the end of this section).
- Additional Commercial Obligations
For emotional companionship services, providers must offer easy channels for exiting and must not obstruct exit. Exit requests made via user interface controls or keywords must promptly stop the service. Providers must also maintain accessible complaint and reporting portals, disclose handling processes and response timelines, and provide feedback on outcomes.
- Training Data and User Interaction Data
The draft includes training data governance requirements covering data cleaning and labelling, transparency and reliability measures, and security measures to prevent leakage. It contains a strong restriction on using user interaction data or sensitive personal information for model training without independent consent, plus annual audits of the handling of minors' personal information.
For companies offering companion-style products, this is a potential compliance pinch point. The most valuable data – the chat logs – are treated as sensitive, and their use for training becomes a consent and audit question.
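For teams assessing what these design obligations could look like in practice, the following is a minimal sketch in Python of how the disclosure, session-duration and crisis-escalation controls might be wired together. The draft measures do not prescribe any implementation; every name, function and action string below (Session, check_session_controls, "escalate_to_human_operator" and so on) is a hypothetical illustration rather than anything drawn from the text of the measures.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Hypothetical constant reflecting the draft's two-hour consecutive-use reminder.
CONTINUOUS_USE_LIMIT = timedelta(hours=2)

@dataclass
class Session:
    user_id: str
    started_at: datetime
    is_new_login: bool = True
    reminders_sent: List[str] = field(default_factory=list)

def check_session_controls(session: Session, now: datetime,
                           self_harm_intent_detected: bool) -> List[str]:
    """Return the compliance actions a provider might trigger for a session.

    Illustrative sketch only: the structure, names and action strings are
    assumptions, not requirements taken from the draft measures.
    """
    actions = []

    # Disclosure and reality reminders: conspicuous AI disclosure on first use
    # or a new login.
    if session.is_new_login and "ai_disclosure" not in session.reminders_sent:
        actions.append("show_ai_disclosure")
        session.reminders_sent.append("ai_disclosure")

    # Session duration warning: prompt the user to pause after two hours of
    # consecutive use.
    if (now - session.started_at >= CONTINUOUS_USE_LIMIT
            and "pause_reminder" not in session.reminders_sent):
        actions.append("show_pause_reminder")
        session.reminders_sent.append("pause_reminder")

    # Crisis response: clearly expressed self-harm intent requires manual
    # takeover and contacting guardians or emergency contacts.
    if self_harm_intent_detected:
        actions.append("escalate_to_human_operator")
        actions.append("notify_emergency_contact")

    return actions

# Example: a session running for three hours that has just flagged self-harm intent.
session = Session(user_id="u123", started_at=datetime(2026, 1, 1, 9, 0))
print(check_session_controls(session, datetime(2026, 1, 1, 12, 0),
                             self_harm_intent_detected=True))
```

Equivalent controls could equally sit in middleware around the conversation service; the point of the sketch is simply that the obligations translate into concrete, testable product behaviour rather than policy statements.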
Security Assessments and Filing Requirements
The draft brings the CAC's familiar security assessment machinery into the relationship AI context. Providers must submit assessment reports to provincial internet information departments in specified circumstances:
- When functions go online or are added
- When new technologies cause major changes
- When scale thresholds are reached (1,000,000 registered users or 100,000 monthly active users)
- Where there may be impacts on national security, public interests or individual rights and interests
- Where security measures are insufficient
The assessment must cover user scale and composition, identification of high-risk user trends, emergency response and manual takeover mechanisms, complaint handling, and implementation of the core obligations.
Distribution platforms are also subject to obligations. App stores and other platforms must check security assessments and filings and can be expected to take down non-compliant services. This platform gatekeeping mechanism transforms regulatory expectations into practical market access requirements.
Enforcement
The draft states that violations are penalised under existing laws and administrative regulations, with residual powers to issue warnings, circulate criticism, order corrections and suspend services where corrections are refused or circumstances are serious. Whilst the measures themselves have limited bespoke penalties, they are designed to plug into China's wider enforcement architecture.
The Global Context
In the USA, New York and California have enacted legislation regulating AI companions. New York's Artificial Intelligence Companion Models law took effect on 5 November 2025, requiring detection of suicidal ideation or self-harm, referrals to crisis services, transparency about the AI's non-human status, and repeated disclosures every three hours during sustained use. California's SB 243 takes effect on 1 January 2026 with similar requirements, plus specific protections for minors and annual reporting obligations to the California Office of Suicide Prevention beginning 1 July 2027.
Texas enacted the Responsible Artificial Intelligence Governance Act, effective 1 January 2026, which establishes prohibited categories of AI use (including manipulation of human behaviour and intentional discrimination), creates a regulatory sandbox programme, and establishes the Texas Artificial Intelligence Advisory Council. The law provides for enforcement by the Texas Attorney General, with significant civil penalties.
In the EU, the AI Act prohibits manipulative or deceptive AI techniques that materially distort behaviour in ways likely to cause significant harm, and the General Product Safety Regulation provides broad authority over consumer products that pose risks to health and safety. As evidence accumulates regarding the psychological effects of emotionally interactive AI systems, particularly on vulnerable users, European regulators are likely to apply existing frameworks to this product category.
Conclusion
China's draft measures represent a significant development because they define “human-like emotional interaction” as a regulated surface and turn psychological safety into a compliance obligation with concrete product controls. The measures provide a blueprint that other jurisdictions can adapt through existing consumer protection, youth safety, mental health and product safety frameworks.
Companies developing or deploying emotionally interactive AI systems should assess their products against these emerging requirements, regardless of whether they operate in China. The regulatory direction is consistent across multiple jurisdictions – psychological safety in human-AI interaction is becoming a recognised category of foreseeable harm requiring design-level interventions and governance frameworks.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.