Five of China’s most powerful regulatory bodies, led by the Cyberspace Administration of China (CAC), have unveiled a pioneering framework to govern 'anthropomorphic' artificial intelligence—algorithms specifically designed to mimic human personality, emotion, and interaction. The framework, known as the Provisional Measures and effective July 15, 2026, represents Beijing’s first comprehensive attempt to tame the burgeoning market for digital companions, virtual influencers, and AI-driven care providers.
The regulatory push comes as China experiences a surge in AI applications across sensitive social sectors, including elder care, early childhood education, and cultural dissemination. While these tools offer a technological balm for China’s demographic challenges and a growing loneliness epidemic, authorities are increasingly concerned about their potential to erode traditional ethics, manipulate vulnerable users, or endanger psychological health. The new rules aim to reconcile the 'healthy development' of these services with the overarching imperative of safeguarding 'national security and social public interests.'
Central to the new mandate is a dual-track approach that balances 'inclusive and prudent' supervision with strict political redlines. The measures explicitly encourage innovation in 'intelligent' companionship, yet they impose a zero-tolerance policy for any AI-generated content that might subvert state power, harm national honor, or challenge the socialist system. This ensures that even the most lifelike digital persona remains an instrument of the state’s broader social management goals.
Technically, the measures introduce a suite of oversight tools, including mandatory security assessments and algorithm filing procedures for service providers. Perhaps most significantly, the government will oversee the construction of an 'AI safety sandbox' platform. This environment will allow for the testing of anthropomorphic interactions under state supervision, ensuring that 'AI for good' aligns with Beijing’s definition of social stability before these tools reach the masses.
Furthermore, the regulations place a heavy burden of responsibility on developers to protect the rights of minors and the elderly. By mandating clear disclosure of an AI’s non-human nature and strengthening data privacy protections, the framework attempts to prevent the 'emotional hijacking' of users. As AI becomes more human-like, China is signaling that the boundary between biological and digital life will be defined and guarded by the state.
