China’s Cyberspace Watchdogs Crack Down on Unlabelled AI Deepfakes, Removing Hundreds of Thousands of Items

China’s internet regulator has removed over 543,000 pieces of AI-generated content and sanctioned 13,421 accounts for failing to label synthetic material. The enforcement targets fabricated human-interest videos, deepfakes impersonating public figures, grotesque edits of children’s characters, and marketplaces selling tools to strip AI labels.


Key Takeaways

  • Cyberspace authorities sanctioned 13,421 accounts and removed more than 543,000 items for failing to label AI-generated content.
  • Cases include fabricated rescue and disaster videos, deepfaked impersonations used for monetization, and altered children’s animations spreading violent or sexualized content.
  • E-commerce and social platforms were used to distribute tutorials and software that remove AI disclosure marks; offending shops and materials were taken down.
  • The enforcement is part of a broader ‘clean-up’ drive to protect minors, public order and online ecosystem integrity during a sensitive holiday period.
  • The action signals China’s preference for rapid, top-down platform accountability rather than slow legislative processes, creating a likely technical arms race with anti-detection tools.

Editor's Desk

Strategic Analysis

This enforcement wave is as much about signaling as it is about immediate harm reduction. Beijing is consolidating a regulatory principle: generative AI must be visibly accountable, and monetization that depends on deception will not be tolerated.

For platforms, the short-term burden will be expensive: improving synthetic-media detection, tightening onboarding and payment controls, and policing reseller markets. For creators and the domestic AI industry, the crackdown raises the cost of experimentation and commercialization, particularly where personalization and likeness monetization are involved. Internationally, China’s decisive administrative approach will complicate cross-border norms-setting; it provides an operational template for states that prioritize social stability and swift administrative remedies over market-led self-regulation.

In the months ahead, expect a two-track dynamic: tougher platform obligations and continuing underground demand for watermark-removal tools, which may push regulators to target the supply chains—hosting, app stores and payment—or to enshrine disclosure rules into law.

China Daily Brief Editorial

China’s internet regulators have stepped up enforcement against AI-generated content that lacks mandatory disclosure, removing more than 543,000 pieces of illegal or non-compliant material and sanctioning 13,421 accounts across major platforms. The campaign, announced by the country’s internet information authorities, targets a range of harms—from fabricated human-interest and disaster footage to deepfaked impersonations of public figures and grotesque edits of children’s characters.

Regulators published a series of representative cases to illustrate the problem. On short-video and social platforms, accounts posted AI-created clips of dogs purportedly rescuing infants and defusing bombs, or of crocodile attacks, without labeling them as synthetic; others circulated fabricated fire scenes. Separate clusters of accounts used face swaps and voice cloning to impersonate athletes, entertainers and entrepreneurs, selling “personalized” AI greetings and monetizing falsified endorsements.

The authorities singled out content they judged particularly dangerous to minors, citing AI-altered clips that mutilated or sexualised beloved animated characters and footage that promoted violence and shock value. E-commerce and lifestyle platforms were also implicated: users shared tutorials and software to strip AI watermarks or remove disclosure labels, and several online shops were taken down or had offending goods delisted.

The move forms part of a broader “clean-up” drive to protect online order and public sentiment during the Lunar New Year period, reflecting Beijing’s emphasis on social stability and the health of the online ecosystem. Platforms named in the notice included Weibo, Douyin, Kuaishou, Bilibili, WeChat, Xiaohongshu and major e-commerce marketplaces; platform operators were ordered to “deeply investigate and rectify” the distribution chains for such content and to take swift, lawful action.

For platform operators and creators the immediate consequence is intensified compliance pressure: stronger detection, faster takedowns and more aggressive policing of monetization pathways. For the wider AI ecosystem it signals a regulatory preference for top-down enforcement and platform accountability rather than laissez-faire experimentation—policies that will shape how generative tools are deployed, labelled and monetized inside China.

Internationally, the announcement echoes parallel concerns in Europe and the United States about deepfakes and synthetic-media disclosure, but China’s approach is conditioned by a different mix of priorities. Beijing frames the issue primarily in terms of misinformation, protection of minors and public order, and it is deploying administrative authority to compel platforms to act immediately rather than waiting for protracted legislative debates.

The crackdown also exposed a flourishing secondary market for anti-detection tools: tutorials and services that strip AI marks are financially incentivized and thus likely to persist, pushing enforcement into a technical arms race. Absent stricter legal prohibitions on the sale of such tools, regulators may need to combine takedowns with measures targeting the supply side—payment channels, hosting and app-distribution pathways.

Ultimately the campaign underscores an accelerating tension between creative uses of generative AI and the state’s insistence on visible safeguards. Platforms will bear the brunt of implementation, but the government’s message is clear: synthetic media that deceives, profits from impersonation, or harms minors will face swift sanction, and operators who fail to police their ecosystems risk sustained regulatory intervention.
