In the rapidly evolving landscape of China’s digital entertainment industry, the boundary between innovation and infringement has become dangerously blurred. Recent high-profile cases in the Beijing Internet Court have signaled a significant shift in how Chinese authorities view ‘personality rights’ in the age of generative AI. The focus has moved beyond simple copyright to the protection of an actor’s very identity—their face and, increasingly, their voice.
A landmark ruling involving the unauthorized use of actress Dilraba Dilmurat’s likeness in a short-form drama has established a critical precedent. The court rejected the defense of ‘technological neutrality,’ holding that even if an AI-generated face is not a pixel-perfect match, it still constitutes infringement so long as the general public can identify the person depicted. This decision effectively closes a loophole long exploited by developers who claimed that slight algorithmic variations absolved them of liability.
However, as the visual frontier becomes more regulated, a new conflict is erupting over ‘voice stealing.’ Prominent voice actors, including those behind iconic characters in *Empresses in the Palace* and *Infernal Affairs*, have discovered their voices being cloned for AI-generated dramas and unauthorized advertisements. Unlike a face-swap, which a viewer can identify at a glance, voice cloning presents a more insidious challenge: victims must prove that their recordings were used as training data, and the opacity of these models makes that nearly impossible to demonstrate.
Legal experts point out that while the Civil Code protects a person’s voice under the same framework as their image, the practical hurdles for victims are immense. Infringing content is ephemeral and easily deleted before it can be preserved as evidence, and the cost of litigation often outweighs the potential compensation for anyone but the highest-tier celebrities. This has created a ‘low cost, high reward’ environment for unscrupulous producers who use AI to bypass traditional talent fees, threatening the livelihoods of the professional dubbing industry.
The industry is now calling for a ‘full-chain’ regulatory approach that moves beyond reactive lawsuits. Proposed solutions include mandatory digital watermarking for all AI-generated audio and a requirement for platforms to maintain a transparent registry of the data used to train their models. Without such systemic safeguards, analysts warn of a ‘Gresham’s Law’ scenario where low-quality, stolen AI content crowds out authentic creative work.
Ultimately, China’s handling of these cases serves as a global bellwether for AI litigation. As the world’s largest market for short-form video and mobile content, China’s ability to balance the growth of its AI sector with the protection of individual rights will determine the future stability of its creative economy. For now, the ‘face and voice’ defense remains the frontline in the battle for the integrity of human performance.
