In a Beijing suburb and on social feeds alike, the newest threat to reputation and revenue often starts with a single, banal video clip. Chen Yuxuan, a recent dance graduate with hundreds of thousands of followers, discovered this year that a six-month-old clip of her university graduation performance had been turned into an AI face-swap template and reused more than 700 times without her knowledge. Her outrage, and her inability to make her complaint stick, encapsulates a mounting problem: synthetic-media tools make convincing impersonations almost free to produce, while the work of proving harm remains costly, complex and slow.
Recent months have brought a cascade of higher-profile versions of the same dynamic. Established stars and public figures, from veteran actor Wang Jinsong to news anchor Bai Yansong and pop star Jay Chou, have publicly complained that AI clips bearing their likenesses are circulating on short-video platforms, sometimes voicing scripted endorsements, sometimes used to humiliate or defame. Hollywood has not been immune either: last autumn a group of twelve major studios sued Chinese model maker Minimax over its "Conch AI" video tool, alleging that the company's training data and outputs infringed hundreds of film copyrights.
The mechanics are straightforward and troubling. Every large video model needs training data, and the more high-quality footage of a person or character exists online, the better an AI can reproduce them. That creates an asymmetry: the marginal cost of producing a fake celebrity endorsement or a pornographic hoax is near zero, while the cost to the victim, in legal fees, lost time and reputational damage, is steep. Lawyers point to lost "expected" commercial value as one way to quantify harm, but putting a price on dignity or on diluted IP value is often speculative and uneven.
Practical obstacles compound the theoretical ones. Face-swap outputs often preserve every aspect of an original clip except the face, which makes it hard to argue that the model replicated a specific protected work rather than generated a "new" artifact. Platforms monetise attention through ads and memberships even when face-swapped clips are distributed as free templates, complicating claims about direct profits. And the chain of responsibility is long: model trainers, tool developers, template creators, uploaders and hosting platforms can all play a role, turning enforcement into a painstaking hunt for a digital needle in a sprawling haystack.
The social consequences spill beyond celebrity vanity. Scammers use AI-generated likenesses to impersonate trusted figures and peddle bogus investment tips; criminals have combined illegally obtained personal data with face-swap technology to bypass payment authentication; and courts have begun handing down fraud convictions in which AI impersonation played a part. A handful of publicised cases in China describe tens of thousands of yuan drained from accounts, and schemes that used fabricated images to lure people into money-pools and courses.
The legal terrain is shifting but unsettled. In China, private litigants and prominent corporations alike are pursuing remedies, from cease-and-desist letters to multi-million-dollar suits, while lawmakers and industry leaders call for stronger governance. Some companies have responded with generation filters that block requests to depict real people, and many Western AI firms have reached licensing settlements with newsrooms and media companies after litigation. Yet outright prohibition risks stifling innovation: prior waves of AIGC growth were driven precisely by the ability to reproduce recognisable public figures and narratives.
That tension frames the strategic choices ahead. One route favoured by rights holders is to replicate the media sector's playbook: licensing and revenue-sharing deals that monetise access to IP and likenesses. Another is regulatory: legal presumptions that shift liability onto platforms, or mandates for robust provenance and watermarking of synthetic output. Technological measures such as detectable watermarks and stronger identity authentication may help, but they are imperfect and can be evaded by determined bad actors.
For individuals like Chen Yuxuan, the immediate reality is dispiriting. Without a dedicated legal team or a clear path to compensation, many cases founder. Even prominent figures who mobilise legal firepower describe a game of whack-a-mole, with copycats and mirror accounts reappearing faster than takedowns can be enforced. The broader implication is systemic: as face-swap models become more realistic and more accessible, the fragile mix of personal privacy, brand value and public trust that underpins commerce and civic life comes under strain.
The policy moment is arriving. Chinese industry voices and national delegates have proposed targeted measures against deepfake and voice-synthesis harms; international studios are litigating and negotiating licensing accords; and some major AI firms are striking deals with rights owners to legitimise their use of protected material. The path the sector takes will determine whether the next era of creative tools is governed by a negotiated marketplace of rights and checks, or by a patchwork of costly litigation and ad hoc platform enforcement.
If regulators and companies manage to strike a workable bargain, the result could be a healthier ecosystem in which creators are paid and consumers can trust synthetic outputs. If they do not, the low cost of producing highly convincing fakes will continue to outpace the high cost of proving and repairing harm, and the collateral damage will broaden from celebrities to ordinary citizens and to the institutions that rely on authentic speech and imagery.
