When Chen Yuxuan, a recent dance graduate with hundreds of thousands of social-media followers, discovered that a six-month-old performance video of hers had been turned into an AI face-swap template and reused more than 700 times, she encountered a problem familiar to many: the cost of producing convincing fake video has collapsed, while the cost of proving harm and pursuing redress remains prohibitive.
Chen’s case is not an outlier. In the past few months a string of Chinese celebrities and public figures have complained that their images and voices are being pirated by generative video systems. Actors, anchors and musicians have publicly warned that AI videos using their likenesses are being used to hawk goods, spread falsehoods or simply to humiliate them, while holiday-themed “celebrity” greeting clips and crude deepfake sketches circulate widely on short-video platforms.
The commercial dimension has intensified the problem. Last year Hollywood studios including Disney, Warner and Universal jointly sued MiniMax over its “Conch AI” video model, accusing it of infringing hundreds of film copyrights. In China and abroad the same legal fault line has surfaced: models are trained on vast quantities of publicly available material, and the more visual material a public figure has online, the more convincing an AI impersonation of them becomes.
Legal advisers frame the harm in familiar economic and reputational terms, pointing to the fees that celebrities historically command for commercial endorsements as a benchmark for potential damages; a single top-tier endorsement in China can fetch millions of yuan. Marketplaces, meanwhile, now sell “digital human” services at prices ranging from a few yuan to a few thousand, and reporters have been able to buy a ready-made celebrity endorsement clip for a trivial sum.
The damage is not only pecuniary. Reputation and trust are hard to price, and obscene or fraudulent manipulations can inflict long-term harm on personal brands and on the value of intellectual property. Generative image and video models have been used to produce sexually suggestive images of public figures and to lend credibility to investment scams and fake endorsements, turning entertainment-style mischief into a vector for economic crime.
Yet practical enforcement is cumbersome. AI face‑swaps typically preserve all original footage except the face, complicating efforts to demonstrate a direct copy of any single work. Templates are reposted, remixed and monetised indirectly—by driving ad views or membership conversions—making it difficult to trace who profited and to what extent. Plaintiffs must navigate a long chain of developers, uploaders and platform operators to establish liability.
The technology companies have responded with two tactics: technical constraints and legal arguments. Several leading model makers have temporarily disabled features that generate lifelike human images; others argue, as MiniMax did, that their systems are neutral tools and that they do not directly profit from individual user prompts. Courts and commentators are increasingly receptive to a hybrid approach: some degree of “fair use” during training may be tolerable, but platforms should implement copyright filters and avoid generating outputs that could substitute for the original works.
Market and regulatory fixes are already emerging. One path is transactional: OpenAI and other firms have shown that licensing and revenue‑sharing deals with rights holders are feasible, as when Disney agreed to license characters to OpenAI in exchange for financial and strategic ties. The other path is regulatory: Chinese industry figures and lawmakers have pushed for stricter controls on deepfakes and “face‑swap” technologies, and Beijing is expected to press the issue in the 2026 legislative season.
The policy challenge is to balance two competing public goods: the social and economic opportunities of generative AI, and the protection of personal and intellectual property rights. Technical transparency, provenance metadata, mandatory watermarking and clearer intermediary liability rules are plausible ingredients of a durable solution, but they will impose costs and reshape how AIGC (AI-generated content) products spread and scale.
For individual creators like Chen, the gap between low‑cost harm and high‑cost remedy is the current reality. Unless platforms, rights holders and regulators build a faster, cheaper mechanism for attribution and takedown—and unless AI companies accept licensing or compensation models—the asymmetry will continue to encourage misuse, erode trust in digital media and shift enforcement burdens to the individuals least able to shoulder them.
