Unauthorized AI Lunar‑New‑Year Greeting Videos Surge in China, Raising Legal and Trust Questions

AI‑generated Lunar New Year greeting videos are proliferating on Chinese social media without the consent of depicted individuals. While major platforms are adding watermarks, contractual bans and automated detection, many open‑source tools lack safeguards, creating civil, reputational and criminal risks and exposing broader governance gaps around synthetic media.

Key Takeaways

  • AI‑synthesised short videos featuring unauthorised likenesses have gone viral on Chinese social platforms ahead of the Lunar New Year.
  • Leading AI services now add visible AI labels or watermarks, ban unauthorised use of portraits and voices in user agreements, and deploy automated detection tools.
  • Open‑source models and smaller applications frequently lack such protections, creating opportunities for misuse.
  • Unauthorised synthetic media can lead to civil liability (portrait and reputation rights), defamation claims, and fraud, complicating enforcement.
  • The spread of these clips during a major cultural festival highlights the urgent need for consistent platform controls, legal remedies and public awareness.

Editor's Desk

Strategic Analysis

Editor’s Take: The surge of unauthorized AI greeting videos is a concentrated example of a wider governance challenge posed by generative technologies. Platforms can and should harden detection, provenance and labelling systems, but technical fixes alone will not suffice. Effective mitigation requires calibrated regulation that clarifies liability, incentives for responsible service design, and a public‑facing campaign to raise digital‑provenance literacy. If governments move too slowly or unevenly, the cumulative effect will be an erosion of trust in everyday digital content, a strategic cost for societies and markets that depend on reliable information and personal reputation.

China Daily Brief Editorial

As the Lunar New Year approaches, short videos generated by artificial intelligence and circulating widely on Chinese social platforms have become a festive novelty and a legal headache. Clips that stitch familiar faces and voices into personalised greetings are striking a chord with audiences, but many are created without the consent of the people they portray.

Major commercial AI services and professional video‑synthesis tools have begun to adopt basic anti‑abuse measures. Platforms are increasingly embedding visible watermarks or mandatory AI‑generated labels, imposing contractual bans on unauthorised use of another person’s likeness or voice, and deploying automated content filters that turn the same machine‑learning techniques on the problem to flag suspected deep‑synthesis media.

Those safeguards, however, are uneven. Open‑source models and smaller apps often omit clear labelling or usage constraints, and some creators deliberately strip identifiers to improve realism. That patchwork leaves room for misuse: a cheerful holiday clip today can become a vehicle for harassment, fraud or reputational harm tomorrow.

The legal risks are immediate and multi‑layered. Using someone’s image or voice without permission can trigger civil claims under portrait and reputation rights; altered or fabricated speech may amount to defamation; and cloned identities can be repurposed for social‑engineering scams that carry criminal liability. Enforcement is complicated by jurisdictional ambiguity when tools, content and viewers cross administrative or national borders.

The timing amplifies the stakes. New Year greetings travel fast within family groups and community chat channels, a viral pathway that can spread manipulated content before platforms or rights‑holders can react. Cultural resonance gives such clips outsized visibility and increases the chance that an unauthorised synthetic likeness will inflict real social or economic harm.

For platform operators and policymakers, the dilemma is stark: restrict creativity and convenience, or tolerate a permissive environment that enables abuse. Tech firms face pressure to roll out robust provenance systems and tougher verification measures while balancing user experience and commercial incentives. Regulators face the classic trade‑off between technology neutrality and targeted rules that prevent harm without hindering innovation.

For international observers, China’s experience illustrates a universal problem. The rise of easy, inexpensive tools for producing realistic synthetic media exposes gaps in governance and user awareness everywhere. The immediate remedy combines better platform controls, clearer legal remedies for victims and sustained public education about the provenance of digital content.

Absent faster and more consistent safeguards, unauthorised AI greetings will remain a test case for whether societies can preserve trust in everyday digital interactions while embracing generative technologies. The holiday cheer may fade quickly; the legal and social fallout may not.
