A factory-line supervisor and newly elected national deputy has thrust a mundane but urgent problem onto the agenda of China's annual two sessions: the rapid spread of misinformation powered by generative artificial intelligence. Li Anrui, 58, a plastics workshop director with more than three decades in manufacturing, told reporters that he frequently encounters short videos and posts whose realism makes truth and falsehood nearly indistinguishable. The consequence, he argues, is not merely online noise but real reputational and social harm to enterprises, public services and government credibility.
Li singled out two vivid patterns that worry him. One is the circulation of dramatic but context-free clips—he mentioned videos purporting to show electric vehicles spontaneously catching fire—that offer no verifiable provenance and nonetheless influence consumer perceptions. The other, a concrete case from 2025 in Hangzhou, involved an anomalous smell in parts of the municipal water supply followed by an internet user’s fabricated “police advisory” claiming sewage contamination. The fake advisory spread widely, triggered panic and required official resources to rebut; the originator was subsequently detained.
Those anecdotes underpin Li's central argument: tackling AI‑enabled misinformation requires action at the source. He urged creation of an official "anti‑counterfeit verification platform" that would assign every genuine document or public notice a unique, machine‑verifiable identity so that ordinary citizens can authenticate items quickly. He also proposed hard rules for AI‑generated content, insisting that anything mimicking government forms, judicial documents or official bulletins carry mandatory AI‑origin labels.
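Li's proposal does not specify how such a machine‑verifiable identity would work. One minimal way to illustrate the idea, assuming the issuing authority holds a secret key and runs a public lookup service, is a keyed hash that binds a notice's ID to its exact text; every name in this sketch is invented for illustration, not drawn from any actual platform.

```python
import hmac
import hashlib

# Hypothetical sketch: the issuing authority keeps SECRET_KEY private and
# publishes a short verification code alongside each notice; a lookup
# service recomputes the code to confirm authenticity.
SECRET_KEY = b"issuing-authority-secret"  # held only by the authority

def issue_code(notice_id: str, body: str) -> str:
    """Derive a code binding the notice ID to its full text."""
    msg = f"{notice_id}\n{body}".encode("utf-8")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

def verify(notice_id: str, body: str, code: str) -> bool:
    """Recompute the code; any edit to the text invalidates it."""
    return hmac.compare_digest(issue_code(notice_id, body), code)

official = "Water supply in District X is safe; tests completed."
code = issue_code("HZ-0001", official)

assert verify("HZ-0001", official, code)                      # genuine notice passes
assert not verify("HZ-0001", official + " SEWAGE!", code)     # altered text fails
```

A fabricated "police advisory" of the kind circulated in the Hangzhou incident would carry no valid code, so a citizen checking it against the lookup service would see the verification fail.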
Beyond provenance and labelling, Li placed responsibility squarely on platforms. He recommended stronger obligations for social networks and content hosts to verify claims purporting to be official before amplifying them, and to penalise repeat offenders through flow restrictions, de‑ranking or account suspension. Crucially, he urged that recommendation algorithms prioritise verified official information, a measure aimed at starving falsehoods of attention rather than relying solely on after‑the‑fact takedowns.
Li’s proposals dovetail with broader policy signals coming out of Beijing. The government’s recently circulated draft summary of the 15th Five‑Year Plan flags the need to “promote development and standardised management in tandem,” and specifically mentions strengthening artificial intelligence governance. Premier Li Qiang’s government work report for 2026 likewise names AI governance among priorities for the year. Those references indicate political cover for more assertive regulatory steps at the intersection of technology, public order and industry protection.
Implementing Li’s blueprint would be technically feasible but politically delicate. Technologies such as cryptographic signatures, provenance metadata and robust watermarking can help trace content origin and certify documents, yet they require interoperable standards and buy‑in from device makers, app developers and foreign platforms. Mandatory AI labelling also raises enforcement questions: how to detect deliberate obfuscation, how to handle encrypted or peer‑to‑peer sharing, and how to prevent bad actors from gaming metadata.
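Provenance metadata, one of the techniques named above, can be sketched in a few lines: a manifest records who issued a piece of content and a hash of its bytes, so any later edit is detectable. The field names below are invented for illustration (the approach loosely resembles manifest‑style schemes such as C2PA, but this is not that specification).

```python
import hashlib

# Illustrative provenance manifest: binds an origin claim to a content hash.
def make_manifest(content: bytes, issuer: str, generator: str) -> dict:
    return {
        "issuer": issuer,               # who published the content
        "generator": generator,         # e.g. which tool or AI model produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def content_matches(manifest: dict, content: bytes) -> bool:
    """True only if the bytes are exactly those the manifest describes."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

clip = b"...video bytes..."
manifest = make_manifest(clip, issuer="verified-newsroom", generator="camera-original")

assert content_matches(manifest, clip)               # untouched content passes
assert not content_matches(manifest, clip + b"x")    # any edit is detected
```

The sketch also shows why the article's caveat about standards and buy‑in matters: a bare hash only ties metadata to bytes, and a forger could regenerate both together. Preventing that requires a cryptographic signature over the manifest itself and a key infrastructure that players across devices, apps and borders agree to honour.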
There are wider trade‑offs. Tightening platform liability and privileging verified official sources can curb viral falsehoods and protect consumers and companies, but it risks entrenching state actors as primary arbiters of truth and could chill grassroots speech if poorly designed. For China's manufacturing sector, where reputation is central to exports and domestic demand, the balance between rapid counter‑disinformation measures and civil liberties will be debated in regulatory corridors and on factory floors alike.
The issue is not uniquely Chinese. Governments from Brussels to Washington are wrestling with how to force provenance, mandate labels and allocate responsibility for synthetic media without stifling innovation. What Li’s intervention highlights is the immediacy of the problem for societies where a single viral post can damage public health responses, consumer confidence and institutional trust. How Beijing proceeds will influence not only domestic internet governance but also international norms for managing generative AI.
