A Shanghai municipal deputy and securities‑industry technologist has proposed a comprehensive, city‑level approach to governing generative artificial intelligence that blends technical defences, new local laws and public education. Zhan Tingting, a deputy to the Shanghai People’s Congress and an assistant general manager in Guotai Haitong Securities’ R&D division, warned that the rapid spread of AIGC (AI‑generated content) has lowered barriers to entry while amplifying misuse through industrialised deepfakes, concealed attacks and cross‑sector harms.
Zhan recommends creating a municipal “AI content safety detection centre” in partnership with research institutions and leading firms. The centre would combine monitoring, early warning and provenance tracing, require technical filings for high‑risk scenarios and run periodic, penetration‑style assessments to shift defences from passive blocking to active protection.
On the content side, she proposes strict enforcement of national digital‑watermarking standards and a mandate that AIGC services used or hosted in Shanghai embed non‑removable watermarks, making generated material traceable across platforms. Zhan also urges accelerated local legislation: a Shanghai‑tailored AI law that supports the city’s tech ecosystem while aiming to set governance norms internationally, alongside a targeted regulation shielding minors from violent, biased or otherwise harmful generated content.
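The mechanics of cross‑platform traceability can be illustrated with a minimal sketch. The snippet below is not the national watermarking standard (whose technical details the proposal does not spell out, and which embeds marks in the media itself rather than in detachable metadata); it only shows the general idea of a provider signing generated content so that downstream platforms can verify origin and detect tampering. The provider key and service name are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical per-provider signing key; a real scheme would use
# registered keys managed by the detection centre, not a constant.
PROVIDER_KEY = b"demo-secret-key"

def sign_provenance(content: str, provider_id: str) -> dict:
    """Attach a tamper-evident provenance record to generated content."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    record = {"provider": provider_id, "content_sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = base64.b64encode(
        hmac.new(PROVIDER_KEY, payload, hashlib.sha256).digest()
    ).decode("ascii")
    return record

def verify_provenance(content: str, record: dict) -> bool:
    """Check that content matches the record and the signature is valid."""
    if hashlib.sha256(content.encode("utf-8")).hexdigest() != record["content_sha256"]:
        return False  # content was altered after signing
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = base64.b64encode(
        hmac.new(PROVIDER_KEY, payload, hashlib.sha256).digest()
    ).decode("ascii")
    return hmac.compare_digest(expected, record["signature"])

rec = sign_provenance("AI-generated paragraph.", "demo-aigc-service")
print(verify_provenance("AI-generated paragraph.", rec))  # True
print(verify_provenance("Edited paragraph.", rec))        # False
```

The design choice worth noting is that verification requires only the content and the record, which is what lets any platform, not just the originating one, trace material back to its source.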
Financial market integrity and data security are central to her proposal. Zhan suggests a shared “AI false‑information and anomalous‑data feature library” to spot coordinated fabrication used to manipulate stocks or regulatory narratives, plus stricter security audits of third‑party AI tools to block leakage of commercial secrets into public large models. She argues these measures are necessary to raise the city’s ability to detect, attribute and respond to AI‑enabled financial manipulation and data exfiltration.
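The data flow of such a shared feature library can be sketched in a few lines. The proposal does not specify its implementation, so the example below is an assumption‑laden illustration: a library of phrase‑level fingerprints from previously identified fabrication campaigns, screened against incoming posts. A production system would use embeddings, burst detection and account graphs rather than substring matching; the feature strings and post IDs here are invented.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    post_id: str
    matched_feature: str

# Hypothetical shared library of fingerprints from known
# coordinated-fabrication campaigns (illustrative strings only).
FEATURE_LIBRARY = {
    "regulator to approve merger tonight",
    "insider confirms delisting tomorrow",
}

def screen_posts(posts: dict) -> list:
    """Flag posts containing any known fabrication fingerprint."""
    alerts = []
    for post_id, text in posts.items():
        lowered = text.lower()
        for feature in FEATURE_LIBRARY:
            if feature in lowered:
                alerts.append(Alert(post_id, feature))
    return alerts

hits = screen_posts({
    "p1": "Breaking: insider confirms delisting tomorrow for XYZ!",
    "p2": "Quarterly earnings were in line with guidance.",
})
print([a.post_id for a in hits])  # ['p1']
```

Sharing the library across firms is the key point: a fingerprint learned from one manipulation attempt immediately raises detection coverage for every participant.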
Finally, Zhan stresses social resilience: AI safety education should be folded into a municipal digital‑literacy campaign aimed at vulnerable groups — the elderly, children and corporate finance staff. Practical tips, such as checking for implausible private details or staged micro‑actions in videos, are proposed to build a “psychological firewall” against deepfakes and misleading predictions.
The proposal frames Shanghai’s dilemma plainly: how to reconcile the city’s role as a global financial and technology hub with the security demands posed by generative AI. If adopted, the package would raise compliance and technical requirements for local AIGC providers, push enterprises to audit their AI supply chains more rigorously, and could position Shanghai as an influential testbed for urban‑scale AI governance, both within China and abroad.
