At this year’s national political meetings in Beijing, a senior scholar and member of the Chinese People’s Political Consultative Conference warned that the rapid spread of generative AI and multimodal content is creating a new, subtler wave of risks for children that existing safeguards struggle to contain. Guo Yuanyuan, deputy director of the Institute for Mega-city Development at Capital University of Economics and Business, told the Daily Economic News that AI’s ability to produce lifelike cartoons, videos and audio blurs the line between safe and harmful material and renders traditional keyword-based filtering increasingly ineffective.
Guo outlined three interlocking problems. First, the diversity and ambiguity of multimodal AI outputs make “hidden” harms—soft sexualisation, value distortions and veiled violent cues—harder to recognise and block. Second, platform economics that prioritise engagement and algorithmic amplification can spread harmful content faster than regulators can respond, creating a constant risk of “overspill.” Third, many adolescents turn to AI and online communities for emotional support; that reliance can deepen psychological risks if AI-driven interactions funnel young people toward negative content or echo chambers.
Her prescription runs across technical, regulatory and social fronts. Technically, she urged embedding social responsibility into recommendation and content-generation mechanisms so platforms do not treat safety as a bolt-on. Platforms should deploy smarter, real-time detection and interruption tools that combine machine pre‑warning with immediate human follow-up and links to rescue channels. Regulatory frameworks should be updated proactively to match the pace of innovation rather than lagging behind it.
Guo was also critical of how current “youth mode” and anti-addiction systems are implemented. While time limits and blunt filters have been widely adopted, she said they are “good at blocking the obvious and poor at policing the subtle.” She called for deeper human–machine coordination: an always-on AI that flags emotional distress or risky behaviours and triggers instant human intervention and family notification, rather than leaving children to interact with models in isolation.
The political advisor also highlighted the rural dimension of the problem, noting that left-behind children and communities with weaker infrastructure are more exposed to harms and less likely to benefit from safety tools and digital literacy programmes. Closing that gap, she argued, requires investment in infrastructure, extension of protective services into townships and villages, and explicit inclusion of rural schools and community bodies in digital protection responsibilities.
Her comments reflect a broader policy shift in China and beyond: governments are moving from post hoc content takedowns toward upstream governance that shapes how platforms design algorithms and product features. If regulators heed her advice, tech companies will face stronger incentives to bake child safety and social values into system architecture, with implications for product design, compliance costs and the international governance of AI.
