As generative artificial intelligence moves from novelty to necessity, the digital landscape is shifting from traditional Search Engine Optimization (SEO) to the more complex realm of Generative Engine Optimization (GEO). At a recent high-level seminar in Beijing, Hu Naying, a senior official at the China Academy of Information and Communications Technology (CAICT), warned that this transition brings existential risks. The primary concern is no longer just the quality of training data, but a deeper crisis of 'source governance,' in which polluted information threatens to undermine the reliability of AI outputs.
Ms. Hu argues that if the foundational data of AI systems is compromised, the resulting content will inevitably drift from factual reality. In the context of China's current economic strategy, this is more than a technical glitch; it is viewed as a direct threat to the development of 'New Quality Productive Forces.' By polluting the intellectual 'soil' from which AI-driven productivity grows, data manipulation could derail the nation's broader digital transformation and undermine public trust in intelligent systems.
To combat the rising tide of GEO abuse, Beijing is advocating for a multi-stakeholder responsibility model. This framework distributes the burden of accountability across five key actors: brand enterprises, service providers, content platforms, generative AI manufacturers, and end-users. The goal is to build a long-term 'cognitive barrier' that prevents malicious data from infiltrating the ecosystem while ensuring that brand growth is achieved through legitimate, positive optimization rather than deceptive practices.
Looking ahead, the Chinese regulatory approach will rely on what Ms. Hu calls 'dual engines': technological innovation and institutional oversight. On the technical side, the CAICT is pushing for standardized data source classification and robust evaluation systems. Institutionally, the government is exploring the implementation of registration systems and 'blacklists' to track and punish malicious data pollution. This proactive stance signals that as AI becomes the new 'intellectual power' for the economy, the state will play a central role in ensuring that the fuel for that power remains untainted.
