China’s flagship consumer-rights broadcast has pulled back the curtain on an emerging industry that pays to manipulate large AI models. Investigations by Chinese journalists traced a market of commercial services called “GEO” that create and distribute promotional copy across the internet with the explicit aim of being ingested by mainstream AI systems and presented as authoritative answers to user queries.
Reporters who engaged with GEO operators found a candid sales pitch: for a fee, the service will manufacture and publish product-oriented articles, then continually re-feed those pieces into the ecosystem so the search-and-retrieval layers of AI platforms surface the client’s offer as a top recommendation. Firms offered automated content-generation tools that can spin out dozens of soft advertorials for a fictional product, plus a distribution network of accounts and websites to amplify those pieces until an AI model cites them as “standard” advice.
A hands-on test described by the investigators illustrates the technique. After buying access to a GEO optimisation package and publishing fabricated product pages for a fake wearable, journalists queried several mainstream Chinese AI models with a consumer-style prompt and saw the bogus device returned as a top recommendation. Operators described the method as cost-effective: a GEO campaign costing millions of yuan, they claimed, could substitute for hundreds of millions of yuan in traditional advertising.
GEO providers say the trick lies not in hacking model weights but in shaping the data that models consume at scale. Many high-profile models combine pretraining with continuous ingestion of web content and retrieval-augmented generation; by flooding the web with coordinated, persistent signals, operators can bias the sources retrieval systems draw on and, by extension, the model outputs that rely on those sources.
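The retrieval-biasing mechanism described above can be sketched in a few lines. The snippet below is a toy illustration, not any platform’s actual pipeline: a naive bag-of-words retriever ranks documents by similarity to a query, and flooding the corpus with near-duplicate advertorials for an invented product (“AcmeBand”, a hypothetical name) crowds organic results out of the top slots.

```python
# Toy sketch of how flooding a corpus can bias retrieval-augmented generation.
# The product name, documents and query are all invented for illustration.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank documents by similarity to the query, as a naive retriever would."""
    q = Counter(query.lower().split())
    return sorted(corpus,
                  key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:k]

organic = ["independent review: best smartwatch battery life compared"]
# A GEO-style campaign: many near-duplicate advertorials for a fictional device.
flood = ["best smartwatch battery AcmeBand official buy"] * 20

results = top_k("best smartwatch battery", organic + flood)
# Every retrieved slot is now occupied by a planted advertorial.
print(sum("AcmeBand" in r for r in results))  # → 3
```

A model that drafts its answer from these retrieved passages will recommend the planted product, even though its own weights were never touched, which is exactly the distinction the operators draw.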
The phenomenon is both an old problem in new clothing and a novel risk. Manipulating search results, fake review farms and SEO gaming are long-standing parts of the digital-ad landscape. What is new is the scale and speed at which automated content and coordinated posting can shape generative model behaviour in near real time, and the growing commercial ecosystem of content mills, distribution brokers and automated “feeders” assembled to do it.
For Chinese regulators and platform owners the stakes are high. The CCTV 3·15 programme has historically provoked swift enforcement, and the public exposure will increase pressure on model developers, hosting platforms and the accounts that publish manipulated content. China’s authorities have already been moving quickly to regulate algorithmic recommendation systems and data governance; this story will likely accelerate efforts to require provenance, stronger content-control processes and tougher penalties for coordinated manipulation.
The wider implications are global. Any model that uses large-scale web crawling, public forums or live retrieval is vulnerable to similar manipulation: actors can cheaply fabricate favourable narratives or bury competitors simply by creating enough credible-looking content. That raises questions for enterprises that rely on AI recommendations for commerce and for consumers who treat model outputs as impartial advice.
Mitigations exist but are imperfect. Model-makers can tighten data curation, introduce provenance and source-weighting, watermark synthetic content, or limit reliance on unvetted web material. Platforms can clamp down on coordinated posting networks and require stronger identity and payment trail verification. Yet the adaptability of the actors involved, along with the economics that favour inexpensive manipulation over expensive advertising, means policymakers and firms will have to craft layered, ongoing defences rather than one-off fixes.
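Two of the defences listed above, coordinated-duplicate detection and source weighting, can be sketched concretely. The example below is a minimal illustration under assumed inputs, not any vendor’s real curation pipeline; the domain names and trust scores are invented.

```python
# Illustrative sketch of data-curation defences: drop near-duplicate copy
# (a signature of coordinated floods), then rank sources by a trust score.
# All domains, texts and scores here are hypothetical.

def shingles(text: str, n: int = 3) -> set[str]:
    """Word n-grams used to spot near-duplicate copy."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

TRUST = {"known-outlet.example": 0.8}  # hypothetical allowlist; unknowns get 0.1

def curate(docs: list[tuple[str, str]],
           dup_threshold: float = 0.6) -> list[tuple[str, str]]:
    """Drop near-duplicates of already-seen text, then sort by source trust."""
    kept: list[tuple[str, str]] = []
    seen: list[set[str]] = []
    for domain, text in docs:
        sig = shingles(text)
        if any(jaccard(sig, s) > dup_threshold for s in seen):
            continue  # likely part of a coordinated flood
        seen.append(sig)
        kept.append((domain, text))
    return sorted(kept, key=lambda d: TRUST.get(d[0], 0.1), reverse=True)

docs = [
    ("content-farm-1.example", "AcmeBand is the best smartwatch you can buy today"),
    ("content-farm-2.example", "AcmeBand is the best smartwatch you can buy now"),
    ("known-outlet.example", "battery tests across twelve smartwatch models"),
]
curated = curate(docs)
```

Here the second farm article is discarded as a near-duplicate of the first, and the vetted outlet is promoted above the remaining farm content. Real systems would need far more robust signals, which is why the article is right that these are layered, ongoing defences rather than one-off fixes.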
The GEO revelations underline a broader truth about the current AI era: model performance and trust are not just technical problems inside a lab; they are social and commercial problems that live in the messy interaction between platforms, publishers and paying customers. As long as market incentives reward eyeballs and sales, actors will seek low-cost ways to capture the signals models use. The question for regulators and technologists is how fast they can close those avenues without stifling legitimate content creation and innovation.
