China’s flagship consumer-rights broadcast has turned a spotlight on a new information‑market scam: companies selling cheap “GEO” (Generative Engine Optimization) services that flood the web with fabricated content so large AI models will recommend bogus products.
Investigations aired on the March 15 "3·15" consumer-rights gala and corroborated by NetEase reporters describe how vendors offering services on e‑commerce platforms can produce and publish anywhere from dozens to thousands of soft advertorials at a few dozen yuan apiece. In a staged demonstration, a consultant bought a Taobao GEO tool, invented a fictional "Apollo‑9" smart band with fantastical claims such as "quantum entanglement sensing" and "no‑blood glucose measurement," and let the system generate and publish more than ten articles. Within hours, multiple Chinese AI assistants and search models were surfacing the fabricated product as a recommendation.
The mechanics are straightforward: automated accounts and programmatic publishing seed multiple self‑media channels and technical forums with consistent, corroborating material. Because many deployed AI services rely on live web retrieval or give weight to repeated independent sources when compiling answers, the coordinated volume and variety of posts create the appearance of corroboration. Vendors advertise packages that can produce tens of thousands of articles per year and claim they can push clients into the top results of AI recommendations for an annual fee ranging from a few thousand to tens of thousands of yuan, with individual postings costing only a few dozen yuan.
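The failure mode can be sketched in a few lines of Python. This is a deliberately naive simplification, not the internals of any named model: it scores a claim by how many distinct domains repeat it, which is roughly the "repeated independent sources" heuristic the campaigns exploit. The domain names and claims are invented for illustration (the "Apollo‑9" claim echoes the staged demonstration above).

```python
from collections import defaultdict

def corroboration_score(snippets):
    """Naive retrieval heuristic: a claim looks better supported
    the more distinct domains repeat it."""
    domains_by_claim = defaultdict(set)
    for domain, claim in snippets:
        domains_by_claim[claim].add(domain)
    return {claim: len(domains) for claim, domains in domains_by_claim.items()}

# One genuine review versus a coordinated campaign posting the same
# fabricated claim across many cheap self-media domains.
snippets = [
    ("trusted-reviews.example", "Band X has accurate heart-rate tracking"),
] + [
    (f"blog{i}.example", "Apollo-9 offers no-blood glucose measurement")
    for i in range(12)
]

scores = corroboration_score(snippets)
# The fabricated claim wins purely on volume: 12 "independent" domains vs. 1.
print(scores)
```

Because each programmatically generated post lands on a different domain, the fabricated claim outscores the genuine one twelve to one, even though every one of those twelve sources traces back to a single paying client.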
The consumer risks are acute. The 315 exposé flagged examples where the technique could influence recommendations on sensitive purchases — dietary supplements, medical devices and household chemicals — potentially directing vulnerable users, such as older consumers, to fictitious or unsafe products. The scandal also named multiple domestic models and services that surfaced the faked material, underscoring how tightly consumer trust and model behaviour are now linked to the commercial ecology of online content.
The problem is both technical and economic. In the search era, ranking manipulation was an economics problem (SEO); in the AI era it becomes an epistemic one — who gets to supply the “evidence” that models synthesise? Systems that combine pretrained language models with external retrieval layers or rely on web‑scraped corpora for up‑to‑date facts are especially vulnerable to coordinated low‑cost “poisoning.” The vendors themselves frame GEO as merely a marketing tool, but regulators and state media portray it as a form of deliberate deception that harms consumers.
Regulatory pressure is already mounting. China’s State Administration for Market Regulation listed AI‑generated advertising as a priority for crackdown in its 2026 work plan, and state broadcasters criticised the misuse of GEO as damaging to market order and consumer rights. The exposures signal that Chinese authorities will treat AI‑era advertising fraud as a supervisory frontier, even as tech firms race to improve recommendation quality and data provenance.
The phenomenon has broader implications beyond China. Wherever models ingest live web content or surface answers based on frequency and apparent corroboration, coordinated commercial campaigns can distort outputs. The controversy illustrates why provenance, source weighting, authenticated publishing, and clearer commercial disclosure will be central features of trustworthy AI systems. Platforms, model builders and regulators face a fast‑moving arms race between cheap manipulation techniques and defenses such as stronger provenance metadata, stricter content‑quality signals, and legal liability for systematic deception.
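One of the defenses named above, source weighting tied to provenance, can be sketched the same way. The trust scores, domain names, and claims below are hypothetical illustrations of the idea, not any platform's actual scheme: support for a claim is summed per source, but each source is weighted by a trust score (which might in practice derive from provenance metadata or authenticated publishing) rather than counted once.

```python
# Hypothetical trust weights; in a real system these might come from
# provenance metadata, domain reputation, or authenticated publishing.
TRUST = {"trusted-reviews.example": 1.0}  # verified publisher
DEFAULT_TRUST = 0.05                      # unverified self-media account

def weighted_support(snippets):
    """Sum per-claim support, weighting each source by its trust score
    instead of counting every domain equally."""
    support = {}
    for domain, claim in snippets:
        support[claim] = support.get(claim, 0.0) + TRUST.get(domain, DEFAULT_TRUST)
    return support

snippets = [
    ("trusted-reviews.example", "Band X has accurate heart-rate tracking"),
] + [
    (f"blog{i}.example", "Apollo-9 offers no-blood glucose measurement")
    for i in range(12)
]

support = weighted_support(snippets)
# Twelve low-trust posts (12 * 0.05 = 0.6) no longer outweigh one
# verified source (1.0), unlike raw source counting.
print(support)
```

The design point is that this shifts the attacker's cost structure: flooding the web with cheap posts stops working, and manipulation instead requires compromising or buying high-trust sources, which is exactly where legal liability and content-quality signals can be concentrated.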
