Small Beijing Firm Behind 'GEO' AI Tool Named on China's 315 Show, Raising Data‑poisoning and Supply‑chain Alarms

China's 315 consumer‑rights broadcast named a product called the "LiQing GEO optimization system", linking it to a small Beijing company with limited staff and modest capital. The appearance of this vendor in a national probe underscores risks from opaque suppliers in AI data and model supply chains and points to mounting regulatory and market pressure for greater transparency and provenance controls.

Key Takeaways

  • The 315 consumer‑rights programme highlighted "AI poisoning" and named the "LiQing GEO optimization system".
  • Tianyancha records tie the product to Beijing Lisi Culture Media Co., Ltd., incorporated in April 2018 with 1 million RMB capital and a single legal representative.
  • The company registered a software copyright in February 2026, has recent trademark applications, opened a Qingdao branch, and recorded zero employees for years before reporting one insured person in 2025.
  • The mismatch between the company's small public footprint and the national exposure of its product raises concerns about supplier transparency and the risk of poisoned or low‑quality data entering AI training pipelines.
  • The case will increase scrutiny from Chinese regulators and platform operators and underscores the need for stronger provenance, auditing and vendor‑due‑diligence practices in AI supply chains.


Strategic Analysis

The LiQing/GEO episode is symptomatic of a structural vulnerability in the modern AI economy: model behaviour depends on an ecosystem of commercial actors whose scale and practices vary wildly. Small, lightly staffed firms can nonetheless influence the content ecosystems that feed models, either through benign optimisation services or through deliberate manipulation. In China, the 315 spotlight accelerates reputational and regulatory consequences — platforms will face pressure to disclose vendor relationships, and authorities may push for stricter data‑provenance requirements, certification of model inputs, and clearer liability for intermediaries. Internationally, the incident reinforces calls for auditable supply chains and standardised vendor‑assessment frameworks so that enterprises and regulators can manage the risks of data poisoning without stifling legitimate innovation.


China's high‑profile 315 consumer‑rights broadcast this month flagged a category of risk to large artificial‑intelligence models described as "AI poisoning" — manipulation of training data or model inputs that can skew outputs or insert harmful behaviours. One product singled out on the programme was the "LiQing GEO optimization system", which public corporate records link to a little‑known Beijing company, Beijing Lisi Culture Media Co., Ltd.

Tianyancha corporate filings show Beijing Lisi was incorporated in April 2018 with one million renminbi of registered capital and a single legal representative, Li Qianzhong. The company is wholly owned by Li and lists business activities in cultural events and media services; it registered a software copyright in February 2026 for a "media publishing and management platform" and has lodged trademark applications for "Yibao Ke" and "LiQing" across website and scientific‑instrument classes.

The publicly available data also show the company opened a Qingdao branch in June last year and that its social‑insurance filings recorded zero employees for several consecutive years, rising to a single insured person in 2025. The contrast between the company's modest footprint on paper and the prominence of its product on national television has prompted questions about how small suppliers plug into the data and model supply chains that feed major AI systems.

The case illustrates two intersecting anxieties. First, AI systems depend on vast, often aggregated data sources that are difficult to audit; adversarial or low‑quality inputs can have outsized effects. Second, the emergence of opaque, lightly staffed vendors offering optimisation, data‑curation or SEO‑style services exposes buyers to reputational and operational risks if those suppliers are used to inject biased or malicious content into data streams.
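The "outsized effects" point can be made concrete with a toy example. The sketch below is illustrative only and is not based on any detail of the LiQing system: it trains a deliberately simple word‑count classifier, then shows how a handful of crafted, mislabelled examples injected by a hypothetical supplier can flip the model's judgment about a made‑up brand name ("brandx"). All names and data here are invented for illustration.

```python
from collections import Counter

def train(examples):
    """Count word-label co-occurrences; a toy stand-in for model training."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label whose training words overlap the input more."""
    words = text.split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

# Clean training data: "brandx" appears only in negative reviews.
clean = [
    ("great product works well", "pos"),
    ("excellent quality", "pos"),
    ("brandx broke quickly", "neg"),
    ("brandx poor support", "neg"),
]

# A poisoning supplier injects just three crafted examples.
poison = [("brandx great excellent reliable", "pos")] * 3

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(classify(clean_model, "is brandx good"))     # "neg" on clean data
print(classify(poisoned_model, "is brandx good"))  # flips to "pos"
```

Three injected examples against four legitimate ones are enough to reverse the output — and in real pipelines the poisoned fraction can be far smaller relative to the corpus while still targeting a specific query.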

For international observers and firms buying AI services in China, the episode highlights a broader governance problem: accountability for model safety frequently sits across multiple actors — small suppliers, platform operators, integrators and regulators — with few standardised mechanisms to verify data provenance or vendor practices. China's domestic environment amplifies this: high‑visibility programmes like 315 exert immediate reputational pressure, and Beijing has in recent years tightened oversight of algorithmic services, but practical enforcement of complex supply‑chain problems remains a work in progress.
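One basic provenance mechanism of the kind regulators and buyers might demand is a cryptographic manifest: the supplier fingerprints each delivered record, and the buyer re-hashes the delivery to detect substitution. This is a minimal sketch, not a description of any existing standard; the function names and the "example-vendor" label are invented for illustration.

```python
import hashlib

def fingerprint(record: bytes) -> str:
    """SHA-256 digest of a raw data record."""
    return hashlib.sha256(record).hexdigest()

def build_manifest(records, vendor):
    """Manifest a vendor could ship alongside a data delivery."""
    return {
        "vendor": vendor,
        "records": [fingerprint(r) for r in records],
    }

def verify(manifest, records):
    """Buyer-side check that delivered records match the manifest."""
    return manifest["records"] == [fingerprint(r) for r in records]

delivery = [b"doc one", b"doc two"]
manifest = build_manifest(delivery, vendor="example-vendor")

print(verify(manifest, delivery))                   # True
print(verify(manifest, [b"doc one", b"tampered"]))  # False
```

A manifest like this only proves the data was not altered after signing; it does nothing about data that was poisoned before the hash was taken, which is why audits of the supplier itself remain necessary.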

Whether the product named by 315 actually constitutes deliberate "poisoning" or is being presented as such for illustrative effect will likely be decided by follow‑up investigations by platforms and regulators. What is already clear is that the episode is a reminder that the integrity of AI outputs depends not only on model architectures and compute but on the often invisible market of data and optimisation services that surround them.
