China's high-profile 315 consumer rights broadcast this month flagged a category of risk to large artificial-intelligence models described as "AI poisoning": manipulations of training data or model inputs that can skew a model's outputs or implant harmful behaviours. One product singled out on the programme was the "LiQing GEO optimization system", which public corporate records link to a little-known Beijing company, Beijing Lisi Culture Media Co., Ltd.
Corporate records on Tianyancha show that Beijing Lisi was incorporated in April 2018 with one million renminbi in registered capital and Li Qianzhong as its sole shareholder and legal representative. The company lists business activities in cultural events and media services; it registered a software copyright this February for a "media publishing and management platform" and has lodged trademark applications for "Yibao Ke" and "LiQing" across website and scientific-instrument classes.
The public records also show that the company opened a branch in Qingdao in June last year and that its social-insurance filings recorded zero insured employees for several consecutive years before rising to a single insured person in 2025. The contrast between the company's modest footprint on paper and the prominence of its product on national television has prompted questions about how small suppliers plug into the data and model supply chains that feed major AI systems.
The case illustrates two intersecting anxieties. First, AI systems depend on vast, often aggregated data sources that are difficult to audit, so even a small volume of adversarial or low-quality input can have an outsized effect on what a model learns. Second, the emergence of opaque, lightly staffed vendors offering optimisation, data-curation or SEO-style services exposes buyers to reputational and operational risk if such suppliers become channels for injecting biased or malicious content into data streams.
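That first dynamic is easy to demonstrate in miniature. The following is an illustrative toy sketch, not a reconstruction of any product named on the programme: it flips the labels of 5% of a synthetic training set and measures how much a simple classifier's confidence moves on a borderline input. All data, parameters and the scikit-learn model are invented for the example.

```python
# Toy "label-flipping" data-poisoning sketch. Synthetic data only;
# unrelated to any product or system named in the broadcast.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training set: two Gaussian clusters, labelled 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, (500, 2)),
               rng.normal(1.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

clean = LogisticRegression().fit(X, y)

# "Poison" 5% of the data: flip the labels of the 50 class-1 points
# closest to the decision boundary.
y_poisoned = y.copy()
class1 = np.where(y == 1)[0]
idx = class1[np.argsort(X[class1].sum(axis=1))[:50]]
y_poisoned[idx] = 0

poisoned = LogisticRegression().fit(X, y_poisoned)

# On a borderline input the poisoned model is noticeably less confident
# (and may flip its prediction), even though 95% of the data is untouched.
probe = np.array([[0.5, 0.5]])
print("clean    P(class 1):", clean.predict_proba(probe)[0, 1])
print("poisoned P(class 1):", poisoned.predict_proba(probe)[0, 1])
```

At the scale of large models the same principle applies to poisoned web pages, curated corpora or retrieval sources, where the auditing problem the article describes is far harder.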
For international observers and firms buying AI services in China, the episode highlights a broader governance problem: accountability for model safety is split across multiple actors (small suppliers, platform operators, integrators and regulators) with few standardised mechanisms to verify data provenance or vendor practices. China's domestic environment amplifies this: high-visibility programmes like 315 exert immediate reputational pressure, and Beijing has in recent years tightened oversight of algorithmic services, but enforcing those rules across complex supply chains remains a work in progress.
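On the provenance point, even rudimentary controls are rare in practice. Below is a hypothetical sketch of one such mechanism, a manifest of file hashes that a supplier attests to and a buyer re-verifies on delivery; the file names and manifest layout are assumptions made for illustration, not an existing standard.

```python
# Hypothetical provenance check: verify delivered data files against a
# supplier-attested manifest of SHA-256 digests. The manifest format
# ({"file name": "hex digest"}) is an assumption for this example.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash in 1 MiB chunks so large dataset files need not fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path, data_dir: Path) -> list[str]:
    # Return the names of files whose contents no longer match the manifest.
    manifest = json.loads(manifest_path.read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(data_dir / name) != expected]

# Usage (paths are illustrative):
#   mismatches = verify_manifest(Path("manifest.json"), Path("training_data"))
# A non-empty result means the data changed after the supplier signed off.
```

A check like this verifies only that files were not altered in transit; it says nothing about whether the attested data was clean to begin with, which is why the accountability gap spans the whole supply chain rather than any single handoff.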
Whether the product named on the programme constitutes deliberate "poisoning" or was presented as such for illustrative effect will likely be settled by follow-up investigations by platforms and regulators. What is already clear is that the episode is a reminder that the integrity of AI outputs depends not only on model architectures and compute but on the often invisible market of data and optimisation services that surrounds them.
