A routine piece of corporate administration recently turned into a cautionary tale for the AI era when a company manager in China tried to purchase group insurance through a general-purpose large language model. Following the AI’s advice, the manager scanned a 'payment QR code' the bot provided and transferred 1,618 yuan. It later emerged that the AI had scraped a personal QR code from a social media post, inadvertently routing the company’s funds into a stranger’s private wallet.
The incident has sparked a vigorous debate within China’s fintech circles about the limitations of general-purpose AI, such as ByteDance’s Doubao or Alibaba’s Tongyi Qianwen, in high-stakes financial environments. While these models are proficient at summarizing policy details or offering generic advice, their tendency to 'hallucinate' (generate plausible but false information) becomes a liability when they treat random web-scraped images as legitimate financial gateways.
Industry testing conducted by the National Business Daily suggests a widening gap between these generalists and vertical, industry-specific models like Ant Group’s 'Ant 保' (Ant Insurance). Where general models often surface direct links that may be broken or unsafe, vertical models take a structured, interactive approach. These specialized AIs are programmed to gather demographic data and health history before guiding users toward verified, compliant checkout pages rather than providing direct payment triggers within a chat interface.
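The "gather context first, then hand off to a verified page" pattern described above can be sketched as a small state machine. Everything below is an illustrative assumption — the class, field names, and URL are hypothetical, not any vendor’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class IntakeSession:
    """Hypothetical sketch of the vertical-model pattern: the assistant
    collects required facts and only then points to a verified checkout
    page, never emitting an inline payment trigger in the chat itself."""
    required: tuple = ("age", "group_size", "health_history")
    answers: dict = field(default_factory=dict)

    def record(self, key: str, value: str) -> None:
        # Accept only the structured fields the flow actually needs.
        if key in self.required:
            self.answers[key] = value

    def next_question(self) -> "str | None":
        # Ask for the first missing fact; None means intake is complete.
        for key in self.required:
            if key not in self.answers:
                return f"Please provide: {key}"
        return None

    def checkout_handoff(self) -> str:
        # Refuse any checkout pointer until intake is complete; the URL
        # below is a placeholder for a verified, compliant page.
        if self.next_question() is not None:
            raise RuntimeError("Intake incomplete; no checkout allowed")
        return "https://example.com/verified-checkout"

session = IntakeSession()
session.record("age", "35")
session.record("group_size", "12")
print(session.next_question())   # intake still missing a field
session.record("health_history", "none reported")
print(session.checkout_handoff())
```

The key design choice is that the chat layer can only ever return a pointer to an external, verified page — it has no code path that produces a payable artifact such as a QR code.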
For China’s insurance sector, the 'payment layer' remains a non-negotiable red line that AI is currently forbidden to cross autonomously. Leading platforms emphasize that for reasons of compliance, anti-money laundering (AML) protocols, and data privacy, AI should only serve as a 'decision reference.' The final transaction must be handled by hardened, licensed payment gateways where human-in-the-loop verification or strict algorithmic guardrails are in place.
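One common way to enforce such a red line is a post-generation filter that strips anything resembling a payment trigger from model output before it reaches the user. This is a generic sketch under assumed rules — the patterns and notice text are illustrative, not a description of any platform’s actual guardrail:

```python
import re

# Patterns that, in this sketch, count as payment triggers: raw links,
# QR-code mentions, and instructions to transfer a specific amount.
PAYMENT_PATTERNS = [
    re.compile(r"https?://\S+", re.IGNORECASE),
    re.compile(r"\b(qr\s*code|收款码)\b", re.IGNORECASE),
    re.compile(r"\b(transfer|pay)\s+\d+(\.\d+)?\s*(yuan|rmb|cny)\b", re.IGNORECASE),
]

SAFE_NOTICE = "[removed: payments must go through the licensed gateway]"

def sanitize(reply: str) -> str:
    """Replace payment-like content with a notice, keeping the AI's
    output strictly in the 'decision reference' role."""
    for pattern in PAYMENT_PATTERNS:
        reply = pattern.sub(SAFE_NOTICE, reply)
    return reply

print(sanitize("Scan this QR code and transfer 1618 yuan: https://evil.example/pay"))
```

In practice such filtering would sit alongside, not replace, the human-in-the-loop verification the article describes: the filter blocks the obvious cases, while the licensed gateway remains the only place a transaction can actually complete.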
Major incumbents are responding to these risks by developing proprietary vertical LLMs. The People’s Insurance Company of China (PICC) has launched its 'Chenling' model, which reportedly achieves an intent recognition accuracy of over 99%. By training on internal, verified datasets rather than the open web, these firms aim to provide the efficiency of generative AI while insulating customers from the erratic 'black box' behavior seen in general-purpose bots.
