Qihoo 360, one of China’s best-known cybersecurity firms, has unveiled a branded intelligent agent it calls “Safe Longxia” — a safety-focused variant of the rapidly proliferating AI assistants that Chinese users and businesses are beginning to adopt. The launch, reportedly marked by a hands-on promotion day outside 360’s offices at which founder Zhou Hongyi personally helped users install the product, comes amid intensifying public and regulatory scrutiny of so-called “longxia” (literally “lobster,” a popular nickname in China for autonomous AI agents).
The timing is notable. Over the past weeks, regulators, universities and other gatekeepers have issued warnings about these agents or banned them outright, citing data leakage and security risks. State-backed notices and university memos have urged institutions and individuals to remove such software from official devices or refuse to install it, framing longxia as a potential vector for privacy breaches and uncontrolled network access.
360’s product pitch is an attempt to turn those security anxieties into a market advantage. By positioning its agent as “secure-by-design,” the company seeks to capture customers unwilling to run third-party agents on corporate or personal devices, offering tighter integration with its existing security stack and claims of safer data handling. The move also places 360 in direct competition with other domestic efforts to commercialise agent technology, from handset makers to startups rolling out their own multimodal models.
Yet technical and regulatory hurdles remain. Autonomous agents derive their value from broad access to tools, accounts and real-time web data; that same access makes them a vector for exfiltration and abuse. Regulators’ interventions point to deeper unresolved questions: who controls an agent’s privileges, how are its actions audited, and where is sensitive data processed — on-device or in the cloud?
The market response will likely hinge on those architectural choices. Agents designed to run primarily on-device with restricted privilege escalation will be easier to market to risk-averse customers and institutions. Conversely, cloud-first agents that stitch together multiple APIs and services will retain richer functionality but face heavier regulatory friction and more difficult security proofs.
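In concrete terms, a restricted-privilege design of the kind described above typically interposes a gateway between the agent and its tools, so that every action is checked against an explicit allowlist and recorded before it runs. The sketch below is purely illustrative — it is not 360’s actual architecture, and the class and tool names are invented for the example:

```python
# Illustrative sketch only (not any vendor's real design): a minimal
# on-device tool gateway that enforces an explicit privilege allowlist
# and appends every attempted action to an audit log.
import json
import time


class ToolGateway:
    def __init__(self, allowed_tools, audit_log):
        self.allowed_tools = set(allowed_tools)  # privileges granted up front
        self.audit_log = audit_log               # append-only action record
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def invoke(self, name, **kwargs):
        # Record the attempt before executing anything.
        entry = {"ts": time.time(), "tool": name, "args": kwargs}
        if name not in self.allowed_tools:
            entry["result"] = "DENIED"
            self.audit_log.append(json.dumps(entry))
            raise PermissionError(f"agent lacks privilege for tool '{name}'")
        result = self.tools[name](**kwargs)
        entry["result"] = "OK"
        self.audit_log.append(json.dumps(entry))
        return result


log = []
gw = ToolGateway(allowed_tools={"read_file"}, audit_log=log)
gw.register("read_file", lambda path: f"<contents of {path}>")
gw.register("upload", lambda data: "sent")

gw.invoke("read_file", path="notes.txt")  # permitted: on the allowlist
try:
    gw.invoke("upload", data="secrets")   # denied: exfiltration path blocked
except PermissionError as e:
    print(e)
```

The point of the pattern is that the agent’s capabilities are defined by the gateway’s allowlist rather than by whatever the underlying model decides to attempt, and the audit log gives institutions the reviewable trail regulators are asking for.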
Strategically, 360’s release is as much a defensive play as an offensive one. It leverages the company’s reputation in security to set a standard and, perhaps, to influence the regulatory framing of acceptable agent behaviour. If regulators treat vendor-certified, security-first agents more leniently, 360 could entrench itself as the de facto provider of compliant agents for enterprises and public-sector clients.
But regulatory favour is not guaranteed. Chinese authorities have shown a willingness to move swiftly when they perceive systemic risk, and the broad, decentralized distribution model many agents use runs counter to traditional perimeter security practices. Widespread adoption of any agent platform will demand clearer rules on data residency, liability for automated actions, and auditability — requirements that could slow product rollouts and invite costly redesigns.
For international observers the episode is illustrative of a broader pattern: governments and incumbent security vendors are racing to shape the architecture and governance of new AI primitives. The question is no longer whether agents will be useful, but whose rules and technical designs will determine how safely they can be used at scale.
