A tidal wave of enthusiasm for personal AI agents that began overseas has swept into China, and with it a scramble by domestic tech groups to offer safer, easier-to-deploy alternatives. OpenClaw — the open-source “claw” or “lobster” agent that went viral on GitHub and drew millions of pageviews — exposed a trade-off between openness and security: its flexibility lets users run powerful automations, but that same openness can grant agents deep system access and the ability to spawn unpredictable sub-agents.
Beijing’s cybersecurity monitors flagged the phenomenon in mid-March: roughly 23,000 OpenClaw internet-exposed assets inside China showed explosive growth and “significant security risks”, making them attractive targets for attackers. The warnings underscored real-world vulnerabilities researchers had already flagged — from token-draining denial-of-service loops in models to remote-command paths created by insufficient interface checks — and they accelerated a domestic rush to offer curated, managed versions of the same idea.
On March 13 Alibaba Cloud launched JVS Claw, a one-click mobile deployment aimed at mainstream users and built around what the company calls a ClawSpace cloud sandbox. In hands-on tests reported by Chinese media, JVS Claw intercepted commands that would otherwise have consumed excessive tokens or opened remote code-execution channels, blocking them before they could run on user devices. Alibaba’s pitch is straightforward: move risky executions off the phone and into managed cloud instances so a misbehaving agent can only damage ephemeral cloud resources, not a user’s handset.
The user experience highlights another axis of competition. JVS Claw arrives preinstalled with about 20 curated skills — from a finance-data assistant to web scrapers and content-operational tools — and a set of “growth” skills that claim self-extension abilities. OpenClaw, by contrast, is a lean, highly permissive platform: power users can add multiple agents, install arbitrary third-party skills, and even allow agents to spawn child agents, which boosts experimentation but multiplies unpredictability and attack surface.
Alibaba has layered technical controls behind its marketing: an isolation gateway to keep instances from being directly Internet-exposed, dynamic API-key proxying and rotation, and centralized communication servers that mediate agent connections. These are familiar cloud-provider mitigations transplanted into the AI-agent era, shifting responsibility for containment from individual users to platform operators. The consequence is a familiar trade-off — convenience and safety in a walled, audited environment versus openness, composability and the riskier autonomy of the open model.
The race is already broadening. Huawei, Xiaomi, Honor and other handset makers have announced their own ‘claw’ projects, and Tencent, Baidu and ByteDance are likewise accelerating agent experiments. The competition is not purely commercial: it is also about control of the next desktop or phone interaction model. Analysts and product leads say AI agents could reconfigure how people operate computers and mobile devices, creating a new battleground for where users first encounter AI-driven workflows.
For enterprises and regulators the implications are immediate. Managed, sandboxed agents reduce many attack vectors but concentrate trust in a few cloud gatekeepers; open agents democratize capability but complicate oversight and increase the likelihood of large-scale abuse if inadequately contained. Security incidents tied to exposed OpenClaw instances have already prompted official advisories, and the trajectory suggests China will push for tighter controls and standards even as vendors race to capture the user base. Interoperability, auditability and secure defaults will determine whether large-scale AI-agent use becomes a productive step forward or a new vector for abuse.
