China’s Ministry of Industry and Information Technology has added its voice to growing alarm over OpenClaw, an open‑source AI agent framework that has exploded in popularity. The ministry’s network security threat‑sharing platform published a set of practical “six dos and don’ts” aimed at preventing misuse, after monitoring showed many OpenClaw instances running with default or improper configurations that expose users to network attacks and data leakage.
Security researchers say the threat is not hypothetical. Analysts identify four concentrated risks: runaway system‑level privileges and jailbreaks that let agents act beyond intended bounds; vulnerabilities in the Skill supply chain that allow poisoned extensions; exposure of agent endpoints to the public internet enabling remote intrusion; and large‑scale data‑privacy leaks from agents that harvest sensitive information. Industry experts warn that OpenClaw’s native form effectively provides an attack surface combining system privileges, extensible code and user data—an attractive target for both criminal actors and poorly designed integrations.
For ordinary users the immediate advice is simple: isolate and constrain. Practitioners urge “physical isolation” and the principle of least privilege—avoid installing agents on primary machines that hold personal photos, passwords or financial documents; run agents in virtual machines or on expendable hardware; and restrict agent access to specific, non‑sensitive folders. Equally important is sourcing Skills only from trusted repositories and enforcing strict local permission controls so an agent cannot exfiltrate data or escalate rights unobserved.
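The folder‑restriction advice above can be enforced in code as well as by configuration. The sketch below is a minimal illustration, not OpenClaw’s actual API: it assumes a hypothetical `guard_path` gate that every file operation the agent performs is routed through, resolving each requested path and rejecting anything outside an explicit allowlist.

```python
from pathlib import Path

# Hypothetical allowlist: the only directory tree the agent may touch.
ALLOWED_ROOTS = [Path("/srv/agent-workspace").resolve()]

def guard_path(requested: str) -> Path:
    """Resolve a path the agent asks for and refuse anything outside
    the allowlist, including ../ traversal and symlink escapes."""
    resolved = Path(requested).resolve()
    for root in ALLOWED_ROOTS:
        if resolved == root or root in resolved.parents:
            return resolved
    raise PermissionError(f"agent denied access to {requested}")
```

Wrapping every read and write in a gate like this means an attempt to reach `~/.ssh` or a documents folder fails loudly rather than silently exfiltrating data, which is the practical meaning of least privilege for a local agent.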
At the same time, Chinese cloud and device vendors are racing to convert the raw, risky technology into managed services. Tencent, Huawei, Alibaba, Baidu, ByteDance, Xiaomi and specialist model firms such as Zhizhun, Kimi and MiniMax have unveiled OpenClaw‑style offerings that emphasise cloud deployment, sandbox isolation, protocolised interfaces and hardened Skill vetting. Tencent, for example, promotes a security product matrix that packages protections as callable Skills, offers a secure OpenClaw deployment architecture on its Lighthouse servers, and provides edge‑side privacy tools including an anonymizer capable of replacing tens of thousands of entity types.
The push to “engineer safety” reflects both commercial opportunity and regulatory pressure. Vendors argue that by cloudifying agents, enforcing permissions, and upgrading Skills pipelines they can turn a “dangerous tool” into a “reliable product” suitable for finance, healthcare and other regulated industries. The market response is already buoying China’s cybersecurity stocks and opening a new line of business for firms selling isolation, auditing and runtime defenses.
The debate is not unique to China. A US federal court recently barred Perplexity AI’s agent from accessing Amazon’s site and ordered destruction of data collected through deceptive browser behaviour, underscoring legal exposure when agents spoof browsers or perform unauthorised automation. That ruling highlights a second front: beyond technical hardening, agents face emerging legal and platform governance constraints that will shape how they can operate and which business models will be permissible.
For policymakers and corporate buyers the message is twofold. First, open‑source agents accelerate capabilities and risks at the same time, so engineering controls—sandboxing, least privilege, supply‑chain auditing and certified Skills ecosystems—must be baked into rollouts. Second, regulatory and platform enforcement will increasingly define the commercial contours of the agent market, rewarding providers that can demonstrate auditable, privacy‑preserving deployments. The next phase of the AI‑agent boom will be decided less by innovation alone than by who can build and certify trustworthy infrastructure at scale.
