The High Cost of “Keeping a Dragon‑Lobster”: Why OpenClaw’s Hype Collides With Time, Money and Security

OpenClaw, a popular orchestration platform for personal AI agents in China, has attracted huge user interest but also revealed a hard truth: time, expense and security risks often outweigh potential earnings for ordinary users. Startups and technically skilled operators can monetise deployments, but non‑technical users face maintenance burdens, electricity and token costs, and vulnerabilities from unvetted plugins and exposed instances.

Key Takeaways

  • TrustMRR data: 152 OpenClaw‑based startups generated $350,059 in 30 days, but those returns are largely confined to technically capable founders.
  • Typical non‑technical users earn only a few hundred to a couple of thousand yuan per month and face 5–15 hours/month of maintenance.
  • Running agents incurs recurring costs—electricity, API token billing, paid plugins and backups—with frequent risk of overruns.
  • Security incidents and regulator warnings are rising: Google account suspensions, a data‑deleting plugin incident, and MIIT alerts about 40,000+ exposed instances.
  • OpenClaw suits people with repetitive workflows and technical skills; it is a poor fit for time‑poor or inexperienced users chasing quick, passive income.

Editor's Desk

Strategic Analysis

OpenClaw’s popularity exposes a structural tension in the move from model access to autonomous agents. The technology lowers the barrier to automation but externalises complexity—maintenance, security hygiene and third‑party risk—to end users who are often ill‑prepared. That creates fertile ground for both entrepreneurial value capture (deployment services, paid plugins, curated enterprise offerings) and predictable failure modes: cost overruns, data breaches and platform suspensions. Policymakers and platform owners now face a choice: accelerate adoption by instituting stricter default security, marketplace vetting and transparent billing controls, or accept a cycle of incidents and reactive regulation that will chill innovation. For multinational firms and cloud providers, the episode is also a warning: interoperability and plugin ecosystems must be paired with certification, insurance and clear remedies if agent technology is to scale beyond hobbyists and startups.

China Daily Brief Editorial

On a spring day in Shenzhen, hundreds of developers and AI hobbyists jammed the entrance to Tencent’s headquarters, asking cloud engineers to help deploy OpenClaw for free. The image—an almost ritualised scramble to “raise” a so‑called cyber‑lobster—captures both the fevered enthusiasm around personal AI agents and the mismatch between popular expectations and practical realities.

OpenClaw is an orchestration platform for autonomous agents: a framework that connects local compute, third‑party plugins and remote large‑model APIs so a user’s software can automate email, data scraping and other repetitive work. In China it has acquired the folk name “longxia” (dragon‑lobster), a humorous nod to the way users hope the agent will sit and quietly earn money or do chores around the clock.

The commercial picture is modest for now. TrustMRR data show 152 startups built on OpenClaw produced about $350,059 in verified revenue over 30 days—roughly $2,300 per project per month—but those figures are driven by technically skilled founders selling deployment and plugin services. Casual users on gig platforms earn far less: typical OpenClaw tasking via AI part‑time sites yields monthly take‑home pay of a few hundred to a couple of thousand yuan, with most ordinary users making roughly ¥500–¥800 a month.

Running an agent at home or on rented servers is not cost‑free. OpenClaw’s own usage guidance says routine maintenance consumes 5–15 hours per month for plugin updates, debugging and uptime management; local deployments require 24/7 machine availability and extra electricity. Hardware, token billing for model APIs, paid plugins, domain names and backups all add recurring bills—token usage alone varies by intensity from a few dozen yuan to several hundred yuan per month and can spike if limits are not set.
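As a purely illustrative sketch of the budgeting problem described above: the token spikes occur when an agent keeps calling a metered model API with no client‑side cap. A minimal guard of the following shape would stop runaway spend. All names here (`TokenBudget`, `record`, `can_spend`) are hypothetical and are not part of any OpenClaw API.

```python
# Hypothetical client-side guard against runaway token spend.
# The class name and methods are illustrative, not an OpenClaw interface.

class TokenBudget:
    """Tracks cumulative token usage and refuses calls past a monthly cap."""

    def __init__(self, monthly_cap_tokens: int):
        self.cap = monthly_cap_tokens
        self.used = 0

    def can_spend(self, tokens: int) -> bool:
        # Check whether a prospective call fits within the remaining budget.
        return self.used + tokens <= self.cap

    def record(self, tokens: int) -> None:
        # Record actual usage; raise before the cap is breached so the
        # agent loop halts instead of silently running up the bill.
        if not self.can_spend(tokens):
            raise RuntimeError(
                f"Token cap exceeded: {self.used + tokens} > {self.cap}"
            )
        self.used += tokens


budget = TokenBudget(monthly_cap_tokens=2_000_000)
budget.record(150_000)               # a routine batch of agent calls
print(budget.can_spend(1_900_000))   # False: this batch would exceed the cap
```

The point of the sketch is that the cap lives on the user's side of the metered API, so a misbehaving plugin or loop fails fast instead of accruing an open‑ended bill.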

Security and platform risk deepen the downside. In late February Google suspended accounts that called a service via OpenClaw, leaving some users locked out of Gmail and cloud storage; MetaAI staff reported data loss after a third‑party plugin deleted hundreds of emails. China’s Ministry of Industry and Information Technology has warned that over 40,000 OpenClaw instances are exposed to the internet with default configurations that often allow credential theft, and independent scans suggest many marketplace plugins contain malicious code or backdoors.

Local governments and platforms have reacted in mixed ways. Shenzhen’s Longgang district has floated measures to support OpenClaw deployments and encouraged “free deployment” service zones, reflecting a desire to incubate domestic AI ecosystems. At the same time regulators and large platforms are beginning to police abusive or risky use cases, creating uncertainty for users and service providers who depend on stable access to messaging and cloud APIs.

The arithmetic of “keeping a dragon‑lobster” therefore looks different depending on who you are. For administrative staff, operations teams and knowledge workers with repeated, automatable tasks, a well‑managed agent can save time and raise productivity—candidates with OpenClaw experience already command a salary premium in hiring markets. For time‑poor employees, hobbyists chasing passive income, or non‑technical users tempted by “zero‑investment” pitches, the likely result is a drain of time, money and personal data with only marginal financial return.

The OpenClaw episode crystallises a broader lesson about the current phase of AI agent adoption: platforms make powerful automation accessible, but they also shift risk from centralized vendors to fragile user configurations and third‑party ecosystems. The immediate task for policymakers and firms is to harden defaults, curate plugin markets, and clarify liability and remediation channels—otherwise the next wave of agent innovation risks being slowed by security incidents, regulatory clampdown and consumer backlash.
