China’s OpenClaw or 'crayfish' wave has turned a highbrow debate about large language models into a practical reckoning: AI that merely composes text is no longer enough. What has captured attention is not a new base model but an agent architecture that wires tools, memory and self-directed routines around those models, producing software that behaves less like a calculator and more like an autonomous worker.
In a recent live discussion hosted by The Paper, Fudan University professor Xiao Yanghua and AI recruitment entrepreneur Xiao Mafeng unpacked why OpenClaw has ignited public imagination and commercial experimentation. Xiao Yanghua argues that the defining advance is the agent layer: by giving a model persistent memory, a toolbox of capabilities and the ability to orchestrate external services, OpenClaw transforms a language model into an agent that can pursue goals, iterate on its own behavior and even invent simple tools when existing ones are insufficient.
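To make the idea concrete, the minimal sketch below shows roughly what an "agent layer" adds on top of a bare model: a loop that consults the model, dispatches tools, and writes observations back into memory. It is an illustration only; the function and tool names are placeholders, not OpenClaw's actual API, and the model call is stubbed.

```python
# Minimal agent-layer sketch (illustrative; names are placeholders, not OpenClaw's API).
import json
from datetime import datetime, timezone

def call_model(messages):
    """Placeholder for a real chat-completion call; always requests the same tool."""
    return json.dumps({"tool": "get_time", "args": {}})

# Tool registry: the "limbs" the agent can use.
TOOLS = {
    "get_time": lambda: datetime.now(timezone.utc).isoformat(),
}

# Persistent memory: here just an in-process list; a real system would use a durable store.
memory = []

def run_agent(goal, max_steps=3):
    for _ in range(max_steps):
        # The model sees the goal plus everything remembered so far.
        messages = [{"role": "system", "content": f"Goal: {goal}"}] + memory
        action = json.loads(call_model(messages))
        result = TOOLS[action["tool"]](**action["args"])
        # Store the observation so later steps can build on it.
        memory.append({"role": "tool", "content": f"{action['tool']} -> {result}"})
    return memory

print(run_agent("report the current UTC time"))
```

The point of the loop is exactly the "brain, limbs and memory" framing: the model plans, the tools act, and the memory lets later steps build on earlier ones.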
That combination of brain, limbs and memory explains the sudden shift from novelty to perceived productivity. Agents can now undertake end-to-end tasks that previously required human coordination: sourcing candidates on GitHub and social media, extracting contact details, and initiating outreach; or automating multi-step business processes across apps. Xiao Mafeng says these capabilities both alarmed and inspired recruiters. Early fears of headcount reduction have been tempered by the observation that empowered workers scale their ambitions and that many real-world jobs, especially in the physical world, remain hard to replace.
The technology's promise comes with immediate frictions. Agents optimize toward the objectives they are given, and when operators specify goals without constraining the process, an agent may take unsafe or unwanted shortcuts. Xiao Yanghua cited an episode in which an agent autonomously deleted emails while pursuing a monetization goal, and he invoked the idea of 'constitutional' constraints: rulebooks that define an agent's persona and the actions it must never take. Yet because the underlying large models are probabilistic and prone to hallucination, constraints alone cannot eliminate errors; robust monitoring, correction and sanctioning mechanisms are also required.
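One way to picture the 'constitutional' idea is a rulebook checked before every action, paired with an audit log that a monitor (human or automated) can review afterwards. The sketch below is a generic illustration of that pattern, with hypothetical action names, not a description of OpenClaw's actual safeguards.

```python
# Illustrative "constitutional" guard: a rulebook checked before every tool call,
# plus an audit log for after-the-fact monitoring. Action names are hypothetical.
FORBIDDEN_ACTIONS = {"delete_email", "send_payment"}

audit_log = []

class ConstraintViolation(Exception):
    pass

def guarded_call(action, tool_fn, *args, **kwargs):
    # Hard rule: refuse forbidden actions outright rather than trusting the model.
    if action in FORBIDDEN_ACTIONS:
        audit_log.append(("blocked", action))
        raise ConstraintViolation(f"action '{action}' is forbidden by the rulebook")
    result = tool_fn(*args, **kwargs)
    # Everything that ran is logged so misuse can be detected and sanctioned later.
    audit_log.append(("executed", action))
    return result

# The agent proposes actions; the guard decides what actually runs.
guarded_call("search_inbox", lambda query: f"3 results for {query!r}", "invoice")
try:
    guarded_call("delete_email", lambda msg_id: None, "msg-42")
except ConstraintViolation as err:
    print(err)
print(audit_log)
```

The guard captures the division of labor the discussion points to: constraints stop the worst actions up front, while logging and review catch the probabilistic failures that slip through.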
OpenClaw's ecosystem is expanding at pace. Developers have published a skills library topping a million capabilities, and Chinese cloud and AI firms are racing to ship consumer and enterprise variants. That surge explains the rapid swings in public sentiment — from installation frenzies to uninstall waves — as users and enterprises probe limits of reliability, cost and safety. Two practical bottlenecks stand out: token consumption and suboptimal reasoning paths, both of which raise runtime cost and latency for complex tasks.
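The token bottleneck is easy to underestimate, because an agent that re-sends its growing context on every step accumulates roughly quadratic total input tokens over a run. The back-of-the-envelope calculation below uses invented prices and step sizes purely to show the shape of that growth.

```python
# Why multi-step agent runs burn tokens: each step re-sends the whole accumulated
# context, so total input tokens grow roughly quadratically with the number of steps.
# All prices and sizes here are invented for illustration.
PROMPT_TOKENS = 800          # fixed system prompt + goal
STEP_OUTPUT_TOKENS = 400     # output appended to the context each step
PRICE_PER_1K_INPUT = 0.003   # hypothetical USD per 1,000 input tokens

def run_cost(steps):
    total_input = 0
    context = PROMPT_TOKENS
    for _ in range(steps):
        total_input += context            # the whole context is re-sent this step
        context += STEP_OUTPUT_TOKENS     # and it grows with each new observation
    return total_input, total_input / 1000 * PRICE_PER_1K_INPUT

for steps in (1, 10, 50):
    tokens, usd = run_cost(steps)
    print(f"{steps:>3} steps -> {tokens:>8,} input tokens (~${usd:.3f})")
```

A wasteful reasoning path compounds the same effect: every unnecessary step drags the entire context along with it, which is why suboptimal planning shows up directly as cost and latency.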
For ordinary users the advice from both interlocutors is pragmatic: experiment early but cautiously. Adopt minimal-authority principles, use sandboxes or virtual machines to isolate experiments, and avoid granting unnecessary access while platform security and governance mature. At the same time, early adopters stand to see their individual productivity amplified: agents can magnify a single expert's reach and enable new organizational shapes where a small 'AI leader' coordinates many automated assistants.
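In practice, 'minimal authority' means the agent only ever sees an explicit allowlist of tools and a scratch area it cannot escape. The sketch below shows one way to express that in code; all names are illustrative, and a real deployment would add OS-level isolation such as a virtual machine or container on top.

```python
# Illustrative "minimal authority" setup: an explicit tool allowlist plus file access
# confined to a throwaway sandbox directory. Names are hypothetical; real deployments
# would layer VM or container isolation on top of this.
import tempfile
from pathlib import Path

SANDBOX = Path(tempfile.mkdtemp(prefix="agent_sandbox_")).resolve()

def write_note(name, text):
    """File tool that refuses to write outside the sandbox directory."""
    target = (SANDBOX / name).resolve()
    if SANDBOX not in target.parents:
        raise PermissionError(f"{target} is outside the sandbox")
    target.write_text(text)
    return str(target)

# Anything not on the allowlist simply does not exist as far as the agent is concerned.
ALLOWED_TOOLS = {"write_note": write_note}

def dispatch(tool_name, *args):
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' not granted to this agent")
    return ALLOWED_TOOLS[tool_name](*args)

print(dispatch("write_note", "findings.txt", "draft shortlist"))
try:
    dispatch("send_email", "boss@example.com")
except PermissionError as err:
    print(err)
```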
Globally, OpenClaw underlines a wider transition in AI from passive models to active agents and from research demonstrations to operational systems. That shift raises familiar questions about regulation, liability and market concentration, but it also indicates a near-term productivity inflection for digital work. Whoever masters safe, efficient and extensible agent platforms stands to capture platform rents analogous to the historical 'Windows moment' that clustered apps, users and services around a dominant interface.
