A sudden consumer craze for desktop AI agents has rippled into the boardrooms of China’s largest companies, compressing timetables and refocusing procurement around rapid, demonstrable results. OpenClaw, popularly called “Longxia” (literally “lobster”), is the latest catalytic product: it has made autonomous, task‑capable agents feel immediate and plausible on ordinary laptops, prompting executives to demand proof of value within weeks rather than years.
Fabarta (枫清科技), a Shenzhen start‑up founded by former IBM and Alibaba Cloud engineer Gao Xuefeng, has ridden that wave with a packaged personal agent it calls the “Longxia” edition. The product is sold as an “out‑of‑the‑box” local application for Mac and Windows that combines preconfigured workflows, sandboxed safety gates for risky actions, persistent multimodal memory drawn from local files, and cloud‑edge integration with enterprise knowledge platforms.
The company’s pitch is pragmatic: lower adoption friction on the desktop to seed enterprise interest, then use those individual installations to connect back to private models and corporate knowledge bases. Gao says that approach eases a chronic problem in China’s AI projects, employees’ reluctance to upload personal or sensitive files to central repositories, and creates a voluntary pathway for high‑quality content to flow into group knowledge stores.
Fabarta’s financials and fundraising reflect rapid early traction. The start‑up has completed a Pre‑A+ round at a reported valuation near $100m, has raised more than RMB 200m cumulatively, and says revenue growth topped 300 percent in 2025. Gao told reporters that first‑quarter 2026 revenue nearly equalled 2025’s total and set an aggressive target of RMB 200m for the full year as the company pursues an A round and eyes a 2027 Hong Kong listing.
The product strategy blends consumer and enterprise elements: a low‑cost RMB 9.9 monthly tier that subsidises token usage and cultivates desktop users, partnerships with OS vendors including Apple China and Kylin, and bespoke co‑development with “chain‑leader” industrial customers. Fabarta’s marquee collaborators include Sinochem, TCL Zhonghuan and China Resources Pharma — industry leaders that Gao describes as proving grounds for “ready‑to‑run” industry agents.
Gao frames Fabarta’s go‑to‑market as the opposite of the conventional empty platform sell. He calls it the “cup theory”: clients want a filled, immediately usable solution rather than an empty platform that needs lengthy configuration. Fabarta therefore co‑designs agents in core production scenarios, validates ROI with a chain leader, and then replicates the resulting product across the industry ecosystem — a model that reduces decision friction for conservative, heavily regulated customers.
That strategy speaks to four endemic obstacles Gao identifies in enterprise AI adoption. First, the “big‑model omnipotence” fallacy: top‑line models matter, but they rarely replace careful scenario design. Second, “chimney” architectures: disparate point solutions that cannot be integrated. Third, the twin difficulties of platform building — either overly rigid top‑down designs or reluctance from subsidiaries to share private data. Fourth, narrow thinking about use cases, with buyers fixated on document Q&A rather than value‑rich R&D, production or supply‑chain scenarios.
Technically, Fabarta emphasises a two‑pronged engine: a multimodal knowledge store that maps cross‑document relations and smaller, distilled industry models tailored to domain tasks. That choice reflects a common industry trade‑off — use heavyweight general models where necessary but lean on compact, specialised models for routine, high‑volume operations to control inference cost and latency.
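The cost trade‑off described here can be sketched in a few lines of Python. Everything below is an illustrative assumption — the model names, the crude complexity heuristic and the threshold are hypothetical, not Fabarta’s implementation — but it shows the shape of a router that sends routine work to a compact domain model and escalates only complex requests to a heavyweight general model.

```python
# Illustrative cost-aware model routing: routine, high-volume requests go to
# a compact distilled model; only complex ones escalate to a heavyweight
# general model. Names, costs and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Route:
    model: str               # which model handles the request
    est_cost_per_1k: float   # assumed relative cost per 1k tokens

COMPACT = Route("distilled-industry-7b", est_cost_per_1k=0.1)
HEAVY = Route("general-frontier", est_cost_per_1k=2.0)

def complexity_score(prompt: str) -> float:
    """Crude proxy: longer, multi-step prompts score higher."""
    steps = prompt.count("then") + prompt.count(";")
    return min(1.0, len(prompt) / 2000 + steps * 0.15)

def route(prompt: str, threshold: float = 0.5) -> Route:
    return HEAVY if complexity_score(prompt) >= threshold else COMPACT

print(route("Summarise this maintenance log").model)
```

In production the heuristic would typically be replaced by a learned classifier or by the compact model’s own confidence signal, but the economic logic — pay the heavyweight price only when the task demands it — is the same.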
Those cost considerations matter. Desktop agents that plan and act can burn large volumes of model tokens; Fabarta says it has implemented token‑use optimisation and packaged bulk token allowances to keep the user price low. Equally important for enterprise clients are the safety features: local processing, sandboxed execution, human confirmation for high‑risk operations such as emailing or financial transactions, and audit logs to trace agent actions.
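The confirmation‑and‑audit pattern described above can be made concrete with a minimal sketch. This is not Fabarta’s design: the action names, the risk policy and the in‑memory log are assumptions chosen for illustration, standing in for a desktop confirmation dialog and an append‑only audit store.

```python
# Illustrative human-in-the-loop gate: high-risk agent actions require an
# explicit confirmation, and every decision is recorded for audit.
# The risk policy and action names are hypothetical.
import time

HIGH_RISK = {"send_email", "transfer_funds", "delete_file"}
AUDIT_LOG = []  # in practice an append-only file or database

def execute(action: str, params: dict, confirm) -> bool:
    """Run `action` only if it is low-risk or a human approves it.

    `confirm` is a callable shown the action; it returns True to approve
    (a desktop dialog would play this role in a real agent).
    """
    approved = action not in HIGH_RISK or confirm(action, params)
    AUDIT_LOG.append({
        "ts": time.time(),
        "action": action,
        "params": params,
        "approved": approved,
    })
    if approved:
        pass  # dispatch to the sandboxed executor here
    return approved

# The user declines, so the risky action is blocked but still logged:
ok = execute("send_email", {"to": "a@example.com"}, confirm=lambda a, p: False)
print(ok, AUDIT_LOG[-1]["action"])
```

The design choice worth noting is that the log entry is written whether or not the action runs, so refusals are as traceable as approvals.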
Fabarta’s roadmap extends into AI4S — AI for science — and materials R&D, where Gao says joint labs with universities and industry partners aim to accelerate materials discovery and cut development costs by orders of magnitude. If successful, that work would position agents not just as productivity tools but as transformers of capital‑intensive R&D cycles.
Yet the firm faces real tests. Scaling bespoke co‑creation beyond a few chain leaders requires platformised tooling that preserves replicability without turning every deployment into a bespoke engineering engagement. The market also includes cloud incumbents and major Chinese tech groups that can leverage broader enterprise bundles, deeper pockets and existing hosting relationships.
Finally, the OpenClaw phenomenon that jump‑started this wave has produced a policy and safety backlash. Central authorities and industry regulators have issued warnings about agents’ security risks, and local governments have begun to draft incentives and rules for “raising” agents responsibly. That tension between rapid commercial uptake and regulatory caution will shape both product design and customer adoption curves in the near term.
For international observers, the episode is a reminder that China’s AI transition is increasingly pragmatic and enterprise‑led. The combination of consumer‑grade desktop agents and industrial co‑creation compresses decision cycles, amplifies competitive pressure within heavy industries, and makes data governance and model strategy core strategic issues for companies that previously treated AI as experimental.
Whether Fabarta can convert its fast early growth into a defensible, scalable business model will depend on its ability to productise replication, navigate regulatory scrutiny, and fend off larger rivals. For now, the company embodies a broader shift: enterprises that once postponed AI investments now demand immediate, demonstrable outputs — and vendors who can deliver ready‑to‑run agents stand to win the first rounds.
