China’s Internet Finance Association has issued a formal warning about OpenClaw, one of a growing class of AI “agent” applications that automate tasks on users’ devices. The association says that while such agents can boost productivity, OpenClaw’s default settings grant high system privileges and rely on weak security configurations, creating an easy vector for attackers to exfiltrate sensitive data or manipulate financial transactions.
The advisory tells consumers to be extremely cautious about installing OpenClaw on devices used for online banking, securities trading or payment services. For users who deem installation necessary, the association recommends refusing any permissions that would let the agent operate financial-service systems, applying security patches promptly, limiting plugin installations, and avoiding entering identity numbers, bank-card details or payment passwords while the application is installed.
Beyond data theft and account takeovers, the association highlighted a less obvious risk: OpenClaw’s continuous calls to large language model (LLM) APIs can generate significant token costs. That raises both a consumer-protection issue, in the form of unexpected charges on personal accounts, and an operational-cost concern for firms that embed these agents into customer-facing services without transparent billing or spending controls.
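The scale of that cost risk is easy to illustrate with back-of-envelope arithmetic. The sketch below is purely illustrative: the call rates, token counts and per-token price are hypothetical assumptions, not OpenClaw’s actual usage pattern or any provider’s real pricing.

```python
# Back-of-envelope estimate of monthly LLM API spend for an always-on agent.
# All figures here are illustrative assumptions, not real usage or pricing.

def monthly_token_cost(calls_per_hour: float,
                       tokens_per_call: int,
                       usd_per_million_tokens: float,
                       hours_per_day: float = 24,
                       days: int = 30) -> float:
    """Estimated USD cost for one month of continuous agent activity."""
    total_tokens = calls_per_hour * hours_per_day * days * tokens_per_call
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical example: 60 calls/hour, 2,000 tokens per call,
# $5 per million tokens, running around the clock.
cost = monthly_token_cost(60, 2000, 5.0)
print(f"~${cost:,.2f}/month")  # 86.4M tokens at $5/M -> ~$432.00/month
```

Even modest assumptions like these add up to hundreds of dollars a month on a single device, which is why the advisory treats uncontrolled API usage as a consumer-protection issue in its own right.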
The warning dovetails with a broader trend in China and globally: rapid adoption of AI agents that interact across apps and services has outpaced established security practices. Mobile agents frequently request accessibility or automation privileges that, if abused, allow them to read screens, intercept inputs or trigger actions inside other financial apps. Those capabilities are especially dangerous on devices used for money management.
For banks, fintech firms and regulators, the advisory is a practical call to action. Financial institutions will need stronger endpoint protections, stricter guidance for customers, and vendor governance that demands safer default permissions, code audits and clearer cost models from agent developers. For consumers and enterprises alike, the message is that the convenience of autonomous AI comes with novel, systemic risks that must be mitigated at the device, application and policy levels.
