China’s 2026 government work report elevated “AI+” from a policy aim into an operational agenda: deepen and broaden AI integration, accelerate rollout of intelligent terminals and agents, and push for commercial-scale applications across key industries. The document combines infrastructure commitments — ultra-large-scale computing clusters, coordinated power-and-compute projects, national-level compute monitoring and support for public cloud — with promises to cultivate open-source ecosystems and build high-quality data sets.
That policy scaffold has already rippled through the two sessions, where industry executives and lawmakers pressed for concrete measures. Delegates from leading firms argued that the emphasis should shift from raw training horsepower to usable inference capacity: specialized inference chips, regional low-latency compute clusters, and platforms that let enterprises and smaller developers access compute on demand. Proposals ranged from national guidance on where to site inference hubs to an open platform for AI training compute with tiered pricing and subsidies.
The debate is as much about economics as it is about architecture. Several participants warned that China's rapid build-out of data centers and model-training capacity risks low utilization and weak commercial returns unless it is matched by scenario-driven application projects. Chip designers and cloud practitioners called for measures that lower the marginal cost of inference, standardize compute scheduling, and direct subsidies toward demonstrable industry use cases rather than scattershot infrastructure spending.
Data and datasets surfaced as a central lever. The national data authority signaled that 2026 will be a "data value release year," prioritizing AI-ready, multi-modal, high-knowledge-density industry datasets, privacy-preserving computation, and platforms such as data labs to accelerate model specialization. Officials and delegates stressed the need to settle the institutional foundations for data as an economic factor, from pricing and cross-border flow rules to clearer public-data licensing boundaries.
Security and governance were threaded throughout the discussion. Policymakers urged stronger regulation of biometric data, standards for face-recognition robots and digital humans, and a step change in cyber-defense to meet what industry leaders call "AI superhuman" risks. Proposals included certified intelligent security agents for critical infrastructure, supplier ecosystems for security innovation, and mandatory identity and provenance disclosure for AI-generated digital personalities used in livestreaming and commerce.
Practical obstacles to scaling intelligent agents and "digital employees" were also highlighted: integration costs, the need for business-process fit, acute talent shortages in cross-disciplinary roles, and unresolved safety controls that make hands-off delegation risky. Delegates recommended a two-track response combining technology and talent support: public-purpose platforms offering affordable, interoperable agent services, plus curriculum and training initiatives to produce blended AI-plus-domain specialists.
Taken together, the measures outlined in the work report and the accompanying proposals frame China’s next phase of AI policy: building an end-to-end stack that pairs compute and data with targeted industrial pilots and stronger governance. The approach is deliberately holistic — infrastructure, market incentives, standards, security and human capital — aimed at turning the country’s earlier sprint on model scale into sustained, measurable economic and social impact.
For international observers, the priorities offer a clear signal: China is pivoting to inference optimization, industrial application, and institutional control of data and safety standards. This will influence supply chains for chips and cloud services, shape competition over open-source ecosystems and standards, and complicate the global debate about AI governance, cross-border data flows and technology decoupling.
