Conversation about AI and employment in China has moved from abstract concern to concrete policy discussion. High-profile figures have offered contrasting reassurances and warnings: Gree chair Dong Mingzhu has shrugged off fears, saying that even at her age she is not afraid of being replaced and that workers should simply learn to be smarter than AI. That bravado sits alongside proposals from legislators and academics calling for systemic public responses to a rapidly changing labour market.
A deputy to the National People's Congress, Liu Qingfeng, has urged the government to build a national AI employment risk monitoring, early‑warning and policy evaluation mechanism, and to strengthen public support for retraining and social protection during the transition. Liu frames the problem as structural: large language and foundation models are rapidly reshaping industries and job structures, accelerating the pace of technological disruption, raising the risk of employment polarisation, and widening the mismatch between the skills workers have and those industry demands. He also notes that AI is creating new human–machine collaborative roles, which implies job reconfiguration rather than simple one‑to‑one displacement.
The urgency is not theoretical. International consultancies such as McKinsey have estimated that by 2030 roughly 57 percent of global work hours could be automated, and workers in creative support roles, visual effects, translation and other white‑collar service tasks already report acute disruption. Chinese localities are beginning to pilot responses: Shanghai's Minhang district monitored automation upgrades at some 300 manufacturing firms, issued multiple job‑loss warnings for electronics manufacturing in 2025, and coordinated retraining that helped roughly two thousand employees move into other sectors.
Experts argue that a national monitoring architecture is feasible because central agencies already hold the necessary administrative data. Proposals put forward include a high‑frequency monitoring system covering job changes, skills supply and employment quality; standardised statistical definitions; sectoral and regional warning thresholds; and clear policy triggers for early intervention. Complementary measures would expand public support for job retention and transition and extend social insurance to increasingly flexible and remote forms of work.
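To make the threshold‑and‑trigger idea concrete, the logic of such an early‑warning system could be sketched as below. This is a hypothetical illustration only: the indicator names, threshold values and tier labels are assumptions for exposition, not drawn from any official Chinese proposal.

```python
from dataclasses import dataclass

@dataclass
class SectorSnapshot:
    """One monitoring observation for a sector in a region (hypothetical schema)."""
    sector: str
    region: str
    job_loss_rate: float      # share of roles cut this period (assumed indicator)
    vacancy_fill_rate: float  # share of openings successfully filled (assumed indicator)

def warning_level(s: SectorSnapshot,
                  loss_threshold: float = 0.05,
                  fill_threshold: float = 0.6) -> str:
    """Map monitored indicators to a tiered warning with a policy trigger.

    Thresholds are illustrative placeholders, not calibrated values.
    """
    if s.job_loss_rate >= loss_threshold and s.vacancy_fill_rate < fill_threshold:
        # Losses are high AND the local labour market is not absorbing them.
        return "red: trigger retraining and social-insurance support"
    if s.job_loss_rate >= loss_threshold:
        return "yellow: intensify monitoring"
    return "green: no action"

# Example in the spirit of the Minhang electronics case described above.
print(warning_level(SectorSnapshot("electronics", "Minhang", 0.08, 0.5)))
```

The point of the sketch is the structure, not the numbers: standardised definitions feed a periodic snapshot, thresholds convert it into a tier, and each tier maps to a pre‑agreed policy response.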
Not all voices urge incremental fixes. Jin Weigang, a senior social‑security scholar, warns that AI could be profoundly disruptive across economic and social life and recommends elevating AI policy to a national strategic level. He suggests legally codified, tiered restrictions on certain AI applications and proactive state intervention to prevent capital from driving unregulated, socially harmful deployment — especially in employment‑intensive sectors where stability is politically sensitive.
For international observers, China’s debate is significant in two ways. First, Beijing’s capacity to centralise data and mobilise coordinated, top‑down policy responses means that a national early‑warning system is plausible in a way it may not be elsewhere; this could blunt short‑term social shocks but also institutionalise a state‑led model of AI governance. Second, policy choices made in Beijing — ranging from retraining programmes and expanded social insurance to application‑specific restrictions — will shape not only Chinese labour markets but also global supply chains and standards for AI regulation.
The immediate policy story is about risk management: how to detect displacement early, whom to prioritise for protection, and how to finance transitions without stifling technological progress. The broader strategic story is about governance: whether China will lean on its organisational strengths to smooth the employment impacts while constraining private sector excesses, and how that approach will influence international debates about balancing innovation with social protection.
