The Bank of England is preparing a scenario exercise to estimate the economic and financial consequences of a sudden, widespread shock from artificial intelligence. Worried that rapid AI adoption could trigger large-scale job losses and sharply weaken many firms, the central bank will test how such an event might ripple through household and corporate balance sheets and strain lenders. Officials are also considering folding the exercise’s results into the wider bank stress-testing framework to capture a potential spike in loan defaults and losses at financial institutions.
The move signals a shift in central‑bank thinking. Stress tests have long modelled macroeconomic shocks — recessions, housing busts, rate shocks — but not a technology‑driven disruption of this kind. AI is distinct because it could combine very rapid productivity gains in some sectors with abrupt obsolescence in others, creating concentrated industry shocks, swift shifts in employment and incomes, and asymmetric credit losses across portfolios.
There are multiple channels by which an “AI shock” could destabilise the financial system. Mass worker displacement would depress consumer spending and raise mortgage and consumer‑credit delinquencies. Business model disruption could bankrupt firms that fail to adapt, hitting corporate loan books and commercial real estate. At the same time, uneven gains — large profits and market power accruing to a few tech incumbents — could distort asset prices and collateral values, complicating loss‑given‑default estimates.
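To see why such sector-concentrated shocks matter for loan books, consider the standard expected-loss identity EL = PD × LGD × EAD applied sector by sector. The sketch below is a toy illustration, not the Bank of England's methodology: all sector names, exposures, and shock parameters are invented for the example. It shows how a shock that simultaneously raises default probabilities in exposed sectors and cuts collateral values (raising loss-given-default) produces losses far larger than either effect alone.

```python
# Toy portfolio: sector -> (EAD in GBP bn, baseline PD, baseline LGD).
# All numbers are hypothetical, chosen only to illustrate the mechanism.
portfolio = {
    "clerical_services": (50.0, 0.02, 0.40),
    "transport":         (30.0, 0.03, 0.45),
    "tech_incumbents":   (40.0, 0.01, 0.35),
}

# Stylised "AI shock": a PD multiplier and an LGD add-on per sector,
# concentrated in the sectors most exposed to automation.
shock = {
    "clerical_services": (4.0, 0.10),
    "transport":         (3.0, 0.05),
    "tech_incumbents":   (1.0, 0.00),  # assumed beneficiary, left unchanged
}

def expected_loss(book, stress=None):
    """Sum EL = PD * LGD * EAD over sectors, optionally under a stress map."""
    total = 0.0
    for sector, (ead, pd, lgd) in book.items():
        if stress is not None:
            pd_mult, lgd_add = stress[sector]
            pd = min(pd * pd_mult, 1.0)    # stressed default probability
            lgd = min(lgd + lgd_add, 1.0)  # stressed loss-given-default
        total += ead * pd * lgd
    return total

base = expected_loss(portfolio)            # ~0.95bn
stressed = expected_loss(portfolio, shock) # ~3.49bn
print(f"baseline EL: {base:.2f}bn, stressed EL: {stressed:.2f}bn")
```

Because PD and LGD are shocked jointly and the shock is concentrated, losses in this toy book roughly triple even though the unexposed sector is untouched — the asymmetry across portfolios that scenario designers must capture.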
Incorporating an AI scenario into stress tests would have practical consequences for banks and regulators. It could lead to higher capital buffers, targeted supervisory scrutiny of lenders with heavy exposure to vulnerable sectors, and tighter lending standards for at‑risk borrowers. For banks it would also expose modelling challenges: scenario design must grapple with uncertain timing, sectoral granularity, and second‑round effects such as sovereign fiscal responses and labour‑market policies.
The Bank of England’s initiative comes as policymakers globally start to confront the macroeconomic side‑effects of large language models and automation. Financial authorities face dilemmas: too timid a scenario risks underestimating threats; an implausibly severe one could trigger unnecessary market tightening. The testing process itself is therefore as much about improving data, modelling and interagency coordination as it is about capital outcomes.
Whatever the results, the exercise has broader policy implications. If stress tests reveal material system‑wide vulnerabilities, governments may be pushed toward stronger labour‑market interventions, more aggressive retraining programmes, and fiscal cushions for displaced workers. For banks, clearer visibility of AI risks could prompt earlier provisioning and a reallocation of credit — with knock‑on effects for economic growth and the political debate over how to share the gains from AI.
