Caitlin Kalinowski, who led OpenAI’s robotics and consumer hardware efforts, has resigned, citing unease with the company’s recently announced agreement to deploy its systems on Pentagon computing infrastructure. Kalinowski said key policy safeguards were not sufficiently defined before the deal was publicised and warned that limits on domestic surveillance and on granting lethal autonomy to machines deserved fuller public and internal scrutiny.
Her departure, announced days after OpenAI disclosed the cooperation with the US Department of Defense, crystallises an internal and industry-wide fault line over the commercial development of dual‑use artificial intelligence. OpenAI has framed the pact as a responsible path for applying AI to national security problems, while asserting explicit red lines — no domestic surveillance and no creation of autonomous weapons — but the speed with which the agreement was announced raised procedural concerns among some employees.
The episode also highlights the strategic importance of hardware to the AI race. Kalinowski, recruited from Meta in late 2024 after leading augmented‑reality hardware projects there, had been expected to help OpenAI build the compute and device ecosystem needed to train and run ever‑larger models. As firms scramble to design systems that support next‑generation models, a senior hardware departure weakens OpenAI’s push into the physical infrastructure that underpins the software it sells.
The resignation matters commercially as well as ethically. The controversy has already rippled through consumer markets: ChatGPT saw a sharp surge in uninstallations even as Anthropic’s Claude rose to the top of the US App Store free apps chart. Anthropic’s leadership had taken a public stand against using its models for domestic mass surveillance or autonomous weapons, a stance that led to friction with Pentagon officials and to the company being labelled a supply‑chain risk — a development that apparently redirected some government business toward OpenAI and other vendors.
Beyond reputational costs, the episode underlines how government procurement and national‑security needs are reshaping corporate strategy and regulatory debates. For companies building both models and the hardware that runs them, decisions about whom to partner with — and how to codify operational and ethical limits — will determine talent retention, customer trust and future access to lucrative defense contracts.
OpenAI’s public response framed the deal as a pragmatic, bounded approach to national‑security use cases, but the departure of a recognised hardware expert spotlights the difficulties of balancing rapid commercial expansion, employee concerns about mission and morals, and the strategic pressure exerted by large public buyers.
