Caitlin Kalinowski, head of robotics at OpenAI, has resigned in protest of the company’s recently announced partnership with the U.S. Department of Defense. The move highlights growing internal and external concerns about the ethical implications of deploying advanced AI in national security contexts.
The Core Issue: Speed vs. Safeguards
Kalinowski, who previously led AR development at Meta, joined OpenAI in late 2024. She attributed her departure to the rushed nature of the Pentagon agreement, specifically the lack of clearly defined safeguards against domestic surveillance and fully autonomous weapons systems.
“AI has an important role in national security, but surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” – Caitlin Kalinowski
The key concern isn't necessarily the agreement itself, but how it was executed. Kalinowski framed it as a "governance concern," meaning the process lacked sufficient oversight and planning before the deal was publicly announced.
Why This Matters: The AI Arms Race
The Pentagon initially pursued a deal with Anthropic before turning to OpenAI. Anthropic reportedly hesitated, attempting to negotiate stronger protections against misuse, specifically guarantees that its models would not be used for mass domestic surveillance or in fully autonomous weapons. The Pentagon then designated Anthropic a "supply-chain risk."
This underscores a broader trend: governments are aggressively pursuing AI partnerships, even if that means pressuring companies to compromise on ethical boundaries. OpenAI accepted the deal quickly while claiming to maintain "red lines," which raises questions about whether those protections are robust enough in practice.
OpenAI’s Response and Future Implications
OpenAI insists its agreement includes safeguards against domestic surveillance and autonomous weapons. Kalinowski's resignation, however, suggests internal skepticism about whether those measures will hold in the long run. Some observers view the company's reliance on both contractual language and technical safeguards as an attempt to appear more responsible without making substantial changes to its core practices.
The situation raises a critical question: Can AI development be responsibly integrated with national security without sacrificing fundamental rights and ethical principles? The rapid pace of these deals suggests that the answer remains uncertain, and further scrutiny from regulators, civil society, and even OpenAI’s own workforce will be crucial going forward.
The departure of a key executive serves as a stark warning: the race to militarize AI is accelerating, and the safeguards may not keep pace.