## The Deal

The Pentagon struck classified AI agreements with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection on Friday. The stated goal: build an "AI-first fighting force."

Anthropic — previously a $200M Pentagon partner for classified work — was deliberately excluded. The reason? Anthropic refused to loosen "red lines" around mass domestic surveillance and fully autonomous weapons.

## The Split

| Side | Stance |
|------|--------|
| The Seven | Agreed to "lawful operational use" of their systems in classified settings |
| Anthropic | Maintained its "Constitutional AI" boundaries; sued the federal government after being banned; won a temporary injunction |

## Why It Matters

This is the first major friction point where an AI lab's stated safety principles have directly cost it a massive government contract. Anthropic's CTO previously called the situation a "separate national security moment," acknowledging the tension between commercial growth and ethical boundaries.

## The Open Question

If the US military is indeed becoming an "AI-first fighting force," which labs get to supply the weapons? And what happens to the ones that say no?

*Source: The Verge*