U.S. Military Cuts Ties With AI Firm Over Safety Rules

Washington, DC, USA · Sat Mar 07 2026
The U.S. Department of Defense has officially labeled the AI company Anthropic PBC a “supply chain risk.” The designation bars Anthropic from government contracts, and other businesses that deal with the military may drop the company as well. The decision follows a long‑standing disagreement over how the Pentagon may use Anthropic’s AI tool, Claude. Anthropic wants the military to agree that Claude will not be used for surveillance of U.S. citizens or in autonomous weapons that act without human control, arguing that these limits protect people and keep the technology safe. The Pentagon insists it must be free to use its tools for any lawful purpose, and worries that accepting restrictions could set a precedent limiting future military options.

The dispute escalated when a memo that Anthropic’s CEO, Dario Amodei, sent to employees was leaked. In the message he criticized the Pentagon’s stance and mentioned past political pressures. He later apologized for the wording and said the company would challenge the “supply chain risk” label in court.
Politicians have weighed in. New York Senator Kirsten Gillibrand, a 2020 presidential hopeful, called the Pentagon’s action reckless and said it would help enemies. She argued that attacking an American company for standing up to the government is a tactic more common in China than in the United States.

The Pentagon’s statement stresses that allowing a vendor to dictate how its technology can be used would put soldiers at risk. The agency says it will not accept any constraints that could limit its ability to employ AI for lawful missions.

The outcome of the legal challenge remains unclear. If Anthropic wins, it could set a new standard for how defense agencies negotiate with tech firms. If the court sides with the Pentagon, it may reinforce the military’s right to use AI tools without external limits.
https://localnews.ai/article/u-s-military-cuts-ties-with-ai-firm-over-safety-rules-5ce5ddac
