AI and the Pentagon: A Clash of Rules and Battlefields

USA · Thu Feb 12 2026
The Pentagon is in talks with major AI companies, including OpenAI and Anthropic, about letting their AI tools run on classified military networks. Those tools currently ship with usage restrictions meant to keep them safe, and the Pentagon wants those restrictions loosened. The negotiations are part of a broader debate over how the military should use AI in warfare. Commanders see AI as a way to make faster, better decisions, but AI systems can make mistakes, and in war mistakes can be deadly. The AI companies impose safeguards to prevent harmful uses; for example, they do not want their models controlling weapons. The Pentagon argues it should be free to use AI for any purpose that complies with the law.
OpenAI has already struck a deal with the Pentagon: its tools, including ChatGPT, can now be used on unclassified military networks, putting them within reach of more than 3 million military workers, though OpenAI's own usage rules still apply. Anthropic has also been talking with the Pentagon but has taken a more cautious position, and does not want its models used to control weapons or to spy on people in the U.S. The Pentagon wants AI for a wide range of tasks, including planning and targeting, and it wants access on classified networks, where the most sensitive work happens. The AI companies worry their tools could be used in ways that cause harm, while the Pentagon sees AI as a powerful tool for war. The debate is far from over.
https://localnews.ai/article/ai-and-the-pentagon-a-clash-of-rules-and-battlefields-68a735df
