AI Safety Tests: U.S. Opens Doors to Big Tech Models

Washington, D.C., USA, Wed May 06 2026
The United States has widened its effort to vet artificial intelligence systems for dangers, inviting top companies including Google, Microsoft, and the newer firm xAI to share their most advanced models. The move follows earlier voluntary cooperation from OpenAI and Anthropic, which already let U.S. scientists examine their unreleased tools for security flaws.

Scientists are mainly looking for “demonstrable risks”: how a powerful AI could be used in cyber‑attacks against American infrastructure, or how it might help foreign powers create chemical weapons. They also worry that bad actors could tamper with the data used to train U.S. AI models, making them less reliable.

Each company is contributing something different. OpenAI will let researchers test a defensive version of its next model, called GPT‑5.5‑Cyber. Microsoft will supply shared data sets and workflows, though it has not yet named specific models. Anthropic is giving access to both public and private systems, plus detailed notes on known weaknesses so that experts can mount “red‑team” attacks, posing as malicious users. Google’s DeepMind will offer its own proprietary models and data, while xAI has not yet replied to requests for comment.
What has been found so far? Anthropic reported that clever tricks, such as pretending a human had reviewed content or swapping characters in a prompt, could bypass safety checks. The company patched these issues after the test. OpenAI revealed that a similar trick could let an attacker remotely control a computer system through its ChatGPT Agent, impersonating the user on other sites. The exploit was caught during a test with U.S. scientists.

Beyond cybersecurity, the government is also focusing on bio‑security, making sure AI can’t help design dangerous biological weapons. In 2023, major tech firms, including Meta and Amazon, agreed to let outside experts audit their models for such risks. Scientists are also drafting guidelines for critical sectors like communications and emergency services, so their AI tools can be tested under realistic conditions.

These efforts show a growing partnership between the U.S. government and private AI leaders, aiming to keep powerful new technology from falling into the wrong hands while still allowing innovation to flourish.
https://localnews.ai/article/ai-safety-tests-u-s-opens-doors-to-big-tech-models-27748f52