Globalytic


Independent world coverage — geopolitics, conflicts, science, and health — with AI-assisted editing and verification.



© 2026 Globalytic. All rights reserved.

Tech · Breaking

US announces deals with tech firms for national security review of AI models before release

The Guardian World · 2h ago · 3 min read · Original source →

TL;DR

The US government has partnered with Google DeepMind, Microsoft, and xAI to review AI models before their public release. This initiative aims to assess national security risks associated with advanced AI technologies.

Key points

  • US government partners with tech firms for AI model reviews
  • Focus on national security risks related to AI technologies
  • CAISI facilitates collaboration between tech industry and government

Mentioned in this story

Google DeepMind · Microsoft · xAI · Center for AI Standards and Innovation

Why it matters

The reviews give the federal government early insight into whether advanced AI models pose national security risks before those models reach the public.

The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.

The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale [its] work in the public interest at a critical moment”, the agency said in a press release.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said Chris Fall, CAISI director.

CAISI is an agency meant to facilitate collaboration between the tech industry and the federal government in developing standards and assessing risks for commercial AI systems. The agreement between the agency and the AI firms is focused largely on identifying national security risks tied to cybersecurity, biosecurity and chemical weapons.

OpenAI and Anthropic inked similar deals with the Biden administration two years ago, and CAISI says it has already completed more than 40 such evaluations, including on unreleased models. It is common for developers to share with the government unreleased AI models that have had safety guardrails reduced or removed, CAISI said in its press release. This helps the government “thoroughly evaluate national security-related capabilities and risks”, the agency noted.

The new agreements come as fears grow that the newest and most powerful AI models – such as Anthropic’s Mythos – could be dangerous to release to the public; AI safety experts, government officials and tech companies fear the expansive capabilities of these models could help hackers exploit cybersecurity vulnerabilities at an unprecedented scale. Anthropic limited its rollout of Mythos to a few companies, and initiated the collaborative Project Glasswing to bring together tech companies “to secure the world’s most critical software”.

The New York Times and the Wall Street Journal reported Monday that the Trump administration was mulling a potential executive order to create a government oversight process for these AI tools; the administration has characterized this reporting as “speculation”.

Google and xAI did not immediately respond to a request for comment.

Microsoft announced a similar agreement in the UK on Tuesday with the government-backed AI Security Institute, which also focuses on safe AI development.

“While Microsoft regularly undertakes many types of AI testing on its own, testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments,” Microsoft wrote in a blog post about the two deals.

Q&A

What is the purpose of the US government's agreements with tech firms regarding AI models?

The agreements aim to review AI models to understand their capabilities and assess national security risks before they are released to the public.

Which companies are involved in the US national security review of AI models?

The companies involved are Google DeepMind, Microsoft, and xAI.

What agency is responsible for the AI standards and innovation agreements in the US?

The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, is responsible for these agreements.



