Trump Administration Weighs Mandatory Safety Reviews for New AI Models in Wake of Mythos Incident

The White House has informed Anthropic, Google, and OpenAI that it is considering requiring frontier AI models to pass a government safety review before public release — a dramatic reversal from the administration's deregulatory posture just months ago.

The Trump administration has begun discussions with Anthropic, Google, and OpenAI about new oversight procedures that could require frontier AI models to pass a federal safety review before being cleared for release, as @AndrewCurran_ first reported. The conversations mark a sharp policy reversal for a president who dismantled Biden-era AI mandates on his first day back in office. The catalyst, per multiple accounts, is Mythos — an incident whose details remain partially classified but whose policy aftershocks are now unmistakable.

The irony is difficult to overstate. As cybersecurity journalist @ericgeller noted, Trump eliminated the previous administration's AI executive order almost immediately upon taking office, framing it as regulatory overreach that would kneecap American competitiveness. Now, in the aftermath of Mythos, the White House is reportedly considering measures that are "much more onerous" than anything Biden proposed. The reversal suggests that whatever Mythos entailed — details the administration has not fully disclosed — it was severe enough to override deeply held ideological commitments to deregulation.
