
AI Safety Review Office Established: Controlling Extreme National Security Risks

This Act establishes the Artificial Intelligence Safety Review Office within the Department of Commerce to oversee and mitigate extreme risks posed by the most powerful AI models. The goal is to protect national security and citizens from the potential misuse of AI, such as its use in developing chemical, biological, or cyber weapons. The Act mandates strict pre-deployment safety testing for developers of advanced AI and introduces severe penalties, including fines and imprisonment, for deploying prohibited models.
Key points
Creation of a new Office to monitor and mitigate extreme risks from advanced AI, specifically focusing on chemical, biological, radiological, nuclear, and cyber threats (CBRN-C).
Mandatory safety testing ("red-teaming") required for developers of the most powerful AI models before those models can be deployed to the public or brought to market.
The government gains the authority to prohibit the deployment of an AI model if it poses national security risks that have not been sufficiently mitigated.
Cloud service providers and sellers of high-end chips must implement "Know-Your-Customer" (KYC) standards for transactions involving foreign persons to prevent misuse of computing infrastructure.
Additional Information
Print number: 118_S_5616
Sponsor: Sen. Mitt Romney [R-UT]
Process start date: 2024-12-19