
AI Risk Evaluation Act: Ensuring Safety and Oversight

This Act establishes a program to evaluate advanced artificial intelligence systems, aiming to protect citizens from potential risks like loss of control, threats to critical infrastructure, or erosion of civil liberties. Companies developing AI will be required to participate in testing and provide data, with significant financial penalties for non-compliance. The goal is to ensure safe and controlled AI development for the benefit of society.
Key points
Mandatory safety testing for advanced AI systems to protect citizens from unforeseen risks.
AI developers must provide code and data for testing; non-compliance incurs daily fines of at least $1,000,000.
The program will develop standards, guidelines, and risk management strategies to ensure safe AI development and prevent its misuse against humanity.
The Act mandates a plan for permanent federal oversight of AI, including potential nationalization or other strategic measures if superintelligence poses a threat.
Status: Introduced
Additional Information
Print number: 119_S_2938
Sponsor: Sen. Hawley, Josh [R-MO]
Process start date: 2025-09-29