AI Safety and Transparency Act: New Voluntary Standards for Technology Developers

This law directs the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for making Artificial Intelligence systems safer, more transparent, and less biased. The goal is to protect citizens' constitutional rights, economic opportunities, and security from potential AI risks. While the guidelines are voluntary, they aim to establish industry best practices for trustworthy AI development.
Key points
NIST must create voluntary standards for assessing AI risks, including threats to security, the economy, and civil liberties.
The guidelines encourage companies to disclose details about AI training data, model capabilities, and security testing methods such as "AI red teaming" (adversarial testing in which experts probe a system for flaws and vulnerabilities).
The focus is on improving the fairness, privacy, reliability, and accountability of AI systems used by the public.
Additional Information
Print number: 118_HR_9466
Sponsor: Rep. Baird, James R. [R-IN-4]
Process start date: 2024-09-06