1. Decisions that produce legal effects, or that significantly affect a person's life, finances, or health, must not be issued solely on the basis of automated data processing by an AI system.
2. Meaningful Human Oversight: Any decision in High-Risk Areas must be verified and approved by a qualified human operator. "Qualified" means documented expertise in the domain and the authority to override the AI without penalty. Rubber-stamping approvals, including review practices driven by speed or throughput metrics, is prohibited. High-Risk Areas include: judicial sentencing, medical diagnosis, employment recruitment and termination, credit scoring, and access to essential public services.
3. Ultimate Human Responsibility: The human operator bears full legal and professional liability for the final decision. Neither ignorance of the AI system's operation nor an algorithmic error exempts the operator from liability.
4. Independent Audit: In cases of alleged algorithmic discrimination, the deployer must provide an audit from an independent, certified expert demonstrating that the system satisfies at least one of the following fairness criteria across protected groups (race, gender, age, disability):
a) Demographic Parity: Approval/selection rates differ by no more than 10 percentage points;
b) Equalized Odds: True positive and false positive rates each differ by no more than 10 percentage points;
c) Predictive Parity: Precision (positive predictive value) differs by no more than 10 percentage points.
The deployer may choose which metric to satisfy, but the choice must be disclosed and justified.
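To make criteria (a)-(c) concrete, the following is a minimal sketch of how an auditor might compute the three fairness gaps for a binary decision system. The function names, group labels, and example data are illustrative, not part of this text; the 10-percentage-point threshold is taken from the criteria above.

```python
# Illustrative audit sketch for criteria (a)-(c): demographic parity,
# equalized odds, and predictive parity, each as a max pairwise gap
# across groups, measured in percentage points. Names and data are
# hypothetical examples, not a prescribed implementation.

def rates(y_true, y_pred):
    """Return (selection rate, TPR, FPR, precision) for one group."""
    n = len(y_true)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = n - tp - fp - fn
    sel = (tp + fp) / n                            # approval/selection rate
    tpr = tp / (tp + fn) if tp + fn else 0.0       # true positive rate
    fpr = fp / (fp + tn) if fp + tn else 0.0       # false positive rate
    ppv = tp / (tp + fp) if tp + fp else 0.0       # precision (PPV)
    return sel, tpr, fpr, ppv

def fairness_gaps(groups):
    """groups: {group_name: (y_true, y_pred)}.
    Returns the largest between-group gap, in percentage points,
    for each of the three criteria."""
    stats = {g: rates(t, p) for g, (t, p) in groups.items()}
    def gap(i):
        vals = [s[i] for s in stats.values()]
        return 100 * (max(vals) - min(vals))
    return {
        "demographic_parity_gap": gap(0),
        # equalized odds requires BOTH the TPR and FPR gaps to be small,
        # so report the worse of the two
        "equalized_odds_gap": max(gap(1), gap(2)),
        "predictive_parity_gap": gap(3),
    }

THRESHOLD_PP = 10  # 10 percentage points, per criteria (a)-(c)

# Hypothetical audit data: ground-truth outcomes and system decisions
# for two protected groups.
groups = {
    "group_a": ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]),
    "group_b": ([1, 0, 0, 1, 1, 0], [1, 0, 0, 1, 0, 0]),
}
gaps = fairness_gaps(groups)
compliant = {name: g <= THRESHOLD_PP for name, g in gaps.items()}
```

Under the disclosure rule above, the deployer would report which of the three gaps it relies on and justify that choice; the three criteria are generally not satisfiable simultaneously when base rates differ between groups, which is why the provision requires only one.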