Civic Legislative Initiative | Draft No. 001
THE HUMAN PRESERVATION ACT
Model Law on the Protection of Fundamental Rights in the Age of Artificial Intelligence
Version 1.0
Recognizing that technology must augment human capability while preserving human dignity, agency, and truth;
Refusing a future in which human judgment is replaced by opaque algorithmic probability;
This Act hereby establishes the fundamental rights of the human being in the digital era.
CHAPTER I: SOVEREIGNTY OF DECISION (HUMAN IN THE LOOP)
Art. 1.
1. It is prohibited to issue decisions producing legal effects or significantly affecting a person's life, finances, or health based solely on automated data processing by AI systems.
2. Meaningful Human Oversight: Any decision in a High-Risk Area must be verified and approved by a qualified human operator. "Qualified" implies documented expertise in the domain and the authority to override the AI without penalty. Rubber-stamping, including approval workflows driven by speed or throughput metrics, is prohibited. High-Risk Areas include: judicial sentencing, medical diagnosis, employment recruitment and termination, credit scoring, and access to essential public services.
3. Ultimate Human Responsibility: The human operator bears full legal and professional liability for the final decision. Ignorance of the AI system's operation or an algorithmic error does not exempt the operator from liability.
4. Independent Audit: In cases of alleged algorithmic discrimination, the deployer must provide an audit from an independent, certified expert demonstrating that the system satisfies at least one of the following fairness criteria across protected groups (race, gender, age, disability):
a) Demographic Parity: Approval/selection rates differ by no more than 10 percentage points;
b) Equalized Odds: True positive and false positive rates each differ by no more than 10 percentage points;
c) Predictive Parity: Precision (positive predictive value) differs by no more than 10 percentage points.
The deployer may choose which metric to satisfy, but the choice must be disclosed and justified.
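Illustrative Note (non-normative): the following sketch, in Python, shows one way an auditor might compute the three criteria above for two protected groups. The function names, the two-group comparison, and the binary outcome encoding are assumptions of this sketch; the 0.10 (10 percentage point) threshold follows the provision.

    def rates(y_true, y_pred):
        # Selection rate, true positive rate, false positive rate, and
        # precision for one group; outcomes encoded as 1 (positive) / 0.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        positives = sum(y_true)
        negatives = len(y_true) - positives
        predicted_positive = tp + fp
        return {
            "selection": predicted_positive / len(y_true),
            "tpr": tp / positives if positives else 0.0,
            "fpr": fp / negatives if negatives else 0.0,
            "precision": tp / predicted_positive if predicted_positive else 0.0,
        }

    def audit(group_a, group_b, threshold=0.10):
        # Each group is a (y_true, y_pred) pair. Satisfying any one criterion
        # suffices, but the deployer must disclose and justify the choice.
        a, b = rates(*group_a), rates(*group_b)
        return {
            "demographic_parity": abs(a["selection"] - b["selection"]) <= threshold,
            "equalized_odds": (abs(a["tpr"] - b["tpr"]) <= threshold
                               and abs(a["fpr"] - b["fpr"]) <= threshold),
            "predictive_parity": abs(a["precision"] - b["precision"]) <= threshold,
        }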
CHAPTER II: DIGNITY OF WORK & ANTI-BOSSWARE
Art. 2.
1. The use of AI systems to automatically terminate employment, modify shifts, or impose disciplinary measures without human review is prohibited.
2. Ban on AI Behavioral Surveillance: It is prohibited to use AI systems to analyze the physiological or behavioral parameters of employees (e.g., eye tracking, mouse movement dynamics, keystroke rhythm analysis, voice tone analysis, facial micro-expressions) for the purpose of productivity evaluation or performance scoring. Standard software metrics not involving AI-based behavioral analysis (e.g., task completion tracking, time logging) are not prohibited by this provision.
3. Digital Physiognomy: The use of AI for personality profiling of job candidates based on biometric data or automated video interviews is prohibited. Candidates have the right to demand a non-automated recruitment process.
CHAPTER III: COGNITIVE LIBERTY & CHILD PROTECTION
Art. 3.
1. Protection of Minors: It is prohibited to provide Relational AI systems (systems designed to simulate emotional bonding, friendship, or therapeutic relationships) to individuals under the age of 16. Providers must implement robust age verification that does not rely on mere self-declaration.
2. Identity Disclosure: Any AI system interacting with a human (text, voice, video) must clearly and permanently disclose its artificial nature at the beginning of and during the interaction. Impersonating a human being is prohibited.
3. Ban on Political Profiling: It is strictly prohibited to use AI systems to infer psychological profiles, personality traits, emotional vulnerabilities, or cognitive biases from user behavior (including content consumption, interaction patterns, or engagement data) for the purpose of political micro-targeting or manipulation. The use of such inferred profiles or their behavioral proxies for political advertising or content curation is prohibited. This ban applies permanently, not only during election periods.
4. Sponsored Content: AI-generated responses containing recommendations for a product, service, or entity for which the provider received compensation must be explicitly labeled as "SPONSORED".
CHAPTER IV: CULTURAL & BIOMETRIC SOVEREIGNTY
Art. 4.
1. Deepfakes: The creation and distribution of synthetic media depicting a real person without their explicit consent is prohibited.
Exception: Works of evident satire, parody, or artistic fiction are exempt ONLY if clearly and permanently watermarked (e.g., "AI GENERATED" / "SATIRE"). Unlabeled satire is treated as forgery.
2. Mass Surveillance Ban: The use of Real-Time Remote Biometric Identification (e.g., facial recognition via public cameras) in public spaces is prohibited.
Exception: Strictly limited to judicial warrants for the prevention or investigation of mass casualty events (attacks causing or threatening death or serious injury to multiple persons), kidnapping, or hostage situations. Warrants must specify: (a) the identity of the individual(s) sought, (b) a geographic limitation (a specific location, not entire districts), and (c) a time limitation (maximum 48 hours, renewable once). Surveillance of political assemblies, protests, or demonstrations is strictly prohibited.
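Illustrative Note (non-normative): a minimal sketch, under assumed data structures, of how an authority might check that a warrant satisfies requirements (a)-(c); the class and field names are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class BiometricWarrant:
        sought: list[str]    # (a) named individual(s)
        location: str        # (b) a specific location, not an entire district
        start: datetime
        end: datetime
        renewals: int        # 0 when issued; at most 1 renewal permitted

    def warrant_is_valid(w: BiometricWarrant) -> bool:
        # (c) each deployment window is capped at 48 hours, renewable once
        within_time = (w.end - w.start) <= timedelta(hours=48) and w.renewals <= 1
        return bool(w.sought) and bool(w.location) and within_time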
3. Social Scoring Ban: The use of AI systems by public authorities to evaluate or score the trustworthiness of natural persons is prohibited.
4. Medical Data Sovereignty: Consent for the use of medical, genetic, or biometric data for AI training must be legally separate from consent for medical treatment. Access to healthcare cannot be conditional upon data sharing.
5. Right to Opt-Out: Citizens have the right to object to the use of their data for AI training. Upon objection, providers must: (a) remove data from the training set before the next iteration, and (b) immediately implement output suppression to prevent the generation of content based on that data.
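Illustrative Note (non-normative): a minimal sketch of the two opt-out duties. The registry, identifiers, and the name-matching suppression test are assumptions of this sketch; in practice, output suppression would require a genuine attribution mechanism.

    OPTED_OUT: set[str] = set()   # identifiers of data subjects who objected

    def on_objection(subject_id: str, training_set: list[dict]) -> list[dict]:
        # Duty (a): record the objection and drop the subject's records so
        # they are absent from the next training iteration.
        OPTED_OUT.add(subject_id)
        return [record for record in training_set
                if record.get("subject_id") not in OPTED_OUT]

    def filter_output(text: str, subject_names: dict[str, str]) -> str:
        # Duty (b): immediate output suppression; a simple name match stands
        # in here for a real attribution test.
        for subject_id, name in subject_names.items():
            if subject_id in OPTED_OUT and name in text:
                return "[output suppressed under Art. 4(5)]"
        return text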
6. Public Domain: Works in which AI systems contributed to the creative expression are not eligible for copyright protection, regardless of the extent of human involvement. Unless a human author demonstrates that the work would exist in substantially the same form without the AI contribution, the work enters the Public Domain.
CHAPTER V: SAFETY & WARFARE
Art. 5.
1. Offensive vs. Defensive:
a) Offensive systems (selecting targets) require meaningful human control over every specific attack initiation. "Meaningful control" requires sufficient time and situational context for the operator to evaluate the legality and proportionality of the action.
b) Defensive systems (intercepting projectiles) may operate automatically solely for the protection of life.
2. Swarm Ban: Autonomous weapon systems designed to operate in "swarm" configurations (collaborative autonomy without individual human oversight) are prohibited.
3. Safe Stop: Critical AI systems controlling physical infrastructure or financial markets must be equipped with an independent mechanism for immediate safe shutdown or transition to manual control, accessible only to authorized humans.
4. Energy Priority: In the event of a power grid deficit, energy supply for households and critical infrastructure takes legal priority over data centers used for AI model training.
CHAPTER VI: WHISTLEBLOWER PROTECTION
Art. 6.
1. Right to Report: Employees, contractors, and researchers working on AI systems have the right to report violations of this Act to competent authorities without fear of retaliation.
2. State Protection: Upon filing a report with the regulatory authority, the whistleblower is immediately placed under legal protection. All civil and criminal proceedings related to the disclosure are automatically stayed pending investigation. If the report is substantiated, the whistleblower is granted full immunity from liability and the state may provide legal representation and financial support. The burden of proving bad faith lies with the employer.
3. Anti-SLAPP Provision: If a whistleblower is sued by an employer for revealing information in good faith and the employer loses, the employer must cover 100% of the whistleblower's legal fees and pay punitive damages.
4. Nullity of Gag Clauses: Non-Disclosure Agreements (NDAs) obstructing the reporting of public safety risks or fundamental rights violations are null and void.
CHAPTER VII: GENERAL PROVISIONS & SANCTIONS
Art. 7.
1. Human Governance: Corporate entities developing High-Risk AI systems must be governed by a board of directors composed of natural persons exercising independent judgment. It is prohibited to delegate strategic decision-making authority to AI systems (AI-CEOs, algorithmic governance). Board decisions must result from deliberation among natural persons, not from the routine ratification of AI recommendations.
2. Supremacy of Law: No AI system, regardless of its level of intelligence (including AGI), may possess legal personhood. Responsibility always rests with the human or corporate entity deploying the system.
3. Sanctions: Violation of this Act shall be punishable by administrative fines of up to 30,000,000 EUR or up to 6% of the enterprise's total worldwide annual turnover, whichever is higher. This does not preclude criminal liability under national laws.
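Illustrative Note (non-normative): for an enterprise with an assumed total worldwide annual turnover of 1,000,000,000 EUR, the turnover-based cap is 6% × 1,000,000,000 EUR = 60,000,000 EUR; since this exceeds the fixed cap of 30,000,000 EUR, the maximum fine would be 60,000,000 EUR.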
EXPLANATORY MEMORANDUM
1. THE PROBLEM
Humanity stands at a crossroads. The unregulated development of Artificial Intelligence, driven primarily by corporate profit, has begun to threaten fundamental human rights. Citizens are profiled and fired by algorithms. Children form emotional bonds with machines, eroding their capacity for empathy. The information space is poisoned by unmarked bots and subliminal advertising, while creators are stripped of their life's work under the guise of "machine learning." Furthermore, the prospect of autonomous weapons and mass biometric surveillance threatens the very fabric of free societies. The legal system lags behind, leaving citizens defenseless against automated power.
2. THE OBJECTIVE
The goal of this Act is to establish the "Iron Rules" of the digital age. We do not fight progress; we civilize it. We enforce the principle that in human matters, a human decides, and the machine is merely a tool. We ban the algorithm's license to kill and the corporation's license to spy. We protect workers from digital serfdom, creators from theft, and whistleblowers from silence. We ensure that in critical moments—in a hospital, a court, or on a battlefield—a human being is always responsible for the final decision.
3. EXPECTED IMPACT
This Act restores the balance of power. It empowers workers against digital surveillance, protects children from predatory algorithms, and gives citizens control over their own data. By imposing strict liability and significant sanctions, we force the tech industry to build safety and ethics into their systems by design, not as an afterthought. It ensures that regardless of how powerful AI becomes, it remains a tool subject to human law, not a new entity above it.