As artificial intelligence and automated decision-making systems become embedded in public administration and key private sectors, these guidelines empower equality bodies to identify, prevent and address discrimination, ensuring AI deployment complies with fundamental rights across Europe.
Public administrations across Europe are using artificial intelligence (AI) and automated decision-making (ADM) systems in a wide range of policy areas, including migration, welfare, justice, education, employment, tax, law enforcement and healthcare. Such systems are also deployed in critical areas of the private sector, such as banking and insurance. Although AI and ADM systems present significant risks of discrimination, identifying and mitigating these risks remains challenging. Equality bodies and other national human rights structures therefore have a key role in promoting the fundamental rights-compliant deployment of AI/ADM systems by public sector organisations.

The guidelines aim to equip equality bodies and other national human rights structures, especially in the European Union, to tackle discrimination in AI/ADM systems. They update these bodies on their responsibilities in a changing regulatory environment, including the European Union's AI Act and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; offer recommendations and examples for applying the new regulations; and serve as a resource for assisting and advising national stakeholders, such as policy makers and regulators, on human rights, equality and non-discrimination.
CONTENTS

ABBREVIATIONS
ACKNOWLEDGEMENTS
EXECUTIVE SUMMARY
INTRODUCTION
Context of the guidelines
Aim of the guidelines
Methodology
Structure of the guidelines
PART I
1. GENERAL CONTEXT OF THE AI ACT
2. PROHIBITIONS
2.1. Introduction to prohibited AI practices
2.2. AI systems that manipulate, deceive or exploit vulnerabilities of people
2.3. Social scoring
2.4. Risk assessment of committing a criminal offence
2.5. Scraping to build or expand facial recognition databases
2.6. Emotion recognition
2.7. Biometric categorisation
2.8. Remote biometric identification
3. HIGH-RISK AI SYSTEMS
3.1. Classification of high-risk AI systems
3.2. Amending the list of high-risk use cases
3.3. Risk management system requirements
3.4. Data governance requirements
3.5. Fundamental rights impact assessment (FRIA)
3.6. EU database for high-risk AI systems listed in Annex III
4. TRANSPARENCY REQUIREMENTS FOR AI SYSTEMS
4.1. Context and significance
5. ENFORCEMENT
5.1. Powers of bodies protecting fundamental rights
5.2. Remedies
5.3. Co-operation mechanisms
PART II
6. STANDARDS DIRECTIVES
6.1. General context
6.2. Changes to mandate and resourcing
6.3. Changes to powers
7. THEMATIC FOCUS
7.1. Law enforcement, migration, asylum and border control
7.2. Education
7.3. Employment
7.4. Social security and employment support services
REFERENCES