Artificial Intelligence ("AI") based systems, software and devices provide new and valuable solutions to meet needs and address challenges in a variety of fields, such as smart homes, smart cities, the industrial sector, healthcare and crime prevention. AI applications can be a useful tool for decision making, in particular for supporting evidence-based and inclusive policies.
However, as with other technological innovations, these applications may have adverse consequences for individuals and society.
In order to prevent this, it is important that AI development and use respect the rights to privacy and data protection (Article 8 of the European Convention on Human Rights), thereby enhancing human rights and fundamental freedoms.
These Guidelines provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine human dignity or the human rights and fundamental freedoms of every individual, in particular with regard to the right to data protection.
They are based on a report by Alessandro Mantelero, Associate Professor of Private Law at the Polytechnic University of Turin (Politecnico di Torino), which is also published here.
GUIDELINES ON ARTIFICIAL INTELLIGENCE AND DATA PROTECTION

REPORT ON ARTIFICIAL INTELLIGENCE
The State of the Art
Challenges and Possible Remedies
References