The report discusses the potential implications for fundamental rights and analyses how such rights are taken into account when using or developing AI applications. In so doing, it aims to help ensure that the future EU regulatory framework for AI is firmly grounded in respect for human and fundamental rights.
In this report:
"What you see now is that everyone doing something with machine learning is labelling this as ‘AI’." (Public administration, Netherlands)
"[AI] is mostly used to save time […] when you have to go through a lot of material." (Public administration, Netherlands)
"The most important is to deal with cases more efficiently. It’s about making use of your workforce, the people who handle cases, as effectively as possible." (Public administration, Netherlands)
"[Our use of AI] does not impact [human rights] in any way. In terms of the decision process, it does not matter whether the decision is made by machine or a human." (Interviewee working for public administration, Estonia)
"Internally we can explain the decisions of the machine learning models and we have several means to do that." (Private sector, Estonia)
"If the systems do not have black boxes of information or processes, we already take a step forward in the defence of human rights." (Public administration, Spain)
"We are strongly attached to the idea that AI has to be explainable." (Public administration, France)
"Once all the rights related to data protection are ensured, I do not see how human rights are of relevance here." (Private company, Spain)
"We did not touch the topic because we assume that there are no human rights issues involved: all the activities are within the legal framework, all the activities are compliant with data protection and good practices, and therefore we assume that there are no human rights issues related to these systems." (Public administration, Spain)
"I do not think that we should regulate specific technology like AI. It is sufficient to have general principles and technology-neutral rules." (Private sector, Estonia)
"We were a little anxious when the GDPR was implemented, but in the end it meant managing datasets and access rights […] It is a good reminder that not everything can be or should be done." (Public administration, Finland)
"Actually, I’m concerned that the GDPR might hinder AI research. I’m afraid that some large databases that we have used previously cannot be used for our research anymore." (Private company, Netherlands)
"There is the GDPR but it does not give you specific rules. It gives principles but it comes down to ethical issues and interpretation." (Private company, Estonia)
"There is a huge tension surrounding the GDPR. So we want to do well, but might in fact be worse off, because interpretation of the data then turns out to be impossible." (Public administration, Netherlands)
"If we had to explain the model, we wouldn’t be able to. The model is statistical and not very explainable." (Public administration, France)
"The number of the complaints about data use is miniscule, rather people may have asked to delete some information about them." (Private company, Estonia)
"Yes, we assess the legality of personal data protection and the conformity with their specific legal acts." (Public administration, Estonia)
"If you want the machine not to discriminate on the basis of sex, do not put the variable of sex, as easy as that, or make the examples symmetrical if you notice that sex has certain relevance." (Public administration, Spain)
"For discrimination, it’s complicated because some diseases are more present in certain ethnic groups. Predictions take into account the sexual, ethnic, genetic character. But it is not discriminatory or a violation of human rights." (Private sector, France)
"We try to look into the future. We will automate more and more." (Private company, Estonia)
"The next steps are related to transparency and open data: that is to say, publish not only information in pdf, but also information in reusable formatting so that it could be reused internally and by the private sector." (Public administration, Spain)
"AI is a great thing but we must learn to use it." (Private company, Spain)
"When testing the system, we did not really look at the legal aspects, we looked at whether the system is profitable." (Private company, Estonia)
"There is a risk of having too much trust in the machine." (Public administration, France)
When legislating on AI, the EU and its Member States should rely on robust evidence concerning AI’s impact on fundamental rights to ensure that any restrictions of certain fundamental rights respect the principles of necessity and proportionality.
Relevant safeguards need to be provided for by law to effectively protect against arbitrary interference with fundamental rights and to give legal certainty to both AI developers and users. Voluntary schemes for observing and safeguarding fundamental rights in the development and use of AI can further help mitigate rights violations. In line with the minimum requirements of legal clarity – as a basic principle of the rule of law and a prerequisite for securing fundamental rights – the legislator has to take due care when defining the scope of any such AI law.
Given the variety of technology subsumed under the term AI and the lack of knowledge about the full scope of its potential fundamental rights impact, the legal definition of AI-related terms might need to be assessed on a regular basis.
Impact assessments should draw on established good practice from other fields and be regularly repeated during deployment, where appropriate. These assessments should be conducted in a transparent manner. Their outcomes and recommendations should be in the public domain, to the extent possible. To aid the impact assessment process, companies and public administration should be required to collect the information needed for thoroughly assessing the potential fundamental rights impact.
The EU and Member States should consider targeted actions to support those developing, using or planning to use AI systems, to ensure effective compliance with their fundamental rights impact assessment obligations. Such actions could include funding, guidelines, training or awareness raising. They should particularly – but not exclusively – target the private sector.
The EU and Member States should consider using existing tools, such as checklists or self-evaluation tools, developed at European and international level. These include those developed by the European Commission’s High-Level Expert Group on Artificial Intelligence.
The European Commission and Member States should consider providing funding for targeted research on potentially discriminatory impacts of the use of AI and algorithms. Such research would benefit from adapting established social science methodologies used to identify potential discrimination in different areas – ranging from recruitment to customer profiling.
Building on the results of such research, guidance and tools should be developed to help those using AI detect possible discriminatory outcomes.
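As an illustration, one established measure that such guidance could adapt is the disparate impact ratio used in social-science discrimination testing: the rate of favourable outcomes for one group divided by the rate for a reference group, with 0.8 often cited as a rule-of-thumb threshold in US employment practice. The decision data below are hypothetical.

```python
# Sketch of one widely used discrimination test that such guidance could
# build on: the disparate impact ratio. Decision data are hypothetical.
from collections import Counter

decisions = [  # (group, decision) pairs, e.g. hypothetical loan approvals
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

totals = Counter(group for group, _ in decisions)
positives = Counter(group for group, decision in decisions if decision == 1)

rates = {group: positives[group] / totals[group] for group in totals}
ratio = rates["B"] / rates["A"]

print(f"approval rates: {rates}")              # A: 0.75, B: 0.25
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, far below the 0.8 rule of thumb
```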
There is a high level of uncertainty concerning the meaning of ‘automated decision making’ and the right to human review linked to the use of AI. The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) should therefore consider further clarifying the concepts of ‘automated decision making’ and ‘human review’ where they appear in EU law.
In addition, national data protection bodies should provide practical guidance on how data protection provisions apply to the use of AI. Such guidance could include recommendations and checklists, based on concrete use cases of AI, to support compliance with data protection provisions.
To ensure that available remedies are accessible in practice, the EU legislator and Member States could consider introducing a legal duty for public administration and private companies using AI systems to provide those seeking redress with information about the operation of their AI systems, including how these systems arrive at automated decisions. This obligation would help achieve equality of arms for individuals seeking justice. It would also support the effectiveness of external monitoring and human rights oversight of AI systems (see FRA opinion 3).
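As a sketch of the kind of information such a duty could cover, the snippet below shows the contribution of each feature to a single automated decision under a simple linear scoring model. The model, weights, feature names and threshold are hypothetical; more complex models would require model-agnostic explanation methods.

```python
# Sketch of decision-level information a deployer could disclose: the
# contribution of each feature to one automated decision under a linear
# scoring model. Weights, features and threshold are hypothetical.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
intercept = -0.3
applicant = {"income": 1.4, "debt": 0.9, "years_employed": 2.0}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = intercept + sum(contributions.values())

print(f"decision score: {score:+.2f} (application granted if score > 0)")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```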
In view of the difficulty of explaining complex AI systems, the EU, jointly with the Member States, should consider developing guidelines to support transparency efforts in this area. In so doing, they should draw on the expertise of national human rights bodies and civil society organisations active in this field.
These documents constitute background material for a comparative analysis for the project “Artificial Intelligence, Big Data and Fundamental Rights”. The country research was commissioned under contract D-SE-19-T02. The information and views contained in the documents do not necessarily reflect the views or the official position of the FRA. The documents are made available for transparency and information purposes only and do not constitute legal advice or legal opinion.
The five country research papers are available for download (EN, PDF).