Assessing high-risk artificial intelligence

Project Status
Ongoing
Project start date
January 2023

What?

Legislation to regulate artificial intelligence (AI) is developing at a fast pace in the European Union. In April 2021, the European Commission proposed a regulation governing the use of artificial intelligence (the AI Act). Among other provisions, the proposed law defines a list of “high-risk AI systems”, such as the use of AI for recruitment purposes. High-risk AI systems are subject to certain requirements, including assessments and documentation relevant to the protection of fundamental rights. In addition, the Council of Europe started negotiations on an international (framework) Convention on AI in April 2022.

Why?

FRA’s research on AI has shown that developers and users of AI need clear guidance on how to assess AI in relation to fundamental rights. Since fundamental rights concerns vary according to the purpose and area in which AI is used, such guidance needs to consider the specificities of different use cases. For example, the use of algorithms for recruitment differs from their use for granting access to public services or for assessing students in education. The data needed to assess such systems depend on the pre-identified groups at risk and on the availability of data on protected characteristics, such as ethnic origin, gender or disability. The project will also address the data needs for such fundamental rights assessments and documentation, as illustrated in the sketch below.
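
To make concrete what such a data-dependent assessment can involve, the following minimal sketch computes group-level selection rates and a disparate impact ratio for a hypothetical AI recruitment system. All group names and figures are invented for illustration; this is not FRA data and not a prescribed assessment methodology, only one common statistical check that depends on the availability of data on protected characteristics.

```python
# Illustrative sketch only: group-level selection rates and a disparate
# impact ratio for a hypothetical AI recruitment system. The records below
# are invented; a real fundamental rights assessment would use documented
# outcomes and the pre-identified groups at risk.
from collections import defaultdict

# (protected_characteristic_group, shortlisted_by_ai) pairs -- hypothetical
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, shortlisted in decisions:
    totals[group] += 1
    if shortlisted:
        selected[group] += 1

# Selection rate per group: share of applicants the system shortlisted.
rates = {group: selected[group] / totals[group] for group in totals}
for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.2f}")

# Disparate impact ratio: lowest group rate divided by highest group rate.
# Ratios well below 1.0 flag a potential indirect discrimination risk that
# would warrant closer fundamental rights scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio = {ratio:.2f}")
```

A check of this kind is only possible where data on protected characteristics are available for the groups concerned, which is precisely the data-availability question the project examines.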

How?

This project will provide empirical analysis and guidance on how to assess high-risk AI in relation to fundamental rights. It will do so by focusing on selected use cases and combining desk research with fieldwork.
