Artificial intelligence (AI) continues to be high on the policy and political agenda in 2025. The term covers a variety of technologies that mainly use data to make predictions or to generate desired outputs, such as risk scores, images or text.
It is well known that the use of AI carries risks, especially to fundamental rights. AI systems may reveal private information about people and can put vulnerable groups at a further disadvantage. They may also be deployed without their risks being fully understood, which in turn makes it harder to remedy any adverse impacts. Notably, such risks vary depending on the area and context of AI use.
To address such risks and promote the development and use of trustworthy AI, the EU adopted the Artificial Intelligence Act (AI Act) in 2024. As an EU regulation, it is directly applicable in the EU Member States. One of the purposes of the AI Act is to ensure a high level of fundamental rights protection, and several of its provisions serve this aim. This report focuses on the key provisions of the AI Act and how they can be used for effective fundamental rights protection.
In this report:
- Key findings and FRA opinions
- Introduction
- Artificial intelligence systems and their classification as high-risk
- Assessing high-risk artificial intelligence with respect to fundamental rights
- How to assess fundamental rights risks of high-risk artificial intelligence systems
- Conclusions
- Annex: Methodology