Use of artificial intelligence in asylum and immigration procedures – fundamental rights implications
What?
FRA’s research will cover AI-powered technologies used to support decision-making in the context of asylum and migration management. It will focus on asylum, visa, residence permit, and return procedures.
Some of these systems influence decision-making and qualify as ‘high-risk’ AI systems under the EU Artificial Intelligence Act. The research will analyse the actual deployment of these technologies, the safeguards in place, and the measures implemented to mitigate potential fundamental rights impacts.
The research will cover those EU Member States which, to FRA’s knowledge, are more advanced in testing, piloting, or deploying AI-powered technologies: Austria, Belgium, Bulgaria, Denmark, Estonia, Germany, Greece, Hungary, Ireland, Latvia, the Netherlands, and Sweden.
This research builds on FRA’s recent report on high-risk AI systems, which provides an empirical basis and guidance for assessing fundamental rights risks in AI systems.
Why?
The project responds to the growing trend among EU Member States to explore AI-driven technologies in asylum and migration management. Some of these technologies are expected to assist national authorities in assessing irregular migration, health, or security threats posed by third-country nationals. However, there is limited knowledge and awareness of what Member States are actually testing, piloting, or deploying.
FRA’s research aims to provide practical guidance to EU Member States to ensure that AI-driven technologies comply with fundamental rights. To this end, the research will be guided primarily by the safeguards embedded in the EU Artificial Intelligence Act, the EU Charter of Fundamental Rights, and EU data protection legislation.
How?
The project will collect information through desk research and interviews with practitioners and other experts in EU Member States. Key stakeholders include representatives of asylum and immigration authorities, statutory public human rights bodies, data protection authorities, legal and technical experts, and relevant technology providers.
Research and interviews will explore the actual use of AI-driven technologies, their classification as high-risk systems, and the measures in place to mitigate fundamental rights risks. The FRA report, along with possible guidance material for national authorities, is scheduled for publication in 2027.