The project identifies concrete examples of fundamental rights challenges arising from the use of algorithms for decision-making, including machine learning and artificial intelligence (AI). It aims to contribute to the development of laws, policies, guidelines and recommendations on AI from a fundamental rights perspective.
Rapid advances in new technologies have demonstrated the need to identify and raise awareness about emerging fundamental rights challenges. Such challenges include infringements of privacy, data protection and equality, stemming from the growing use of big data and algorithms.
Public administrations and businesses using these technologies or engaging in digitally innovative projects need concrete solutions to ensure fundamental rights compliance, as well as clear guidelines and recommendations for processing and using data. In response, several initiatives by EU institutions and the Council of Europe are already under way to safeguard fundamental rights as technology advances.
Among these developments are the European Parliament’s resolutions on AI and robotics, and on big data. The European Commission has also published a communication on AI for Europe and set up a high-level expert group on AI, to which FRA was assigned. The group, whose mandate ended in 2020, published ethics guidelines as well as policy and investment recommendations. In April 2021, the European Commission published a proposal for a regulation laying down harmonised rules on artificial intelligence and a communication on fostering a European approach to artificial intelligence.
The Council of Europe is also working on AI and human rights, in particular on possible standard-setting instruments covering AI in new digital technologies and services. As an active participant in these developments, FRA aims to bring fundamental rights more strongly into the development of new technologies and to provide evidence for developing and implementing related policies.
FRA’s work in the area of AI and big data began in 2017. In February 2018, it held an expert meeting bringing together participants from academia, business, NGOs, policy making and other relevant fields. This was followed in May 2018 by a focus paper on the discriminatory potential of algorithms.
Work continued in 2019, with FRA participating in several conferences on the development of AI policies to ensure that fundamental rights feature in all relevant discussions and processes. A second focus paper, on data quality in artificial intelligence, was published in June 2019, followed in November 2019 by a paper on facial recognition technology.
All of these activities form part of a wider project on the impact of AI on fundamental rights in the EU, running from 2019 to 2021. The project’s main component assessed the fundamental rights implications of new technologies, based on research in Estonia, Finland, France, the Netherlands and Spain across policy areas such as public administration, healthcare, law enforcement and retailing. Qualitative interviews and case studies were carried out in these five EU Member States with representatives of businesses and local authorities. The main results were published in December 2020 in the report Getting the future right: Artificial intelligence and fundamental rights. The report was launched at a conference organised in cooperation with the German Ministry of Justice and Consumer Protection in the framework of Germany’s presidency of the Council of the EU. Recordings of the event are available online.
As an additional component of the project, FRA is further analysing selected issues linked to bias and algorithms, using computer simulations and algorithm analysis to examine feedback loops and bias in speech detection.
Results of this analysis are expected in 2022.
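To illustrate the kind of feedback-loop dynamic referred to above, the sketch below is a minimal, purely hypothetical Python example, not FRA’s actual simulation code: it assumes two areas with identical true incident rates, an allocation rule that sends most checks to the area with the most recorded incidents, and records that are only generated where checks take place.

```python
# Hypothetical illustration of a feedback loop in an allocation algorithm.
# Not FRA's simulation code; all names and parameters are invented for this sketch.

def simulate_feedback_loop(true_rates, rounds=20, checks_per_round=100):
    """Two areas with identical true incident rates. Each round, the area with
    more recorded incidents so far receives 90% of the checks, and incidents
    are recorded only in proportion to the checks carried out in an area."""
    recorded = [1.0, 1.0]  # small seed counts so the first round is not decided on zeros
    for _ in range(rounds):
        favoured = 0 if recorded[0] >= recorded[1] else 1
        checks = [0.1 * checks_per_round, 0.1 * checks_per_round]
        checks[favoured] = 0.9 * checks_per_round
        for area in range(2):
            # Incidents only come to light where checks actually happen.
            recorded[area] += true_rates[area] * checks[area] / checks_per_round
    return recorded


if __name__ == "__main__":
    # Identical true rates, yet the recorded data end up heavily skewed towards
    # whichever area happened to be checked more at the start.
    print(simulate_feedback_loop(true_rates=[0.5, 0.5]))  # roughly [10.0, 2.0]
```

In this toy setting, the skew in the recorded data is driven entirely by where checks were directed, not by any difference in the underlying rates, which is the kind of effect that analyses of feedback loops typically aim to surface.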