
Artificial Intelligence and Big Data

Highlights

  • Report / Paper / Summary
    4 December 2025
    Artificial intelligence comes with both benefits and risks. Safe AI use that accounts for fundamental rights is thus crucial. While the 2024 EU AI Act was a milestone in this regard, its broad definitions of AI systems and high-risk AI could introduce loopholes for fundamental rights compliance. This report offers an empirical basis for much-needed practical guidance on the Act’s implementation. Based on interviews with AI developers, sellers and users, FRA addresses the challenges of AI use in critical domains such as asylum, education and employment. Our findings help guide the next steps in realising the AI Act’s potential to ensure responsible innovation.
  • Video
    In an increasingly digital world, tech advances affect almost all aspects of our lives and our rights. This FRF theme tackles topics such as regulating digitalisation without stifling innovation or surveillance-based advertising. Notable speakers include Catherine De Bolle, Executive Director at EUROPOL, Daniel Howden, Founder and Director of Lighthouse Reports, Nanna-Louise Linde, Vice-President for European Government Affairs at Microsoft, Alexandria Walden, Global Head of Human Rights at Google, among others.
  • Report / Paper / Summary
    8 December 2022
    Artificial intelligence is everywhere and affects everyone – from deciding what content people see on their social media feeds to determining who will receive state benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making.
  • Report / Paper / Summary
    14 December 2020
    Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.
Products
    The report ‘Assessing high-risk artificial intelligence’ examines the development and use of artificial intelligence (AI) in five areas defined as high-risk under the AI Act: asylum, education, employment, law enforcement and public benefits. This video presents the report’s findings, underlining the need for responsible use of AI, which in turn earns public trust, fuels innovation and drives sustainable technological progress.
    Algorithms in predictive policing can lead to discrimination, as FRA's new bias in AI report reveals. Watch the clip to find out more.
    In this vlog, FRA Director Michael O'Flaherty talks about artificial intelligence and algorithms. While AI can be a powerful force for good, he points out that humans must supervise the application of AI very closely and that every possible application needs ongoing testing. On 8 December, FRA is publishing a new report on bias in algorithms.
    Automation and AI have radically transformed how we work, live and play. In this video, the Director of the EU Agency for Fundamental Rights, Michael O'Flaherty, discusses the implications of AI for our most basic human rights.
    Artificial intelligence (AI) already plays a role in many decisions that affect our daily lives, from deciding what unemployment benefits someone gets to where a burglary is likely to take place. But we need to make sure to fully uphold fundamental rights standards when using AI. Drawing on the ‘Getting the future right – Artificial intelligence and fundamental rights’ report, FRA explores the potential benefits and possible errors that can occur, focusing on four core areas – social benefits, predictive policing, health services and targeted advertising.
    29 January 2021
    The FRA report on artificial intelligence and fundamental rights presents concrete examples of how companies and public administrations in the EU are using, or trying to use, artificial intelligence. This summary presents the report’s key findings, which can inform policymaking at both EU and national level on the use of AI tools in compliance with human and fundamental rights.
    Artificial intelligence (AI) already plays a role in many decisions that affect our daily lives, from deciding what unemployment benefits someone gets to where a burglary is likely to take place. But we need to make sure to fully uphold fundamental rights standards when using AI. Drawing on the ‘Getting the future right – Artificial intelligence and fundamental rights’ report, FRA presents a number of key considerations to help businesses and administrations respect fundamental rights when using AI.
    This is a recording from the morning session of the high-level virtual event "Doing Artificial Intelligence the European way" which took place on 14 December 2020.
    This is a recording from the afternoon session of the high-level virtual event "Doing Artificial Intelligence the European way" which took place on 14 December 2020.
    Artificial intelligence is here. It’s not going away. It can be a force for good, but it needs to be watched very carefully in terms of respect for our fundamental rights. The EU Fundamental Rights Agency is deeply committed to this work. Our ambition is not just to ensure that AI respects our rights, but also that it protects and promotes them.
    Will AI revolutionise the delivery of our public services? And what’s the right balance? How is the private sector using AI to automate decisions, and what implications might that have? Is some form of binding rules necessary to monitor and regulate the use of AI technology, and what should these rules look like?
    How do we embrace progress while protecting our fundamental rights? As data-driven decision-making increasingly touches our daily lives, what does this mean for our fundamental rights? A step into the dark? Or the next giant leap? The time to answer these questions is here and now. Let’s seize the opportunities, but understand the challenges. Let’s make AI work for everyone in Europe… and get the future right.
    Slovenian version of the report ‘Getting the future right – Artificial intelligence and fundamental rights’ now available (20 December 2024).
    In the latest edition of his video blog, FRA Director Michael O'Flaherty speaks about the human rights challenges, but also the opportunities, that come along with the development of artificial intelligence technology.
    27 November 2019
    Facial recognition technology makes it possible to compare digital facial images to determine whether they show the same person. Comparing footage from video cameras against images held in databases is known as live facial recognition technology. Only a few national law enforcement authorities in the EU currently use such technology, but several are testing its potential. This focus paper therefore examines the fundamental rights implications of live facial recognition technology, focusing on its use for law enforcement and border management purposes.
    As part of the background research for the Agency’s project on ‘Artificial intelligence (AI), Big Data and Fundamental Rights’, FRA has collected information on AI-related policy initiatives in EU Member States and beyond in the period 2016-2020. The collection currently includes about 350 initiatives.
    11 June 2019
    Algorithms used in machine learning systems and artificial intelligence (AI) can only be as good as the data used for their development. High-quality data are essential for high-quality algorithms. Yet the call for high-quality data in discussions around AI often comes without further specification or guidance as to what this actually means.