
Artificial intelligence and big data

Highlights

  • Report / Paper / Summary
    8 December 2022
    Artificial intelligence is everywhere and affects everyone – from deciding what content people see on their social media feeds to determining who will receive state benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making.
  • Report / Paper / Summary
    14 December 2020
    Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.
  • Video
    In the latest edition of his video blog, FRA Director Michael O'Flaherty speaks about the human rights challenges, but also the opportunities, that come along with the development of artificial intelligence technology.
  • Report / Paper / Summary
    30 May 2018
    We live in a world of big data, where technological developments in the area of machine learning and artificial intelligence have changed the way we live. Decisions and processes concerning everyday life are increasingly automated, based on data. This affects fundamental rights in various ways. This focus paper specifically deals with discrimination, a fundamental rights area particularly affected by technological developments.
Products
  • Algorithms in predictive policing can lead to discrimination, as FRA's new bias in AI report reveals. Watch the clip to find out more.
  • 8 December 2022
    Artificial intelligence is everywhere and affects everyone – from deciding what content people see on their social media feeds to determining who will receive state benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making.
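    As an illustrative aside (not taken from the FRA report): one simple way to test a prediction-based decision system for bias is to compare how often each group receives a positive decision. The Python sketch below uses made-up records, and the four-fifths rule of thumb used as the warning threshold is an assumption.

    ```python
    # Minimal, illustrative sketch (not FRA's methodology): checking whether an
    # algorithm's positive decisions are distributed very differently across groups.
    # The records and the 0.8 "four-fifths" threshold are hypothetical assumptions.

    from collections import defaultdict

    # Each record: (group label, algorithmic decision: 1 = benefit granted, 0 = refused)
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    def selection_rates(records):
        """Share of positive decisions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    rates = selection_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0

    print(rates)                      # e.g. {'group_a': 0.75, 'group_b': 0.25}
    print(f"disparity ratio: {ratio:.2f}")
    if ratio < 0.8:                   # common rule-of-thumb threshold, assumed here
        print("Warning: large gap in selection rates - review the model and its data.")
    ```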
  • In this vlog, FRA Director Michael O'Flaherty talks about artificial intelligence and algorithms. While AI can be a powerful force for good, he points out that humans must supervise the application of AI very closely and that every possible application needs continuous testing. On 8 December, FRA is publishing a new report on bias in algorithms.
  • Automation and AI have radically transformed how we work, live and play. In this video, the Director of the EU Agency for Fundamental Rights, Michael O'Flaherty, discusses the implications of AI for our most basic human rights.
  • Artificial intelligence (AI) already plays a role in many decisions that affect our daily lives – from deciding what unemployment benefits someone gets to where a burglary is likely to take place. But we need to make sure to fully uphold fundamental rights standards when using AI. Drawing on the ‘Getting the future right – Artificial intelligence and fundamental rights’ report, FRA explores the potential benefits and the possible errors that can occur, focusing on four core areas – social benefits, predictive policing, health services and targeted advertising.
  • 29 January 2021
    FRA’s report on artificial intelligence and fundamental rights gives concrete examples of how companies and public administrations in the EU use, or try to use, artificial intelligence. This summary contains the main findings of the report. These can form the basis for EU and Member State policy efforts to regulate the use of AI tools in line with human rights and fundamental rights.
  • Artificial intelligence (AI) already plays a role in many decisions that affect our daily lives – from deciding what unemployment benefits someone gets to where a burglary is likely to take place. But we need to make sure to fully uphold fundamental rights standards when using AI. Drawing on the ‘Getting the future right – Artificial intelligence and fundamental rights’ report, FRA presents a number of key considerations to help businesses and administrations respect fundamental rights when using AI.
  • This is a recording from the morning session of the high-level virtual event "Doing Artificial Intelligence the European way", which took place on 14 December 2020.
  • This is a recording from the afternoon session of the high-level virtual event "Doing Artificial Intelligence the European way", which took place on 14 December 2020.
  • Artificial intelligence is here. It’s not going away. It can be a force for good, but it needs to be watched very carefully in terms of respect for our fundamental rights. The EU Fundamental Rights Agency is deeply committed to this work. Our ambition is not just to ensure that AI respects our rights, but also that it protects and promotes them. Will AI revolutionise the delivery of our public services? And what’s the right balance? How is the private sector using AI to automate decisions, and what implications might that have? Is some form of binding rules necessary to monitor and regulate the use of AI technology, and what should those rules look like? How do we embrace progress while protecting our fundamental rights? As data-driven decision-making increasingly touches our daily lives, what does this mean for our fundamental rights? A step into the dark? Or the next giant leap? The time to answer these questions is here and now. Let’s seize the opportunities, but understand the challenges. Let’s make AI work for everyone in Europe – and get the future right.
  • 14 December 2020
    Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.
  • In the latest edition of his video blog, FRA Director Michael O'Flaherty speaks about the human rights challenges, but also the opportunities, that come along with the development of artificial intelligence technology.
  • 27 November 2019 (French and German versions available since 01 March 2022)
    Facial recognition technology (FRT) makes it possible to compare digital facial images to determine whether they are of the same person. Comparing footage obtained from video cameras (CCTV) with images in databases is referred to as ‘live facial recognition technology’. Examples of national law enforcement authorities in the EU using such technology are sparse – but several are testing its potential. This paper therefore looks at the fundamental rights implications of relying on live FRT, focusing on its use for law enforcement and border-management purposes.
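    As a simplified, hedged illustration (not drawn from the paper or any specific FRT system): automated face comparison typically reduces each image to a numeric embedding and compares embeddings, for example with cosine similarity against a tuned threshold. In the Python sketch below the embeddings are random placeholders and the 0.6 threshold is an arbitrary assumption.

    ```python
    # Simplified sketch of the comparison step in facial recognition (illustrative only).
    # In a real system the embeddings would come from a trained face-recognition model;
    # here they are random placeholders, and the 0.6 threshold is an arbitrary assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    probe = rng.normal(size=128)                    # embedding of the face seen on CCTV
    gallery = {f"person_{i}": rng.normal(size=128)  # embeddings of faces in the database
               for i in range(5)}

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    THRESHOLD = 0.6  # assumed decision threshold; real systems tune this on test data

    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best_name, best_score = max(scores.items(), key=lambda item: item[1])

    if best_score >= THRESHOLD:
        print(f"Possible match: {best_name} (score {best_score:.2f})")
    else:
        print(f"No match above threshold (best score {best_score:.2f})")
    ```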
  • As part of the background research for the Agency’s project on ‘Artificial intelligence (AI), Big Data and Fundamental Rights’, FRA has collected information on AI-related policy initiatives in EU Member States and beyond in the period 2016-2020. The collection currently includes about 350 initiatives.
  • 11 June 2019
    Algorithms used in machine learning systems and artificial intelligence (AI) can only be as good as the data used for their development. High-quality data are essential for high-quality algorithms. Yet the call for high-quality data in discussions around AI often comes without any further specification or guidance as to what this actually means.
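    To make ‘high quality data’ slightly more concrete, here is a purely illustrative Python sketch (not from the paper) of basic checks that can be run before a dataset is used to develop an algorithm; the toy dataset, column names and thresholds are all invented.

    ```python
    # Illustrative data-quality checks before training a model (not from the FRA paper).
    # The toy dataset, column names and thresholds are all invented for the example.

    import pandas as pd

    data = pd.DataFrame({
        "age":    [34, 51, None, 23, 23, 45],
        "income": [32000, 54000, 41000, None, 28000, 28000],
        "label":  [0, 0, 0, 1, 0, 0],   # e.g. 1 = benefit refused
    })

    # 1. Missing values: incomplete records can bias whoever is under-represented.
    missing_share = data.isna().mean()
    print("share of missing values per column:\n", missing_share)

    # 2. Duplicate rows: accidental duplicates over-weight some cases.
    print("duplicate rows:", int(data.duplicated().sum()))

    # 3. Class balance: a rare outcome class is easy for a model to ignore.
    label_counts = data["label"].value_counts(normalize=True)
    print("label distribution:\n", label_counts)

    if missing_share.max() > 0.2 or label_counts.min() < 0.1:
        print("Data quality warning: document the gaps before training on this data.")
    ```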
  • 30 May 2018
    We live in a world of big data, where technological developments in the area of machine learning and artificial intelligence have changed the way we live. Decisions and processes concerning everyday life are increasingly automated, based on data. This affects fundamental rights in various ways. This focus paper specifically deals with discrimination, a fundamental rights area particularly affected by technological developments.