“Well-developed and tested algorithms can bring a lot of improvements. But without appropriate checks, developers and users run a high risk of negatively impacting people’s lives,” says FRA Director Michael O’Flaherty. “There is no quick fix. But we need a system for assessing and mitigating bias before and while using algorithms to protect people from discrimination.”
For its new report ‘Bias in algorithms – Artificial intelligence and discrimination’, FRA developed two case studies to test algorithms for potential bias: one on predictive policing and one on offensive speech detection.
These results call for a comprehensive assessment of algorithms. FRA thus calls on the EU institutions and EU countries to put systems in place for assessing and mitigating bias before and while algorithms are in use.
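By way of illustration only, the sketch below shows one simple kind of disparity check that such bias testing can involve: comparing a classifier’s false positive rates across demographic groups. The data, group labels, and warning threshold are hypothetical assumptions made for this example; they do not come from the FRA report, which describes its own methodology.

# Hypothetical sketch: measuring group disparity in a classifier's errors.
# The toy data, group labels, and threshold below are illustrative
# assumptions, not data or methods from the FRA report.

from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """Return the false positive rate for each group label."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

if __name__ == "__main__":
    # Toy data: 1 = flagged (e.g. as offensive speech), 0 = not flagged.
    y_true = [0, 0, 0, 0, 0, 0, 1, 1]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
    groups = ["a", "a", "a", "b", "b", "b", "a", "b"]

    rates = false_positive_rate_by_group(y_true, y_pred, groups)
    print(rates)  # e.g. {'a': 0.33..., 'b': 0.66...}

    # A large gap between groups is one simple warning sign of bias.
    if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
        print("Warning: false positive rates differ substantially across groups.")

Run as-is, the script prints the per-group rates and a warning when the gap exceeds the illustrative threshold; a real assessment would rest on validated data and legally grounded criteria rather than a fixed cut-off.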
These findings aim to contribute to ongoing regulatory developments by informing policymakers, human rights practitioners, the tech industry, and the public about the risk of bias in AI.
They are part of FRA’s work on artificial intelligence and big data. Previous research identified pitfalls in the use of AI and called on the EU and Member States to ensure that AI protects all fundamental rights.
For more, please contact: media@fra.europa.eu / Tel.: +43 1 580 30 653