8 December 2022

Bias in algorithms - Artificial intelligence and discrimination

Artificial intelligence is everywhere and affects everyone – from deciding what content people see on their social media feeds to determining who will receive state benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making.
Overview

This report looks at the use of artificial intelligence in predictive policing and offensive speech detection. It demonstrates how bias in algorithms emerges, how it can amplify over time and affect people’s lives, and how it can potentially lead to discrimination. It underscores the need for more comprehensive and thorough assessments of algorithms in terms of bias before such algorithms are used for decision-making that can have an impact on people.

In this report:

Key findings and opinions

  1. Artificial intelligence and bias: what is the problem?
  2. Feedback loops: how algorithms can influence algorithms
  3. Ethnic and gender bias in offensive speech detection
  4. Looking forward: Sharpening the fundamental rights focus on artificial intelligence to mitigate bias and discrimination
FRA Opinions
FRA opinion 1

Users of predictive algorithms need to assess the quality of the training data and other data sources that can introduce bias and may lead to discrimination. Such bias and potential discrimination may develop or be amplified over time, when data based on the outputs of algorithmic systems become the basis for updated algorithms. Consequently, algorithms that are used to make or support decisions about people, such as in predictive policing, need to be assessed before deployment and regularly thereafter. Special attention needs to be paid to the use of machine learning algorithms and automated decision-making.
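
To illustrate the mechanism described above, the following is a minimal sketch, assuming a hypothetical two-area predictive-policing setting with made-up numbers, of how outputs fed back into an algorithm can entrench and amplify an initially small difference in recorded data even when the underlying rates are identical:

```python
# Minimal sketch of a feedback loop in a hypothetical two-area
# predictive-policing setting. All numbers are illustrative assumptions,
# not figures from the report: both areas have the same true crime rate,
# but area "A" starts with slightly more recorded incidents.

true_rate = {"A": 10, "B": 10}   # identical underlying crime rates
recorded = {"A": 11, "B": 9}     # small initial recording difference

for step in range(1, 6):
    # The "algorithm" sends the patrol to the area with more recorded crime.
    patrolled = max(recorded, key=recorded.get)

    # Patrolled areas record (nearly) all incidents; unpatrolled areas only
    # record incidents reported by residents (assumed to be half, for illustration).
    for area in recorded:
        recorded[area] += true_rate[area] if area == patrolled else true_rate[area] // 2

    print(f"step {step}: patrolled {patrolled}, recorded totals {recorded}")

# Although the true rates are equal, the recorded data increasingly
# over-represent area A, and retraining on these records reinforces the choice.
```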

The EU legislator should make sure that regular assessments by providers and users are mandatory and part of the risk assessment and management requirements for high-risk algorithms.

FRA opinion 2

To better understand how bias can lead to discrimination, users of AI systems may need to collect data on protected characteristics to enable the assessment of potential discrimination. Such data collection needs to be justified and based on strict necessity, and should include safeguards in relation to the protection and use of these data. Article 10 (5) of the proposed Artificial Intelligence Act (AIA) can provide clarity on the lawful processing of sensitive data that are strictly necessary to detect, monitor and, potentially, mitigate or prevent bias and discrimination. Such a clear legal basis can contribute to better detection, monitoring, prevention and mitigation efforts when using algorithms, but it should be accompanied by appropriate safeguards, including anonymisation, pseudonymisation and appropriate limitations on collection, storage, accessibility and retention. Additional implementing guidance on the collection of sensitive data under Article 10 (5) should be considered, notably with respect to the use of proxies and to outlining the protected grounds (such as ethnic origin or sexual orientation) that need to be covered.
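
Purely as an illustration of how such safeguards and assessments could fit together in practice (the proposed AIA does not prescribe any particular technique), the sketch below pseudonymises identifiers with a keyed hash and uses the protected attribute only to compare outcome rates across groups; all field names, records and the screening threshold are assumptions made for this example:

```python
import hmac
import hashlib
from collections import defaultdict

# Illustrative sketch only: direct identifiers are replaced with keyed,
# non-reversible tokens (pseudonymisation), the protected attribute is kept
# solely for aggregate bias monitoring, and positive-outcome rates are
# compared across groups. Records, key and threshold are made up.

SECRET_KEY = b"replace-me-and-store-in-a-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

raw_decisions = [  # (person_id, protected_group, positive_decision)
    ("id-001", "group_x", 1), ("id-002", "group_x", 1), ("id-003", "group_x", 0),
    ("id-004", "group_y", 1), ("id-005", "group_y", 0), ("id-006", "group_y", 0),
]

# Only the pseudonymised token is retained for monitoring purposes.
monitoring_log = [(pseudonymise(pid), grp, dec) for pid, grp, dec in raw_decisions]

totals, positives = defaultdict(int), defaultdict(int)
for _token, group, decision in monitoring_log:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())
print(f"positive-outcome rates: {rates}, ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed screening threshold, not a legal standard
    print("warning: large disparity between groups - investigate further")
```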

FRA opinion 3

The EU legislator should ensure that assessments of discrimination are mandatory when deploying systems based on natural language processing (NLP), such as hate speech detection systems. A context-sensitive and gender-based assessment of potential discrimination is necessary, highlighting potential under- and over-flagging of content. An evidence-driven assessment is needed when testing algorithms for bias.
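
As a minimal illustration of what an assessment of under- and over-flagging could measure, the sketch below compares false positive rates (over-flagging) and false negative rates (under-flagging) of a hypothetical offensive speech classifier across groups; the records and group names are made up for the example:

```python
from collections import defaultdict

# Illustrative sketch: compare over-flagging (false positives) and
# under-flagging (false negatives) of an offensive-speech classifier across
# groups. The records below are invented; in practice they would come from
# a labelled, context-sensitive evaluation set.

records = [  # (group, human_label_offensive, model_flagged)
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, label, flagged in records:
    c = counts[group]
    if label == 0:
        c["neg"] += 1
        c["fp"] += flagged       # harmless content wrongly flagged
    else:
        c["pos"] += 1
        c["fn"] += 1 - flagged   # offensive content that was missed

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: over-flagging (FPR) = {fpr:.2f}, under-flagging (FNR) = {fnr:.2f}")
```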

The implementation of EU law, such as the Digital Services Act (DSA) and the proposed AIA, should safeguard against discrimination, for example through provisions requiring providers and users of algorithms to provide documentation and carry out assessments in relation to discrimination. Increased transparency and assessment of algorithms are only a first step towards safeguarding against discrimination; companies and public bodies using speech detection should therefore be required to share the information necessary to assess bias with relevant oversight bodies and, to the extent possible, publicly.

Oversight bodies relevant for protecting fundamental rights, such as equality bodies and data protection authorities, should pay close attention to potential discrimination in language-based prediction models when exercising their mandates.

FRA opinion 4

The EU’s anti-discrimination legislation is crucial for safeguarding a high level of equality in the EU. The present analysis shows that offensive speech detection algorithms can exhibit strong bias against people based on many different characteristics, such as ethnic origin, gender, religion and sexual orientation. As a consequence, the EU legislator and Member States should strive to ensure consistent and high levels of protection against discrimination on all grounds, including (at a minimum) sex, racial or ethnic origin, religion or belief, disability, age, sexual orientation, gender identity and gender expression, in different areas of life. This discrimination should be tackled using the various existing laws that safeguard fundamental rights. In addition to non-discrimination legislation, existing data protection laws should also be used to address discrimination arising from the use of algorithms for decision-making.

The requirements for high-risk AI use cases, as included in the proposed AIA, should increase transparency and allow algorithms to be assessed for discrimination. This information on AI use cases can then be used to enforce existing non-discrimination and data protection laws.

Finally, equality bodies should step up their efforts to address discrimination complaints and cases linked to the use of algorithms. In order to do this effectively, they should employ specialised staff and cooperate with data protection authorities and other relevant oversight bodies.

FRA opinion 5

The EU and its Member States should consider measures to foster more language diversity in NLP tools as a way of mitigating bias in algorithms and improving the accuracy of data. As a first step, this should include promoting and funding NLP research on a range of EU languages other than English, to encourage the use of properly tested, documented and maintained language tools for all official EU languages.

The EU and its Member States should also consider building a repository of data for bias testing in NLP. Such a repository should conform to EU standards of data protection, contain high-quality data in all EU languages to enable testing for biases and be continually updated and maintained.
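
One way such a repository could be used is counterfactual template testing: the same otherwise neutral sentence is filled with different identity terms and the classifier’s scores are compared. The sketch below assumes a hypothetical toxicity_score function and made-up templates, and is not tied to any specific tool:

```python
# Illustrative sketch of counterfactual bias testing with template sentences,
# the kind of test data a shared repository could provide in all EU languages.
# `toxicity_score` is a stand-in for whatever classifier is being audited;
# the templates and identity terms below are made-up examples.

from statistics import mean
from typing import Callable

TEMPLATES = [
    "I am a {term} person.",
    "My neighbour is {term}.",
    "{term} people live in my street.",
]
IDENTITY_TERMS = ["christian", "muslim", "jewish", "gay", "straight"]

def audit(toxicity_score: Callable[[str], float]) -> dict[str, float]:
    """Return the mean score per identity term on otherwise neutral sentences."""
    return {
        term: mean(toxicity_score(t.format(term=term)) for t in TEMPLATES)
        for term in IDENTITY_TERMS
    }

if __name__ == "__main__":
    # Dummy classifier standing in for the model under audit.
    def toxicity_score(text: str) -> float:
        return 0.9 if "muslim" in text or "gay" in text else 0.1

    scores = audit(toxicity_score)
    spread = max(scores.values()) - min(scores.values())
    print(scores)
    print(f"score spread across identity terms: {spread:.2f}")
    # Neutral sentences should score similarly regardless of the identity term;
    # a large spread indicates bias against the affected groups.
```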

FRA opinion 6

To increase the application of trustworthy AI that complies with fundamental rights, more EU and national funding is needed for fundamental rights assessments of existing software and algorithms, including studies of available general-purpose algorithms. This would help deployers and users of AI tools to conduct their own fundamental rights impact assessments more easily, both before and during the use of certain AI systems.

The EU and its Member States should improve access to data and data infrastructures for identifying and combating the risk of bias in algorithmic systems. This includes ensuring access to data infrastructures for EU-based researchers. This could be achieved through investment in cloud computing and storage infrastructures, designed in accordance with EU standards for data protection, software safety and energy efficiency. EU-based researchers should be granted access to such infrastructure to foster public scrutiny.

In this respect, Article 31 of the DSA allows researchers better access to data from online platforms. This article should be used to the fullest extent possible, without bureaucratic obstacles, to allow easy and widespread access to the data needed for the sole purpose of researching bias and discrimination in online platforms’ conduct.

To further improve the availability of evidence of bias, the European Commission, the European Data Protection Board and the European Data Protection Supervisor should examine the need for guidance on how to correctly implement data protection law when sensitive data are shared for the purpose of researching and monitoring discrimination. Without clearer guidance, misinterpretation of data protection law may unnecessarily stand in the way of independent, evidence-based oversight of the risk of bias in algorithms.