4 possible ways to avoid big data bias

A new focus paper from the EU Agency for Fundamental Rights (FRA) outlines the potential for discrimination in using big data for automated decision making. It also suggests potential ways of minimising this risk.

Rapid technological advances have led to vast amounts of data being amassed. Algorithms based on machine learning and artificial intelligence increasingly harness and analyse such data, enabling automated decision making and faster data processing.

FRA’s focus paper, #BigData: Discrimination in data-supported decision making, begins to explore the fundamental rights impact of developments in big data and how big data is being used. It is a prelude to wider research in this area.

It acknowledges how big data can deliver better, more personalised services. Big data can also contribute to more informed and objective decisions, minimising existing prejudices that may arise when humans make decisions.

However, the paper also points to serious misgivings that need addressing.

There is a potential for in-built bias that leads to discrimination in applications and services, for example when calculating insurance premiums or assessing job applications. Such bias can stem from the assumptions underlying an algorithm or from the biased datasets used to build it. Poor, incomplete, incorrect or outdated data may further reinforce it. In addition, it may not be possible to generalise or predict outcomes for one group of people based on information about another group. For instance, ethnic minorities may be less likely to be called to interview because the algorithm was trained on data in which ethnic minorities performed worse.
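
To make that mechanism concrete, below is a minimal sketch in Python using entirely synthetic data and a deliberately naive classifier; both are illustrative assumptions rather than anything described in the FRA paper. Group membership is never used as an input, yet a correlated proxy feature (such as a postcode) lets the model reproduce the historical prejudice baked into its training data:

```python
# Illustrative sketch (synthetic data): a model trained on historically
# biased hiring decisions reproduces the bias, even though group
# membership is never an input feature.
import random

random.seed(42)

def make_applicant(group):
    # Both groups have the same true skill distribution.
    skill = random.gauss(0.0, 1.0)
    # Historical decisions were biased: group "B" needed a higher
    # skill level to be invited to interview.
    threshold = 0.0 if group == "A" else 0.8
    invited = skill > threshold
    # A proxy feature (e.g. postcode) correlates with group membership.
    proxy = (1.0 if group == "B" else 0.0) + random.gauss(0.0, 0.3)
    return {"skill": skill, "proxy": proxy, "group": group, "invited": invited}

history = [make_applicant(random.choice("AB")) for _ in range(10_000)]

def mean(rows, key):
    return sum(r[key] for r in rows) / len(rows)

# "Train" a naive nearest-centroid rule: learn the average profile of
# invited and rejected applicants from the biased history.
invited = [r for r in history if r["invited"]]
rejected = [r for r in history if not r["invited"]]
centres = {
    True: (mean(invited, "skill"), mean(invited, "proxy")),
    False: (mean(rejected, "skill"), mean(rejected, "proxy")),
}

def predict(r):
    # Invite whoever is closer to the "invited" centroid.
    d = {label: (r["skill"] - c[0]) ** 2 + (r["proxy"] - c[1]) ** 2
         for label, c in centres.items()}
    return d[True] < d[False]

# Evaluate on fresh applicants: equally skilled members of group B are
# invited less often, because the proxy encodes the historical prejudice.
test = [make_applicant(g) for g in "AB" * 2_000]
for g in "AB":
    rows = [r for r in test if r["group"] == g]
    rate = sum(predict(r) for r in rows) / len(rows)
    print(f"group {g}: predicted invitation rate = {rate:.2f}")
```

Running the sketch prints a noticeably lower predicted invitation rate for group B, even though both groups are generated with identical skill distributions.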

To help improve fundamental rights compliance, the paper gives examples of what could be done:

  1. Being transparent about how algorithms were built so others can detect and rectify discriminatory applications.
  2. Assessing the impact of potential biases and abuses resulting from algorithms (a simple automated check is sketched below).
  3. Assessing the quality of all data collected and used for building algorithms (also illustrated in the sketch below).
  4. Ensuring that how algorithms are built and operate can be meaningfully explained, so that people can challenge data-supported decisions.
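
As a rough illustration of points 2 and 3, the following Python sketch measures disparate impact between groups and runs basic completeness and freshness checks on a dataset before it is used to build an algorithm. The thresholds, field names and the "four-fifths" heuristic are illustrative assumptions drawn from common audit practice, not recommendations from the paper:

```python
# Minimal sketch of points 2 and 3: (a) measure disparate impact of a
# model's decisions between groups and (b) run basic quality checks on
# the data used to build it. Thresholds and fields are illustrative.
from collections import Counter

def disparate_impact(decisions):
    """decisions: list of (group, selected: bool) pairs.
    Returns the ratio of the lowest to the highest selection rate;
    values well below 1.0 indicate one group is selected far less often
    (0.8 is the common 'four-fifths' rule of thumb)."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

def data_quality_report(records, required_fields, max_age_years, current_year):
    """Flags incomplete and outdated records before they reach training."""
    incomplete = sum(any(r.get(f) is None for f in required_fields)
                     for r in records)
    outdated = sum(current_year - r.get("year", current_year) > max_age_years
                   for r in records)
    return {"n": len(records), "incomplete": incomplete, "outdated": outdated}

# Usage with toy data:
ratio, rates = disparate_impact([("A", True), ("A", True), ("A", False),
                                 ("B", True), ("B", False), ("B", False)])
print(f"selection rates: {rates}, impact ratio: {ratio:.2f}")
print(data_quality_report(
    [{"income": 30_000, "year": 2010}, {"income": None, "year": 2017}],
    required_fields=["income"], max_age_years=5, current_year=2018))
```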

The use of data and automated decision making can improve the way decisions are made, as long as the reasoning behind those decisions is fully understood and does not negatively affect fundamental rights.
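
As one hedged illustration of what such understanding could look like in practice, the sketch below uses a transparent linear score that reports each input's contribution alongside the outcome, giving an applicant something concrete to challenge. The weights, features and threshold are purely hypothetical:

```python
# Sketch of a "meaningful explanation" (point 4 above): a transparent
# linear score where each input's contribution to the decision is
# reported with the outcome. All weights and features are hypothetical.
WEIGHTS = {"years_experience": 0.5, "relevant_degree": 1.0, "test_score": 0.02}
THRESHOLD = 2.0

def decide_with_explanation(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "invited": score >= THRESHOLD,
        "score": round(score, 2),
        # Sorted so the decisive factors appear first.
        "why": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

print(decide_with_explanation(
    {"years_experience": 3, "relevant_degree": 1, "test_score": 40}))
# -> invited True, score 3.3, with per-feature contributions listed
```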

Although data protection principles provide some guidance on the use of algorithms, more needs to be considered. This calls for strong collaboration among statisticians, lawyers, social scientists, computer scientists and subject-area experts. In this way, a truly fundamental rights-compliant approach can be developed.