20 September 2018

In Brief - Big data, algorithms and discrimination

With enormous volumes of data generated every day, more and more decisions are based on data analysis and algorithms. This can bring welcome benefits, such as consistency and objectivity, but algorithms also entail great risks. A FRA focus paper looks at how the use of automation in decision making can result in, or exacerbate, discrimination.

Put simply, algorithms are sequences of commands that allow a computer to take inputs and produce outputs. Using them can speed up processes and produce more consistent results. But risks abound.
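As a purely illustrative sketch (the function, figures and threshold below are hypothetical and not drawn from the paper), an algorithm in this sense can be as simple as a rule that turns applicant data into a decision:

```python
# Hypothetical example: a minimal decision "algorithm" that maps inputs to an output.
# The scoring rule and the threshold are invented for illustration only.

def score_applicant(income: float, years_employed: int) -> str:
    """Turn two input values into an approve/reject output."""
    score = 0.7 * income / 1000 + 0.3 * years_employed
    return "approve" if score >= 25 else "reject"

print(score_applicant(income=40_000, years_employed=2))  # approve
print(score_applicant(income=20_000, years_employed=1))  # reject
```

Applied consistently to thousands of cases, a rule like this is fast and uniform, but it also reproduces whatever flaws are built into its data and its logic.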

Making algorithms fair and non-discriminatory is a daunting exercise. But several steps can help move us in the right direction. These include:

  • checking the quality of the data used to build algorithms, to avoid faulty algorithm ‘training’ (see the first sketch after this list);
  • promoting transparency – being open about the data and code used to build the algorithm, as well as the logic underlying it, and providing meaningful explanations of how it is used. Among other things, this will help individuals seeking to challenge data-based decisions to pursue their claims;
  • carrying out impact assessments that focus on the implications for fundamental rights, including whether the algorithm may discriminate on protected grounds, and examining how proxy information can produce biased results (see the second sketch after this list);
  • involving experts in oversight: to be effective, reviews need to bring together statisticians, lawyers, social scientists, computer scientists, mathematicians and experts in the subject at issue.
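To make the first step concrete, here is a minimal, hypothetical sketch of one basic data-quality check: comparing how groups are represented, and how recorded outcomes are distributed across them, in the data used to train an algorithm. The column names and figures are invented for illustration.

```python
# Hypothetical data-quality check: are all groups represented in the training
# data, and do recorded outcomes differ sharply between them? Skewed or
# incomplete data here would lead to faulty algorithm 'training'.
import pandas as pd

training_data = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "M", "M", "M"],
    "approved": [0,    0,   1,   1,   1,   0,   1,   1],
})

# Share of records and approval rate per group.
summary = training_data.groupby("gender")["approved"].agg(["count", "mean"])
summary["share_of_data"] = summary["count"] / len(training_data)
print(summary)
# A group that is under-represented, or approved far less often in the
# historical data, is a warning sign to investigate before any training.
```

The point about proxy information can be illustrated in a similarly simple, entirely hypothetical way: even when a protected attribute is excluded from a model, a correlated feature such as a postal code can stand in for it, so measuring that association is part of an impact assessment.

```python
# Hypothetical proxy check: how strongly does a 'neutral' feature predict a
# protected attribute? If each value maps almost entirely to one group, the
# feature can act as a proxy and let a model discriminate indirectly.
import pandas as pd

data = pd.DataFrame({
    "postcode":        ["A", "A", "A", "B", "B", "B", "A", "B"],
    "ethnic_minority": [1,    1,   1,   0,   0,   0,   1,   0],
})

# Cross-tabulate the feature against the protected attribute.
print(pd.crosstab(data["postcode"], data["ethnic_minority"], normalize="index"))
```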
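Checks of this kind are only a starting point; the focus paper stresses that they need to sit within broader fundamental rights impact assessments and expert oversight.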