27 November 2019

Facial recognition technology: fundamental rights considerations in the context of law enforcement

Facial recognition technology (FRT) makes it possible to compare digital facial images to determine whether they are of the same person. Comparing live camera (CCTV) footage with images held in databases is referred to as ‘live facial recognition technology’. Few national law enforcement authorities in the EU currently use such technology, but several are testing its potential. This paper therefore looks at the fundamental rights implications of relying on live FRT, focusing on its use for law enforcement and border-management purposes.
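As a minimal sketch of the comparison step described above: such systems typically reduce each face image to a numeric template (embedding) and declare a match when the similarity between two templates exceeds a threshold. The embeddings, the 128-dimension size and the 0.6 threshold below are illustrative placeholders, not values from the paper or any real system.

```python
# Illustrative sketch only: real systems derive templates from face images with
# a trained model; here random vectors stand in for those templates.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_same_person(probe: np.ndarray, reference: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Declare a match if similarity exceeds a configurable threshold.

    Lowering the threshold flags more true matches but also more false ones;
    raising it does the opposite.
    """
    return cosine_similarity(probe, reference) >= threshold


# Placeholder 128-dimensional templates standing in for real face embeddings.
probe_template = np.random.default_rng(0).normal(size=128)
watchlist_template = np.random.default_rng(1).normal(size=128)
print(is_same_person(probe_template, watchlist_template))
```

In a live deployment this comparison runs continuously against every face passing the camera, which is why the choice of threshold directly shapes how many people are wrongly flagged.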

EU law recognises people’s facial images, a form of biometric data, as ‘sensitive data’. Yet such images are also quite easy to capture in public places. Although the accuracy of matches is improving, the risk of errors remains real, particularly for certain minority groups.
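To see why even low error rates matter at the scale of live deployment, here is a back-of-the-envelope calculation; the crowd size, number of watchlisted people present and error rates are purely hypothetical illustrations, not figures from the paper.

```python
# Illustrative arithmetic only: all numbers below are hypothetical.
daily_passers_by = 100_000          # faces screened by a live system per day
people_on_watchlist_present = 10    # genuine watchlist matches in that crowd

false_match_rate = 0.001            # 0.1% of non-matches wrongly flagged
true_match_rate = 0.90              # 90% of real matches correctly flagged

false_alarms = (daily_passers_by - people_on_watchlist_present) * false_match_rate
true_alerts = people_on_watchlist_present * true_match_rate

print(f"False alarms per day: {false_alarms:.0f}")   # ~100
print(f"True alerts per day:  {true_alerts:.0f}")    # ~9
# Most alerts would concern people not on the watchlist, and error rates that
# differ across demographic groups concentrate that burden on those groups.
```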

Moreover, people whose images are captured and processed might not know this is happening, and so cannot challenge possible misuses. The paper outlines and analyses these and other fundamental rights challenges triggered when public authorities deploy live FRT for law enforcement purposes. It also briefly presents steps that can help avoid rights violations.

The paper covers the following topics:

  1. Facial recognition technology and fundamental rights: setting the scene
  2. Facial images as a unique biometric identifier in EU law
  3. What is facial recognition technology?
  4. Accuracy of facial recognition technology: assessing the risks of wrong identification
  5. Use of facial recognition technology by public authorities in the EU
  6. Fundamental rights implications of using live facial recognition: general points
  7. Fundamental rights most affected