Speech

20th Anniversary of the European Commission for the Efficiency of Justice (CEPEJ)

Speaker
Michael O’Flaherty
FRA Director Michael O'Flaherty delivers his speech during the 20th Anniversary of the European Commission for the Efficiency of Justice (CEPEJ). The event took place in Valletta on 27 June 2022.

Good morning, dear colleagues, Excellencies,

I am very grateful to be here. I appreciate the invitation. We value the cooperation we have had with you now for many years. We congratulate you on the 20th anniversary.

We also appreciate the choice of topic. You were the first, globally, to address the issues of the digitalisation of justice. The tools you have developed, the codes of standards on ethics and on good practice, are being used worldwide, and we deeply appreciate that leadership.

We were speaking, quite appropriately, of the Covid period and how it accelerated the digitalisation of justice in so many different ways. We at the Fundamental Rights Agency, at least with regard to the 27 EU Member States, followed the story very closely indeed and periodically published an analysis of the interplay of Covid and fundamental rights.

We would agree with much of what was said about the positive impact of the digitalisation of justice in that period – the extent to which it preserved the very basis of access to justice, speeded up proceedings, cut costs, and even enhanced protection for vulnerable victims and witnesses.

But at the same time, we observed a number of problems across the 27 EU Member States.

One had to do with access to the necessary skills, which then triggered issues of equality of arms; one side would be better able to use the technology than the other. There were issues of deadlines being missed by inexperienced practitioners. There was the concern to ensure that justice be delivered in public. This concern, of course, triggered some very positive good practice, such as allowing journalists privileged online access to proceedings, including in the criminal context.

There were concerns about the rights of defendants who did not have physical access to a lawyer. Then, worryingly, in a lot of EU Member States there was a lack of guidance about how to use the technology or, at least in some places, plenty of guidance but only at the local level; there were no joined-up national tools in place.

So, there is a lot we can learn from the Covid period going forward, including about which parts of the technology we are going to maintain as standard practice.

Of course, then, looking to what we will do in the future, we have to locate the whole discussion in the bigger frame of artificial intelligence. The application of artificial intelligence in the administration of justice is potentially vast in scope and is subject right now to some quite remarkable experimentation globally.

Just to take the criminal justice context, we see ideas, initiatives, experimentation in the context of predictive policing, prosecution, sentencing and probation.

I am conscious that it is for you, the States and the courts, to make the difficult decisions as to which of these applications is or is not appropriate in the context of respect for human rights.

The role of my agency, on the other hand, is to deliver the applied research that helps you, governments and courts, to take such decisions.

We have been engaged with the broad issues of artificial intelligence and human rights for a number of years now, and we have developed a lot of insight to help us all go forward. We have, for instance, generated an understanding of the environment in which we have to make our choices. Let me give you three elements of that environment that we have detected.

The first is – and this is empirically verifiable – that the primary driver of AI in any context in which it is applied is speed and efficiency, not quality. There is nothing inherently wrong with that, but nevertheless there is a warning light: as we apply AI solutions to speed things up and to make things more efficient, we have to be extremely vigilant that we do not do so at the expense of what we call human rights quality.

The second element we have to keep in mind in terms of the topography for AI is that AI is driven by data, and we need to remember the extent to which our data is flawed. Again, it is empirically verifiable that an astonishing amount of the primary data that feeds the algorithms, and that leads to the outcomes, is wrong. We have researched this in numerous contexts.

Then the third of these topographical dimensions is the self-evident one that the technology does not stay still. It is ever evolving, and this means that our decisions, our analysis, must never stop evolving. We have been working on facial recognition technology for the last 7 or 8 years, and every time we think we are finished, with clear guidance for our Member States, we have to start again because the technology is moving on. That is why, for instance, right now, we are about to publish new findings on facial recognition technology in the context of the role played by feedback loops, something nobody had even heard of back when we started this work years ago.

Finally, in terms of the topography, I want to echo what Ambassador Schneider said just now with regard to the need for strong regulation. I applaud the work of the Council of Europe, the work of CAHAI and CAI, and the direction in which they are leading us. I also acknowledge that the EU itself is making major advances towards regulation. We already have the General Data Protection Regulation, and we now have the proposed AI regulation, which is currently under negotiation.

One important dimension of the current draft of the AI regulation is that it identifies the application of AI for the administration of justice as being high risk. We at the Fundamental Rights Agency think this is a very wise and correct decision, a recognition that every application of the technology in the justice sector – and it can bring enormous good – must be watched with great vigilance.

To conclude my remarks: what does watching with great vigilance mean? I would very briefly propose to you six elements that we derive from law, from ethics and from empirical research in terms of how we watch the application of high-risk technologies, including in the justice sector.

The first is that we must never let go of human oversight. There is no room in a high-risk application for autonomous machine outcomes. There must always be a person.

Second, every application we deploy in a high-risk sector must be subject to an ex-ante human rights compliance check. We need to know in advance what could go wrong and what needs to be watched.

Third, we need strong regulatory oversight systems for such AI uses. You might say that is rather obvious, but from the point of view of human rights oversight, it is not. We see repeatedly how oversight systems engage with only part of the human rights context – privacy, for example. Privacy should not be overlooked, but many other areas of human rights, such as protection from discrimination, can be neglected.

The fourth of my six has to do with what happens when things go wrong. Again, this is typically overlooked. It is the issue of the right to a remedy – an individualised remedy for a person who is wronged by a misapplication of the technology – and it is important to build such rights into whatever regulatory instruments we develop.

The fifth – and this is so central to everything that it should really be more of an overarching principle – is honouring the principle of transparency. We can achieve nothing if we do not have transparency, and here I am particularly referring to the transparency of the AI technologies. We need to know what is in the algorithms, and we need to know what training data was applied. We need to have all the information so that we can make our necessary assessments.

The sixth and final of these principles – again, it is rather obvious, but it is still very challenging – is that as we roll out the digitalisation of justice, we have to invest in the related training. It is typically under-invested in. And there is a dimension to it for which we do not yet have a smart solution: it is not just about training judges, lawyers and court personnel; it is about training litigants, including lay litigants. This would be a fascinating discussion for you going forward – how you inform the general public about the application of AI in the justice context so that you can ensure that justice is done for them.

I want to thank you again for your attention and to assure you that the Fundamental Rights Agency of the European Union is at your service. We will continue to support our Member States as they make these very tough choices on the path forward. We will continue to cooperate very closely with the Council of Europe so that we can go forward in strict complementarity.

Thank you.
