Speech

Doing AI the European way

Speaker
Michael O’Flaherty
FRA Director, Michael O’Flaherty, delivers his opening remarks during the joint FRA / German Presidency of the EU online conference on 'Doing AI the European way: Protecting fundamental rights in an era of artificial intelligence' on 14 December 2020.

Minister Lambrecht, Commissioner Reynders, State Secretary, dear friends,  

Thank you so much for joining us today. We are very grateful that you are giving us your time.

Personally, I would also like to thank the German Presidency of the Council most warmly for the wonderful partnership on this event, and on so many others over the past months.

Dear friends,  

We're in the middle of a very critical period for artificial intelligence in Europe.  

We're well advanced on the pathway towards what I might call ‘taming’ artificial intelligence. There is the upcoming EU regulation. These months have been filled with debate, examination, critique and analysis.  

Much of that debate is marked by a dialectic. On the one hand, artificial intelligence is a powerful force for good that can transform our world in a most positive manner. What’s more, it’s here to stay.  

But on the other side of the dialectic, artificial intelligence poses great risks that need to be managed with extreme care and caution.  

Both the positive and the negative have been thrown into relief by our experience of Covid-19 and our responses to it.

AI has played, and continues to play, a most positive role in the identification and distribution of vaccines.

But on the other hand, some AI applications have been troublesome. Take, for example, the use of predictive AI techniques for grading students who couldn’t attend physical examinations.  

Both the positive and the potentially risky sides of AI have also been demonstrated in individual technologies, such as contact tracing applications.

Now, given this reality – the dialectic, if you will, of AI – I greatly welcome the extent to which the EU and Member States, including Germany, have focused attention on the role of ethics, on the one hand, and human and fundamental rights, on the other, in the regulation and management of artificial intelligence.

That, of course, is where we come in. That is where the Fundamental Rights Agency has a role to play.  

We have taken the human and fundamental rights map, and applied it to the technologies within which artificial intelligence is used, so that we could test the applicability of human and fundamental rights to specific uses.

We’ve also identified the gaps where attention needs to be focused to strengthen oversight on human and fundamental rights. That, of course, is the research we’re presenting today. It is highly applied: a socio-legal examination.  

We focus on some specific use cases of artificial intelligence in the research we publish today. We look at its application in both the private and the public sectors. We assess the fundamental rights sensitivities and sensibilities, and identify the systemic actions that are needed to support fundamental rights compliance.  

For the research we’ll present to you today, there are three fundamental underlying assumptions.  

The first of these is that AI is not magic. AI is not some technology beyond our comprehension, somehow driven by machines. That is simply not the case. AI is human-made, human-fed and human-driven.

The second is that, given that humans are at the very heart of the application and use of AI, it makes sense to claim that the existing human and fundamental rights infrastructure is directly applicable. Indeed, we could apply here the adage we've used now for many years: that human rights are as binding online as offline.

The third of these underlying assumptions is that, since human rights and fundamental rights are directly applicable to the sector and to the applications, then the duty of the state comes centre stage.  

For sure, much of AI is driven by the private sector, but this cannot lead us to avoid the reality that, ultimately, the application of human rights to AI is a duty of the state, the duty-bearer under international human rights and fundamental rights law.

Now, my colleagues will present the findings in just a moment.  

Allow me to close my remarks this morning with just seven very brief conclusions. These are, by nature, horizontal conclusions which apply right across the diversity of AI's applications.

The first of these is that we learned from some 100 interviews that the strongest perceived value of AI, out there in industry, is efficiency. Now, that matters for our discussion, because the strongest value is not efficacy, but efficiency. In a sense, it’s all about speed. This also means that mistakes can happen, and we have to be very careful in monitoring the application of AI.  

Secondly, in terms of monitoring the application for wrongs and for harms, we have to acknowledge that all human rights are engaged, not just this right or that right, but the entire gamut of civil and political, social, economic and cultural rights. In our study, we concentrated on three clusters of rights: privacy rights, the application of non-discrimination guarantees, and the right of access to a remedy.  

The third of my seven conclusions has to do with the fact that even though all human rights are engaged, there is very little awareness of this in the industry. At least in the private sector, we see a preoccupation with privacy, which is important, but not enough. There is a sense that the only human right engaged is privacy. When it comes to non-discrimination, we also saw what I might call naïveté regarding the perceived neutrality of technology.  

Fourth, we conclude that there is a need for mandatory risk assessment by the AI sector across all applications, although the assessment of risk will vary considerably according to the application and the challenge it poses for human well-being.

Fifth of my seven, we strongly recommend empowering the existing human and fundamental rights monitoring bodies, including at the national level, to undertake risk assessments, or at least to monitor the application of risk assessments. Here, I'm referring to the important potential role of national human rights institutions, equality bodies and data protection authorities.

Sixth, we believe it's critically important to do a better job of delivering remedies for violations of fundamental rights involving AI. This area is, for now, little developed, and to the extent that remedies are in place, they are difficult to access and, indeed, to negotiate. Let me recall here that access to remedies is not just about getting justice for the violation of all manner of rights: it is, itself, a fundamental right.

Seventh and finally, friends, if we are to deliver all of these good outcomes, it is imperative that we ensure transparency in the application of artificial intelligence. We need to know the manner of its operation. We need to know the content of algorithms. We need explanations of the pathways by which automated processes reach a decision.

Let me wrap up then, colleagues, by quoting Pope Francis. Just a few weeks ago, he spoke about artificial intelligence and he said, and I quote: ‘AI can make a better world possible, if it is tied to the common good.’  

We, at the Fundamental Rights Agency, believe strongly that that thread, that tie, between AI and the common good, is at least in large part the application of human and fundamental rights.  

It's our challenge, going forward, to make sure that we bring that application and that use to life.

Thank you.