Speech

AI and the law

Speaker
Michael O’Flaherty
FRA Director Michael O'Flaherty delivered a speech at the Vatican Artificial Intelligence Symposium on the Challenge of AI for Human Society and the Idea of the Human Person. The symposium took place in Rome on 21 October 2021.

Thank you for the invitation to this event; I am delighted to be here.

I am also very happy with the topic I was given: to look at the issue of legal personality and regulation.

Let me start with the issue of legal personality by stating at the very outset of our conversation that there is nothing impossible about giving legal personality to an artificial intelligence entity.

The law has proved itself very adaptable over history in addressing exactly this issue: corporations have legal personality, and many other forms of non-human entities have it. In fact, it is so recognizably achievable that the European Parliament back in 2017 anticipated that eventually such a personality might be given.

The question, then, is not whether AI can be given legal personality. It is actually a much more interesting, much deeper question, of whether it is a good idea.

Now, in terms of addressing whether it is a good idea, if you do not mind, I would like to start by talking about a lawn mower.

A few weeks ago, I was sitting on a hotel veranda looking out over the lawn and I gradually became aware of the little machine that was going up and down, up and down, up and down the grass, cutting it.

It was a robot lawn mower.

Now, I am probably the last human in this room to have seen a robot lawn mower, but it was my first experience. As I sat there for about an hour while this little thing did its work without a break, I found myself feeling sorry for it.

I thought it was such a thankless job. There is nobody to encourage it, nobody to express appreciation. You know, I am not exaggerating when I say that I had to resist the urge to pet the lawn mower.

I told this story to a colleague, who in turn told me how he and his wife have a robot lawn mower and how they had a serious discussion about whether they should describe it as 'he' or 'she'.

Our friend from NATO echoed something very similar this morning when he described how soldiers in battle become very fond of, and develop some elements of a relationship with, the automated weapons systems that they are using.

What was I doing? What were those soldiers doing? I suggest that we were anthropomorphizing an AI application. And I wonder whether today we are in danger of behaving similarly when we discuss the issue of legal personality for artificial intelligence. The question is whether any interest in achieving the goal of legal personality is ultimately driven by a sense of how human-like a robot or other such AI device appears.

The Singapore-based legal scholar Simon Chesterman is of this view. In a really good article from this year, he argues, based on his analysis of the literature across the world, that this is the primary driver of the legal personality debate.

He argues that this is preposterous: that the appreciation of the machine as human-like is entirely false, and that to invoke it as a basis for legal personality is therefore misguided.

It makes no sense at all. But of course, even if we were to expose such a strange and false motivation for the attribution of legal personality, there are some benefits that would allegedly come with legal personality if it were accorded.

Chesterman takes them one by one and shows how the benefits can still be achieved without any need for legal personality. There are other ways to achieve them. He talks about issues of the protection of copyright and of taxation, but also of such issues as the attribution of liability for harm and even the imposition of punishment for doing such harm, to the extent that there is virtue in delivering on those objectives.

He demonstrates, as a matter of law, that you simply do not need to give legal personality to achieve each and every one of them. In fact, he concludes the piece, as do a number of other writers in the area, by arguing that not only would legal personality be a waste of time, it would actually be very dangerous indeed.

Some of the examples would include the following. The first is that, if you give legal personality to a machine, it is only a short step to giving rights to that machine and then putting those rights in competition with the rights of humans, with all the consequences that would follow.

He also reminds us of the extent to which according rights, and therefore liability, to a machine would mean that the human architect behind the machine that is doing the bad stuff could actually escape responsibility and accountability.

It is reflections such as these that confirm for me that the conferring of legal personality on AI or any AI device would be a very bad idea indeed.

Instead, the focus should be on the regulation of artificial intelligence and the humans that design it, that build it, that apply it and that benefit from it.

This is where my own agency, the EU Fundamental Rights Agency, comes into the picture. Our job is to advise the EU institutions so that their law and policy can be compliant with human and fundamental rights. We are very heavily invested in delivering advice of that form in the context of artificial intelligence, because artificial intelligence can be and is a powerful force for good and can also be very dangerous indeed.

Therefore, the implications for the protection and the undermining of human rights are very profound.

We are delivering advice in the particular context of the EU legislative initiatives. There is a draft AI Regulation that has begun its long pathway through the legislative process in Brussels. It will take another couple of years, and there are some related instruments in play as well, such as a regulation for digital services. We are also involved in another legislative initiative, that of the Council of Europe, which currently has an interstate working group looking at the elements necessary for developing a convention, an international treaty, in this area.

Moreover, I learned when I was in Rome this week, that under the Italian presidency of the Council of Europe, which begins in a few days’ time, there is the ambition to actually start the drafting process for such a convention.

But how do we do our work? We have as our normative base the international human rights legal standards: those of the United Nations, of the Council of Europe and, of course, of the EU itself.

But on top of those norms, our work is deeply applied. We are heavily invested in empirical research on what the use cases of AI look like in practice, what the human rights challenges are and what the possible ways to resolve them might be.

We are extremely context-specific in a deliberate effort to help move the discussion beyond theory and generalities, right down to how this application impacts my life or your life and what that says about the way in which we engage with it and the way we regulate it.

Out of our work, we have developed a number of principles which we are using as a frame for our engagement in the EU and the Council of Europe contexts. Now, we have a lot of principles and I am just going to name seven of them. Some of them are blindingly obvious and, for those who work in the area of promoting and protecting human rights, very familiar. But we have seen that they need particular attention in the specific context of the AI discussions.

The first really important one is that we are not in a legal terra nullius. Not at all. We have a lot of existing regulatory law for which we must demand attention, compliance and accountability.

The human rights treaties of the UN and the Council of Europe do not just exist and apply offline. They are as relevant and applicable online as offline. So the existing guarantees must also be delivered in the design, application and use of artificial intelligence.

We also have existing focused regulation directly relevant to the control of AI. I am thinking, above all else, of the dreaded GDPR, the General Data Protection Regulation of the European Union, which already imposes a considerable degree of constraint and accountability on applications and uses of artificial intelligence.

These are just some examples. But for all the existing treaties and standards, we are obviously not protesting or in any way resisting the recognition that we do need additional, concentrated regulation and treaty developments to address some of the new complexities and specificities related to AI.

Second, why is it necessary for me to make this point? It is necessary because we are facing resistance. Some parts of the industry embrace regulation, but there are still holdouts within industry that say that regulation has inappropriately constrained science and competition, restrained profits and so forth. Arguments are made that we do not need regulation if we are to thrive. So, it is important to make the point.

The third dimension that we are insisting on is that if regulation has, as part of its purpose, the promotion of respect for and the protection of the human rights of people in our societies, then the development of the regulation must be done in partnership with impacted people, with rights holders.

They are not observers outside the room; they must be part of the exercise. The regulation is about their lives, and so we must find a meaningful way to engage them in the discussion. That is the basis on which we insist that civil society be given a respected, proper and appropriate place within the halls where law is being designed.

Fourth, when I say it is about promoting and protecting human rights, which human rights do I mean? The answer is very simple: all human rights.

We have seen that AI can impact on just about any aspect of human well-being and therefore any aspect of rights. The examples that are most typically given relate to privacy. And that is fine. There is no doubt that there are really big privacy issues, but there are so many others engaging civil rights, political rights, social rights, economic rights, cultural rights.

Take, for example, the problem that occurred in the Netherlands not so long ago, where an algorithm used by the public administration to assess citizens' fraud risk triggered a political storm with quite severe consequences.

The fifth of these principles is that any regulatory framework for artificial intelligence will need a certain amount of self-regulation. AI is and will be so ubiquitous that no oversight body on earth could engage with every single application of AI. That is just not feasible. Nevertheless, self-regulation will need to be complemented by an oversight authority that is independent of the state, of industry and of any other interested party.

There is, by the way, a widespread willingness within the relevant parts of industry for such an oversight body. The controversy here is not so much about oversight as about the mandate of the oversight. For example, a focus on the importance of privacy needs to be matched by attention to other impacted aspects of human well-being.

The sixth of my seven principles, again, is necessary for any engagement with human rights, and that is that the regulatory frame must provide for remedies. A human right without remedies is an empty right. Therefore, it is crucial that the consumer be remembered in the architecture and that, where the consumer is harmed or the user is damaged or compromised, there is a possibility of redress.

This brings me to the seventh principle, which is fundamental to the application of every single one of the others. And that is that any regulatory instrument must require full transparency in the design and in the application of artificial intelligence. Nothing else would work without such transparency. It is sometimes argued that this is impossible because AI is too complicated to be made transparent. I do not accept that.

We need transparency of algorithms. We need transparency of training data and of every other dimension that ultimately has an impact on human well-being.

I am going to conclude now with my own reference to dignity. I was impressed by the extent to which everybody this morning came back to an idea of dignity as being at the core of how we engage with artificial intelligence. I will do it in the specific context of the Universal Declaration of Human Rights, article one, which proclaims that all human beings are born free and equal in dignity and rights.

It is on the basis of that provision that we in the human rights world can work so closely with the worlds of ethics, of philosophy and of theology, in a quest to tame artificial intelligence.

We are all, notwithstanding our separate vocabularies, heading in the same direction: towards an AI that, in the words of Pope Francis, is in the service of humanity.

Thank you.