Speech

Protecting Human Rights in the Digital Age

Speaker
Michael O’Flaherty
“A strongly human rights compliant, human rights respectful AI, that ultimately supports human development is going to be the most trustworthy AI. Trusted by consumers, by citizens, by everybody,” says FRA Director Michael O'Flaherty in his talk on protecting human rights in the digital age at the IIEA in Dublin on 26 September. He discusses the implications of AI for human rights, the new AI Act, as well as the wider challenges facing human rights standards in an age of rapid digitalisation.

Dear friends,

A few years ago I was sitting on a hotel terrace, looking across the lawn. As I sat there, I became aware of a little machine going up and down the grass, cutting as it went. It was my first encounter with a robot lawn mower.

I was transfixed. For a good half an hour, I watched as it made its journey, up and down, a thousand times. And at a certain point, I began to feel sorry for it. That it was doing this job with no gratitude, no recognition, no encouragement. I had to restrain myself from going over to pat it.

But what I remember about that moment, above all, is a genuine sense of awe. It was my first encounter with robotic technology in any meaningful way. And I was deeply impressed.

Now, much has moved on since that first encounter, but I never cease to be in awe of AI and its potential for human thriving and well-being.

I was on the island of Lampedusa five days ago, to get a better understanding of what was happening with the huge arrivals of asylum seekers, which is very much a live issue now. The situation is dreadful; I am not going to pretend otherwise. But the Italian authorities and civil society were using tech to try and put some order on the chaos, and it was deeply impressive the way AI-driven technology was helping them to cope, to some extent, with what is an impossible situation. Again, AI for good; something I found deeply impressive.

But of course, given where I work, at the EU Fundamental Rights Agency, I am no less aware of the risks that AI poses for us, and for our societies. Through the work of the Agency, let me illustrate five contexts in which AI can get it so badly wrong.

The first is the well-known one of discrimination. AI hoovers up every fact, every datum in our world, with all of the discriminations, the hatreds, the biases to be found in that data. This is well known; it is nothing new. However, beyond bias, there is also the astonishing extent to which data is simply mistaken. We have researched this in our Agency by, for instance, looking at large-scale databases in the migration context, where the level of error is worrying. This occurs in extremely consequential contexts, like recording a child’s age as that of an adult, with all of the consequences that follow for that child of being registered as an adult.

So, it is about bias, it is about mistakes, and it is also about something very specific to tech, and that is the role of feedback loops, and the extent to which feedback loops can enlarge error and mistakes over time and with practice. We have looked at that in the context of automated online content moderation. We have seen how a piece of technology that begins benign and does its job relatively well can learn error and then expand the error, with some pretty remarkable consequences. Just to give you an example: in our recent research on automated online content moderation, we developed algorithms and then fed them test phrases to see what would happen.

As is now well known, moderation in lesser-used languages was largely ineffective. But in English, we inserted particular terms. One such term was “I hate Jews”. And the online tech did its job. The term was flagged as problematic speech - exactly what we wanted it to do. But then, my colleagues inserted the words “I hate Jews love”. And the machine passed over the term. It did not flag it as problematic because of the power of the word ‘love’ and the associations of the word ‘love’, which, according to the machine, overrode the “I hate” part of the phrase. So again, an example of something rather specific to the online sphere, in terms of how error can multiply. That is the first worrying area - discrimination and all that is related to it.

The second has to do with who dominates the tech world: the private sector. There is nothing inherently wrong with the private sector owning technology, but there is something worrying when that technology so profoundly impacts our lives, and when we know, again through our research, what the primary drivers are for much of the private sector in developing and advancing a technology that has such a huge impact on every single person here today. One important driver is efficiency. We, again through research, concluded that the most important motivation for investment in technology is to do things quicker and more efficiently.

Again, nothing wrong with that. But if you think of what that efficiency might come at the expense of, then we see a worry.

Another driver is – no surprise – profit, and again, that can be a worry in the context of the impact on our lives. A third driver – among others, I have no doubt – is the idea that some private owners of technology have of establishing some idiosyncratic goal for the world. I do not need to give examples of this; they abound right now.

So, the second concern is the role of the private sector. The third concern is the exact converse, and that is the extent to which AI enhances the power of the state. This is not inherently problematic - at least if you are in a state that respects democracy. Again, I do not need to give examples; it is perfectly obvious how tech in the hands of the wrong state can be a tool for repression and oppression.

The fourth of the five concerns I would like to use to illustrate my worries is the somewhat more apocalyptic one of the transfer, or the outsourcing, of decision-making to artificial intelligence. We have very many examples here, but the obvious one is well rehearsed. It is not unfamiliar; it is autonomous weapon systems, which should and do strike fear into the heart of anybody who is concerned about the well-being of our world.

The fifth of these illustrative examples of why we should be worried is something a little bit harder to pin down. It is broad, and something seen and understood over time: the erosion, through the application of AI, of our social solidarity. The degradation of the human community, in the sense that, so often today and far more likely in the future, we are dealing with a machine, not with a person. So often what is presented to us as our preference has been decided by a machine, not by a person, certainly not by me, and so on. Finally, in recent times, psychologists and others have begun speaking of the risk to mental health posed by this phenomenon of the automation of life.

So, concerns such as these inevitably lead us to the question of how we tame technology. If we all accept that we need to tame this awesome power that has been developed, what should that look like? What solutions can we suggest so that the technology is in the service of human well-being? There are a few frames of reference for how we begin a discussion of how we tame tech but two of the most prominent are through the invocation of the language of ethics on the one hand and the language of human rights on the other.

Now, it is very positive that this is the starting point for most. Look at Ireland’s AI Strategy; how it locates the reflection on the future of AI in the context of the application of ethics and respect for human rights. This is all very welcome.

But there are some concerns in the discourse, and those of us who work in human rights have been disappointed by the extent to which the ethical discourse has until now dominated – and, I would argue, still dominates. It is as if the ethics and the human rights approaches are in contest, and we must each fight our corner so that we dominate. To some extent, ethics has been the more successful. One cannot help but think that ethics is an inherently subjective area, where my sense of right and good does not have to be the same as your sense of right and good. Therefore, using ethics to frame the taming of technology gives us a tool that is malleable, which can be sent in certain directions to achieve certain outcomes. This is not to diminish the importance of ethics; it is more to understand why it has dominated the discourse.

Turning to the other frame of reference: human rights. Here, we see something rather different. We see a far sturdier infrastructure on which to base standards and practice. My concern, as you can imagine as somebody working at the Agency for Fundamental Rights, is to ensure that the human rights frame is put at the centre. Not to displace ethics – it is not a competition. But to put human rights at the centre to help us figure out an appropriate and useful way forward. When we do that, what we are actually seeking to do is to take the rhetorical reference to human rights that you will find in almost every AI strategy you can read, and turn that rhetoric into a reality, to ask, “What would that look like in practice?”

However, before I get to the reality of what a human rights approach would look like in practice, allow me a brief word on human rights more generally. This year, we celebrate the 75th anniversary of the adoption of the Universal Declaration of Human Rights: the best effort by humanity, coming out of the horrors of the Second World War, to define the minimum standards for a society in which we could thrive and mutually respect each other. The Universal Declaration has been repeatedly reaffirmed universally. This year, we are also celebrating the 30th anniversary of the Vienna World Conference on Human Rights, which was a solemn rededication of every country on earth to the Universal Declaration of Human Rights and what it stands for. It is an instrument that was not just stated to be universal from the outset but was, of course, universally negotiated: it emerged from complex global negotiations that reflected different ways of thinking around the world. It is a very subtle and sophisticated system that has derived from the Universal Declaration and all the treaties that followed. Notwithstanding popular misconceptions, it is rarely about absolutes. It is very insightful in the way that it allows rights to be limited in the interests of the public good. We saw that, sometimes for good and sometimes maybe a bit too enthusiastically, in the context of Covid. That period neatly illustrates the extent to which the human rights system accommodates extraordinary crises and issues and, in the public good, allows for the restriction of rights.

It is a subtle system, and it is well supported, nationally and internationally, by courts and oversight systems. The Universal Declaration is incorporated in the domestic law of many countries around the world; therefore, national courts uphold it. Through the various systems developed since then, we see human rights law, including that enshrined in the European Convention on Human Rights, relied on and invoked nationally and internationally: in the Irish courts, at European Union level, at the European Court of Human Rights, and at the International Criminal Court. We have the myriad monitoring bodies of different organisations. It is a system, these human rights standards, which is immediately relevant in the context of AI. To use a phrase that is often invoked in the UN, “Human rights apply as much online as offline.”

Therefore, there is no jurisdictional dispute about its application in this context. Crucially, as I said earlier, it is binding on states. It is all binding, and it has as its goal – and this is its beauty and its power – human wellbeing. Article 1 of the Universal Declaration captures what human rights are there to deliver: that “all human beings are born free and equal in dignity and rights.”

So, we have this astonishing achievement of our societies, sometimes described as “modernity’s greatest achievement”, and the question arises of why it has been so peripheral to the discussion about the restraining, the taming, of artificial intelligence. There are many reasons for this. I have already alluded to some, but one that is very important, and has preoccupied my Agency for the last seven years, is that we have failed to show, in concrete measures, how the human rights standards and systems apply in the AI context. We have been great on the rhetoric; we have not been as good on the drilled-down guidance on applicability in practice.

Drawing on the work of my Agency, I would like to leave you today with seven elements of what that drilled-down guidance would look like, in the specific context of the ‘now’ of AI. By that, I am referring to this regulation-building moment. We are in the law-writing moment for artificial intelligence, which is very exciting if we can get it right - and it is crucial that we get it right. But we could get it wrong.

When I talk about the law-building moment, I am referring, in particular, to the development in the EU of the AI regulation, the AI Act, which is still a draft and remains an incomplete process, and the less developed but ongoing process at the Council of Europe, of the development of an international treaty on artificial intelligence.

So, what are the seven key elements that the drafters of all such laws must keep in mind? Let me suggest them to you.

The first is that we have to make sure that our laws are comprehensive; that we develop loophole-free regulation. What would that look like in practice? Well, in the first place, it means we have to agree on a broad definition of AI. We cannot reduce the definition so far that we leave out loads of practical applications. There is a risk that we would define AI narrowly and exclude such things as the databases on our borders because they are very basic AI. They could be missed if we go for an oversophisticated definition.

We have to make sure that our regulations equally apply to the private and public sectors. When we lock in the private sector, we have to lock in all of the private sector, including small and medium-sized enterprises.

Another element has to be that we ensure that all of the impacts on human well-being that we can identify are somehow captured by regulation. Converted into human rights terms, this means that regulation must embrace standing up for all of your human rights, not just particular ones. Most of the discussion until now has been around the protection of privacy rights. This is completely normal because it is all about data, and the first thing that comes to mind is privacy with regard to our data. So yes, we need a focus on privacy, but also on so much more.

Look, for example, at where the scandals have emerged in recent years. I think of the social welfare scandal in the Netherlands, a couple of years ago, where thousands of people were ordered to repay large sums of money to the State that they had allegedly been paid in error. The bias was massively against people from ethnic minorities and stemmed from an erroneous application of an algorithm.

We can see that every aspect of our lives can be engaged. That is why, in broad terms and with quite a lot of elements, we need to have loophole-free regulation.

The second thing that is essential, if we are to meaningfully protect our human rights in hard law, is that this law provides for human rights compatibility testing of high-risk applications. It is imperative that, where an application is high-risk for human well-being, it be tested so that we can understand what the risk is and manage it.

Now, there are two very important considerations here. One is that, with the explosion of general application AI, we are reminded that testing of AI must be use-case based. It is not good enough to test the app on the day it leaves the factory, with no regard to how it will go on to be used. We have got to test it in the use context because only there will we see the risk to human well-being.

The second dimension, which again, we know from our research, is that, because of this phenomenon of feedback loops and the manner in which mistakes can multiply and grow over time in the application of technology, testing needs to be repeated. You cannot get into your use context, test once, and assume all will be well forever. The science has now emerged to show that that is not guaranteed.

The third dimension of effective regulation has to do with the need for strong oversight. Again, it might seem obvious, but it is very important that attention be paid to ensuring that the systems in place to oversee the regulation are adequate to the job. They need to have the skills. They need to have the resources. If they are protecting human rights, we need human rights specialists working within those systems. Not just privacy people, but all human rights. We also need oversight at scale for the scope of the challenge. I get a sense, as I travel around Europe, that it has not yet dawned on the designers of systems quite how broad and demanding will be the oversight that will have to be put in place.

The fourth of my seven is with regard to a fundamental principle of human rights: that every violation should come with a remedy. Therefore, we need to make sure, whether in the regulations we design to tame AI or in separate legislation, which is the EU model right now, that we ensure that there is a pathway to a remedy for somebody whose human dignity has been violated by an application of technology.

And then the fifth, which could have been my first and could have been my last because it is so absolutely central to the delivery of all of the other dimensions, is ensuring transparency. Proper monitoring of technology requires transparency as to the contents of the technology; only then can there be effective oversight.

Now, as you can imagine, this demand for transparency is met with a lot of resistance. I will give you two examples. One is, “It is just not possible – we don’t know how the tech reaches that good outcome, do not touch it.” I have heard that many times, including at a conference from a doctor carrying out medical research. We would argue that that response is just not good enough. We recognise that there may be huge complexity in terms of the effective delivery of transparency but, at a minimum, in the context of tech we do not quite understand, what is to stop us demanding that you, the designer of the tech, describe what you do, tell us how you have tested your technology, and show us what data you have entered into it? We are already a long way, at that point, towards what we need for oversight.

The second, and even less convincing, argument about transparency is that it is a secret that cannot be shared. I am not referring here to commercial secrets; I am talking about, for example, national security secrets. Here again, this resistance is very much a false argument because in many other sectors, over generations, we have found ways to implement effective oversight of highly sensitive contexts in a manner that does not compromise secrecy, does not compromise confidentiality. Think, for example, of the way in which we have designed judicial oversight of national security systems.

The sixth of my seven considerations is with regard to the need for continuous dialogue. Dialogue is not just a good, it is a necessity. As we continue to work our way forward in this whole new world, we need everybody on board to figure out the right way to go. And so, in the design of the regulations, in their rollout, application, future amendment, it must be all on the basis of a rich living dialogue across all of the relevant stakeholders.

There are many of them. But let me focus on civil society today. As a general observation, I have yet to find a single human rights innovation that did not begin with civil society. More narrowly, coming back to the AI context, we would be lost today if it were not for the warnings and the advocacy of civil society so far. It has done an astonishing job in educating us all, including people like me, as to the scale of risk and the need for high attention. And so, involving civil society, for its advocacy function but also for its expertise, is critical.

There are many relevant civil society actors but let me just mention one cluster that I think is neglected, and that is the cluster of national human rights authorities, or national human rights institutions. Here in Ireland, that would refer to the Irish Human Rights and Equality Commission. These bodies, everywhere, need to be involved as well. They are the unique centres of human rights expertise in our societies. They also have to be part of the conversation.

Just before I move away from this point, it is true to say - and I have had much personal experience of this – that dialogue can be very difficult. Quite simply, we often all speak different languages, and we do not understand each other. I learned this in a different context years ago, trying to talk to economists. They did not understand me, and I did not understand them, but I think it is even more challenging in the context of technology. Why should a tech engineer be expected to understand my human rights language? Why should I be expected to understand theirs?

But we have to try. At the very least, we have to find a common vocabulary in which to engage with each other.

The seventh and final of my considerations is not about something we must do, but something we must challenge. We have to challenge the argument, very frequently invoked after somebody like me speaks, that this will only result in stifling innovation and that other countries will leap well ahead of us. I say in response that this argument is easily relied upon, but I challenge anyone to prove it; we are not convinced at all.

We are able to ensure attention to human rights while still deeply respecting the need for innovation in our societies, in our business world, and wherever else it is found, and here are some of the ways we can do it. One is the way we are currently developing regulation in Europe, using a risk pyramid model. To express it very briefly, this pyramid has the riskiest stuff in the very top band, followed by a wide band of ‘high-risk’ applications, which are proposed to be subject to quite tight regulation. Finally, at the bottom of the pyramid is a vast band where all the benign AI applications lie, in which no one can find any huge risk, and which would be subject to very minimal human rights oversight.

The second dimension of how we can make sure that innovation is not unduly restrained is by sandboxing the interplay of AI and human rights. Never compromising human rights – they should not be in play, they should not be in negotiation – but using this approach to see how one might do the fixes. I very much welcome that the current Spanish Presidency of the European Union is heavily promoting the concept of ethical or normative sandbox exercises. But again, as I say, such exercises, while very welcome, must never be at the expense of standards. The standards are not in negotiation.

I would also say, as my final argument to those who raise the innovation point, consider the trust element. There is no doubt – and I have yet to see anybody convincingly push back against this view – that a strongly human rights compliant, human rights respectful AI, ultimately targeted at human thriving, is going to be the most trustworthy AI. Trusted by consumers, by citizens, by everybody in our societies. I am firmly of the view that, in the long game, it is the trustworthy AI that will ultimately win out.

Dear friends, before you put your questions or your observations to me, let me say that I am sure some of you think that I am naive and unrealistic, and I accept that I could indeed look and sound like that.

But I feel that I have no choice. The game is just too serious. AI is profoundly impactful for human thriving, and there is no better shared pathway towards respecting humanity than human rights.

Let me finish up by putting it another way. I was in Brussels not so long ago and had some free time, so I took myself to the Museum of Fine Arts. I went to revisit one of my favourite pictures in the world: Pieter Bruegel’s Landscape with the Fall of Icarus. It’s a tiny picture, famous for the fact that it is full of shepherds on the hills, fishermen working on boats, men ploughing the land. There is a turquoise sea in the background, and it is only after you study it for a while that you see two little legs dangling, attached to someone who has just plunged into the water. The first time I saw it, I laughed out loud at how long it took me to notice Icarus.

I thought of it in today’s context for a different reason, which is why Icarus fell into the sea. You all know the story: Daedalus, his father, made him wings, binding the feathers together with wax, and sent him up into the sky. In a great act of hubris and self-confidence, Icarus flew too close to the sun, melting the wax, and he plunged into the sea and drowned.

And it occurs to me - why did this happen to Icarus? It was not just hubris; he had the wrong flight plan. This brings me back to AI because I would suggest that if we put AI in the place of Icarus, and we provide it with the flight plan of human rights, then I believe that AI can indeed soar safely up to the sun and bring all of us with it.

Thank you.
