Video
09 November 2022

Automating Human Rights? FRA Director Michael O'Flaherty at the Web Summit 2022

Automation and AI have radically transformed how we work, live and play. In this video, Michael O'Flaherty, Director of the EU Agency for Fundamental Rights, discusses the implications of AI for our most basic human rights.

The video was recorded at the Web Summit 2022 in Lisbon.

Transcript:

Ian Martin: Thank you for joining us today for this session talking about the issue of artificial intelligence and its impacts on human rights.

My name is Ian Martin, I'm the Europe news editor for Forbes. I write about startups, policy and investors in Europe and that increasingly touches on artificial intelligence.

This is technology that is already impacting all our lives, and its new applications have huge potential and potentially even larger impacts. I'm joined today by Michael O'Flaherty, the current Director of the European Union Agency for Fundamental Rights, who joined the organisation after a distinguished career in law and human rights around the world.

Obviously we're at a tipping point in terms of the application of this technology and we see already huge strides in the development of AI technology in the last few months alone in terms of generative AI and its potential to impact creative arts.

But I think our focus will be more on the corporate world and also the role of governments and regulators in terms of enshrining and protecting fundamental human rights.

So Michael, is this technology already impacting our lives without people being aware of it?

Michael O'Flaherty: Yes it is. Well, everybody here at the Web Summit knows about the ubiquity of algorithms. They touch on every aspect of human well-being, for great good (we should frame this always in a positive sense) but also with any number of risks.

We use the language of human rights, and so algorithms impact your freedom of expression, your freedom of assembly, your freedom of movement, your privacy, but also issues of socioeconomic well-being, issues of social welfare, the right to a job, healthcare, education, you name it. It's increasingly evident that governments are using algorithms across every dimension of their work, but largely in a decentralised way. In very few countries do you have a central registry or a central place of awareness of the extent to which algorithms are being applied across the different elements of governance - by the way, not just central government but also local government.

We're concerned that citizens are simply not aware yet of the extent to which decisions that impact their well-being are being made - at least in the first instance - by a machine.

We have many indications of a low level of awareness, not least the lack of complaints, so we need to take on the UN recommendation of doing an audit of the use of algorithms by the state and then generating the necessary public discussion. On that basis I think we can go forward in a more confident, rights-respectful way.

Ian Martin: So what do you think is the role of the EU or governments in terms of trying to avoid either conscious biases in the design of algorithms, or unconscious ones in terms of mistakes made in their construction, or selection bias in the data that's being used to train these models?

Michael O'Flaherty: Well as I said earlier, every imaginable human right is engaged by AI, for good, as I said, but also in this area of risk and human rights violation. And that triggers a duty on the part of the state, because all of our states have signed up to human rights commitments in international treaties.

Here in Europe the best known is the European Convention on Human Rights. The convention requires the state to protect your human rights. So if algorithms, and artificial intelligence more generally, are impacting your rights, then the state has no choice, it has to regulate. So it's not a question of regulation or no regulation. A rights-based society, a rule of law society, must regulate. The issue is getting the regulation right.

Now we're in the laboratory here in Europe right now. We have the draft AI regulation, and we have an AI treaty being developed by another body, the Council of Europe, which has just begun work on it, and we're having to negotiate what the regulation should look like. It's clear you can't and you shouldn't regulate every aspect of AI. Much of it is benign. What Netflix tells me to watch tomorrow night doesn't need a heavy-handed state intervention. That's why I very much welcome the risk pyramid model that's been adopted in Europe, whereby some applications are left for self-regulation, more or less left alone by the state, by authorities, but as you rise up the pyramid you have a greater level of external scrutiny. I find that a welcome element of regulation.

One thing I'll say is that no matter what model of regulation we come up with, we have to ensure transparency. We have to get past the stage we're still in, of people suggesting that it's too complicated to explain, that we don't understand it ourselves, things of this nature. That's an impediment at the moment which we need to tackle, because you cannot have regulation without having some form of operable transparency that allows the overseer to do the job.

Ian Martin: Now I understand you have some forthcoming research into bias and speech detection, in terms of trying to discern, identify and block hate speech. As part of that research I understand that you've lifted the lid and reviewed some of these algorithms. Tell me about some of your findings at this preliminary stage.

Michael O'Flaherty: What we wanted to do was get past the rhetoric and look at the real-life, applied reality of what the impact of AI is. There's an awful lot of generalised statement about the risks and the opportunities, but we have to get past that if we're to meaningfully regulate. So we dug into these areas that you've just mentioned. Take speech recognition technology: we put it to use in the context of identifying hate speech online. What we found was that it is thoroughly unreliable, to a degree that even surprised us. So "I hate Jews" will get flagged as hate speech. The machine is working in that context. But we found that when you enter the words "I hate Jews love" it wasn't flagged. The word "love" somehow made it all benign, and there are many more examples of this kind showing that the human cannot let go here.

Humans have to stay very closely involved in monitoring the application of these technologies in their different uses because, remember, we used it in the very specific context of flagging hate speech.
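
To make the failure mode concrete: the behaviour described above can be reproduced even with a crude scoring scheme. The sketch below is purely illustrative - a toy classifier with invented word lists, scores and threshold, not the system FRA tested - but it shows how a model that averages word-level toxicity can be "defused" by appending a benign word.

```python
# Toy illustration only: a naive classifier that averages per-word toxicity.
# The word lists, scores and threshold are invented for this sketch.

TOXIC_WORDS = {"hate": 0.9}
BENIGN_WORDS = {"love": 0.0}

def toy_toxicity(text: str) -> float:
    """Average per-word toxicity; unknown words get a mildly neutral 0.1."""
    words = text.lower().split()
    scores = [TOXIC_WORDS.get(w, BENIGN_WORDS.get(w, 0.1)) for w in words]
    return sum(scores) / len(scores)

THRESHOLD = 0.35
for phrase in ["I hate <group>", "I hate <group> love"]:
    score = toy_toxicity(phrase)
    print(f"{phrase!r:24} score={score:.2f} flagged={score >= THRESHOLD}")
```

Appending a single benign word drags the average below the threshold, so the second phrase sails through - the same kind of context-blind arithmetic that real systems can fall into in subtler ways.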

We looked at predictive technologies in the particular context of policing. And again here we found a degree of error which should be a cause of great worry to police forces who are using this technology everywhere. We found that the feedback loops were generating massively mistaken information, sending police services to literally the wrong bits of the city, because of the extent to which the bias got ever stronger in the technology.

And again here the reminder is: in any police force, anywhere, using this technology, at least for now the humans have to be embedded at the very heart of an ongoing review.

It comes back to the issue, with regard to every application, from a human rights point of view - test, test, test, and always test in the context of the application, which can vary enormously for an application of a general nature.
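
As a rough illustration of the feedback loop described above (a toy simulation with invented numbers, not FRA's methodology), consider two districts with identical underlying crime rates, where patrols go to the district with the most recorded incidents and recording depends on where the patrols already are:

```python
# Toy simulation of a runaway feedback loop in predictive policing.
# All numbers are invented for illustration.
import random

random.seed(0)

TRUE_RATE = {"district_a": 10, "district_b": 10}  # identical underlying crime
recorded = {"district_a": 11, "district_b": 10}   # tiny initial skew in the data

for week in range(1, 11):
    # The "prediction" is simply: patrol wherever the most incidents are recorded.
    patrolled = max(recorded, key=recorded.get)
    for district, rate in TRUE_RATE.items():
        # You find more of what you look for: higher detection where patrols are.
        detection = 0.9 if district == patrolled else 0.3
        recorded[district] += sum(random.random() < detection for _ in range(rate))
    print(f"week {week:2d}: patrols -> {patrolled}, recorded = {recorded}")
```

Because the recorded data, not the true rate, drives the next week's allocation, the initial skew compounds and the system keeps sending officers to the "wrong bits of the city" with growing confidence - which is why the ongoing human review has to sit inside the loop, not after it.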

Ian Martin: Often these algorithms or machine learning tools are referred to as a black box. You've obviously started to unpick them and try to discern some of the characteristics or assumptions that they make. Do you think governments and agencies are well equipped to make those choices and to be able to get under the curtain of these algorithms?

Michael O’Flaherty: We have no choice, we have to get under the curtain - we've got to lift the lid. How can we protect our societies if we don't do that?

We've spent quite a bit of our time demythologising the area, and you can challenge a lot of these claims that the box must stay black, that the lid must stay fixed, through research of the type my agency does. We go into a specific sector, we work with the industry, we work with regulators, we've even designed malign algorithms to see what would happen to them in a specific context, and it's not incomprehensible. Remember, AI is made by humans for human purposes. So then we really have to face the fact that it needs to be brought under control, it needs to be tamed, so that we can realise its astonishing potential for human well-being.

Ian Martin: Is there a risk that some people within the EU who have experienced bias from governments or government agencies may have a perception that they would be treated more fairly by a machine-driven process, while not being fully aware that there are also assumptions and potentially biases within those models?

Michael O'Flaherty: That's right. A few years back we asked Europeans, through a large-scale survey, whether they would rather have a facial recognition system or a human, an individual police officer, determine their status in an airport, and 90 per cent said they would trust the machine more than the human. We've learned in the last few years this is completely misplaced. The scale of the mistakes that I've indicated to you now, to a large extent unacknowledged mistakes, is really quite worrying. That's not a message to abolish things, it's not a message to stop investing in the technology, to stop investing in learning - not at all. But it is a recognition that we still have a long way to go. We need to invest far more in AI education. We need to have this in our schools, from infant school right up, so that we become critical and aware of the extent to which we're in a risky environment whenever we're engaging with an application, whatever it might be.

Ian Martin: The European Union I think has been at the forefront of regulating technology and the internet, in terms of laying down fundamental requirements around privacy with GDPR and also competition with the new Digital Services Act. How do we in Europe balance the need for innovation, and for our technology companies to compete with companies in the US, where there are fewer constraints around working with data, or potentially in some countries like China, where it seems there are almost no constraints, no ethical boundaries? How do we balance protecting human rights and also allowing innovation?

Michael O'Flaherty: I would never describe it as a balance; it's not a zero-sum game of more human rights, less innovative technology. To the contrary. More human rights means more trustworthy technology. More trustworthy technology is more attractive technology. I think, playing the long game, it'll be the more successful technology.

Even if that weren't the case, we've no choice. Europe is a rule of law society. It has a duty to protect its people online as much as offline and so we have to regulate. But we also have to dispel myths like the one I've just mentioned. We have to dispel the myth that invoking human rights is invariably just this awful impediment, this big wall blocking us from going forward. We saw that with GDPR, the general data protection regulation. It seems to me that when an outfit, an organisation or a government agency doesn't want to tell you something it just says “can't do it, GDPR”. In so many of those cases that's based on a willful or an accidental misreading of the GDPR. Researchers will tell you that they can't do medical research because of GDPR. It's not true. There’s an exception for academic research in the GDPR.

It's the same with regard to human rights and the embedding of human rights in the forthcoming AI regulation. It simply is not the roadblock. Human rights is a nuanced system. It's subject to limitation for all sorts of reasons in the public good.

I can take a real-world example. We were all subject to limitation in the context of COVID, to contain the pandemic. Now people can agree or disagree with the level of restraint, but that was about the limitation of human rights in the interests of a public good. So we can have the limitation of human rights in the interests of the public good also in the context of the application of artificial intelligence.

So let's avoid the headline engagement with this topic and look at human rights as the more sophisticated tool that it is.

Ian Martin: Obviously, this is a new technology and it's still being applied, but could you expand on the kinds of harms that we've already seen within Europe in the use of this technology? You've mentioned predictive policing and that leading to mistakes and misallocations. What other impacts have we seen already?

Michael O'Flaherty: Well let's start with the positive impacts. We wouldn't have a COVID vaccine if it weren't for artificial intelligence. I look forward, if we get this right, to an astonishing future with cures for diseases that are beyond our dreams right now. I look at a delivery of public services with a degree of efficiency which simply isn't the case today. So we need to recognise that if we channel this in the right direction it's astonishing, but then the risks, the risks are manifold.

Last year in the Netherlands a biased algorithm resulted in the clawback of social welfare payments from people who happened to belong to racial minority groups. It was an outrageous demonstration of a biased algorithm left uncontrolled. It was by no means deliberate, but nevertheless there had been a failure to identify the impact of the feedback loops and the manner in which this application was getting more and more racist as time went on, until it was too late and a government fell as a result of this.

The dimensions of risk are vast, and that's why we must adopt a four-square human rights approach, not just a privacy approach. In a lot of the conversations about AI, digital services and so on, you get a lot of attention to the need to protect privacy. But privacy is just one of dozens of human rights issues that are engaged here. They all need to be captured. That, by the way, reminds us that the oversight bodies need to be well skilled and resourced across all these issues. They need to know what social welfare discrimination looks like as much as what a data breach might look like.

Ian Martin: Is there a risk that the organisations that are using this technology don't fully understand the consequences or the assumptions that are already at play?

Michael O’Flaherty: In recent research of ours we worked with people in industry and we found an awful lot of goodwill and a generalised commitment to be good. But when we unpacked that and we dug deep into it, when we explored what that meant and what level of awareness of human rights there was, it was very low indeed. So there was a general sense of privacy, I would say an exaggerated sense of privacy, as something absolute that you can never limit and a very low understanding of discrimination.

One of the issues, for instance, was a recognition that you must make sure to avoid obvious discriminatory elements in your algorithms, such as identifying race or gender as a selection criterion, but an unawareness of proxies. Take, for example, shoe size. Shoe size can be a proxy to distinguish men and women, but when people in certain bits of industry were confronted with this they were honest enough to admit that it hadn't occurred to them. So we really do need to build up a level of awareness.
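
The shoe-size point can be made concrete with a few lines of code. The sketch below is a hypothetical illustration with invented distributions: a rule that is never shown gender still recovers it from shoe size alone, which is exactly how a proxy can reintroduce the discrimination a designer thought had been removed.

```python
# Toy illustration of a proxy variable: the "model" never sees gender,
# yet shoe size alone predicts it well. All figures are invented.
import random

random.seed(1)

def sample_person():
    gender = random.choice(["man", "woman"])
    mean_size = 44 if gender == "man" else 39   # assumed averages for the toy
    return gender, random.gauss(mean_size, 1.5)

people = [sample_person() for _ in range(10_000)]

# A "gender-blind" decision rule that only looks at shoe size.
guesses = ["man" if size >= 41.5 else "woman" for _, size in people]
accuracy = sum(guess == gender for (gender, _), guess in zip(people, guesses)) / len(people)
print(f"gender recovered from shoe size alone: {accuracy:.0%}")
```

Dropping the protected attribute from the feature list is therefore not, by itself, evidence that a system does not discriminate.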

People don't all have to become human rights lawyers but at the minimum they need to engage the relevant expertise to get it right and use agencies like mine which is generating the evidence on this all the time.

Ian Martin: Can we learn from how other industries are regulated or governed in terms of making sure assumptions are checked and safeguarded?

Michael O'Flaherty: Yes of course. I mean regulation is not a new science. I've already mentioned different models that we have to play with here. There has to be a dimension of self-regulation. That goes without saying. It's not just an issue of different grades of seriousness of the impact of the technology, but it's also the sheer scale. I mean, how could you create a regulatory body capable of engaging with every AI application on Earth? It's unthinkable. So we have to borrow elements of self-regulation, including from other sectors.

We've learned a lot in Europe in the area of digital services through the voluntary codes of conduct model that's been applied. Then we have to get into hard regulation and there we can look for example at what has worked and maybe not worked so well with GDPR, the data protection regulation.

And then of course there have to be new elements to take account of the specificities of this technology, one of which is the extent to which the human rights impact, the risk, is very context-specific. That's the dimension we have to keep in mind with general-application AI technologies, and I'm not sure that the drafters of the regulatory framework right now have fully come to grips with that yet, but that's the next important issue to tackle.

Ian Martin: Obviously Europe is a hotbed of AI research, but I think the largest applications are being made by large technology companies in the United States or in China. Is there an issue here in terms of regulating or safeguarding rights when this data, these models, are being run outside the borders of the European Union?

Michael O'Flaherty: It has been the experience in the past that, as I said earlier, Europe is the laboratory. And where Europe goes, other parts of the world tend to follow. Just to take one example: the data protection machinery of Australia is, I understand, largely modelled on the European one, and I think we'll see similar practices in this context. I don't have a predictive capacity to know how countries outside the EU will react. But I think that what will happen is that as we put in place these elements of regulation, and as they are seen to generate levels of trust in the technology, they'll become highly attractive to states elsewhere in the world - for sure not all, but at the minimum to democratic states.

Ian Martin: Why do you think there's been this overemphasis, both from regulators and also from companies, in focusing on privacy issues?

Michael O’Flaherty: It's because of data. Everything is built on data so there's a very correct acknowledgment that if we share our data there are all manner of privacy issues that have got to be taken account of. And that's fine, I'm not disputing that. But the data is used for all manner of purposes and then these trigger all these other impacts on our lives. I've already given so many examples. But look at how data gathered by facial recognition technology by the police impacts how I behave in the streets. I will maybe avoid this street or that street because I don't want to be recorded, that's my freedom of movement. I'll choose with care with whom I associate, that's my freedom of association and I could go on with this example ad infinitum and that's just one narrow band of human rights.

Think about the social welfare issue I mentioned earlier. We really need to get a much wider appreciation.

Ian Martin: What do you see as the greatest risks in the application of this technology?

Michael O'Flaherty: The greatest risk is that the people who say that it's a black box and that we should just let it run at its own astonishing speed - that they'll win, that they'll win the argument. That's extremely frightening. We've got to stay up with the discussion, we've got to insist on transparency. Now, I'm not a scientist. I'm aware that there are all manner of challenges about applying transparency in practice, but you cannot regulate without knowing what's going on. The whole world doesn't have to know, but the overseer, the regulator, needs to know, and what we need there is an intensified conversation between the scientists and the regulators to figure out what transparency would look like in practice. What's the minimum acceptable level of disclosure for effective regulation? That's the big challenge going forward. That's the issue that would keep me awake at night about the future of AI.

Ian Martin: This phrase keeps coming up “the black box of AI”. What do researchers mean by that and is it really truly that opaque or do we need to ensure that there is always some transparency?

Michael O’Flaherty: As I said, we have no choice. Would you put your life and the life of your children in the hands of a technology that you don't understand, the direction of which you can't predict? That's a dystopian future and we don't have to go there.

Ian Martin: What opportunities do you see in terms of this technology being able to enshrine and protect human rights? One application which we've obviously discussed a lot already is privacy, in terms of the potential for machine learning to obfuscate personal data and ensure that it's not being harvested and tracked.

Michael O'Flaherty: There's enormous scope, still largely undeveloped, to put AI in the service of human rights. Human rights organisations around the world are using AI right now to monitor the situation, including remote monitoring of human rights abuses, to a degree that was unthinkable 20 years ago. That's very important. We've seen some use of these applications in the context of Ukraine. This is very important, it's very valuable. I could give you so many different examples. Look at the capacity of AI to transform healthcare for the good. But let me also be a bit provocative and say that AI in policing, properly controlled, properly directed, serves us all. It serves our human right not to be killed, not to be subject to a terrorist attack. So even here, in what is often described as a sort of non-human-rights direction of AI, it can be directed, ultimately, in the service of human well-being, human thriving.