Speech

Safeguarding fundamental rights in the digital age

Speaker
Michael O’Flaherty

President, Director General, honourable members of the committee.

Thank you so much for the invitation to be with you.

I would like to express appreciation to the [European] Commission for the excellent report on which our discussions are based. I think it does a very good job of throwing light across the breadth of the issues that we have to confront.

I also appreciate the narrowed-down topics for today's discussion: AI, non-discrimination, content moderation. And in these opening remarks, I would like to make some overarching observations that speak to each of the topics for your debate.

The first observation has to do with my experience in gatherings to discuss tech issues in the last two years.

One of the most striking things about these encounters is that to be successful they have to be profoundly diverse. They have to involve a very wide array of actors. Some of the obvious ones – public policy, government and the private sector – have to be present at the table. But so also do many others.

So many dimensions of the academy: of scientists, of philosophers, of ethicists. Civil society needs to be very strongly present, again in all its breadth. And it is only when all of these groups come together at that round table and invest in listening to each other, in understanding each other, that progress can be made. And it is not easy.

It is difficult to attain mutual understanding. In my early conversations with scientists, we were unable to grasp each other's points. Why? Because we were using different languages, different contexts, even different sets of understandings of values.

And so we needed, and still need, to find common language. And that is the second learning that I have picked up over the last couple of years. It is possible to find the common language when the people around the table in all their diversity are people of goodwill.

Then inevitably and unfailingly, we reach a point where we all agree on the goal of this great technological revolution. The goal of enhancing, of supporting, of honouring human dignity.

We can reach an agreement that it is all in the end about the human. But, of course, while it is important to agree on that goal, how do we achieve it?

Well, when we identify the goal as human dignity, then it is actually very easy to identify the preeminent roadmap to its achievement. We first identified that roadmap in 1948 when it was proclaimed at the United Nations General Assembly that all human beings are born free and equal in dignity and in rights. And, of course, these are the opening words of the Universal Declaration of Human Rights.

So, therefore, human rights are the preeminent pathway towards achieving that shared goal of people of goodwill in the context of the technological revolution. Adopting a rights-based approach to this great project is not just legally required, as is the case here in the EU, but it is also axiomatic, a compelling matter.

Now, if we agree on a rights-based approach to engaging with technology, there are some really important implications. And I will briefly mention just five.

The first, of course, is that adopting a rights-based approach – taking, for example, the Charter of Fundamental Rights as the normative pathway – means that it is about all human rights.

It is not just about protecting privacy. It is about protecting freedom of expression. It is about a number of other rights – indeed, about all rights. And that is why I so much welcome the focus today, for example, on non-discrimination. It is very important as a core human rights guarantee.

But there are so many others. I will name just one cluster – socio-economic rights. We have seen how people suffer when an algorithm gets social welfare payments wrong. And so it is also about the socio-economic side of human well-being. And as we engage, let us keep challenging the efforts to regulate to ensure that they embrace the breadth of human rights protection and human rights risk.

A second unavoidable implication of adopting a human rights approach is that we have got to involve the rightsholder in the process. This cannot just be something that is given to the citizen. The citizen must be part of the crafting of the solution. I make this point to flag the high importance of civil society in this whole exercise.

We must bring civil society, which captures and represents the breadth of views from across our communities, into our discourse. And it must be given a central, honoured and respected place.

I was listening to the radio news yesterday and there were three items related to technology. What was striking was that the people expressing concern in these broadcasts were all from civil society. I thought it demonstrated so clearly how it is civil society that raises the red flags and sometimes even proposes solutions; and they must be heard at the centre of the discussion.

I make that point strongly here because I know that this committee invests so heavily in protecting the role and the space for civil society.

The third of my five observations about this rights-based approach is that it requires us to be in a continuous exercise of learning.

We must be continuously looking to see how the norm engages the human reality. And we have to do that in recognition of how the human reality is changing constantly, including in the context of the rapid evolution of technology.

There are technologies in AI today that did not even exist when the [European] Commission proposed its draft AI regulation. That is evidence of the extent to which we must keep engaging through research and acute observation.

That is where my agency, the Fundamental Rights Agency, has its most valued role. We are continuously investing in research on how AI is applied and what that means for human well-being and how we can avoid the risks to human well-being.

The fourth of the five implications of adopting a rights-based approach is the high importance of ensuring accountability. There is no right without a remedy, and we have to make sure that that applies as much online as offline.

This is difficult to achieve in every setting, never mind in the tech context. But it is a sine qua non of honouring human rights in our work. We have to invest the effort to get it right. This is why, for instance, regulatory systems must have strong, independent, well-resourced oversight. You cannot have remedies if you do not have oversight.

It is also why we have to continue to insist on a principle of transparency. We have got to know what is in the algorithms, why they are created, what their training content looks like, and all manner of other dimensions. Without transparency, we cannot have the effective oversight that is needed and the effective support and honouring of the rights of individuals.

Now, I want to acknowledge here that transparency is extremely difficult. I have come to learn this. I have had enough conversations now with scientists and tech experts to appreciate that it is not always easy. But nevertheless, I believe it is a principle that must not be negotiated away.

The final of the five dimensions of a rights-based approach has to do with the environment, the environment in which human beings experience the technological revolution. It is about digital literacy, but also, very importantly, it is about digital access.

When I think of digital access, I think of the Roma child who two years ago was told to go home and remote learn. The child who had to go back to dreadful accommodation where often there is no electricity, where there is certainly no computer, and the idea of internet access is inconceivable. That child was expected to be supported by parents who are desperately struggling just to survive. And that is what I think of when I think of the need to engage the huge gaps in terms of digital access.

I would like to conclude by acknowledging that this is far from easy. The challenges are new. They are highly complex. There is a very rapid evolution even as we speak. The environment is changing. There are new partners with whom we must work where we have not had that experience before. There is a complex global environment.

But notwithstanding all these challenges, I have to say that I am genuinely encouraged and hopeful. I am deeply impressed by the EU's global leadership in this area and by the draft regulations that are in evolution.

I note the progress of the Council of Europe, where the CAHAI Committee concluded its work and where efforts will soon begin on a convention. I find the UN roadmap on these topics ambitious. I am deeply impressed by global civil society activism. I see shifts in at least some parts of the industry sectors. And I truly do believe that together we can deliver a digital space that is fit for, and in the service of, human dignity.

Thank you.