Director General, organisers, dear colleagues,
Thank you for inviting me to this roundtable discussion. The topic is well chosen and timely, and it tackles an issue of global importance for the protection of human rights.
Online hate speech is a plague of our times. So many people are the targets of steady streams of vile and abusive language, images and video content. The assaults can take many different forms. They can be racist, xenophobic, antisemitic, Islamophobic, homophobic, sexist or ageist, or target persons with disabilities.
We know that things are getting worse. For example, my Agency’s 2018 survey among Jewish people in 12 EU Member States showed that antisemitism is most commonly expressed or experienced online, especially through social media. And nine in ten respondents said that expressions of antisemitism on the internet have increased in the past five years.
Women are particularly targeted by online hatred, including cyber harassment and cyberstalking. According to FRA’s 2012 survey on violence against women, one in 20 women had experienced cyberstalking since the age of 15. More than one in ten had experienced cyber harassment, meaning they had received unwanted, offensive, sexually explicit emails or messages, or faced offensive and inappropriate advances on social networking sites. The problem is more widespread among younger women: one in five women aged 18 to 29 had experienced cyber harassment.
The array of specific human rights engaged by online hate speech is very wide. Yes, it is about privacy rights and the protection of personal data. But it concerns so much more: at the most extreme it can be about risks to the right to life itself, and it also engages such rights as freedom of expression and protection against discrimination, as well, of course, as rights related to remedies and redress. Matters are complicated by the extent to which abuses are perpetrated by non-state actors, including commercial enterprises – thus engaging the issue of the human rights accountability of business.
At the outset, let me emphasise two things – both obvious but in need of restatement in a period when regulation of cyberspace is commonly discussed as a matter of voluntary codes and principles of good practice. The first is that human rights constitute binding legal obligations – whether under international law and its human rights treaties or under EU law as concerns EU fundamental rights. Secondly, and no less axiomatically, human rights are as relevant – as normatively applicable – online as they are offline.
Yet another point of emphasis – again obvious and well known but needing to be repeated in light of contemporary debate – is that most human rights, including those at risk in the policing of online hate speech, are subject to limitation in the interest of the public good. To respect that principle, we typically assess whether the restriction of a particular right, such as freedom of expression, is truly necessary, is proportionate in scale and respects the principle of non-discrimination. The challenge, of course, is to identify the legitimate application of such tests.
Let me turn now to the role of artificial intelligence (AI). My starting point is the recognition that AI is a technology that, at least as we currently understand it, carries significant potential for the enhancement of our societies, including in terms of the protection of human rights. My primary frame, then, is one of possibilities rather than of risk.
Specifically regarding online hate speech, AI can support the policing of the internet. There is no doubt that we can use it for detecting human rights violations, including online hate speech. With the use of algorithms we have the opportunity to identify abusive language that would otherwise go undetected. What is more, AI gives us the possibility of deploying a monitoring and take-down capacity at a scale commensurate with the quantity of online speech that requires review.
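Allow me a concrete, if simplified, illustration of what such algorithmic detection can look like. The sketch below assumes a Python environment with the scikit-learn library; the handful of labelled examples is invented, and it is illustrative only – not a system my Agency uses or endorses. Real moderation pipelines rely on far larger corpora and far richer models.

```python
# A minimal sketch of algorithmic abuse detection: TF-IDF features
# feeding a logistic regression classifier. The labelled examples
# below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data (1 = abusive, 0 = benign).
texts = [
    "go back to where you came from",
    "people like you do not belong here",
    "looking forward to the conference next week",
    "thanks for sharing this report",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(),
)
model.fit(texts, labels)

# The model outputs a probability rather than a verdict, so a
# human-review threshold can be tuned to balance over-blocking
# against under-blocking.
for post in ["you people do not belong here", "see you at the roundtable"]:
    print(post, "->", round(model.predict_proba([post])[0][1], 2))
```

The design point worth noting is the probabilistic output: it is what allows a moderation pipeline to route borderline content to human reviewers rather than removing it automatically, which matters for the tests of necessity and proportionality I mentioned earlier.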
But we must proceed with great caution.
We have to acknowledge the extent to which AI itself can trigger an infringement of human rights. On that subject, I welcome how you have framed today’s discussions. Among your framing questions, three stand out for me:
In seeking to answer such questions, and to best align AI in support of human rights, we need constantly to acknowledge that we are in a new field that is poorly understood. Discussions about the impact of AI-related technologies on human rights often take place in a context of scientific uncertainty. At the most general level, it is unclear what AI technologies can actually do and what will be possible in the future. Given the novelty of AI-related technologies and the potential lack of transparency of those using them, it is often difficult to know in these discussions what is over-hyped, what is just marketing, and what is really happening on the ground.
I would therefore call for an evidence-based discussion on AI and human rights. That will greatly assist policy-makers. It will also support the judiciary and regulators in their decisions on potential human rights violations through the use of AI-related technologies.
The Fundamental Rights Agency is committed to contributing to that evidence base. This is the context for a new focus of our attention on the related topics of artificial intelligence, big data and fundamental rights. Our current interest is in identifying concrete applications and case studies of AI-related technologies, to better understand where fundamental rights are affected. We are seeking empirical evidence that allows for an informed and balanced assessment.
As we are seeing in our research, the issue of context that you flag is of central significance. Only when we study context-specific scenarios can we examine whether an interference with a right – the taking down of online content – is compatible with human rights: whether the restriction of freedom of expression respects the applicable tests of necessity, proportionality and avoidance of discrimination.
To take just one rather well-worn example, the same words may carry a very different significance in an academic treatise than in a personally directed social media message or on a site that promotes criminal acts.
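To make this concrete, the toy sketch below – again with invented data and an assumed scikit-learn environment, not any real moderation system – attaches a coarse venue label to each text, so that identical wording can receive a different risk score depending on where it appears. A context-blind model would, by contrast, score the phrase identically everywhere.

```python
# Sketch of context-aware scoring: a coarse venue token is prepended to
# each text so the classifier can learn that quotation in scholarship
# and abuse aimed at a person are different acts. All data is invented,
# and "[slur]" stands in for actual slurs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def with_context(text: str, venue: str) -> str:
    """Prepend the venue as a token the vectoriser can pick up."""
    return f"__{venue}__ {text}"

samples = [
    (with_context("the history of [slur] in propaganda leaflets", "academic"), 0),
    (with_context("analysis of how [slur] entered common usage", "academic"), 0),
    (with_context("you are a [slur] and everyone knows it", "direct_message"), 1),
    (with_context("[slur], get off this platform", "direct_message"), 1),
]
texts, labels = zip(*samples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

# Identical wording now receives a different risk score by venue.
phrase = "the word [slur] appears here"
for venue in ("academic", "direct_message"):
    score = model.predict_proba([with_context(phrase, venue)])[0][1]
    print(venue, "->", round(score, 2))
```

A venue label is, of course, only a crude proxy for real context.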
The extent to which AI can itself take account of context must be scrutinised on a case-by-case basis. In addition, drawing on general human rights good practice, we would urge that human rights impact assessments and public consultations be carried out both before and after the deployment of any AI system. These should be palpably transparent and open to engagement by the general public. On the basis of such exercises, we can evaluate the extent to which the task of online content moderation can be assisted by machines and/or left to the private sector.
Mention of the private sector allows me briefly to recall that the existing discourse on business and human rights already offers much guidance in the otherwise new context of the deployment of AI. The United Nations Guiding Principles on Business and Human Rights, and associated national action plans, offer directly relevant standards. Let me mention a few:
Many different actors are currently working on strategies, recommendations and guidelines on AI. For example, the European Commission has created a High-Level Expert Group on Artificial Intelligence, which is working on Ethics Guidelines for Trustworthy AI. Many governments have developed, or are in the process of developing, strategies on AI. I would urge them all to take adequate account of existing human rights and other legal commitments, which take precedence over any soft ethical framework. Paul Nemitz of the European Commission put it well in a recent article where he called for a culture of incorporating the principles of democracy, the rule of law and human rights by design in AI-related technologies.
The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, has also reported recently on the issue. He says that “States should ensure that human rights are central to private sector design, deployment and implementation of artificial intelligence systems. This includes updating and applying existing regulation, particularly data protection regulation, to the artificial intelligence domain […].” Speaking to the business sector, he recommends that “all efforts to formulate guidelines or codes on ethical implications of artificial intelligence technologies should be grounded in human rights principles”. Finally, he observes, as a matter of human rights, that “all private and public development and deployment of artificial intelligence should provide opportunities for civil society to comment.”
Dear colleagues,
As we navigate this field, I would emphasise the extraordinary importance of working together. Within the normative frame of human rights, we have to bring together different stakeholders and different expertise to get it right. We should have lawyers in conversation with engineers, sociologists with computer scientists, statisticians with cognitive scientists. Our conversations and efforts require the partnership of the private sector, governments, regional and international organisations, academia and civil society.
Artificial intelligence can serve as a game-changing tool in combating corrosive hate speech. Let us not miss – or, worse, abuse – this invaluable opportunity.
Thank you.