Online Hate Speech: Does Self-Regulation Work?

Curbing hate speech is one of the most difficult challenges that regulators – and platforms – face. For starters, hate speech is fairly difficult to define. The European Union provides a handy guide. The Council framework decision on combating certain forms and expressions of racism and xenophobia by means of criminal law (2008) defines hate speech as “all conduct publicly inciting to violence or hatred directed against a group defined by reference to race, colour, religion, descent or ethnic origin, when carried out by the public dissemination or distribution of tracts, pictures or other materials.”

But how do we determine precisely which comments are or are not hate speech, and what does and doesn’t need to be removed from platforms? The issue, obviously, touches directly on questions of what constitutes free speech. And, if someone is to decide that some speech breaks the law or violates cultural norms, who exactly should make those decisions? On what basis? And should the platforms be held accountable – both for any hate speech they miss and for the comments they might overzealously take down?

The European Commission has proposed a novel way of addressing this. Drawing industry and civil society together in a unique dialogue, it brokered a code of conduct on countering hate speech online (2016). Under this arrangement, civil-society organisations – particularly those with a background in spreading tolerance – are empowered to monitor activity on platforms. They flag content they find objectionable to the platforms, which commit to evaluate every piece of content flagged this way in under 24 hours. Each year, the NGOs file a report on how much of the content they flagged was taken down. The platforms themselves also have their own “community standards,” under which they can remove posts they consider to be in violation of their policies. And some countries – most notably Germany – have laws defining what exactly constitutes hate speech and which speech is outright illegal.

But is it working? The European Commission’s fourth evaluation on the code of conduct on countering illegal hate speech online (2019) states, broadly speaking, yes, it is. The code and its built-in monitoring mechanism “proves to be an effective tool to face this challenge,” the European Commission concludes, adding that the evidence compiled “confirms self-regulation works.”

The evidence also shows that platforms have been moving quickly to respond. The European Commission’s most recent assessment found that 71.7% of the content flagged was removed – much of it (88.9%) in under 24 hours. All of the global platforms have taken a more aggressive stance towards banning questionable content since the advent of the code of conduct. Facebook, for one, saw its take-down rate soar to 82.4% of flagged content in 2018, up from 28.3% in 2016. But takedown rates among the major platforms also varied. All showed improvement; but YouTube was the most aggressive, with 85.4% of flagged content removed in 2018. Twitter removed the least – only 43.5% in 2018, up from 19.1% in 2016.

The biggest differences are found not in the comparative rates between platforms but in the muscular way that some countries approach hate speech – and the curbs on free speech they are willing to accept to enforce it. Germany, for one, has very strict laws banning political hate speech of all types. Its 100% removal rate for flagged content in 2017 reflects the tough legal environment in which platforms must operate there, including the Netzwerkdurchsetzungsgesetz (NetzDG) (2017). Other countries – such as Denmark and the United Kingdom – have looser laws and more open traditions. Platforms still respond to calls to remove more and more content in both places. But, facing looser legal requirements and more liberal traditions, their removal rates for flagged content are 42.5% and 66.3%, respectively.

Core questions remain. For starters, how much hate speech is getting through the system? It would help to see a proper study of that. And are the trade-offs worth it? For sure, Germany has a higher take-down rate than most; but is its democracy any less rich because of it? Germany still rates a top-tercile score on Internet freedom in the Freedom House Freedom of the Net 2019 report, which measures overall obstacles to access, limits on content and violation of users’ rights.

For a full display of all 65 countries analysed by the report, please view the above chart in full screen. 

One thing is for sure: the situation with hate speech online is improving. But is it improving quickly enough? And if a nation’s laws or the rights of individuals are not being violated, do we really want private companies making decisions about what goes up and what comes down online? And if so, how?

VIORICA SPAC
Viorica Spac is project manager and research associate at the Lisbon Council. She sits on the core team curating the Intermediary Liability Evidence Hub.

Disinformation: How We Encounter, Recognise and Interact with It

Collecting data for the Intermediary Liability Evidence Hub and then testing the Evidence Hub’s “explore” feature, I noticed interesting anomalies in the information we’ve gathered. One such anomaly concerned how citizens interact with online disinformation.

At the Reuters Institute for the Study of Journalism at Oxford University, Richard Fletcher, Alessio Cornia, Lucas Graves and Rasmus Kleis Nielsen have published a fascinating factsheet about disinformation online. The authors looked at the most popular legitimate news sites and false news sites in France and Italy. They found that false news sites sometimes had broader social media impacts than legitimate news sites.

This was true even though users spent less time viewing false news sites and were less likely to interact with false news sites. The authors found that the average monthly time spent with false news sites was significantly lower than the average monthly time spent with legitimate sites.

It is difficult to explain this disconnect. There may be a small number of super-fans, who may or may not realise that they are promoting disinformation, driving social media engagement for false news sites. Or false news sites may be employing dubious practices, such as buying fake “bot” accounts, to artificially inflate their social media impact. Or users may be sharing and interacting with content from false news sites on social media without choosing to visit the sites themselves.

But it’s also difficult to know whether the users interacting with these false news sites realise that they are viewing disinformation. The European Union’s Flash Eurobarometer 464 survey found that 75% of French people reported encountering disinformation online once or more per week. The same survey found that 68% of Italians reported the same.

If these users are encountering disinformation on social media, their experiences seem to match with the Reuters Institute’s data. Unfortunately, though, we do not yet know where these people believe they are encountering disinformation. We also don’t know whether they are accurately judging content as disinformation or legitimate news. Disinformation is, after all, effective because users do not recognize it as false or misleading.

Clearly, there is more work to be done and more information to be gathered before judgments are formed. More surveys are needed to determine more specifically where users are encountering what they perceive to be disinformation and how they decide what content is disinformation. More research is also needed to understand how false news sites are able to garner more social media interactions despite having less trustworthy content. This research would allow policymakers to better understand how to educate users about disinformation and combat its spread.

SARAH REBECCA MYERS
Sarah Rebecca Myers is a student at Stanford University and a research associate at the Lisbon Council.