{"id":60,"date":"2020-05-07T10:03:00","date_gmt":"2020-05-07T08:03:00","guid":{"rendered":"https:\/\/evidencehub.net\/blog\/?p=60"},"modified":"2022-03-07T17:14:09","modified_gmt":"2022-03-07T16:14:09","slug":"self-regulation-of-hate-speech-online-what-the-evidence-is-telling-us","status":"publish","type":"post","link":"https:\/\/evidencehub.net\/blog\/self-regulation-of-hate-speech-online-what-the-evidence-is-telling-us\/","title":{"rendered":"Online Hate Speech: Does Self-Regulation Work?"},"content":{"rendered":"\n<p>Curbing hate speech is one of the most difficult challenges that regulators \u2013 and platforms \u2013 face. For starters, hate speech is fairly difficult to define. The European Commission provides a handy guide. The <a href=\"https:\/\/eur-lex.europa.eu\/LexUriServ\/LexUriServ.do?uri=OJ:L:2008:328:0055:0058:en:PDF\">Council of the European Union framework decision on combatting certain forms and expressions of racism and xenophobia by means of criminal law<\/a> (2008) defines hate speech as \u201call conduct publicly inciting to violence or hatred directed against a group defined by reference to race, colour, religion, descent or ethnic origin, when carried out by the public dissemination or distribution of tracts, pictures or other materials.\u201d<\/p>\n\n\n\n<p>But how do we determine precisely which comments are or are not hate speech and what does and doesn\u2019t need to be removed from platforms? The issue, obviously, touches directly on questions of what constitutes free speech. And if someone is to decide that some speech breaks the law or violates cultural norms, who exactly should make those decisions? On what basis? And should the platforms be held accountable \u2013 both for any hate speech they miss and for any comments they might overzealously take down?<\/p>\n\n\n\n<p>The European Commission has proposed a novel way of addressing this. 
Drawing industry and civil society together in a unique dialogue, it brokered a <a href=\"https:\/\/ec.europa.eu\/newsroom\/just\/document.cfm?doc_id=42985\">code of conduct on countering hate speech online <\/a>(2016). Under this arrangement, civil-society organisations \u2013 particularly those with a background in spreading tolerance \u2013 are empowered to monitor activity on platforms. They flag content they find objectionable to the platforms, which commit to evaluate every piece of content flagged this way within 24 hours. Each year, the NGOs file a report on how much of the content they flagged was taken down. The platforms themselves also have their own \u201ccommunity standards,\u201d under which they can remove posts they deem in violation of their policies. And other countries \u2013 most notably Germany \u2013 have laws spelling out what exactly constitutes hate speech and which speech is outright illegal. <\/p>\n\n\n\n<p>But is it working? The European Commission\u2019s <a href=\"https:\/\/ec.europa.eu\/info\/sites\/info\/files\/code_of_conduct_factsheet_7_web.pdf\">fourth evaluation on the code of conduct on countering illegal hate speech online<\/a> (2019) states, broadly speaking, yes, it is. The code and its built-in monitoring mechanism \u201cproves to be an effective tool to face this challenge,\u201d the European Commission concludes, adding that the evidence compiled \u201cconfirms self-regulation works.\u201d<\/p>\n\n\n\n\n\n<p>The evidence also shows that platforms have been moving quickly to respond. The European Commission\u2019s most recent assessment found that 71.7% of the content flagged was removed \u2013 much of it in under 24 hours (88.9%). All of the global platforms have taken a more aggressive stance towards removing questionable content since the advent of the code of conduct. Facebook, for one, saw its take-down rate soar to 82.4% of flagged content in 2018, up from 28.3% in 2016. 
But takedown rates among the major platforms also varied. All showed improvement, but YouTube was the most aggressive, with 85.4% of flagged content removed in 2018. Twitter removed the least \u2013 only 43.5% in 2018, up from 19.1% in 2016.<\/p>\n\n\n\n\n\n<p>The biggest differences are found not in the comparative rates between platforms but in the muscular way that some countries approach hate speech \u2013 and their willingness to accept curbs on free speech to enforce the rules. Germany, for one, has very strict laws banning political hate speech of all types. Its 100% removal rate for flagged content in 2017 reflects the tough legal environment in which platforms must operate there, including the <a href=\"https:\/\/www.bmjv.de\/SharedDocs\/FAQ\/EN\/NetzDG\/NetzDG.html\">Netzwerkdurchsetzungsgesetz (NetzDG)<\/a> (2017). Other countries \u2013 such as Denmark and the United Kingdom \u2013 have looser laws and more open traditions. Platforms still respond to calls to remove more and more content in both places. But, facing looser legal requirements and more liberal traditions, the removal rates for flagged content are lower \u2013 42.5% and 66.3%, respectively. <\/p>\n\n\n\n\n\n<p>Core questions remain. For starters, how much hate speech is getting through the system? It would help to see a proper study of that. And are the trade-offs worth it? For sure, Germany has a higher take-down rate than most; but is its democracy any less rich because of it? 
Germany still earns a top-tercile score on Internet freedom in the <a href=\"https:\/\/freedomhouse.org\/sites\/default\/files\/2019-11\/11042019_Report_FH_FOTN_2019_final_Public_Download.pdf\">Freedom House Freedom of the Net 2019 <\/a>report, which measures overall obstacles to access, limits on content and violations of users\u2019 rights.<\/p>\n\n\n\n\n\n<p class=\"has-text-align-center\"><em>For a full display of all 65 countries analysed by the report, please view the above chart in full screen.&nbsp;<\/em><\/p>\n\n\n\n<p>One thing is for sure: the situation with hate speech online is improving. But is it improving quickly enough? And if a nation\u2019s laws or the rights of individuals are not being violated, do we really want private companies making decisions about what goes up and what comes down online? And if so, how?<\/p>\n\n\n\n<p>VIORICA SPAC<br>Viorica Spac is project manager and research associate at the Lisbon Council. She sits on the core team curating the Intermediary Liability Evidence Hub.<\/p>\n\n\n\n<div class=\"wp-block-file\"><a href=\"https:\/\/evidencehub.net\/blog\/wp-content\/uploads\/2020\/05\/Self-Regulation-of-Online-Hate-Speech-What-the-Evidence-Tells-Us-4.pdf\">Download in PDF<\/a><a href=\"https:\/\/evidencehub.net\/blog\/wp-content\/uploads\/2020\/05\/Self-Regulation-of-Online-Hate-Speech-What-the-Evidence-Tells-Us-4.pdf\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Curbing hate speech is one of the most difficult challenges that regulators \u2013 and platforms \u2013 face. For starters, hate speech is fairly difficult to define. The European Commission provides a handy guide. 
In its Council of the European Union framework decision on combatting certain forms and expressions of racism and xenophobia by means of &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/evidencehub.net\/blog\/self-regulation-of-hate-speech-online-what-the-evidence-is-telling-us\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Online Hate Speech: Does Self-Regulation Work?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"coauthors":[17],"class_list":["post-60","post","type-post","status-publish","format-standard","hentry","category-hate-speech","entry"],"_links":{"self":[{"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/posts\/60","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/comments?post=60"}],"version-history":[{"count":43,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/posts\/60\/revisions"}],"predecessor-version":[{"id":612,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/posts\/60\/revisions\/612"}],"wp:attachment":[{"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/media?parent=60"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/categories?post=60"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/tags?post=60"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/evidencehub.net\/blog\/wp-json\/wp\/v2\/coauthors?post=60"}],"curies":[{"name":"wp","href":"https:
\/\/api.w.org\/{rel}","templated":true}]}}