‘Facts, Sir, Are Nothing Without Their Nuance.’

So said Norman Mailer, the Pulitzer Prize-winning author, testifying in the controversial trial of seven Americans arrested for protesting the Vietnam War (the “Chicago Seven”), one of the United States’ most celebrated free-speech cases. To help us navigate the fraught path of drawing meaning from impartial information, we’ve created this Intermediary Liability Blog.

Incitement to Terrorism: Are Tougher Measures Needed?

Paul Hofheinz   

Terrorism is one of those rare domains where most people agree: it is a despicable practice that deserves no tolerance and ought to have no place in this world. Innocent lives are shattered, young minds are poisoned, the rule of law is undermined and democracy is weakened. There is no free-speech opt-out for violent crime or incitement to violence. The violation is not the speech – it’s the crime behind it. And the violence that “speech” begets.

The effort to rid the Internet of terrorist content enjoys near-unanimous support. All platforms have “community standards” that ban content inciting terrorism, and most act with speed and decisiveness to prevent their platforms from serving as organising, communicating or propagandising hubs for terrorist organisations.

But the battle is difficult. Most platforms long ago put filters in place to remove and contain violent content before it can be uploaded or spread – and have banned users and links that are palpably connected to terrorist or terrorist-related activity. But the advent of live broadcasting has made it harder for these methods to work. The horrific attacks in Christchurch, New Zealand – in which 51 people were killed, many of them during a live broadcast – are a case in point. More than 4,000 people watched the broadcast on Facebook before it was “flagged” by viewers after an excruciating 29 minutes online. Once removed, the video – which resembled a live-action video game – continued spreading among sympathisers, who were able to make copies and copies of copies, many of them slightly altered to evade automatic filters. In the end, Facebook says it took down a staggering 1.5 million videos of the attack within 24 hours – the benchmark for speedy removal these days (Facebook says 1.2 million of those videos and images were detected and blocked at upload). But six months later, a report by NBC News found videos and photographs from the shooting still online – including on some Facebook pages.

Regulators responded with a flurry of new coalitions and tougher rules. The European Commission, for one, convened the EU Internet Forum, a high-level public-private body set up to fight terrorist propaganda online – and proposed a new regulation that would penalise platforms that allow terrorist content to remain up for more than one hour (the proposal is still working its way through the European Union’s complex decision-making process). Elsewhere, industry and government united to launch the Christchurch Call to Action, in which they committed – at G20 level – “to detect and immediately remove terrorist and violent extremist content online.” Other measures include tighter commitments to ban accounts that have posted questionable content “without context,” i.e., content that seems to spread a terrorist message rather than to comment on it, as well as a shared “database of hashes” – currently containing more than 200,000 images and data points – which platforms can use to block content across the Internet: terrorist content that has appeared in one place becomes easily identifiable to other platforms, which can block it before it goes back up.
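To make that mechanism concrete, here is a minimal, purely illustrative sketch of how hash-based blocking works in principle. It assumes a shared blocklist of exact cryptographic hashes plus 64-bit perceptual hashes; the real industry database relies on dedicated perceptual-hashing schemes, and the names, threshold and placeholder values below are hypothetical.

```python
import hashlib

# Illustrative only: a toy version of hash-based upload blocking.
# Real hash-sharing systems use perceptual hashes (which tolerate small
# edits); the blocklists and threshold below are placeholder assumptions.

BLOCKED_SHA256 = set()            # exact hashes of known terrorist media
BLOCKED_PHASHES = [0x0F0F0F0F0F0F0F0F]  # 64-bit perceptual hashes (placeholder)
HAMMING_THRESHOLD = 8             # how many differing bits still count as a match


def sha256_of(data: bytes) -> str:
    """Exact cryptographic hash: catches byte-identical re-uploads."""
    return hashlib.sha256(data).hexdigest()


def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two perceptual hashes differ."""
    return bin(a ^ b).count("1")


def should_block(data: bytes, perceptual_hash: int) -> bool:
    """Return True if an upload matches the shared blocklist."""
    if sha256_of(data) in BLOCKED_SHA256:
        return True
    # Perceptual check: catches lightly edited copies (re-encoded,
    # cropped, watermarked) that no longer match any exact hash.
    return any(
        hamming_distance(perceptual_hash, blocked) <= HAMMING_THRESHOLD
        for blocked in BLOCKED_PHASHES
    )
```

The perceptual check is what would let platforms catch the “copies of copies” described above, where a re-encoded or lightly altered file no longer matches any exact hash of the original.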

National governments responded, too. Germany and France passed tough new laws requiring companies to remove terrorist content – or face heavy fines. These laws have had some effect; at a minimum, they showed governments’ intent to fight this scourge seriously. But, broadly speaking, the tougher rules don’t seem to have pushed platforms to go much further than their own community guidelines already required.

And they created a bizarre anomaly in Europe: two European Union member states have strong laws, each with its own local quirks; the other twenty-five member states have only “guidelines.” The proposed European Commission regulation would establish the strict one-hour deadline adopted by the French as the pan-European standard.

But the problem is not the length of time that terrorist content stays up. The fundamental challenge is “virality” – the speed with which terrorist content can spread before it is removed. Much effort has gone into cutting these times down, and the platforms do seem to be doing better. YouTube, for one, has raised the share of terrorist videos viewed nine or fewer times before removal to 50%, up from 6% in 2017. The share of videos watched more than 100 times before removal has fallen to 25%, down from 70% in 2017.

The bottom line is this: banning terrorist incitement from the Internet is one area where we can and should work together. It’s up to regulators to set the tough, uncompromising targets that public safety requires and that citizens demand. But it’s up to platforms to make sure that their compliance comes as close to perfect as possible, employing every tool at their disposal to keep content that incites violence off platforms and back in the gutter, where it belongs. And there is some evidence that this is happening. Since the Christchurch tragedy, most of the large platforms have invested heavily in artificial intelligence to improve their detection systems and make removals more permanent. Governments have pitched in, offering stepped-up “coordination and information sharing” across borders on terrorist incidents as part of the European Union Crisis Protocol (information sharing can only happen in strict compliance with the General Data Protection Regulation). The European Union Agency for Law Enforcement Cooperation (Europol) has taken a leading role, too, stepping up its coordination activities to make national and local responses more robust. Several leading universities have contributed as well, with advanced-research programmes on better detection and prevention.

The fact is, terrorist content has no place in a modern, democratic society. The Internet can and should be a vehicle for discussion, debate, connection and knowledge exchange – not a place where young minds are radicalised or sick minds made more ill. Tough sticks from regulators carry some punch. But so do tough standards from platforms. We all can do better at this. And we should.

PAUL HOFHEINZ
Paul Hofheinz is president and co-founder of the Lisbon Council.