Illegal Content

The law is clear: what is illegal offline is illegal online. A bevy of European laws – such as the directive on combating terrorism (2017) and the directive on combating the sexual abuse and sexual exploitation of children and child pornography (2011) – have been promulgated over the years, requiring platforms to remove illegal content "expeditiously," in the words of the electronic commerce directive (2000), once they are notified or otherwise become aware of its existence.

A short list of offences covered by these rules includes incitement to terrorism, xenophobic hate speech with intent to incite, copyright infringement and child abuse. Few dispute the underlying principles at stake, but the debate grows heated over how – and how quickly – platforms should be legally bound to remove illegal material when it appears. How much redress do platform users have if they believe their content was removed unjustifiably? And how much duty do the platforms have to cooperate pro-actively with security and law enforcement services?

The European Commission has provided a handy framework: a communication on tackling illegal content online (2017) and a recommendation on measures to effectively tackle illegal content online (2018). These guidelines are non-binding, but – faced with a rise in phenomena like terrorism – some governments have stepped in with steeper, binding rules. Germany is a case in point. Its Netzwerkdurchsetzungsgesetz (NetzDG) threatens large fines if "manifestly unlawful" material does not disappear from platforms within 24 hours of notification. France's proposed Loi contre les contenus haineux sur Internet (the "Avia law," named for sponsor Laetitia Avia) would do the same.

Put simply, the discussion boils down to several simple determinations: is illegal content coming down quickly enough? Are the rules and codes of conduct strong enough to prevent damage from occurring given the speed with which harm can take place? Is the volume of illegal content decreasing given the measures already in place, and is it decreasing quickly enough? And if stronger measures are needed, how can they be constructed to obtain better results without violating important rights such as privacy, redress and free speech?

The evidence presented in this section covers illegal content broadly. Separate sections will deal with more concrete aspects, such as incitement to terrorism, hate speech and copyright infringement. Additional information on illegal content online can be found on the World Intermediary Liability Map (WILMap), led by the Center for Internet and Society at Stanford Law School.




Content Removed by Twitter for Policy Violations

The chart shows the amount of content removed by Twitter for violations of its policies. The data shows a 32% increase in the amount of content removed in January-June 2021 compared with the previous period, and three times more than in the same period of the previous year.

Copyright Content Take-Down Notices Received by TikTok

The chart presents the number of copyright take-down requests received by TikTok and the number of successful copyright take-downs in 2021. The TikTok Community Guidelines and Terms of Service prohibit content that infringes third-party intellectual property. When valid take-down requests based on violations of copyright law and trademark law are received from rights holders, TikTok removes the allegedly infringing content in a timely manner. There is a break in the data for the first half of 2021, and the data for the two periods are not directly comparable: for January - April 2021, the data includes all copyright take-down notices received by the platform, while from May 2021 onwards, it includes only valid copyright take-down notices. Valid copyright take-down notices are notices that include the statutorily defined elements required to report alleged copyright infringement under the DMCA, the EU Copyright Directive, and other similar laws.

Distribution of Content Actioned Under Other Types of Violation on Facebook (2017-2022)

The chart shows the distribution of content actioned under other types of violations on Facebook, from the fourth quarter of 2017 until the first quarter of 2022. A metric for a new policy area called violence and incitement was added to the Community Standards in the third quarter of 2021. Additionally, starting with the second quarter of 2021, the child nudity and sexual abuse category was renamed child endangerment and collects data on two separate topics: sexual exploitation, and nudity and physical abuse. The data shows that adult nudity and sexual activity remains the main reason for removal on Facebook, followed by violent and graphic content, violence and incitement, and child endangerment. The chart excludes content removed under fake accounts and spam violations.

Distribution of Content Creators on Social Media Platforms

A recent report on the “creator economy” by Yuanling Yuan, senior associate at SignalFire, shows that there are over 50 million creators on YouTube, Instagram, Twitch, TikTok, and other social media platforms. The chart presents the distribution of these content creators by social media platform and professional status, with approximately two million full-time creators earning six-figure salaries by creating content daily or weekly. This massive distributed content creation engine means that about 90% of the video, audio, photo, and text-based content consumed today by Gen Z is created by individuals, not corporations.

Distribution of the Content Actioned on Instagram, by Reason of Removal

The chart shows the distribution of content actioned on Instagram, by reason for removal, from the fourth quarter of 2019 until the first quarter of 2022. A metric for a new policy area called violence and incitement was added to the Community Standards in the third quarter of 2021. Additionally, starting with the second quarter of 2021, the child nudity and sexual abuse category was renamed child endangerment and collects data on two separate topics: sexual exploitation, and nudity and physical abuse. The data shows that adult nudity and sexual activity remains the main reason for removal of content, followed by bullying and harassment and by violent and graphic content.

Estimated Impact of the Internet Intermediary Liability Regime on Startups’ Success Rate in Germany (2015)

Germany’s startup ecosystem could moderately benefit from increased liability protection, in particular through a higher startup success rate. The model used by Oxera estimates that the success rate could increase by 1.6 percentage points, translating into an increase of around 9% on its current level.

Estimated Impact of the Internet Intermediary Liability Regime on Startups’ Expected Profit in Selected Countries (2015)

The chart presents the estimated impact on expected profit for successful startups in four selected countries – Chile, Germany, India and Thailand. The analysis suggests that a regime with clearly defined compliance requirements and low associated compliance costs could increase the expected profit of startup intermediaries in the focus countries by between 1% (Chile) and 5% (India).

Estimated Impact of the Internet Intermediary Liability Regime on Startups’ Success Rate in Selected Countries (2015)

The chart presents the estimated impact on the success rate for startups in four selected countries – Chile, Germany, India and Thailand. The analysis suggests that a regime with clearly defined compliance requirements and low associated compliance costs could increase the success rates of startup intermediaries in the selected countries by between 4% (Chile) and 24% (Thailand).

Fourteen Years of Democratic Decline

The chart shows the evolution of countries' Freedom in the World scores over the past 14 years, based on a report from Freedom House. The results show that global freedom has declined steadily over the last 14 years. The gap between setbacks and gains widened compared with 2018, as individuals in 64 countries experienced deterioration in their political rights and civil liberties while those in just 37 experienced improvements. The negative pattern affected all regime types, but the impact was most visible near the top and the bottom of the scale.

Global Rankings of the Level of Internet and Digital Media Freedom

Freedom on the Net measures the level of internet and digital media freedom in 65 countries (for a full display of countries, please view the chart in full screen). Each country receives a numerical score from 100 (the most free) to 0 (the least free), which serves as the basis for an internet freedom status designation of free (70–100 points), partly free (40–69 points) or not free (0–39 points). Ratings are determined through an examination of three broad categories. Obstacles to access assesses infrastructural and economic barriers to access; government efforts to block specific applications or technologies; and legal, regulatory, and ownership control over internet and mobile phone access providers. Limits on content examines filtering and blocking of websites; other forms of censorship and self-censorship; manipulation of content; the diversity of online news media; and usage of digital media for social and political activism. Violations of user rights measures legal protections and restrictions on online activity; surveillance; privacy; and repercussions for online activity, such as legal prosecution, imprisonment, physical attacks, or other forms of harassment.