Illegal Content

The law is clear: what is illegal offline is illegal online. And a bevy of European laws – such as the directive on combating terrorism (2017) and the directive on combating the sexual abuse and sexual exploitation of children and child pornography (2011) – have been promulgated over the years, requiring platforms to remove illegal content "expeditiously," in the words of the electronic commerce directive (2000), once they are notified or in some other way become aware of its existence.

A short list of offences covered by these rules includes incitement to terrorism, xenophobic hate speech with intent to incite, copyright infringement and child abuse. Few dispute the underlying principles at stake, but the debate grows heated over how – and how quickly – platforms should be legally bound to remove illegal material when it appears. How much redress do platform users have if they believe their content was removed unjustifiably? And how much duty do the platforms have to cooperate pro-actively with security and law enforcement services?

The European Commission has provided a handy framework: a communication on tackling illegal content online (2017) and a recommendation on measures to effectively tackle illegal content online (2018). These guidelines are non-binding, but – faced with a rise in phenomena like terrorism – some governments have stepped in with steeper, more binding rules. Germany is a case in point. Its Netzwerkdurchsetzungsgesetz (NetzDG) threatens large fines if "manifestly unlawful" material does not disappear from platforms within 24 hours of notification. France’s proposed Loi contre les contenus haineux sur Internet (the "Avia law," named for sponsor Laetitia Avia) would do the same.

Put simply, the discussion boils down to several key questions: is illegal content coming down quickly enough? Are the rules and codes of conduct strong enough to prevent damage from occurring, given the speed with which harm can take place? Is the volume of illegal content decreasing given the measures already in place, and is it decreasing quickly enough? And if stronger measures are needed, how can they be constructed to obtain better results without violating important rights such as privacy, redress and free speech?

The evidence presented in this section covers illegal content broadly. Separate sections will deal with more concrete aspects, such as incitement to terrorism, hate speech and copyright infringement. Additional information on illegal content online can be found on the World Intermediary Liability Map (WILMap), led by the Center for Internet and Society at Stanford Law School.



Investor Concern Regarding Potential New Regulation in Spain (2014)

In summary, 97% of Spanish investors believe the legal environment has the most negative impact on their investing activities, and a significant majority (93%) are concerned about investing in digital content intermediaries, which today face ambiguity and uncertain outcomes, potentially large damages, and the risk of secondary liability should new anti-piracy regulations be introduced.

Most Significant Obstacles Created by Market Fragmentation for European Small and Midsize Enterprises

This chart demonstrates that a clear majority of European SMEs feel that failures in the single market have created significant or very significant barriers to the expansion of their business. The most prominent of these barriers are complex administrative procedures and differing rules across individual Member States.

Number of Accounts Suspended by Twitter for Policy Violations

The chart shows the number of accounts suspended by Twitter for violations of its policies. The data show a 23% increase in the number of accounts suspended in January–June 2021 compared with the previous reporting period, and a 34% increase compared with the same period of the previous year.

Number of Accounts Suspended by Twitter for Policy Violations, by Type of Rule

The chart shows the number of Twitter accounts suspended for policy violations, by type of rule, from July 2018 to June 2021. The data show that the main reason for account suspension is child sexual exploitation content, followed by hateful conduct and impersonation.

Number of User Data Disclosure Requests, by Reporting Period

Google discloses the number of user data requests from government authorities, alongside the total number of users/accounts specified in those requests, in six-month increments and subject to certain limitations. Google began reporting the number of users/accounts specified in requests in the first half of 2011.

Number of Videos Removed by Google Under Their Child Safety Policy

The chart shows the number of videos removed by Google under its Child Safety policy, starting from September 2018. The latest available data show that the number of videos removed under the policy declined by 81% in the first quarter of 2022 compared with the same quarter of the previous year. The quarter-on-quarter change is considerably smaller: a decline of only 18% compared with the previous quarter.

Number of Web Pages Containing Adverts or Links to Child Sexual Abuse Material

The chart provides information on the number of web pages containing adverts or links to child sexual abuse imagery, broken down by the age of the children. The data show a 26% increase in such pages in 2019 compared with 2018, and a 70% increase compared with 2017.

Overview of the Number of Reported Items by Platform in Germany (2018)

The chart presents data reported by tech companies under Germany’s Network Enforcement Act on the number of items reported and removed in 2018. The data do not account for removals based on other types of complaints, referrals, or injunctions.

Percentage of Content Found by Facebook as Containing Adult Nudity and Sexual Activity Compared to the Content Reported by the Users

This chart shows the percentage of content found by Facebook as containing adult nudity and sexual activity compared with the content reported by users. As the results show, the percentage of actioned content that Facebook found and flagged before users reported it is significantly higher than the percentage reported by users.

Percentage of Content Found by Facebook as Containing Child Nudity and Physical Abuse Compared to the Content Reported by the Users

This chart shows the percentage of content found by Facebook as containing child nudity and physical abuse compared with the content reported by users, from April 2021 to March 2022. The percentage reported by users is significantly lower than the percentage found by Facebook.