Few issues are as contentious as the role and spread of "disinformation" on social media and Internet platforms.

First and foremost is the thorny question of how disinformation can best be identified and when platforms should be required to block or remove content. The issue touches upon core questions of free speech and political expression, and it has led to a plethora of confused policies and stop-start initiatives. Under a code of practice agreed in 2018 with platform-industry input, platforms must remove any content that "may cause public harm" or poses "threats to democratic political and policymaking processes as well as public goods such as the protection of European Union citizens’ health, the environment or security." But the same agreement excludes from its scope "misleading advertising, reporting errors, satire and parody, or clearly identified partisan news and commentary." Confronting the complex problem of stopping the spread of inaccurate health information in a global emergency, the European Commission released "Tackling COVID-19 Disinformation – Getting the Facts Right," a 16-page communication proposing monthly progress reports and tougher restrictions and labelling requirements for false health information that might be circulating through encrypted messaging apps.

Given the difficulty of legislating in this area, regulators have tended to rely on "self-regulation," such as the code of practice on disinformation mentioned above. But how well are these semi-formal agreements working? Are there lessons – positive as well as negative – that could be drawn from the relative success or failure of these codes in practice and the real-world functioning of the Internet in an unprecedented era of democratic expression and outright disinformation?

Below, we have pulled together some of the best evidence on the relative spread and level of identifiable disinformation, as well as on the efforts to contain and remove it.

Additional information on disinformation around the world can be found on the World Intermediary Liability Map (WILMap), led by the Center for Internet and Society at Stanford Law School.


Share of Respondents Concerned About What Is Real and What Is Fake on the Internet When It Comes to News

The chart shows the percentage of persons surveyed who agreed with the statement, "Thinking about online news, I am concerned about what is real and what is fake on the internet." The results show that more than half (56%) of the survey’s respondents across 40 countries remain concerned about what is real and what is fake on the internet when it comes to news. Brazilian respondents exhibited the highest levels of concern, with 84% agreeing with the statement, whereas respondents from the Netherlands displayed the lowest rates of concern, at only 32%.

Share of Respondents That Agree They Can Trust the News Most of the Time

The chart shows the percentage of respondents who agreed with the statement, "I think you can trust the news most of the time." The results show that trust levels exceed 50% in only six of the 40 countries surveyed. The highest level of trust is found in Finland, where 56% agreed with the statement, while South Korea exhibited the lowest level of trust, with only 21% agreement.

Top-Down Versus Bottom-Up Misinformation

The chart shows that high-level politicians, celebrities, and other prominent public figures produced or spread only 20% of the misinformation in the Reuters Institute's sample, but that this misinformation attracted a large majority of all social media engagements in the sample. The first bar shows the share of content that was produced or shared by prominent persons in the whole sample (N=225). The second bar shows the percentage of total social media engagements attracted by content from prominent persons, out of the sub-sample of social media posts with available engagement data (N=145).

Twitter Ban Effect on Misinformation About Election Fraud on Social Media

The chart shows how online misinformation about election fraud changed after several social media sites suspended the accounts of President Trump and key allies. Research by Zignal Labs found that conversations about election fraud dropped from 2.5 million mentions to 688,000 mentions across several social media sites in the week after Trump was banned from Twitter.