Few issues are as contentious as the role and spread of "disinformation" on social media and Internet platforms.
First and foremost is the thorny question of how disinformation can best be identified and when platforms should be required to block or remove content. The issue touches on core questions of free speech and political expression, and it has led to a plethora of confused policies and stop-start initiatives. Under a "code of practice" agreed in 2018 with platform-industry input, platforms must remove any content that "may cause public harm" or poses "threats to democratic political and policymaking processes as well as public goods such as the protection of European Union citizens’ health, the environment or security." But the same agreement excludes "misleading advertising, reporting errors, satire and parody, or clearly identified partisan news and commentary." Confronting the complex problem of stopping the spread of inaccurate health information during a global emergency, the European Commission released "Tackling COVID-19 disinformation – Getting the facts right," a 16-page communication proposing monthly progress reports, tougher restrictions, and labelling requirements for false health information circulating through encrypted messaging apps.
Given the difficulty of legislating in this area, regulators have tended to rely on "self-regulation," such as the code of practice on disinformation mentioned above. But how well are these semi-formal agreements working? Are there lessons – positive as well as negative – to be drawn from the relative success or failure of these codes in practice, and from the real-world functioning of the Internet in an unprecedented era of both democratic expression and outright disinformation?
Below we have pulled together some of the best evidence on the spread and scale of identifiable disinformation, as well as on the efforts to contain and remove it.
Additional information on disinformation around the world can be found on the World Intermediary Liability Map (WILMap), led by the Center for Internet and Society at Stanford Law School.