Policymakers Shouldn’t Ask Platforms To Solve Online Disinformation Alone

by Eline Chivot

In 1835, readers of the U.S. newspaper The Sun avidly followed a series of articles, “Great Astronomical Discoveries,” which claimed that a scientist had built the world’s largest telescope and spotted flying man-bats and herds of unicorns on the moon. It took a week for The New York Herald to report that the stories were pure fantasy.

Hoaxes like this and other, more nefarious examples of disinformation have plagued society since long before the Internet emerged, yet many critics, like former Member of the European Parliament Marietje Schaake, blame tech companies for “creating the problem” and believe they should be forced to “build our immunity to infodemics.”

Tech platforms have long been on the front lines of the fight against disinformation. During the pandemic, for example, Google banned ads on websites spreading misinformation and conspiracy theories about COVID-19 and prohibited anti-vaccination content on YouTube; Facebook removed anti-mask groups from its platform; and Twitter removed thousands of accounts and posts spreading misleading news about the virus.

Some policymakers want to demand more, but doing so could harm free speech. First, policymakers should not order companies to remove online content that would be lawful offline. Some forms of disinformation, such as political propaganda, may be undesirable, but they are not illegal in Western democracies. In addition, policymakers should recognize that no platform moderation system will be perfect: Some permissible content will be blocked, and some impermissible content will be allowed. Platforms try to minimize both types of error, but if policymakers threaten to penalize platforms for inadvertently allowing impermissible content, these companies will apply stringent moderation rules that will likely sweep in a significant amount of lawful content, thereby diminishing free speech online.

Second, policymakers should continue to invest in digital and media literacy as a solution to disinformation. Tech companies have invested in resources to raise awareness and encourage responsible use of the Internet in partnership with other organizations. For example, Facebook launched a Digital Literacy Library with interactive lessons and videos for young people; Google, Amazon, and Microsoft offer online digital literacy classes and training; and Netflix partnered with the World Economic Forum to develop the digital skills of Southeast Asian governments and individuals. Governments can do more to increase news literacy as well: Finland offers an inspiring example, with a series of classes teaching residents, students, journalists, and politicians how to think critically online and how to detect and counter fake news.

Third, policymakers should hold those who produce disinformation responsible for their content, just as they would in the offline world. When this involves state-backed foreign actors, as in the case of election interference, it may require international sanctions and other diplomatic responses. Indeed, governments themselves should be careful about their own role in spreading disinformation: A recent study by Cardiff University researchers found that most cases of false, confusing, or misleading information about COVID-19 originated with governments or the media rather than with social media or conspiracy websites.

Sometimes there are disputes over whether certain otherwise lawful content is disinformation. In those situations, platforms should be able to make the final decision about how to enforce their own policies, as long as they do so fairly and transparently. Indeed, many online platforms already detail the types of content they allow, explain how they enforce those policies and how users can report violations, and publish transparency reports on their content moderation decisions. Policymakers can provide input on these policies, including through co-regulatory frameworks such as the EU’s Code of Practice on Disinformation.

Disinformation is a problem online, but it is not an entirely new one. The risk of treating it as new is that policymakers may use it to justify harmful regulations that impose different standards on online speech than on offline speech, diminishing free expression online. Rather than holding the online environment to different standards, policymakers should focus on proven solutions, such as increasing digital and news literacy among the general public and holding those unlawfully producing disinformation responsible for their actions.

