
Online Platforms Should Be Used as a Tool for Fighting Extremism, Not as a Scapegoat

by Joshua New

In the aftermath of the London Bridge terror attack, British Prime Minister Theresa May criticized Internet platforms for providing a “safe space” for extremism to flourish online, and she called for regulations to force Internet companies to prevent extremists from using their platforms to find an audience. But experts on extremism and radicalization dispute that these platforms should shoulder much of the blame. On the contrary, these platforms have made significant efforts on their own initiative to limit the influence extremists have online, and they have begun developing innovative data-driven tools to combat radicalization. For these efforts to be as effective as possible, the private sector should not have to act alone. Governments have a vital role to play, too: not as heavy-handed regulators, but as providers of the resources necessary to develop and deploy these tools effectively and expeditiously.

There are many reasons why removing all extremist content from the Internet is not as simple as it might sound. First and foremost, it is impossible to reliably identify, filter, and block extremist content without also blocking an unacceptable share of legitimate content. In addition, extremists can repost content faster than authorities can take it down, creating a never-ending game of whack-a-mole.
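
To see why false positives make blanket filtering untenable, consider a back-of-the-envelope sketch in Python. The numbers below are illustrative assumptions, not platform statistics: because extremist material is a tiny fraction of total volume, even a very accurate classifier ends up blocking far more legitimate posts than extremist ones.

```python
# Hypothetical illustration of the base-rate problem in content filtering.
# All figures are assumptions chosen for illustration only.

posts_per_day = 500_000_000      # assumed daily post volume on a large platform
extremist_rate = 0.0001          # assume 0.01% of posts are actually extremist
true_positive_rate = 0.99        # classifier catches 99% of extremist posts
false_positive_rate = 0.01       # and wrongly flags 1% of legitimate posts

extremist_posts = posts_per_day * extremist_rate
legitimate_posts = posts_per_day - extremist_posts

caught = extremist_posts * true_positive_rate
wrongly_blocked = legitimate_posts * false_positive_rate

print(f"Extremist posts caught:   {caught:,.0f}")
print(f"Legitimate posts blocked: {wrongly_blocked:,.0f}")
# Under these assumptions, roughly 5 million legitimate posts are blocked
# every day -- about 100 for every extremist post correctly removed.
```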

Second, countries have different levels of protection for free speech, and national regulators will not be able to develop a global consensus on what kinds of content are permissible online, because there is no single correct answer for everyone. In addition, monitoring and complying with conflicting requirements from many different countries would be prohibitively resource-intensive for most Internet companies. It would also create large jurisdictional challenges. For example, if extremists in Syria use an American technology platform that hosts data in Brazil to spread extremist propaganda to social media users in the United Kingdom, which country’s regulations should the company comply with, and at what stage of the process?

Regulation in this space may even be counterproductive by driving extremists to less visible platforms, such as the so-called “dark web” or private messaging groups, while still leaving many recruitment strategies available to those attempting to radicalize targets. Reducing the visibility of this content also makes it harder for law enforcement to monitor it, and limits opportunities to directly challenge and discredit extremism with effective counterspeech. Finally, regulation forces companies to spend more on compliance, diverting resources from efforts that would likely be more successful at curbing extremism.

Indeed, technology companies are already working to address this challenge. Virtually every online platform prohibits extremist content in its terms of service, and platforms shut down accounts and remove content that violate these rules when they find them. For example, Twitter has banned hundreds of thousands of accounts that promote terrorist acts. Major technology firms, as well as nonprofits and researchers, also have developed promising tools that take advantage of Internet platforms to both limit the impact of extremist content online and prevent future attacks.

In February 2016, Google began piloting a strategy that adapted its AdWords algorithms to detect when users search for extremist content and instead display links to anti-extremism websites and content, such as YouTube channels designed to de-radicalize viewers. In two months, the search engine diverted over 300,000 people to these channels instead of leading them to extremist content. Google has also partnered with Facebook, Twitter, and Microsoft to fight the spread of extremism online by creating a system that can “fingerprint” extremist content whenever it is identified so all four companies can identify and remove it from their respective platforms. Meanwhile, researchers at the nonprofit Counter Extremism Project are developing a similar system that can fingerprint extremist images, videos, and audio and share the information in a clearinghouse for all platforms that want to ensure they are not also hosting this content. And researchers at the University of Miami have developed an algorithm that can monitor changes in social media activity by ISIS sympathizers and detect patterns that indicate an increased likelihood of future terrorist attacks.
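
At a high level, a shared fingerprint clearinghouse works like the sketch below. This is an illustration, not any company’s actual implementation: the function names and in-memory database are hypothetical, and production systems rely on perceptual hashes that survive re-encoding and cropping, whereas the plain SHA-256 digest used here would only catch byte-identical re-uploads.

```python
import hashlib

# Minimal sketch of hash-based content matching across platforms.
# A real clearinghouse would use perceptual hashing and a shared service;
# the names and storage here are hypothetical.

shared_hash_database: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of a media file (here, a SHA-256 digest)."""
    return hashlib.sha256(content).hexdigest()

def report_extremist_content(content: bytes) -> None:
    """One platform flags content; its fingerprint is shared with all others."""
    shared_hash_database.add(fingerprint(content))

def is_known_extremist_content(content: bytes) -> bool:
    """Any participating platform can check an upload against the database."""
    return fingerprint(content) in shared_hash_database

# Example: one platform reports a video; another blocks the same re-upload.
video = b"...raw bytes of a flagged video..."
report_extremist_content(video)
assert is_known_extremist_content(video)
```

The design choice worth noting is that platforms share only fingerprints, not the content itself, which lets each company enforce its own terms of service without redistributing the underlying material.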

While Internet companies undeniably have a valuable role to play in the fight against extremism, suggesting that a law or regulation could deny extremist content a home on their platforms is a naive approach to a very complex problem. As Twitter puts it, there is simply no “magic algorithm” that a platform could deploy to delete extremist content from the Internet. There is, however, much more that Internet companies can do, but only with more support from government. In particular, government should increase research and development funding for data-driven tools that can more effectively identify and respond to dangerous content, and use these online tools to create evidence-based interventions. This should be done in partnership with technology firms to encourage them to continue developing and adopting more and better solutions for themselves and the whole Internet ecosystem. And it will require actively engaging on Internet platforms to strengthen civil society and counteract extremists by offering attractive and persuasive alternatives to their radical ideologies.

Image: UK Department for International Development
