
Pretending Algorithms Have an Anti-Conservative Bias is Dangerous

by Joshua New

The nerds apparently have it out for conservatives. In late August, President Trump tweeted that Google was suppressing news stories from right-leaning publications in its search results. And at a hearing this week, some members of the House Energy and Commerce Committee blasted Twitter CEO Jack Dorsey over baseless allegations that Twitter has an anti-conservative bias. Specifically, some policymakers have falsely alleged that social media platforms suppress conservative viewpoints by unfairly blocking or “shadowbanning” conservative users (shadowbanning is when platforms allow users to post but significantly limit their visibility to others). These accusations, while good for making political hay, engender support for policies to regulate algorithms in ways that would harm consumers, businesses, and democratic values alike.

Complaints that Facebook, Google, and Twitter have an anti-conservative bias are flatly wrong, and the few attempts to provide evidence for this claim have been shown to lack sufficient data or rely on flawed analysis (see 1, 2, 3). Yet despite the absence of evidence, 65 percent of self-described conservatives believe social media companies are censoring conservatives and conservative ideas, according to a poll from the conservative Media Research Center. The popularity of this belief seems impervious to both evidence and common sense.

If enough lawmakers believe there is bias, they will likely enact regulations that force private companies to alter their algorithms to appease those who are convinced the algorithms are discriminatory. For example, Federal Communications Commission Chairman Ajit Pai argued as much this week, lamenting that the way digital platforms make decisions about how they present and moderate content is opaque, which requires policymakers to “seriously think about whether the time has come for these companies to abide by new transparency obligations.” Ironically, liberal commentators such as Cathy O’Neil, who regularly rail against algorithms as racist and evil, have helped pave the way for conservatives to take up this call and demand “fair” treatment on social media platforms.

Trump’s tweets about the unbalanced ratio of left-leaning and right-leaning publications in Google search results imply that he does not believe a platform could show such a disparity in search results and still be fair or accurate. The logical conclusion of this mindset would be to require platforms to make their algorithms consider political leaning when presenting content or returning search results to provide a more even balance, rather than the factors their users actually value and that make these services useful, such as timeliness, relevance, or accuracy. Following Congressional testimony this week from Twitter and Facebook executives, the Department of Justice (DOJ) issued a statement saying it was convening state attorneys general to discuss concerns that these platforms are “intentionally stifling the free exchange of ideas on their platforms.” It is wildly inappropriate for the federal government to use the threat of law enforcement to prevent private businesses from exercising their right to determine what types of legal speech they permit on their platforms. Even if platforms were to exhibit political bias, whether as the result of algorithmic bias or deliberate efforts to elevate certain political viewpoints—which, to be clear, is not the case—it would be their right to do so. Moreover, competitive pressure strongly incentivizes these platforms to provide services that do not exhibit political bias, making the DOJ’s concern moot.

There is good reason to believe that these arguments are often made in bad faith. James Pethokoukis at the conservative-leaning American Enterprise Institute says that the issue of bias on social media platforms has become emotional and political, noting that some right-leaning policymakers are turning this into an “emotional wedge issue” rather than actually making sound arguments for regulation. And Andrew Schwartzman, a lecturer at Georgetown University Law Center, described the September 5 hearing as “totally politics.”

Regardless, these beliefs can have serious consequences for algorithm-driven platforms that will hurt, not help, Americans. Forcing platforms to consider political leanings in their search results would make their services less valuable to users. And preventing private companies from exercising their right to determine what kind of content to display on their platforms is an unjustifiable government overreach. Politicians concerned about the potential challenges algorithms pose to society should use their time more productively, such as by encouraging algorithmic accountability, rather than proposing bad solutions to made-up problems.

Image: C-SPAN
