
Antitrust Regulators Should Not Fear “Big AI”

by Daniel Castro

Antitrust is a hot topic these days, especially in tech policy circles, and many antitrust authorities are increasingly flexing their regulatory muscles. Not surprisingly, this increased activity has spurred some people to call for antitrust regulators to take on more issues, such as privacy, that have traditionally been outside their purview. The latest example comes from an op-ed in Wired by Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School of Law and Diplomacy, which argues (in a notable break from the author’s past positions) that U.S. antitrust regulators should preemptively take action against “the growing concentration in AI.”

The biggest problem with his argument is that no company has a monopoly on AI, especially in the United States. Chakravorti argues that large tech companies from both the United States and China “are responsible for $2 of every $3 spent globally on AI.” This may or may not be true—the data he cites is five years old and the market for AI is changing rapidly. However, given that the United States and China are competing fiercely in AI, it would not be too surprising if large U.S. and Chinese companies account for the majority of AI investment.

But Chakravorti makes no distinction between large U.S. companies and Chinese ones. U.S. policymakers should want domestic companies to invest in AI, picking up the slack where federal investment may be lagging. It is hard to imagine the United States would be better off if large tech companies dedicated fewer resources to AI R&D. Indeed, one of the United States' strengths in the global race for AI leadership is that U.S. companies collectively accounted for 64 percent of total global private investment in AI in 2019: a majority, to be sure, but hardly market domination.

Chakravorti also suggests U.S. tech companies have too much power because they are “among the top AI patent holders.” But the facts do not support this claim. Of all AI patents granted between 1976 and 2018, the top 30 U.S. companies held only 29 percent. Moreover, many businesses outside the tech industry, such as Capital One, Bank of America, and Accenture, have been among the top recipients of AI patents in recent years.

Finally, Chakravorti argues that “U.S. AI talent is intensely concentrated” in the five largest U.S. tech companies. Once again, there is no real evidence for this claim, other than the fact that these employers have large AI workforces. But even if true, that alone is not a sign of malfeasance—large and successful companies are likely to hire highly skilled workers. There is no evidence, for example, of widespread predatory hiring practices or other exclusionary behaviors.

While his diagnosis is wrong, Chakravorti’s proposed four-part cure would be even worse.

First, he wants antitrust authorities to imagine the worst and take preemptive action designed to stop “dystopian” scenarios. Doing so would require antitrust regulators to act against companies even when there is no evidence of anticompetitive conduct. This is the precautionary principle, which says that policymakers should treat new technologies as risky until proven otherwise. The problem with embracing it is that while regulators might mitigate some risks, that prevention comes at the expense of economic growth, social progress, and competitive advantage in AI. In other words, regulators may prevent the worst, but they also prevent the best; they throw the baby out with the bathwater. A better approach is to embrace the innovation principle, which holds that the vast majority of innovations benefit society and pose little risk, so policymakers should wait and craft targeted solutions for specific problems if and when they arise.

Second, Chakravorti wants policymakers to use tax policy to push corporate investments away from what he terms “value-destroying” AI and toward “value-enhancing” AI, such as by changing tax policies that incentivize “excessive automation” that replaces labor. Unfortunately, his premise is flawed: AI is simply a tool, like many other technologies, and there is no clean way to separate good AI from bad AI. Moreover, AI that enhances productivity is still beneficial AI, even if it reduces jobs in certain occupations, because it leads to economic growth and increased competitiveness.

Third, he wants antitrust regulators to “scrutinize acquisitions of AI startups by the major tech companies more closely.” Chakravorti offers no details on what exactly closer scrutiny would entail, but presumably it would mean blocking more mergers and acquisitions of AI startups. Doing so would harm the U.S. AI startup ecosystem: acquisition is one of the main exit strategies for startups, and if that option becomes less viable, investors may be unwilling to fund U.S. firms.

Fourth, Chakravorti wants policymakers to “establish a ‘creative commons’ for AI R&D,” including by having antitrust authorities “mandate open IP” for AI patents. But again, this is a poor reading of the facts on the ground. The World Intellectual Property Organization reports that of the top 500 AI patent applicants globally, 167 are universities and public research organizations, meaning a significant amount of AI R&D already occurs outside of businesses. Moreover, the AI community has deep roots in open innovation: the two most popular machine learning libraries, TensorFlow (created by Google) and PyTorch (created by Facebook), are both available as open source.

Having antitrust regulators force companies to share patents has backfired on U.S. competitiveness before. In the 1950s, the Justice Department forced RCA to license its patents, paving the way for the rise of Japanese color TVs, and in the 1970s, the Federal Trade Commission forced Xerox to share its patents, causing it eventually to lose market share to competitors such as Canon and Toshiba. Forcing U.S. companies to share their AI patents with competitors would only make it easier for Chinese competitors to take over the industry.

The hipster antitrust movement has always posed a threat to American AI innovation because many of the targets of its fury are large tech companies, and these companies tend to use AI. But this latest attack shows that some want to go after “Big AI” specifically, even though there is no “Big AI” for regulators to break up.

Image credit: Jackson Simmer/Unsplash
