Innovation Wars: Episode AI – The Techlash Strikes Back

by Hodan Omaar

The White House has been working on an ambitious plan to create a national computing and data resource that addresses the growing needs of AI researchers through public-private partnerships, but members of the Federal Trade Commission's (FTC) advisory group on AI want to halt the effort altogether. Their objections are misguided and represent the latest installment in a problematic trend of inaccurate criticism of technological innovation in general.

The list of grievances from the FTC advisory committee is long but predictable. They pull from the playbook of issues that define the “techlash”—the undercurrent of animus and suspicion that pervades discussions involving tech companies and their technology. They claim the new resource will consume too much energy and accelerate climate change, raise serious privacy and security concerns, increase economic inequality, further entrench big tech monopolies, and fuel the proliferation of inherently biased AI systems.

While some of the issues they raise around data protection and data security are real, and deserve smart, considered responses, the rest of their claims are at best misleading and lack context, and at worst just plain wrong.

Take the assertion that a national AI resource will have massively negative impacts on the global carbon footprint. Such hyperbolic claims are not new; AI has been increasingly attacked for its energy use. The first estimates of the energy and carbon footprint of AI, published in 2019, quickly garnered media buzz, with New Scientist running the headline "Creating an AI can be five times worse for the planet than a car" and S&P Global, a financial services company that provides market analysis, citing the article in a blog post titled "AI's large carbon footprint poses risks for big tech." The critics of the national AI resource claim to be particularly concerned with the environmental impacts of the large AI language models the resource will support, coming from a tech sector that is already "responsible for a global carbon footprint comparable to the aviation industry, and data centers make up 45 percent of this footprint."

While there is some truth to those numbers, by taking them out of all context, critics grossly misrepresent the facts. Across the entire tech sector—including the data centers that store and process data, the transmission networks that transfer data through fixed or mobile networks, and the connected devices such as computers and smartphones that exchange information—ICT accounts for about 1.4 percent of global carbon emissions. Indeed, the ICT industry is one of the few sectors that is "on track" to decarbonize its footprint, and more importantly, digital technologies are key to decarbonizing existing energy sources. Even the European Commission's recent white paper on AI finds that "digital technologies such as AI are a critical enabler for attaining the goals of the Green Deal." The EU is right: powerful AI technologies enable other sectors to become more energy efficient. From powering intelligent transportation systems, to enabling smart grids, to improving city operations and maintenance, AI is already supporting smarter energy use and reducing greenhouse gas emissions. Even the large natural language processing (NLP) models critics disparage are helping researchers understand the solar panel innovation process and identify climate risks and investment opportunities from public company disclosures.

The question should not be whether a national AI resource uses energy, but whether the energy consumption involved generates net-positive societal benefits. After all, the U.S. government accounts for around 1 percent of annual U.S. energy consumption, yet we don't shut it down, because we understand that its function benefits society. A national AI research resource would do the same.

A pragmatic recommendation would be for the government to minimize the resource's energy consumption by prioritizing computationally efficient hardware and algorithms when designing and developing this resource, much like the Massachusetts Green High Performance Computing Center (MGHPCC) has done. MGHPCC, a regional resource for computationally intensive research developed by Boston University, Harvard, MIT, Northeastern, and the University of Massachusetts, has achieved LEED Platinum certification, the highest level the U.S. Green Building Council awards for energy and environmental design. A national AI resource could do even better if it used energy-aware algorithms that report their energy consumption, helping researchers prioritize algorithmic efficiency going forward.

The loudest critique detractors use to justify their "smash the machine" response to a national AI resource, however, is that in order to build computing resources for under-resourced AI researchers, the government will have to buy or license systems from the big tech companies the Biden administration has directed the FTC to rein in. They argue that "by investing money in companies that dominate their market," the White House's plan will further entrench these firms' power. But they fail to identify the relevant market, which in this case is high-performance and AI computing. Here, the firms in question—Amazon, Google, and Microsoft—currently account for only around 10 percent of the total market, because they lead only in cloud service offerings. Their biggest competitors are providers of traditional on-premises products and services, which currently capture at least 90 percent of IT spending. These include processor vendors such as Nvidia, Intel, and AMD; server vendors such as Cray, Huawei, and Fujitsu; and storage vendors such as Dell, HPE, and IBM.

The charge that the White House's plan will entrench Internet companies' power over computing resources and in turn result in an "alarming-but-quiet capture of academic AI research by big tech," as the FTC's senior advisor on AI puts it, would only make sense if the national resource were exclusively an AI cloud. But it won't be. The task force charged with creating a roadmap for the resource has made clear that it intends for the resource to "federate computational resources, embodying a mix of cloud and on-premise resources." Diverse perspectives on what this mix should be are sorely needed to maximize societal welfare, but for those who stand in stubborn opposition to AI progress itself, no mix that includes corporations will be satisfactory.

Creating a national AI resource for research deserves a clear-eyed discussion. Claims that overstate the potential negative impacts of this resource and look at these issues in isolation—without addressing the benefits—don't help. Worse, they throw sand in the gears of progress, which only serves to harm the public good.

Picture credits: flickr user
