
Fighting Military AI Research Undermines Social and Economic Progress

by Joshua New

In November 2017, an advocacy group called the Campaign to Stop Killer Robots published a short film called “Slaughterbots,” in which autonomous drones developed by the military-industrial complex terrorize and kill innocent civilians. Though compelling, “Slaughterbots” is disingenuous propaganda that stigmatizes incredibly valuable AI research done by defense agencies, research that will have broad social and economic benefits beyond defense applications. Defense research and development (R&D) activities have long played a crucial role in the U.S. innovation ecosystem and are responsible for many widely used technologies today, including the Internet, GPS, and smartphones. Defense agencies’ investments in AI will be no different, but if policymakers succumb to baseless fears that military AI research will lead to a dystopian world full of killer robots, they will set back important AI research poised to deliver many benefits to Americans.

Though the Campaign to Stop Killer Robots and others typically call only for a ban on “fully autonomous weapons,” the issue is not nearly so clear-cut, and such a ban would significantly hinder innovation. The development of any fully autonomous system, whether a self-driving car or a military drone, relies on huge amounts of R&D into its various components: navigation algorithms, computer vision algorithms, facial recognition algorithms, and others, depending on its function. Thus the fear of autonomous weapons, stoked by propaganda like “Slaughterbots” and by pundits hand-wringing about “runaway AI” alike, engenders apprehension about any AI R&D effort that could conceivably be used in an autonomous weapon. Case in point: earlier this year, Google opted not to renew a contract with the Pentagon for Project Maven, an initiative to develop better computer vision algorithms for analyzing drone video, after facing significant backlash because the technology could conceivably be used to help automate drone strikes.

Unfortunately, these concerns overshadow the vast amount of valuable AI research taking place in U.S. defense agencies. The Defense Advanced Research Projects Agency (DARPA) alone is investing heavily in AI R&D efforts that could generate crucial breakthroughs benefiting broad swathes of AI applications beyond military ones. For example, in 2017, DARPA allocated $75 million for its Explainable AI (XAI) program to spur breakthroughs in machine learning techniques that can explain themselves or be more interpretable by humans without sacrificing performance (there can be as-yet inescapable tradeoffs between accuracy and interpretability in advanced machine learning systems). Explainable AI would be enormously beneficial for applications ranging from judicial decision-making to medical diagnostic software, and would alleviate pervasive concerns about the potential for AI to be biased and unfairly discriminate. Just recently, DARPA also announced its Machine Common Sense program, which aims to improve AI’s ability to understand the world and communicate naturally; today’s AI can understand and evaluate only very narrow types of problems that do not require outside knowledge. AI with common sense, which AI expert Oren Etzioni describes as the “holy grail of AI for 35 years or more,” would be an enormous boon to practically every conceivable application of AI, enabling a system to intuit, for example, that solid objects cannot pass through one another. The list of broadly useful AI R&D initiatives at DARPA is long, including using AI to discover new molecules that could lead to new medical treatments, using AI and smartphones to conduct ongoing, passive health monitoring, and using AI to uncover and account for bias in datasets.

These and other projects are part of DARPA’s “AI Next” campaign, a $2 billion initiative to “advance the state-of-the-art” in AI. These R&D efforts will certainly benefit the military’s use of AI, but they will also benefit applications that generate social and economic value in countless ways. Resisting such efforts because of their potential military applications is fundamentally a disagreement about the ethics of defense activities and warfare. Debating how nations should govern and use autonomous weapons has its place in policymaking, but sabotaging important AI research that can serve the public good, as a way of avoiding confronting these issues head-on, is counterproductive and will harm innovation.

There are compelling reasons to pursue the development of AI explicitly for defense purposes, particularly as countries such as China and Russia develop autonomous systems of their own. However, as the policymakers responsible for funding the federal government’s R&D activities evaluate agency budgets, they should recognize that this technology is not just about killer robots: AI research in defense can create immeasurable benefits that the public can broadly enjoy, and shying away from these efforts will leave Americans worse off.

Image: U.S. Department of Defense
