
Proposed Rules on AI Bias Would Undermine DOD’s AI Plans

by Hodan Omaar

The U.S. Department of Defense (DOD) released an ambitious new data strategy this month, describing how the agency plans to leverage data as a strategic asset for military advantage. One of the driving goals of this effort is to accelerate DOD’s adoption of AI. But legislation brewing in Congress threatens to undermine DOD’s AI plans.

An amendment to the 2021 National Defense Authorization Act offered by Representative Yvette Clarke (D-NY) would allow DOD to procure AI systems only if it, or its vendors, evaluates them for discriminatory bias (i.e., systems that produce disparate impact) within one year of acquisition and, prior to deploying the systems, addresses any discriminatory bias. Unfortunately, the amendment misses the mark by mistakenly framing all discriminatory bias in AI systems as bad, potentially curtailing beneficial uses of AI systems, and assuming there is a single reliable solution to algorithmic bias.

While DOD should seek to avoid problematic and pernicious bias, the amendment incorrectly frames all AI systems that have the “potential to perpetuate or introduce discriminatory bias against protected classes of persons, including on the basis of sex, race, age, disability, color, creed, national origin, or religion” as something DOD should avoid. In reality, some biases can be useful components of a sound and ethically desirable system. For example, an autonomous weapons system might have a subroutine that will not allow it to fire at civilians, especially those it recognizes as women and children, in line with the moral norms DOD has laid out in its law of war manual. This AI system would be statistically biased in the sense that its decisions deviate from what a “neutral” algorithm might do, but it would do so in pursuit of an ethical safeguard.

Not being able to invest in useful AI applications would also threaten to put the United States at a strategic disadvantage to China, which has already begun using AI to research and develop intelligent warfighting concepts such as improved war-gaming, and Russia, which is exploiting AI to accelerate warfare tactics ranging from cyberattacks to information operations. Congress needs to ensure it is helping the United States invest in its own innovation in a way that is ethical but also recognizes that the nature of warfare is rapidly changing from one dominated by traditional military strength to one centered on algorithmic and informational superiority.

Further, the amendment’s testing criteria would be impractical for many existing AI applications in DOD mission areas. For instance, DOD can use AI in disaster relief to obtain critical information about the situation on the ground. DOD’s Joint AI Center has already begun looking for industry partners with promising technologies to assist with humanitarian relief, but these new regulations would make it difficult, if not impossible, for the Department to field these technologies, because many of the regions DOD operates in, especially developing countries, suffer from a lack of reliable data, poor data collection, and insufficient monitoring of indicators such as disability, age, and national origin. Ironically, these regulations could end up exacerbating inequity rather than reducing it.

Perhaps most importantly, the amendment fails to recognize that there is no “one size fits all” solution to AI bias. Proper response measures depend deeply on the nature of the bias, the social context in which a system operates, and the moral and legal norms relevant to that context. DOD is uniquely positioned to appreciate the full range of ethical and legal standards in force in a military context, and the amendment is wrong to allow DOD to offload this task to vendors.

For instance, if DOD wants to acquire an autonomous military vehicle, it should be the one to decide how the system distributes risk between the passengers inside the vehicle and the people outside it. In combat situations, passengers’ lives would take precedence, whereas in non-combat situations where civilians may be present, the distribution of risk might be more uniform. In cases like this, where a biased AI system can be beneficial, regulation should facilitate rather than impede its use.

Unfortunately, policymakers cannot solve complex algorithmic bias issues by slipping a few lines into DOD’s procurement rules; doing so would only impede DOD from using beneficial AI systems that have legitimate biases. Instead, Congress should require DOD to create a working group that evaluates the potential benefits and risks of both the U.S. military and potential adversaries using biased AI systems. The working group should draw on a range of experts, including those with military, technical, and private sector backgrounds, to evaluate the direct impact of biased systems and to assess the longer-term potential impacts of AI in military contexts, including on strategic stability and escalation dynamics.
