WASHINGTON—In response to the joint statement from the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) committing to apply their respective laws and regulations to AI systems, Hodan Omaar, senior policy analyst at the Center for Data Innovation, issued the following statement:
The FTC, DOJ, CFPB, and EEOC are right about one thing: They don’t need to create new non-discrimination and civil rights laws for AI systems—instead, they can apply the laws that already exist to this emerging technology. But they should make sure to apply their enforcement authorities equally to both human decision-making and automated decision-making.
There is a real risk that these agencies will focus disproportionately on bias and discrimination from automated systems, diverting attention from the root causes of unfairness and narrowing the scope of what policymakers can change. For instance, there are widespread calls to ban AI-enabled risk assessment tools that help decide whether an accused person should be allowed bail by predicting the likelihood they will miss a future appointment related to their case. But the underlying social problem, which is that a person's ability to leave jail and return home to fight the charges depends on their access to resources, is not one that AI created, nor is it one that transparency over algorithms and data alone will solve.
These agencies have taken the right first step in acknowledging that they already have the necessary tools to address potential AI bias. Now they need to ensure they use those tools fairly and appropriately.