
U.S. Regulators Should Support the Adoption of AI That Addresses Human Bias

by Hodan Omaar

Regulators across the Biden administration announced this week their intention to continue enforcing existing civil rights laws when it comes to AI systems. It is helpful that regulators have affirmed that existing civil rights laws apply to AI systems and that new laws are not necessary to cover this emerging technology. But focusing solely on ensuring organizations don’t start using biased AI systems won’t effectively address social problems. Instead, U.S. regulators should also develop a plan to support AI systems that address bias in existing human-led processes.

It is not surprising that the four agencies involved in the plan—the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ), and the Equal Employment Opportunity Commission (EEOC)—appear to be focusing their enforcement authority on AI. Many of the social problems these agencies are charged with addressing, such as discrimination in employment, housing, and credit opportunities, are what American philosopher Charles West Churchman called “wicked problems” in 1967, meaning they are complex, messy, and long-standing. Policymakers often chip away at these big social problems by focusing on certain facets of them, and because anti-tech rhetoric attracts media attention and support from the populist wings of both political parties, highlighting the technological components of these “wicked” problems is a strategic way for policymakers to get attention they otherwise might not receive.

But AI systems aren’t only capable of perpetuating bias. There are also many ways these systems can help reduce bias and improve access to opportunity, especially for individuals whose opportunities have historically been limited. The problem is that even when algorithms can do good by making existing processes more efficient and equitable for consumers, public backlash and opaque implementations can erode the trust needed for them to achieve impact. U.S. regulators should therefore focus not only on mitigating new bias from AI systems, but also on ensuring the successful adoption of AI systems that help mitigate existing bias in human processes.

The experience of the Boston public school system should serve as a lesson to U.S. regulators for what can happen without policies to support AI systems that help rectify existing inequities. In 2018, the Boston public school system proposed using an algorithmic system to improve school busing in ways that would cut costs by millions of dollars a year, help the environment, and better serve students, teachers, and parents. The district had two aims, the first of which was to cut transportation costs. More than 10 percent of the public school system’s budget goes toward busing children to and from school—the district’s annual per-student busing cost is the second highest in the United States. The district’s second goal was to reconfigure school start times so that high school students could get more sleep, as early school starts for teenagers have been linked to serious problems such as decreased cognitive ability, increased obesity, depression, and increased traffic accidents. Indeed, the American Academy of Pediatrics recommends that teenagers not start their school day before 8:30 AM, but only about 17 percent of U.S. high schools comply.

Boston public school officials engaged researchers from the Massachusetts Institute of Technology (MIT) to build an algorithm to achieve these twin goals, which they did. The Boston Globe called their solution a “marvel.” The algorithm helped the district optimize bus routes, cutting 50 of its 650 school buses, trimming $5 million from the budget, and eliminating 20,000 pounds of carbon emissions each day, all while also optimizing bell times.

Importantly, the algorithm’s solution for bell times addressed inequity. In the past, the district manually staggered start and end times, but its approach predominantly gave wealthier, whiter schools later start times while schools with poorer and minority students disproportionately shouldered earlier ones. In contrast, the algorithm’s solution distributed advantageous start times equally across major racial groups while significantly improving them for students in all of those groups. Under the status quo, white students were the only group with even a plurality (39 percent) enjoying start times in the desirable 8:00 AM to 9:00 AM window, but under the algorithmically determined schedule, a majority of students (54 percent) in every ethnic group would have start times in that window.

Despite everything the algorithm offered, the district had to scrap it in the face of swift and strong public pushback. In her 2019 paper, “The Challenge of Equitable Algorithmic Change,” Rutgers law professor Ellen Goodman describes how disgruntled parents carried signs at a school committee meeting that read “families over algorithms” and “students are not widgets.”

But the algorithm wasn’t really the problem; rather, it was the disruptive change to school schedules that was too much, too fast. Implementing the change meant that some elementary school students had bell times pulled forward from 9:30 AM to 7:15 AM, some families with children of different ages had to manage several different bus schedules, and some high school students who finished school later faced conflicts with their extracurricular activities.

Goodman describes the pushback as a case of “algorithmic scapegoating,” which Cornell researchers explain is a situation in which the algorithm “stood in for substantive issues around equity and disruptive change that were really at stake (though potentially more contentious to discuss) and might well have been at stake even without an algorithm in the picture. The tragedy of the case is that the algorithm could have provided the flexibility to involve the public in choosing among multiple trade-offs. If implemented, it might have created a more equitable system than what existed originally.”

The lesson for U.S. regulatory agencies from this episode is twofold: First, well-designed algorithmic systems can reduce inequality stemming from human decision-making. Second, communities may not adopt these AI systems, even when they stand to benefit from them, if the systems are implemented in a way that does not explain the rationale behind the use of AI or give citizens sufficient room for recourse. There are several ways U.S. regulators can help. For example, the EEOC could help identify and amplify automated tools that reduce bias in employment decisions, and the CFPB could help identify AI tools that reduce bias in lending. The DOJ can use its visibility and platform within the civil rights community to help these tools gain legitimacy. Finally, all of these regulators should tone down the unhelpful rhetoric that depicts AI as a threat to civil rights and instead treat this emerging technology with an even hand.

Image credits: Pexels
