
Worried About Bias in AI? Worry About Humans Instead

by Joshua New

In October 2018, news broke that Amazon had developed a machine learning system to vet job applicants that was inadvertently biased against women. Commentators seized on it as a scandal, writing dramatic headlines such as “Amazon Created a Hiring Tool Using AI. It Immediately Started Discriminating Against Women.” However, Amazon had actually scrapped the program in 2017 after first attempting to control for this bias, and Amazon recruiters never relied solely on the system to evaluate candidates. Concerns about bias in hiring are understandable, but this story was incorrectly presented as a scandal when it was, in fact, a success story of responsible oversight.

Bad training data can introduce bias into AI systems. For example, when a facial recognition system misidentifies black female faces at a higher rate than white male faces, it is likely due in part to the developers including too few black female faces in the data used to train and validate their models, causing the models to underperform for this group. In the case of Amazon’s job applicant screening system, its developers trained the system on patterns in 10 years of resumes submitted to the company. Because those historical patterns showed that men were more likely to be hired, the system learned to associate phrases indicating attendance at an all-women’s college or participation in women’s groups with less competitive applications.
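To make the mechanics concrete, consider a minimal sketch in Python. The data here is entirely synthetic, not any real vendor’s pipeline: it simply shows why this kind of bias hides in aggregate metrics, as a model that errs far more often on an underrepresented group can still post a respectable overall error rate, and only a per-group breakdown exposes the gap.

```python
# A minimal sketch of a disaggregated evaluation, assuming synthetic
# labels and predictions; illustrative only, not any vendor's system.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical evaluation set: a group attribute, true labels, and
# model predictions for each example.
groups = rng.choice(["group_a", "group_b"], size=n, p=[0.9, 0.1])
y_true = rng.integers(0, 2, size=n)

# Simulate a model that errs more often on the underrepresented group,
# as can happen when that group is scarce in the training data.
per_example_error = np.where(groups == "group_a", 0.05, 0.20)
flip = rng.random(n) < per_example_error
y_pred = np.where(flip, 1 - y_true, y_true)

# The aggregate metric looks healthy...
print(f"overall error rate: {np.mean(y_pred != y_true):.3f}")

# ...but breaking it out by group exposes the disparity.
for g in ("group_a", "group_b"):
    mask = groups == g
    print(f"{g}: error rate {np.mean(y_pred[mask] != y_true[mask]):.3f}")
```

Running this comparison routinely is exactly the kind of check that is tedious in human-led processes but nearly free in automated ones.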

The difficulty of building an unbiased screening system for job applicants demonstrates one of the pitfalls of using historical data that reflects human bias and broader systemic causes of discrimination. For example, people often exhibit unconscious bias by associating groups with certain concepts, such as “black man” and “athletic.” Not surprisingly, when AI systems learn language associations from social media or news articles, they develop similarly biased associations. Training machine learning systems to learn by association can reveal the extent to which this implicit bias permeates society. In 2017, a computer science researcher trained an AI system on billions of words of English-language content, ranging from social media threads to the Declaration of Independence, and found that the system exhibited biased associations at rates very similar to those of humans. In short, training AI systems like humans makes them biased like humans.
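The measurement behind such studies can be illustrated with a small sketch. The one below uses tiny hand-made word vectors as stand-ins for embeddings trained on billions of words; the words, vectors, and scores are all hypothetical, but the technique, comparing cosine similarities between a word and two reference concepts, is the standard way such associations are quantified.

```python
# A toy sketch of measuring learned associations, assuming hand-made
# 3-dimensional "embeddings"; real systems learn vectors with hundreds
# of dimensions from billions of words. All numbers are hypothetical.
import numpy as np

def cosine(u, v):
    # Cosine similarity: how closely two word vectors point the same way.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

# If the training text pairs an occupation with one gender more often,
# the learned geometry reflects it: the word sits closer to that gender.
for word in ("engineer", "nurse"):
    score = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she association {score:+.3f}")
```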

But this should be a cause for optimism. The status quo of human decision-making is rife with both deliberate and unconscious bias. This is, after all, the reason anti-discrimination laws exist in the first place. And while it is unlikely that hiring managers ever intentionally penalized applicants for attending all-women’s colleges, an AI system would learn to correlate that factor with an unsuccessful application if applicants with that trait were less likely to be hired. Fortunately, automating the traditionally human-led process of hiring enables the operators of these systems to evaluate their decisions in ways they likely never did before, and with far less effort. This means companies can use AI to more aggressively identify and root out discriminatory practices.
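As an illustration of how cheap such an audit becomes once decisions are logged, the sketch below computes selection rates by group from a handful of hypothetical screening records and applies the “four-fifths” rule of thumb from US employment guidance. The records are invented for illustration, and the check is a screening heuristic, not a legal determination.

```python
# A minimal sketch of an automated hiring audit, assuming hypothetical
# logged screening decisions; the four-fifths comparison is a rule of
# thumb from US employment guidance, not a legal test.
decisions = [
    {"group": "women", "advanced": True},
    {"group": "women", "advanced": False},
    {"group": "women", "advanced": False},
    {"group": "men",   "advanced": True},
    {"group": "men",   "advanced": True},
    {"group": "men",   "advanced": False},
]

def selection_rate(records, group):
    # Fraction of applicants in `group` who advanced to the next stage.
    subset = [r for r in records if r["group"] == group]
    return sum(r["advanced"] for r in subset) / len(subset)

rates = {g: selection_rate(decisions, g) for g in ("women", "men")}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)
flag = "  <- below 0.8, worth reviewing" if impact_ratio < 0.8 else ""
print(f"impact ratio: {impact_ratio:.2f}{flag}")
```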

Pundits frequently lament that companies will recklessly deploy AI and appeal to the perceived neutrality of the algorithm to maximize profits at the expense of societal good. However, no matter how loudly commentators argue this point, algorithms do not operate in a vacuum and are intrinsically and inescapably linked to their operators. If a company values non-discrimination in employment, it will take steps to ensure it does not rely on algorithms to make biased hiring decisions. If a company does not value non-discrimination highly, it will not promote it, regardless of whether it uses AI.

Because bias is virtually inescapable in human decision-making, substituting AI for humans can be an important way of reducing discrimination and creating a fairer society. But portraying Amazon’s decision not to use a biased AI system as a scandal, rather than as an example of good governance, will limit these opportunities by conflating those who use AI responsibly with those who do not.

