Event Recap: How Society Can Use Algorithms to Make Better Choices

by Joshua New
Algorithmic decision-making panel

Algorithms—software designed to solve complex problems—can generate substantial social and economic benefits. Unfortunately, people fear what they do not understand, and right now, says Bob Sutor, vice president of business solutions and mathematical sciences at IBM Research, “most people don’t have the vaguest idea of what an algorithm is.” At the Center for Data Innovation’s November 17 panel discussion, Deciding with Data: How Society Can Use Algorithms to Make Better Choices, Sutor and others said that this lack of understanding has led to inflated concerns about algorithms, which can slow the adoption of technology and increase the potential for detrimental regulation. Though algorithmic decision-making does carry some risk of unintended outcomes, the panelists concluded that the public and private sectors are capable of designing and deploying algorithms responsibly, minimizing these risks and creating opportunities for society to make smarter decisions.

As the world becomes increasingly data-driven, algorithms can make decisions automatically to improve outcomes: delivery companies can optimize routes to make deliveries faster and more efficient, police departments can predict which hotspots to patrol to reduce crime, and financial services companies can automate fraud detection to prevent unauthorized payments. But because computers, rather than humans, are making these decisions, some policymakers have been quick to caution against such practices out of fear that algorithms will be used to unfairly harm consumers or discriminate against a particular group. However, this fear is unwarranted. As Sutor explained, “An algorithm is just a recipe for doing something,” and like a recipe, an algorithm can be simple or complex, static or dynamic, and executed extremely well or extremely poorly.

While algorithms can be used inappropriately, Courtney Bowman, co-director of Privacy and Civil Liberties Engineering at Palantir Technologies, explained that algorithmic decision-making does not have to come at the cost of privacy and civil liberties, as some may fear, and that the negative perception of algorithms likely stems from hypothetical worst-case scenarios or even purely fictitious examples. Madeleine Elish, a researcher at Data & Society’s Intelligence and Autonomy Initiative, suggested that some resistance to algorithmic decision-making is tied to workers’ fears that increasing automation may put their jobs at risk, even though many of the benefits of algorithmic decision-making come from algorithms helping humans do their jobs better rather than simply replacing them. And when it comes to the use of algorithms in policing, many discussions are dominated by allusions to the movie Minority Report, a dystopian sci-fi thriller in which the police arrest suspects before they ever commit a crime. However, as Bowman explained, due process protections are not negated just because police can use algorithms to make decisions.

A more productive way to think about algorithmic decision-making, panelists agreed, is to recognize that while algorithms are not inherently good or bad, they are also not inherently neutral. Algorithmic decision-making can be risky, Bowman explained, when the actor deploying an algorithm is acting irresponsibly. For example, if a company repurposes an algorithm for an application it was not specifically designed for without adapting it appropriately, it may produce unintended and unforeseen detrimental outcomes. Or, if a company fails to account for potential bias in historical data used to train an algorithm, the algorithm might replicate biases inherent in the human decision-making that generated that historical data.

To minimize these risks without sacrificing valuable opportunities to use algorithms, companies can adhere to what U.S. Federal Trade Commissioner Terrell McSweeny calls “responsibility by design.” Principles of responsibility by design, as defined by the panel, include: regularly monitoring algorithm-driven processes to ensure algorithms are producing their intended outcomes; avoiding overbroad applications of algorithms designed for specific functions; developing a strong understanding of how a particular algorithm works to ensure that it is acting appropriately; and designing algorithms robustly enough to account for potential bias in training data.

However, while responsibility by design is a promising strategy to ensure that algorithms are used to benefit society, rather than harm it, rules requiring adherence to certain principles of responsibility by design could have unintended consequences given the wide variety of applications of algorithmic decision-making. For example, as Elish explained, 80 percent certainty that an algorithm is acting as intended could be perfectly acceptable for a marketing decision, whereas such a relatively low level of certainty would be incredibly irresponsible for an algorithm that decides how a self-driving car avoids pedestrians.

Overall, the panelists expressed much enthusiasm for the benefits that algorithmic decision-making could generate, and they were optimistic that algorithms could be designed and deployed responsibly. As new opportunities emerge for the public and private sectors to deploy algorithms to make smarter decisions, policymakers and the public alike should be careful to avoid letting misconceptions about algorithmic decision-making limit opportunities for it to improve society.
