Event Recap: Accountability in the Algorithmic Economy

by Joshua New

The proliferation of artificial intelligence (AI) has prompted fears that AI will exacerbate human bias, manipulate consumers, and cause other harms. On May 22, 2018, the Center for Data Innovation hosted a panel discussion to examine how policymakers could mitigate these harms and to discuss the Center’s proposal that they reject popular approaches such as algorithmic transparency in favor of a policy framework based on algorithmic accountability.

The Center for Data Innovation’s Joshua New set up the discussion with a presentation of the Center’s new report, How Policymakers Can Foster Algorithmic Accountability. New gave an overview of the main proposals that have been put forth to regulate algorithms, including mandating algorithmic transparency or explainability and establishing new regulatory bodies to oversee algorithmic decision-making. New highlighted flaws in these proposals. For example, mandating that companies use explainable algorithms would effectively prohibit the use of many advanced AI systems, since AI involves tradeoffs between accuracy and explainability, and there are few situations where a less accurate algorithm would be desirable. New also explained the Center’s definition of algorithmic accountability: the principle that an algorithmic system should employ a variety of controls to ensure the operator (the party responsible for deploying an algorithm) can verify that the system acts in accordance with the operator’s intentions, and can identify and rectify harmful outcomes.
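To make that tradeoff concrete, the sketch below compares an interpretable model against a more opaque ensemble on the same classification task. This is a minimal illustration assuming scikit-learn is available; the dataset and model choices are illustrative assumptions, not examples from the report.

```python
# Illustrative sketch (assumes scikit-learn): comparing a shallow,
# human-readable decision tree with a harder-to-explain ensemble on the
# same task. Dataset and model choices are assumptions for demonstration
# only, not examples from the report.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-3 tree: every prediction can be traced to a few explicit rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# A 200-tree forest: typically more accurate, but its reasoning cannot be
# summarized in a handful of human-readable rules.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"shallow tree accuracy:  {tree.score(X_test, y_test):.3f}")
print(f"random forest accuracy: {forest.score(X_test, y_test):.3f}")
```

On tasks like this, the ensemble usually scores at least as well as the shallow tree, which illustrates why a blanket explainability mandate could force operators toward less accurate models.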

Panelists agreed that emphasizing the need for algorithmic accountability helps advance the debate about how to ensure algorithms are beneficial rather than detrimental. Lauren Smith, policy counsel at the Future of Privacy Forum, lamented that discussions about how policymakers should treat algorithms often become mired in confusion over terminology, and said that by laying out the arguments for and against algorithmic transparency and explainability, the report helps move the conversation forward. Neil Chilson, senior fellow for technology and innovation at the Charles Koch Institute and former acting chief technologist at the Federal Trade Commission (FTC), noted that much of the discussion about how to regulate algorithms focuses on creating new laws, whereas a more effective approach would be to ensure that existing laws about discrimination can be applied effectively to algorithms. Chilson also pointed out that the report makes a useful distinction between public- and private-sector uses of algorithms, as many of the market forces that shape how the private sector uses algorithms may not be relevant in the public sector.

Despite the prevalence of concerns about the potential harms of AI, panelists pointed out that technical expertise is often lacking in policy discussions. For example, Frank Torres, director of consumer affairs at Microsoft, stressed just how complex AI can be and explained that efforts to develop high-level principles for ensuring technology is safe often produce guidance that engineers find impossible to operationalize. Smith agreed, stressing the need for policymakers to engage with technologists as they determine how to regulate algorithms.

The panelists also discussed methods operators could employ to help reduce harm. Nicol Turner-Lee, a fellow at Brookings’ Center for Technology Innovation, agreed that it is important to hold operators accountable for their algorithms, but argued that developers should also bear some responsibility for ensuring their systems are free from bias. Turner-Lee explained how implicit bias and bad training data can lead algorithms to cause significant harm, and suggested that software developers could improve their ability to detect and remove bias by encouraging diversity on their development teams. Panelists also discussed other methods for achieving algorithmic accountability, including algorithmic impact assessments, disparate impact analysis, and ethical review boards.
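Of these methods, disparate impact analysis is the most straightforward to express in code. The sketch below is a minimal illustration, not something presented at the panel: it applies the conventional "four-fifths rule," comparing rates of favorable outcomes across groups and flagging the result when the lowest rate falls below 80 percent of the highest. The group names, counts, and threshold are all illustrative assumptions.

```python
# A minimal, illustrative disparate impact check (not from the report or
# panel). Group names, counts, and the 0.8 threshold are assumptions,
# following the conventional "four-fifths rule" used in employment law.

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Given a map of group -> (favorable_count, total_count), return the
    ratio of the lowest favorable-outcome rate to the highest."""
    rates = [favorable / total for favorable, total in outcomes.values()]
    return min(rates) / max(rates)

# Hypothetical loan-approval counts: (approved, total applicants) per group.
approvals = {"group_a": (80, 100), "group_b": (50, 100)}

ratio = disparate_impact_ratio(approvals)
# Under the four-fifths rule, a ratio below 0.8 suggests disparate impact.
status = "potential disparate impact" if ratio < 0.8 else "within threshold"
print(f"ratio = {ratio:.2f}: {status}")  # ratio = 0.62: potential disparate impact
```

An operator could run a check like this on an algorithm's outputs as part of an ongoing audit, which is the kind of control the accountability framework envisions.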

Echoing a key point from the report, Smith pointed out that there is no straightforward checklist for a company that wants to avoid causing harm with algorithms, and that companies need to be flexible and use contextually relevant tools to do so. As policymakers continue to grapple with algorithms, they should recognize that a policy framework built around algorithmic accountability would provide this flexibility and minimize harms without sacrificing the benefits algorithms can offer society and the economy.
