
DC’s Proposed “Stop Discrimination by Algorithms Act” Would Discriminate Against Algorithms

by Daniel Castro

The D.C. Council discussed the Stop Discrimination by Algorithms Act at a hearing this week, legislation that D.C. Attorney General Karl Racine unveiled last December to prohibit organizations from using certain types of data in algorithmic decision-making, with the goal of ensuring that organizations do not use algorithms to discriminate against individuals. While policymakers should take steps to reduce discrimination in society, they should do so directly by enforcing and strengthening existing civil rights laws. By attempting to achieve this indirectly through a broad set of restrictions on algorithmic decision-making, D.C. policymakers risk stifling the use of innovative technologies, hurting both businesses and consumers.

Understanding exactly what the proposed law would do requires a close look at the legislative language. The law would apply broadly to any organization that meets at least one of the following conditions: has personal information on more than 25,000 DC residents; has greater than $15 million in average revenue for the prior three years; is a data broker; or is a service provider that provides algorithmic decision-making to others. The proposed law contains four main provisions. First, it prohibits organizations from using algorithms to discriminate against individuals in certain situations. Second, it requires organizations to disclose how they use personal information in algorithmic decisions. Third, it creates a requirement for organizations to audit their algorithms for discriminatory impacts and report this information to the attorney general. Fourth, it authorizes both the attorney general and individuals to bring civil action against anyone in violation of the law. Each of these provisions is problematic as drafted.

The first provision is the most sweeping. It would prohibit covered entities from using an algorithmic process that uses artificial intelligence (AI) to make decisions based on certain protected traits in a way that impairs access to, or advertising for, “important life opportunities.” The bill defines “important life opportunities” as access to “credit, education, employment, housing, insurance, or a place of public accommodation,” the latter referring to a broad category that encompasses everything from restaurants and hotels to barber shops and bowling alleys. Protected traits include “race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability.”

While the provision is well-intentioned, policymakers do not need to enact AI-specific anti-discrimination laws because existing laws already prohibit discrimination. Using AI does not exempt organizations from adhering to these laws. Rather than pursue duplicative legislation, policymakers should review and clarify how existing anti-discrimination laws apply to AI to ensure organizations comply with the spirit of those laws. At the local level, this could mean clarifying the D.C. Human Rights Act; at the federal level, it could include laws such as Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, and the Pregnancy Discrimination Act. Moreover, if the purpose of the legislation is to prevent discrimination, it should remain narrowly focused on discriminatory actions with adverse effects on individuals, rather than broadly regulating the use of AI for marketing, which would likely have unintended consequences, such as restricting targeted advertising for coding boot camps for women or for faith-based colleges.

The second provision requires organizations to provide detailed documentation on how they use personal information in AI-enabled algorithmic decision-making, including what personal information they collect or use, where they get it, whom they share it with, how they use it, and how long they keep it. All of these details must be explained completely, yet in no more than one printed page, and then provided in English, Spanish, Chinese, Vietnamese, Korean, and Amharic. Organizations must provide this notice to individuals before making any algorithmic decision, as well as a separate notice to individuals if they take any adverse action against them. They must also update their notice within 30 days every time they make a change to their practices.

While transparency can help consumers make more informed decisions, consumers should receive the same level of transparency for automated decisions as for non-automated decisions. If policymakers believe that organizations are making decisions about individuals without sufficient notice, then they should apply disclosure requirements to all organizations regardless of whether they use a computer algorithm or a human process to make decisions. In addition, the proposed law’s notification requirement for adverse actions is not limited to decisions based on protected traits, which means virtually any automated decision could fall under this requirement. For example, a credit card issuer denying a charge that appears fraudulent or an employer rejecting an applicant who does not hold a required credential could trigger this notification obligation.

The third provision requires organizations to have third parties conduct annual audits of their algorithmic decision-making, including reviews for disparate-impact risks, and to create and retain an audit trail for at least five years that documents each type of algorithmic decision-making process, the data used in that process, the data used to train the algorithm, and any test results from evaluating the algorithm, along with the methodology used to test it. Organizations must also provide a detailed report of this information to the D.C. attorney general’s office. This provision places an enormous auditing burden not only on organizations that use algorithms for decision-making, but also on service providers that offer such functionality to others. Many of the auditing requirements would be inappropriate for service providers to report, since they will not necessarily have details about how a particular customer uses their service. Moreover, many businesses and service providers are already struggling to comply with the algorithm auditing requirements in New York City, which apply only to AI systems used in hiring. The audit requirements in the proposed Act would apply to a much broader set of activities and present even more challenges.
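The bill does not prescribe what such an audit must measure, leaving the methodology to organizations and their auditors. As a purely illustrative sketch, and not anything specified in the legislation, the following Python snippet shows one common disparate-impact heuristic, the EEOC’s “four-fifths rule”; all field names and records are hypothetical and made up for the example.

```python
# Illustrative sketch only: the bill does not prescribe an audit methodology,
# and all field names and records below are hypothetical.
# This applies one common disparate-impact heuristic, the EEOC's
# "four-fifths rule": flag a group if its favorable-outcome rate falls
# below 80 percent of the most-favored group's rate.
from collections import defaultdict

def disparate_impact_check(decisions, group_key="group", outcome_key="approved"):
    """decisions: list of dicts, e.g. {"group": "A", "approved": True}."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for record in decisions:
        totals[record[group_key]] += 1
        favorable[record[group_key]] += bool(record[outcome_key])

    rates = {g: favorable[g] / totals[g] for g in totals}
    highest = max(rates.values())
    # Impact ratio: each group's rate relative to the most-favored group.
    flagged = {g: rate / highest for g, rate in rates.items() if rate / highest < 0.8}
    return rates, flagged

# Hypothetical usage with made-up records:
records = (
    [{"group": "A", "approved": True}] * 60 + [{"group": "A", "approved": False}] * 40 +
    [{"group": "B", "approved": True}] * 40 + [{"group": "B", "approved": False}] * 60
)
rates, flagged = disparate_impact_check(records)
print(rates)    # {'A': 0.6, 'B': 0.4}
print(flagged)  # {'B': 0.666...}: below the 0.8 threshold, a potential disparate-impact risk
```

Even a simple check like this suggests why the reporting burden is nontrivial: producing it across every covered decision process would require organizations, and in many cases their service providers, to retain labeled outcome data and demographic information for each process the bill covers.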

The fourth provision establishes enforcement mechanisms for the law. First, it empowers the D.C. attorney general to investigate potential violations and seek fines of not more than $10,000 per violation, plus damages, restitution, and “any other relief that the court considers appropriate.” Second, it creates a private right of action that allows individuals to bring civil suits against organizations that violate the act. The private right of action is particularly problematic because it would likely open the floodgates to frivolous lawsuits, as has occurred in other jurisdictions with similar laws, imposing substantial costs on organizations that are eventually passed on to consumers.

Overall, while the legislation is well-intentioned, it is ultimately misguided and harmful. By imposing a different anti-discrimination standard on organizations that use AI, adding compliance burdens, and exposing these organizations to more liability, the law would effectively discourage many organizations from using AI, especially those serving D.C. residents. Because AI gives organizations many opportunities to reduce costs through automation, discouraging its use will keep consumer prices higher than they need to be. Moreover, because AI can help organizations improve the accuracy of their decisions and reduce human bias in decision-making, the law will likely result in more consumers being denied access to the very “important life opportunities” policymakers are trying to protect.

In today’s digital economy, organizations increasingly use algorithms to automate certain decisions, such as whether to extend credit to a loan applicant or which job applicants appear most qualified for a position. Understandably, policymakers want to prevent discrimination in the digital economy, but the best way to achieve that is to strengthen the enforcement of anti-discrimination laws, not create a regulatory environment that discriminates against the use of algorithms. Moreover, AI offers many opportunities to detect and eliminate human biases, and policymakers should look for more opportunities to use these tools rather than unfairly stigmatizing their use.

Image credit: Katelyn Warner
