
Canada’s Reasons for An AI Law Do Not Stand Up to Scrutiny

by Daniel Castro

The Canadian government recently released a “companion document” providing more background on the Artificial Intelligence and Data Act (AIDA), its proposed legislation to regulate AI systems. While the government is still vague about the details of the new law, it clearly outlines its reasons for regulating AI. Unfortunately, every one of those reasons rests on flawed logic or factual errors. If policymakers have such a poor understanding of the evidence, then it would be prudent to halt the rush to regulate.

Like the European Union, with its proposed AI Act, Canada hopes to create “a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses.” The proposal has three main components: 1) imposing certain requirements on high-impact AI systems; 2) establishing a new AI and Data Commissioner responsible for enforcing the law; and 3) prohibiting certain harmful uses of AI. But the proposal lacks detail on the specifics, such as the criteria that will be used to determine what qualifies as a high-impact AI system or what requirements those systems would be subject to.

While lacking specifics on the “how,” the AIDA companion document is clearer about the “why.” In a section of the document titled “Why now is the time for a responsible AI framework in Canada,” it says that “it is difficult for consumers to trust the technology” and cites three examples of alleged “high-profile incidents of harmful or discriminatory outcomes.” But none of these examples is valid.

First on the list is “A resume screening AI system used by a large multinational company to shortlist candidates for interviews was found to discriminate against women.” This example refers to a well-known news report that Amazon experimented with a hiring tool to rate candidates for technical jobs. Amazon’s developers identified that the tool penalized women and discontinued the project in 2017, and during the experiment Amazon’s recruiters never used the tool to evaluate actual applicants. In other words, the company did exactly what policymakers should want: it tested an AI tool, detected problems, and mitigated harm by stopping the project. Creating a new AI law would not have improved that outcome. In addition, Canada’s gender equality laws already prohibit workplace discrimination, and those protections apply even when employers use AI in hiring.

The second example is “An analysis of well-known facial recognition systems showed evidence of bias against women and people of color.” But the now five-year-old study cited is not about facial recognition—technology used to match similar faces, either by searching for similar images in a database (i.e., one-to-many matching) or by confirming whether two images show the same person (one-to-one matching). Instead, it is about facial analysis—technology used to infer characteristics such as age, gender, or emotion from a photo. Specifically, the study examined whether three commercial facial analysis systems could correctly predict gender for both light- and dark-skinned individuals. The two technologies may sound similar, but they are as different as apple trees and apple sauce. Moreover, recent testing by the National Institute of Standards and Technology (NIST) shows that the best facial recognition algorithms have “undetectable” differences across demographics. So here again, the evidence falls flat.

The third and final example used to justify regulating AI is that “AI systems have been used to create ‘deepfake’ images, audio, and video that can cause harm to individuals.” The issue is legitimate, although not novel: software has long made it possible to digitally create “fake” images, but deepfake technology makes it much easier for anyone to produce realistic fake images and video without much technical expertise. While there are concerns about deepfakes as a source of disinformation, particularly in elections and global affairs, and about infringement of celebrities’ publicity rights, their most visible impact is on individuals, particularly women and celebrities, who have fake pornographic images created of them. But AIDA would not address this problem, as the code to produce hyper-realistic images and video—whether legitimate or harmful—is open source and publicly available (and the AIDA explicitly, and rightly, does not attempt to regulate the distribution of open-source software). Instead, this problem should be addressed with legislation updating Canada’s revenge porn law to prohibit the nonconsensual distribution of deepfakes as well.

The AIDA seems to be premised on the assumption that stronger technology regulation increases consumer trust, and that higher levels of consumer trust will lead to more technology use. But, as past research has shown, there is little evidence to back up that claim. Indeed, fears that a lack of consumer trust may hold back AI adoption appear to be pure conjecture—ChatGPT gained 100 million users in two months, crushing all past records of consumer adoption of a new app.

Given that the government does not appear to understand its own evidence for regulating AI, it should pump the brakes on its aggressive pursuit of new regulations. Before creating an expansive regulatory framework for what promises to be a fundamental technology in the Canadian economy, it should better understand where the real risks exist—including the risk from overregulation—to ensure its rules are effective and avoid unintended consequences.

Image Credit: Jason Hafso on Unsplash
