Labeling Incorrect AI Output As Deceptive Would Be Misguided Overreach By The FTC

by Morgan Stevens

The Center for Artificial Intelligence and Digital Policy (CAIDP) recently filed a complaint with the Federal Trade Commission (FTC) urging it to investigate OpenAI. Its complaint argues that when GPT-4 produces incorrect information, “for the purpose of the FTC, these outputs should best be understood as ‘deception.’” Further, it echoes a prior FTC blog post about “[AI that can] create or spread deception.” In that post, the FTC warned that it is unlawful to “make, sell, or use a tool that is effectively designed to deceive” and demanded that companies take immediate steps to address the risk. However, labeling false output from AI models as a “deceptive practice” under the FTC Act is misguided for four reasons.

First, incorrect answers are not deception; they are simply mistakes. Search engines sometimes return wrong answers, GPS systems sometimes give incorrect directions, and weather forecasts are sometimes wrong. Unless the FTC plans to label all of those errors “deception,” it should not do the same for erroneous AI output. Moreover, as the poet Alexander Pope famously wrote, “to err is human.” The FTC should not require AI systems to meet a higher standard for accuracy than any other technology or professional.

Second, even if the FTC believes companies have designed some AI systems to deceive others, that is not necessarily something regulators should stop. Many legitimate companies make products designed to deceive someone, including makers of photo editing software, makeup, and magic props. Indeed, many photo filters already incorporate AI. Unless the FTC plans to go after all of these companies as well, it should not arbitrarily target AI companies, especially when their systems do not give incorrect answers to advance any malicious purpose or to cause consumers harm.

Third, the FTC does not have authority under the FTC Act’s prohibition on “deceptive acts or practices” to regulate AI systems in the way CAIDP is advocating. The FTC’s Policy Statement on Deception makes clear that its authority is focused on a “representation, omission, or practice” likely to mislead a consumer, such as inaccurate information in marketing materials or a failure to perform a promised service. It would be entirely reasonable for the FTC to use this authority to investigate an AI company for deceptive claims it has made about its products, but that is very different from using the same authority to investigate the output of that company’s AI systems.

Fourth, such a ruling would frustrate AI development in the United States. No company would be able to bring new AI systems to market if they had to be 100 percent accurate all of the time, because AI systems learn from real-world data that is often flawed. Imagine if the FTC had ruled in 1938 that radio stations would be liable for deception whenever they aired anything false. Since no broadcaster could guarantee that every report was accurate, Americans never would have been able to enjoy news and sports on the radio.

In conclusion, arguments that the FTC should treat GPT-4’s mistakes as unlawful deception are entirely misguided. There are plenty of genuinely deceptive practices in need of the FTC’s attention, but GPT-4’s occasional errors are not among them.

Image credit: Flickr user Emma K Alexandra
