The Federal Trade Commission’s (FTC) recent investigation into OpenAI, prompted by a minor data breach and a defamation lawsuit, seems driven by anti-tech ideology rather than a measured understanding of the evidence. The FTC’s ham-fisted response is not only disproportionate, but it also appears to misunderstand the inherent nature of generative AI and apply consumer protection laws beyond their intended scope in a manner that will likely stifle innovation in one of America’s most promising digital startups.
The FTC’s decision to investigate OpenAI’s security practices in the wake of a data breach in March 2023 is surprising considering the context. First, the incident affected only 1.2 percent of active ChatGPT Plus subscribers (the company’s paid service), and the breach revealed only partial payment information, not full payment card numbers. Second, the cause of the breach was a bug in a widely used open-source library that OpenAI did not maintain. Even so, OpenAI swiftly identified and patched the bug, thereby enhancing security for every company using this open-source code, and resolved the issue the same day it was discovered. Finally, OpenAI communicated transparently about the limited scope of the breach and its technical details, and it launched a bug bounty program to surface future vulnerabilities, showing that it takes security seriously.
These responses are exactly what regulators should want companies to do in this scenario, so it is concerning that the FTC has decided to subject OpenAI to intense scrutiny over this incident. Data breaches are unfortunately common, yet most do not trigger FTC investigations, so singling out OpenAI appears punitive and inconsistent.
More broadly, the FTC’s investigation into OpenAI’s practices has raised concerns about the Commission’s jurisdiction and role in overseeing AI technologies. The FTC lacks clear and specific oversight authority to govern AI. During a recent hearing, Rep. Dan Bishop (R-NC) questioned the FTC’s legal authority over OpenAI, citing concerns about overreach and noting that libel and defamation are typically state matters. In response, FTC Chair Lina Khan clarified that the focus was not on those issues but on whether the misuse of private information in AI training could be seen as fraud or deception under the FTC Act, emphasizing a broad interpretation of “injury” to consumers. This exchange highlights the murkiness and potential overreach of the FTC’s approach to AI.
While the intention to protect consumers from potential harm is laudable, Rep. Bishop’s questioning reveals that the FTC’s legal authority in this domain is not well-defined. The agency’s expansive interpretation of “injury” and its decision to step into areas typically governed by state laws, such as libel and defamation, raise significant concerns that the FTC is misusing its authority to bring cases against AI companies because of its open hostility to tech companies.
Moreover, the FTC’s investigation of OpenAI looks like a broad fishing expedition for potential wrongdoing rather than a targeted investigation of alleged legal violations. The 20-page civil investigative demand, effectively an administrative subpoena, requests an extraordinary amount of detailed information from the startup. The FTC wants to know everything from what data OpenAI used to create its models and the names and credentials of everyone involved in developing them, to all contracts since 2017 related to its AI models and all public statements about its products. Satisfying many of the FTC’s requests would require substantial effort, on par with writing a detailed technical article, such as the demand that the company “describe in detail the process of retraining a large language model in order to create a substantially new version of the model.” As a result, the ratio of lawyers to engineers at OpenAI and similar AI startups will likely shift significantly in the near future.
Balancing innovation and accountability in AI requires nuance and collaboration, not regulators treating tech companies as adversaries. The FTC’s actions against OpenAI are a mistake, colored more by anti-tech sentiment than by a pragmatic understanding of AI. Rather than burying the company in legal demands and holding the threat of legal action over its head, the FTC should take a more measured approach that protects consumers without sacrificing U.S. leadership in AI innovation.