The EU is considering placing generative artificial intelligence (AI) tools, such as ChatGPT, in a “high risk” category in its upcoming AI bill, thereby subjecting such tools to burdensome compliance requirements. This sloppy addition needlessly stunts creativity and shows the EU is hitting the panic button instead of carefully considering the benefits and risks of new technologies.
The AI Act targets so-called "high risk" applications of AI, including those used in public services, law enforcement, and judicial procedures, which must comply with the strictest requirements: conformity assessments, technical documentation, monitoring, and oversight measures. A new proposal would dump AI systems that generate complex text (chatbots) into this high-risk category despite their low risk. AI-powered chatbots can generate complex text from limited human input and fulfill various functions, from writing recipes, poems, scripts, and articles to Internet searches, creative ideation, and summarizing texts. Like many new technologies, AI chatbots have evoked familiar panic: doomsayers prophesy that such tools will destroy education, create catastrophic redundancies, confuse and control the masses, or become sentient (and sad about it).
Where there are plausible concerns about chatbots, such as the spread of misinformation or toxic content, legislators should address those risks in sectoral legislation, such as the Digital Services Act, which obliges platforms and search engines to tackle misinformation and harmful content. They should not, as proposed, regulate in a way that entirely ignores the different risk profiles of different use cases. For example, there is nothing wrong with a generative AI system writing fictitious content for part of a novel; it is a problem if it writes fictitious content for a scientific journal.
Instead of carefully weighing these risks, legislators have succumbed to the latest panic and proposed to penalize products that are clearly beneficial to EU citizens. In addition to ChatGPT, which people already use for a range of valuable functions, this amendment would carelessly classify as "high risk" other helpful and harmless tools, including:
- Grammarly, a text-prediction and correction tool
- Prose Media, a marketing and creative content tool
- Speechmate, a speech-writing tool
- GitHub Copilot, a code generation tool
- Bloomberg’s Brief Analyzer, a brief-summarization tool used by litigators
By absurdly classifying the above use cases as "high risk," the AI Act would curb productivity and creativity. Worse, it would limit access to these tools, many of which are currently free to use, by subjecting them to expensive compliance requirements.
The biggest concern right now is not that chatbots are spewing lies, but that critics are spewing lies about chatbots.