Thousands of creators, along with several organisations, signed a public statement declaring that “the unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” However, this statement overlooks three critical points: 1) AI models train on publicly available data in ways consistent with long-established practices of learning from existing works; 2) creators are seeking protection from AI-driven changes instead of adapting to them; and 3) existing copyright protections already safeguard creators against unauthorised reproduction and sale.
First, the statement mistakenly argues that the unlicensed use of creators’ work to train AI models is unjust. There is nothing unjust about training AI models on publicly available data, whether images, text, or audio. When a painting is displayed in a gallery or a song is played on the radio, anyone is free to view it, analyse it, or draw inspiration from it; no licence is required, provided the work is not replicated or resold. Copyright law permits people to view and learn from public art, and it should not impose new hurdles simply because the observer is a machine.
AI models learn from creative works in a way that mirrors human learning and inspiration. Artists, musicians, and writers have always drawn on a wide range of influences, studying past masters to create something new; AI models likewise analyse existing works and generate fresh content based on what they have observed. No one argues that a painter must license every brushstroke inspired by the Renaissance, so policymakers should not impose a different rule for AI.
Second, creators are seeking exemption from AI-driven change rather than embracing it. New technologies have always sparked anxiety in creative fields: 19th-century portrait artists, for instance, feared that photography would threaten their craft. Photography did become the standard for portraits, but it also made them affordable for the masses and inspired a new style of painting, Impressionism. Today, AI offers a similar opportunity: creators can reach wider audiences, streamline their creative processes, reduce costs, and explore new art forms. If technology lowers the cost of creating art, for example, it makes content more accessible to consumers, bringing the benefits of creativity to a broader public.
Third, copyrighted works are already protected by existing laws that prevent unauthorised reproduction, distribution, or sale. Unlicensed sale or distribution of a copyrighted work, whether physical or digital, is a clear infringement. Just as it is unlawful to sell unauthorised duplicates of a copyrighted work, such as a photocopy of a photograph, it is unlawful to duplicate a work using AI. But when developers use vast datasets to train AI models that generate original content in response to prompts, they are relying on a technical process of recognising and predicting patterns, not copying and pasting creators’ work.
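To make the “recognising and predicting patterns” point concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that counts which words follow which in a snippet of text, then samples new sentences from those statistics. The tiny corpus and function names are hypothetical, and real generative models learn vastly richer statistical representations, but the principle is the same: output is generated from learned patterns rather than retrieved copies.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count how often each word is followed by each other word."""
    words = text.split()
    transitions = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)
    return transitions

def generate(transitions, start_word, length=12):
    """Sample a new word sequence from the learned transition statistics."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # predict the next word from observed patterns
        output.append(word)
    return " ".join(output)

# Toy "training data": a few made-up sentences standing in for public text.
corpus = (
    "the artist studies the old masters and the artist paints something new "
    "the musician studies the old songs and the musician writes something new"
)

model = train_bigram_model(corpus)
print(generate(model, "the"))
# Possible output: "the artist studies the old songs and the musician paints something new"
# The sentence is assembled from learned word-to-word statistics, not copied verbatim.
```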
Imposing additional restrictions on AI training is unnecessary and would only hinder the development of AI models. Imagine if the UK government had banned the production of CDs featuring UK music to “protect artists”; UK musicians would have seen limited international reach. Similarly, curbing AI’s access to UK content (as the government is considering) could prevent UK culture, values, and ideas from being represented in global AI models.
Policymakers should prioritise a forward-looking strategy that fosters innovation, expands consumer access to diverse and affordable content, and promotes the nation’s cultural presence in an AI-driven world.