Lately, the news cycle has included a series of ever-shifting claims about what impact generative AI systems will have on jobs. The purported impact varies wildly from outlet to outlet, but the central message from the news media is clear: AI is here to take almost all the jobs, not just the blue-collar ones but the white-collar ones too. The problem is that many of these claims are hokum. Let’s unpack one of the worst recent examples: Vice Media’s article “OpenAI Research Says 80% of U.S. Workers’ Jobs Will Be Impacted by GPT.”
Vice leads with a sound-bite statistic. Its headline is eye-catching, emotionally resonant, and easily repeatable, but it is narrowly true and broadly misleading. The statistic comes from a research paper by OpenAI, but the paper doesn’t simply say 80 percent of jobs will be impacted; it says “around 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected.” That means the real statistic is that large language models (LLMs) might impact at least 8 percent of work in the U.S. economy, a significantly less dramatic picture of the research findings but a more honest one.
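The arithmetic behind that figure is a simple back-of-the-envelope calculation (a rough sketch that treats work tasks as evenly distributed across workers, a simplifying assumption not spelled out in the paper):

\[
0.80 \;(\text{share of workers}) \times 0.10 \;(\text{minimum share of their tasks}) = 0.08 \;\Longrightarrow\; \text{at least } 8\% \text{ of U.S. work tasks}
\]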
If the statistic in Vice’s headline is a bullhorn for impending job disruption, though, the words it uses are a dog whistle. Rather than saying jobs “could be impacted,” it says jobs “will be impacted,” subtly suggesting that the impact on jobs is urgent and certain, which the OpenAI researchers themselves say is by no means the case. The ability of firms in different segments of the economy to adopt LLMs hinges on myriad factors, such as the availability of data for training the models, the regulatory environment, and the innovation culture within businesses, to name just a few.
More importantly, the fact that a job is exposed to LLM automation “doesn’t determine whether the technology is likely to replace workers or merely augment their skills,” as the New York Times more accurately reports. Tools like ChatGPT might be able to draft a legal document in half the time a human legal secretary can, but that doesn’t necessarily mean law firms can or should replace their staff with LLMs, because these tools are still at a stage where they can misrepresent key facts and cite evidence that doesn’t exist. They still need humans to check and verify their outputs.
Unfortunately, Vice is not the only outlet exhibiting this vice (pun intended). Forbes recently published a hyperbolic article titled “Goldman Sachs Predicts 300 Million Jobs Will Be Lost Or Degraded By Artificial Intelligence,” based on a report from the investment firm. The report takes U.S. and EU data on job tasks, identifies the ones generative AI systems can do, and calculates exposure. The 300 million jobs figure it arrives at is arguably presumptuous because it is an extrapolation of the number of jobs around the entire world that might one day be exposed to generative AI automation (when that day is supposed to arrive isn’t entirely clear from the report). Still, Forbes has no qualms about spinning this potential “exposure” metric into a spurious narrative about seemingly certain job displacement. Several other outlets fall into the same trap.
Granted, conveying facts without any editorial color isn’t very well suited to the modern media landscape. The media has an important role to play in keeping the public informed in a way that is engaging and accessible, but there is a line between using data to tell a story and statistical spin-doctoring. Consider that Goldman Sachs’ own blog summarizing its research is titled “Generative AI could raise global GDP by 7%,” because one of its central messages is that generative AI may displace some jobs but can also create new ones, which history has shown are typically higher-paying jobs in the same industry. The Financial Times portrays this message more faithfully, noting that “Goldman’s base case is for 7 percent of workers to lose their jobs entirely in the decade after generative AI reaches half of employers, but most will find something nearly as productive to do.” In fact, since many of the jobs that might be substituted involve mundane, repetitive tasks, there is no reason many displaced workers would not find new jobs that are more creative and more rewarding.
It is easier for reporters to write these attention-grabbing headlines than to dig into real substance, but they should look at the facts. Concerns about AI taking jobs rest on the “lump of labor fallacy,” the idea that there is a fixed amount of work, so that productivity growth, such as from automation, necessarily reduces the number of jobs. But the data tells a different story: labor productivity has grown steadily for the past century (even if that growth has slowed recently), and unemployment is near an all-time low.
It is becoming more difficult to wade through the hogwash of claims about AI, but if readers, and more importantly policymakers, aren’t prudent, they will make decisions based on unfounded fears or hype. Worse, such balderdash can distract from the real issues related to AI systems: how to spur their safe development and responsible use.
Image credits: OpenAI’s DALL-E-2