WASHINGTON— In response to an open letter calling for labs to pause the development of advanced AI systems, Daniel Castro, director of the Center for Data Innovation, issued the following statement:
Fears about out-of-control AI are not new, but today’s letter shows that fears about AI are out of control. Comparing large language models (LLMs), like GPT-4, to human cloning and eugenics is both outrageous and unfounded. These critics, many of whom have a long history of parroting doomsday scenarios about artificial general intelligence, offer no evidence to back up their latest claim that advanced LLMs present an unprecedented existential risk.
The sky is not falling, and Skynet is not on the horizon. On the contrary, AI advances have the potential to create enormous social and economic benefits. Rather than hitting pause on the technology, and allowing China to gain an advantage, the United States and its allies should continue to pursue advances in all branches of AI research. Policymakers and others need to understand that the real risk for most people is that AI will not be deployed soon enough, creating lost opportunities to improve healthcare outcomes, address climate change, and reduce injuries and accidents in workplaces and in transportation.
The bigger risk is that lawmakers will remain unaware of the potential magnitude of benefits from AI—not that AI systems will become self-aware.