
Jumping on the Bletchley Declaration’s Existential AI Risk Bandwagon Hurts the US and AI

by Daniel Castro

The UK’s AI Safety Summit brought together dozens of leaders from government, industry, and civil society to consider how best to manage risks from recent advancements in artificial intelligence (AI). Its most tangible outcome has been the Bletchley Declaration, a statement signed by 28 countries, which asserts, “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.” The decision by so many governments, especially the United States, to legitimize the belief that AI presents an existential risk that governments must address will seriously undermine efforts to rapidly develop and adopt the technology for beneficial purposes.

Fears that AI presents an existential risk to humanity are not new, but they have gained prominence in recent years as the effective altruism movement (headlined by disgraced FTX founder Sam Bankman-Fried) poured funding into groups focused on evangelizing the need to address the long-term risks of superintelligent AI. For example, earlier this year the Center for AI Safety, which received millions from FTX, released a statement arguing that “mitigating the risk of extinction from AI should be a global priority,” and the Future of Life Institute, funded by Elon Musk, called for a six-month “pause” on AI development for similar reasons. While those statements generated headlines, many top AI researchers firmly reject the merits of these doomsday scenarios and consider such concerns entirely speculative.

Unfortunately, the UK made existential risks from AI a key focus of the summit. The UK invited both the Future of Life Institute and the Center for AI Safety to participate, organized a one-on-one discussion between Elon Musk and UK Prime Minister Rishi Sunak, and placed the risk of unpredictable leaps in AI capabilities or loss of human control over AI systems at the top of the agenda. In addition, in her opening remarks, UK Secretary of State for Science, Innovation and Technology Michelle Donelan compared advancements in AI to the discovery of the ozone hole over Antarctica and called for similar levels of mobilization to “effectively tackle an existential problem.”

For the UK, the AI Safety Summit was a strategic opportunity to carve out a role for itself on the global stage in AI governance. While the UK has important capabilities in AI, the United States, China, and the EU have workforces, consumer markets, capital, and global trade capabilities that dwarf Britain’s medium-sized economy. Regrettably, the UK has opted for an expedient but misguided path by emphasizing its role in preventing existential risks from AI rather than putting its considerable research capabilities and global soft power behind the common-sense, outcomes-oriented, and pro-innovation approach to AI it laid out earlier this year.

It has been clear for some time that the UK has embraced the idea of existential risk, but it is surprising that the United States has now decided to endorse this view too, especially given the lack of consensus about the validity of this risk. Indeed, while the UK prime minister has been somewhat circumspect in his commentary on existential risk (Sunak acknowledged that “some experts think it will never happen at all”), U.S. Vice President Kamala Harris offered no such caveats when she bluntly warned in her speech at the summit that the “existential threats of AI…could endanger the very existence of humanity.”

For the United States, the AI Safety Summit was a missed opportunity to lead and shape global discussions on AI safety, a key principle in the White House’s new executive order on AI. AI safety itself is not very controversial. Virtually all stakeholders agree that there should be some guardrails for AI and that safety considerations should be embedded in AI development. The question is what to do about it. The United States should be focused on recruiting other countries to accept the standards and best practices it is developing in partnership with the private sector to support AI safety. In particular, it should be seeking international recognition of the newly launched U.S. AI Safety Institute, which could create common definitions for AI terminology like “frontier models,” guidelines for conducting red teaming, and methodologies for testing, evaluating, verifying, and validating AI models. By equating conversations about AI safety with speculative concerns about existential risks, the United States squandered an opportunity to secure productive, concrete outcomes from the summit. And the United States has the most to lose: the vast majority of the countries that signed the Bletchley Declaration have no “frontier” AI companies that will be affected by future rules and regulations.

Labeling AI an existential threat will seriously undermine support for the deployment and adoption of the technology. Why would policymakers fund advancements in AI if doing so might bring the world closer to global catastrophe? Moreover, while the Bletchley Declaration paid lip service to the benefits of AI, the simple fact remains that policymakers are more interested in minimizing the risks of AI than in maximizing its benefits. There has been no similarly high-level global summit bringing world leaders together to discuss how to ensure AI improves health outcomes, provides people with a quality education, or advances sustainability. The fact that it is easier for policymakers to envision a world where machines destroy humanity than one where they feed the poor says more about our global politics than it does about the risks of AI.

Luckily, policymakers have a chance to do better next time. South Korea has already committed to hosting the next summit, and France will host the one after that. As policymakers plan these future meetings, they should consider how to ensure their safety initiatives do not impede the rapid adoption of beneficial uses of AI. One can hope they might even go so far as to focus on how to accelerate AI innovation and adoption.

Image source: Center for Data Innovation
