
Policymakers Should Use the SETI Model to Prepare for AI Doomsday Scenarios

by Daniel Castro
Allen Telescope Array

Will technological advances create superintelligent artificial intelligence (AI)—also known as artificial general intelligence (AGI)—that threatens humanity’s existence? Many experts remain skeptical that AGI is on the horizon, and among those who believe it will arrive, many are optimistic that AI safety methods will mitigate any potential hazards. But enough prominent figures in the field have warned of catastrophic risk from AGI that policymakers feel compelled to act.

Unfortunately, given the fear-driven rhetoric around AI, policymakers may overreact to this hypothetical scenario and enact measures that stall beneficial AI innovations that could improve human welfare. Indeed, some activists famously demanded a six-month worldwide pause on AI research in response to these fears, and many other ideas have followed. Most notably, some have proposed an Intergovernmental Panel on AI Safety, modeled after the Intergovernmental Panel on Climate Change (IPCC), to give policymakers “evidence-based predictions about what’s coming.” But research on AGI is much less mature than research on climate change and, as noted previously, there is little consensus on these issues, so a global panel is unlikely to offer clear and timely advice to policymakers. Similarly, some have proposed an International Atomic Energy Agency (IAEA) for AI, which unfairly equates AI with nuclear weapons and would likely lead to significant regulatory interventions to restrict its development and use. The bigger problem is that these proposals put the cart before the horse: they focus on monitoring and managing AGI safety before anyone even knows whether developing AGI is possible.

Fortunately, a better model exists for the current stage of AI development. As the Center for Data Innovation has written previously, the risk of dangerous superintelligent AI wiping out human civilization is very similar to the risk of hostile superintelligent aliens invading our planet. Both risks are hypothetical, unprovable, and potentially catastrophic. Moreover, there are true believers on both issues as well as those who are skeptical or agnostic, and yet the risk of a future alien invasion does not paralyze policymaking around issues like radio signals, space exploration, or national defense.

How policymakers and researchers have chosen to respond to this risk is instructive. Since the 1950s, scientists have actively worked to find proof of the existence of intelligent life elsewhere in the universe. They have sought consensus on where and how to look for alien life and have designed experiments to test their hypotheses. They have also conducted this work in a challenging geopolitical environment, while the United States and the Soviet Union were locked in the Cold War and competing for technological superiority in the space race. Yet policymakers and researchers pressed on, and the work culminated in the formation of the Search for Extraterrestrial Intelligence (SETI) Institute in 1984, funded by NASA as well as private donations, which has served as a hub for this research for nearly four decades.

Policymakers and researchers should pursue a similar model for AI and establish a Search for Artificial General Intelligence (SAGI) Institute focused on identifying advanced machine intelligence. Its goal should be to develop consensus around signs of AGI, how to test for AGI, different levels of AGI, and what researchers should do if they ever identify AGI. A SAGI Institute would relieve the private sector of the responsibility of developing such tests and post-detection protocols. Organizations developing AI, including private sector firms and universities, could voluntarily commit to cooperating with a SAGI Institute. For example, AI researchers could report results to the SAGI Institute, which could serve as a global clearinghouse for evidence of AGI, and whistleblowers could submit reports about researchers hiding results.

A SAGI Institute could also facilitate geopolitical cooperation around the risks of AGI. The tensions between the United States and China as rival superpowers and fierce competitors on AI make cooperation on AI risks more challenging but no less important. Understanding if and when someone develops AGI is, as a 1977 NASA report on the need for international cooperation on SETI research put it, a question “pertinent to the human species, both to us and our descendants.” Cooperating on scientific research to identify AGI would not limit geopolitical rivalries, but it would create a common understanding of whether the dawn of superintelligent machines has arrived and allow policymakers to prioritize and respond accordingly.

A SAGI Institute would not be a substitute for safety research on existing and emerging AI models. Industry and academia are rapidly exploring and innovating in the field, and these efforts should continue. Nor would a SAGI Institute be a replacement for traditional regulation of products and services, such as autonomous vehicles or AI-enabled healthcare services. But today’s AI models are not the ones provoking fears about an existential threat to humanity. A SAGI Institute would be designed to alert the world to the advent of AGI that requires a new level of risk management.

Early SETI researchers hoped for “relatively quick results” but realized that they might be running a marathon, not a sprint. On the 50th anniversary of modern SETI research, a 2009 editorial in Nature described the field as “marked by a hope, bordering on faith” that alien intelligence could be found. The same is likely true for AGI. Debates about the potential for AGI are not new. Herbert Simon, one of the pioneers of AI, wrote in 1960 that “machines will be capable, within 20 years, of doing any work that a man can do.” Recent predictions about AGI being on the near horizon might also prove illusory. But even if AGI is a perpetual mirage, it will continue to attract proponents. Creating a SAGI Institute offers a compromise that should satisfy both those who believe AGI risks are imminent and those who remain skeptical of its likelihood.

Image credit: Seth Shostak/SETI Institute
