The United States and China commenced their first intergovernmental dialogue on artificial intelligence (AI) in Geneva on May 14, 2024. The two sides agreed to discuss areas of concern and share domestic approaches to AI risks and governance, with a particular emphasis on safety issues associated with advanced systems. This dialogue was a crucial step towards better cooperation between the United States and China on addressing AI risks, but given the continued growth in both commercial and military AI applications, much more collaboration is necessary, including among non-governmental stakeholders.
As the world’s two biggest economies and leading powers in AI, the United States and China hold the keys to shaping the future of AI. When U.S. President Joe Biden and Chinese President Xi Jinping agreed to launch these intergovernmental talks on AI last November, they opened the door to building a mutual understanding of AI risks and potential AI governance frameworks. The hard part will be turning that initial agreement into meaningful, sustainable action.
Cooperation faces significant headwinds. Geopolitical rivalry and a lack of mutual trust between the two countries may slow international coordination, encourage flawed countermeasures, and ultimately hinder agreements on mitigating AI risks. In addition, diplomatic processes, which typically move slowly, may be unable to keep pace with the rapid development of AI technologies.
Promoting direct collaboration among experts from both nations is critical to bridging gaps in understanding and accelerating scientific, evidence-based policymaking. Outside formal “Track 1” diplomatic channels, Track 1.5 and Track 2 dialogues involving non-governmental experts can help build trust, foster innovative solutions, and create consensus among countries. Track 1.5 dialogues include both government officials and non-governmental experts, while Track 2 dialogues include only unofficial representatives. Such conversations already occur frequently between Chinese and American stakeholders seeking to promote stable relations. The advantage of Track 2 diplomacy is that it allows more flexible discussion of cutting-edge and even sensitive issues; participants from the AI industry, for example, can explore consensus on addressing risks through global norms and technical measures.
Yet the participants in recent Track 2 dialogues on AI have been mostly foreign policy and military professionals, with only limited involvement of technical experts and industry representatives. This imbalance can produce a narrow focus on geopolitical and national security concerns while neglecting the technical and broader dimensions of AI safety. Scientists and industry experts are crucial to providing a nuanced understanding of how AI is developed and applied in real-world settings. By involving these stakeholders, the dialogue can shift from high-level political negotiation to technical problem-solving, increasing the likelihood of developing practical solutions to shared challenges.
The inclusion of experts from academia and industry is vital for several reasons. First, scientists and industry leaders are at the forefront of AI research, development, and implementation. They possess the technical expertise needed to identify, predict, and address complex AI safety issues, and they can examine specific AI deployments and work on solving concrete technical problems. Second, collaboration between scientists can help depoliticize the dialogue, focusing it on the shared goals of advancing knowledge and technology for AI safety, including safety benchmarks and testing protocols. Academic experts tend to concentrate on technical questions and on how they can contribute, rather than debating existential risk in alarmist terms. Third, given the significant roles AI companies play in both China and the United States, they can provide vital input on research and development, technical measures, and governance frameworks. Overall, expanding the dialogue to a broader range of stakeholders brings more diverse perspectives on the risks and opportunities of AI, potentially creating new openings for innovation and experimentation while ensuring that AI applications are safe and beneficial. For example, participants could explore joint research projects on specific AI safety issues, such as algorithmic bias and deepfakes, which could lead to breakthroughs that benefit both nations and the global community.
While the intergovernmental dialogue between China and the United States on AI safety is a promising start, the launch of Track 1 talks does not make Track 1.5 and Track 2 dialogues any less important. On the contrary, these dialogues can play an even more effective role by stimulating the Track 1 dialogue to develop in a more comprehensive and in-depth direction, including by fostering cooperation among scientists and the private sector. By engaging a broader range of stakeholders and deepening international cooperation, the two countries can lead the way in developing a robust, safe, and ethical framework for AI that benefits the global community.