Canada, one of the countries striving to lead in artificial intelligence (AI) research and development, hosted an AI conference in Montréal this week, one of the final events of its G7 presidency, which France will take over in 2019. Eager to follow in Canada's footsteps, France used the Montréal conference to press other countries to partner on creating the International Panel on Artificial Intelligence (IPAI), billed as an "IPCC of AI": a body modeled on the Intergovernmental Panel on Climate Change (IPCC), which brings together global experts to comprehensively, openly, and objectively assess the risks of human-induced climate change, but tasked instead with developing global consensus on the impact of AI. This proposal is not only misguided for France, which should be directing its energy into fixing EU laws to better support AI and increasing EU public investment in AI; it is also a distraction for the other G7 countries, which should be focusing on accelerating the development and adoption of AI through global cooperation on research, development, and deployment.
The "IPCC of AI" is rooted in the idea that new global leadership is necessary to avert an AI dystopia, or, in the words of French President Emmanuel Macron, the "opaque privatization of AI or its potentially despotic usage." While the IPCC model may be appropriate for evaluating the scientific literature on climate change, where there is an objective truth to assess, this approach will not resolve unanswerable ethical questions about the societal changes brought about by AI, nor will it settle how willing countries are to make tradeoffs between technological progress and economic disruption. Moreover, the EU is already part of many similar initiatives. In addition to the UN Centre for Artificial Intelligence and Robotics, which studies the "risk-benefit duality" of AI, EU member states have signed the Declaration of Cooperation on AI to implement a common European approach to responsible use of AI, and the European Commission's High-Level Expert Group on AI is developing guidelines on the ethical use of data in AI.
Instead of launching more distracting debates about AI ethics, France should be shepherding the EU into the emerging AI economy by encouraging it to match its ambition to be a global leader in AI with adequate investment and supportive regulation. With plans to invest 1.5 billion euros in AI through 2022, France is ahead of its fellow member states. But to emerge as a credible leader in AI, the rest of Europe will have to commit considerably more resources and become more attractive to tech investors.
Moreover, even if the EU follows through on its newly released plan to commit more public funds to AI R&D, more private-sector investment is also necessary. In this respect, Europe still lags behind the United States and Asia. Unfortunately, the GDPR makes it difficult for companies to develop and use AI, which will discourage this investment. Indeed, the negative impact of the GDPR is already visible: a new study "found evidence suggesting negative and pronounced effects following the rollout of GDPR." Specifically, the study's authors note that, compared with their U.S. counterparts, EU ventures have received less venture financing and fewer deals since the GDPR took effect.
To their credit, a growing number of EU member states do recognize that success in AI is key to national competitiveness, and some have enacted targeted rules to mitigate the limits the GDPR places on access to data. For example, the UK has set up data trusts, agreements between the government and industry that facilitate the secure exchange of sensitive or proprietary data. In addition, the recent French AI strategy calls for proactively using a clause in the GDPR that allows personal data to be repurposed when doing so serves the public interest, including in health, defense, the environment, and transport.
Rather than attempt to establish global governance of AI, the G7 countries should aim for increased collaboration between AI research labs, especially those applying AI to particular sectors such as manufacturing or health, as this will accelerate the development and deployment of AI. There has been some progress on this front: a few European academic research centers have announced collaborative research agreements, such as those between the UK's Alan Turing Institute and France's DATAIA, and between Belgium's Imec and France's CEA-Leti. But more work is needed.
In short, there is still a lot of work to do in AI, and the G7 can play an important role in accelerating progress. But this progress will not come from multiplying multilateral meetings, intergovernmental working groups, and international governance models. It will come only from closer collaboration, more public funding, and regulations that are fit for purpose.
Image credit: Frederic Köberl