The U.S.-EU Trade and Technology Council (TTC), a task force U.S. and EU officials formed last year to help coordinate policy between the two regions, had its third ministerial meeting this week. One of the key outcomes of the meeting was the launch of a joint AI roadmap that introduces a three-pronged plan to “guide the development of tools, methodologies, and approaches for AI risk management and trustworthy AI.” These steps provide a practical pathway for the TTC to translate its high-level ideas into concrete actions.
First, the EU and United States plan to develop a shared terminology and taxonomy for operationalizing “trustworthy AI” and risk management. While both sides are pursuing risk-based approaches to AI and share common values around democracy, human rights, and freedom, as noted in the roadmap, they “have different views on regulatory approaches,” such as who should be responsible for risk assessment and what the right balance is between regulatory and voluntary measures. The two sides are not trying to harmonize regulation through the TTC but rather to ensure their respective policies are interoperable by building a shared understanding of key terms such as trustworthy, risk, harm, risk threshold, bias, robustness, safety, interpretability, and security. To do this, they intend to map terminology and taxonomy across key EU and U.S. documents, such as the U.S. NIST AI Risk Management Framework, the U.S. Blueprint for an AI Bill of Rights, and the EU’s AI Act, and develop a common understanding.
The second area the roadmap focuses on is providing allied leadership in international AI standards—and this area will be a key litmus test of whether the TTC can deliver meaningful cooperation and action. Both sides state that they seek to “create consistent ‘rules of the road’ that enable market competition, preclude barriers to trade, and allow innovation to flourish” by cooperating on R&D for AI standards, promoting continual U.S.-EU expert-level information sharing, and convening their respective stakeholders to ensure appropriate representation at important standards-setting bodies and organizations. However, as ITIF explains in a 2022 report, Europe’s statements about working with the United States on technology standards haven’t matched its actions so far, which have excluded experts from foreign firms that have for years played a constructive role in European standards setting. Following through on this element of the roadmap gives the TTC an opportunity to prove it can build meaningful ways to cooperate on tech standards toward shared goals.
Third, the roadmap states that the EU and United States will develop knowledge-sharing mechanisms to better monitor and measure existing and emerging AI risks. For instance, the roadmap says “both parties intend to take actionable steps towards a tracker of existing and emergent risks and risk categories based on context, use cases, and empirical data on AI incidents, impact, and harms.” In essence, the TTC envisions this tracker serving as a baseline for detecting and analyzing AI risks, continually updated to include “the dynamics of development and use, improvements in understanding of the potential harms to shared values, compound risks due to the interaction of several systems, or unknown but predictable risks that could arise from new AI methods and/or contexts of use.” The near-term goal is to establish the objectives and methodology of such a tracker, with a long-term view toward creating benchmarks and evaluations of AI risks.
The joint AI roadmap is a solid effort to take practical steps forward on several of the ideas presented for transatlantic AI cooperation in the TTC’s inaugural meeting. Now the job for the TTC and its various working groups is to properly implement the plan and prove it can walk the walk.
Image credits: Wikimedia Commons.