
Tracking AI Incidents and Vulnerabilities

by Daniel Castro

As artificial intelligence (AI) systems become increasingly advanced and widespread, policymakers want to ensure there are mechanisms in place to understand and manage the risks. AI already assists in high-stakes domains like healthcare, criminal justice, and financial services, and the technology's impact on society will only grow as models become more capable. Yet there is no systematic process for tracking AI failures, vulnerabilities, and incidents so that society can learn from mistakes and uphold public trust. To address this problem, Congress should charge the newly created AI Safety Institute, housed in the Department of Commerce's National Institute of Standards and Technology (NIST), with creating a national AI incident database and a national AI vulnerability database.

A centralized government repository would allow for the structured reporting and analysis of AI incidents across different sectors. Existing efforts at creating this type of database in the private sector have neither the resources nor the stakeholder buy-in to scale to the level possible with government backing. NIST could work with the private sector, academia, and civil society to standardize taxonomies, metrics, and reporting thresholds.
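
To make the idea concrete, a standardized incident record might capture fields along the following lines. This is a minimal, illustrative sketch only; the field names, identifier format, and severity scale are assumptions, not an existing NIST or AI Safety Institute schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIIncidentReport:
    """Hypothetical standardized AI incident record (illustrative only)."""
    incident_id: str          # e.g., "AIID-2024-0001" (assumed numbering scheme)
    reported_on: date         # date the incident was reported
    sector: str               # e.g., "healthcare", "financial services"
    system_description: str   # the AI system or model involved
    harm_description: str     # the tangible harm observed
    severity: str             # e.g., "low" | "medium" | "high" (assumed scale)
    contributing_factors: List[str] = field(default_factory=list)

# Example usage with hypothetical data:
report = AIIncidentReport(
    incident_id="AIID-2024-0001",
    reported_on=date(2024, 5, 1),
    sector="healthcare",
    system_description="Diagnostic triage model",
    harm_description="Incorrect triage recommendation delayed care",
    severity="high",
    contributing_factors=["distribution shift", "missing human review"],
)
```

A shared schema like this, however it is ultimately defined, is what would let reports filed with different agencies be aggregated and compared.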

The concept would be modeled on, and built upon, existing incident-tracking efforts at government agencies. For example, the Food and Drug Administration (FDA) monitors issues with medical devices and pharmaceutical drugs, and the National Highway Traffic Safety Administration tracks auto defects. This type of post-market surveillance is a crucial regulatory tool because it gives consumers, businesses, and regulators insight into the real-world performance and safety of products. A national database for logging AI failures could operate in a similar fashion, facilitating transparency and learning as the field develops. Better data means better responses when something goes wrong. Incident reporting has been vital for instilling public confidence in many sectors, and it could do the same as AI is introduced more widely.

Not every incident would need to be reported, nor would reporting be necessary in every sector; the focus should be on incidents that cause tangible harm. In many cases, these reports should go to an existing incident database. For example, the Consumer Product Safety Commission does not need a separate recall database for AI-enabled products, but it should document whether AI caused a product to be unsafe and record relevant information about the AI system involved. By standardizing incident reporting about AI across these different databases, the AI Safety Institute could create a supra-database of AI incidents spanning multiple sectors. Over time, patterns may emerge from this data, highlighting risk factors to prioritize when developing AI governance frameworks. In addition, consumers would gain greater insight into the current state of AI safety.

In addition to tracking incidents after they occur, the AI Safety Institute should proactively catalog vulnerabilities in AI foundation models to allow downstream developers and users to better mitigate risk. The Common Vulnerabilities and Exposures (CVE) program, established in 1999 and maintained with U.S. government funding, created a standardized naming system and baseline for reporting software vulnerabilities. This program has been enormously successful, with hundreds of partner organizations in 40 countries. NIST collates this information in the National Vulnerability Database (NVD), a comprehensive public database of cybersecurity vulnerabilities.

But vulnerabilities in foundation models are not the same as cybersecurity vulnerabilities, so either the CVE/NVD programs should be expanded to include AI-specific vulnerabilities or the AI Safety Institute should create a separate vulnerability database for AI. The AI Safety Institute should work with other countries to create a common vulnerability reporting and naming standard to facilitate information sharing among stakeholders globally. In addition, establishing an accepted channel for responsible vulnerability disclosure would incentivize ethical AI developers and researchers to come forward with their findings about new vulnerabilities. Better data on weaknesses that span multiple models could direct research agendas and preempt future incidents.
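
For illustration only, an AI-specific vulnerability entry might mirror the structure of a CVE record. The identifier format and fields below are assumptions made for the sake of the example; they are not part of the CVE or NVD specifications or any existing AI standard.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIVulnerabilityEntry:
    """Hypothetical CVE-style record for a foundation-model vulnerability (illustrative only)."""
    vuln_id: str                # e.g., "AIVE-2024-0042" (assumed naming convention)
    affected_models: List[str]  # foundation models known to be affected
    vulnerability_class: str    # e.g., "prompt injection", "training data poisoning"
    description: str            # summary of the weakness and how it can be exploited
    disclosed_by: str           # reporting researcher or organization
    mitigations: List[str]      # known mitigations for downstream developers

# Example entry with hypothetical data:
entry = AIVulnerabilityEntry(
    vuln_id="AIVE-2024-0042",
    affected_models=["example-foundation-model-v2"],
    vulnerability_class="prompt injection",
    description="Crafted inputs can override system instructions in downstream applications.",
    disclosed_by="Independent researcher (hypothetical)",
    mitigations=["input filtering", "instruction hierarchy hardening"],
)
```

The value of a common format, as with CVE identifiers, is that developers, researchers, and regulators in different countries could refer to the same weakness unambiguously.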

Managing AI risk is a complex challenge that will require input from across industry, academia, civil society, and government. These databases could become critical infrastructure supporting safe AI development. In addition, they would provide data to guide future policymaking, help educate the public, and give developers crucial feedback loops. They would also benefit national security because better understanding of AI risks could reduce vulnerability to adversarial attacks that exploit model weaknesses.

Creating national databases for incidents and vulnerabilities wouldn’t immediately solve all outstanding AI risks. But the effort would be an important step toward more capable safety and security disciplines. Congress should take action now to get these resources in place as AI systems continue advancing.

