NIST’s Artificial Intelligence Framework Should Address Low-Probability, High-Impact Risks

Note: The author previously submitted an anonymous comment to NIST. This blog post summarizes material from that comment.

The National Institute of Standards and Technology (NIST)’s recent initial draft of its AI Risk Management Framework (AI RMF) offers an important reminder for the future of AI: advances in artificial intelligence have the potential to deliver significant benefits to society, but realizing those benefits will require a smart approach to low-probability, high-impact risks.

For background: the draft framework, released in March, reasons that “[c]ultivating trust and communication about how to understand and manage the risks of systems will help create opportunities for innovation and realize the full potential of this technology.” NIST intends for the AI RMF to serve as voluntary guidance on AI risk management processes for AI developers, users, deployers, and evaluators. The proposed framework would not be a mandatory requirement, but developers and deployers would have incentives to follow it as part of due care. For example, insurers or courts may expect developers and deployers to show reasonable use of NIST’s AI RMF guidance when developing or deploying AI systems in high-stakes contexts.

NIST’s proposed framework provides a useful starting point for organizations seeking to understand how to incorporate AI risk management into their existing governance processes. The framework’s “examples of potential harms” deserve particular attention, especially the “harm to a system” item (see Figure 1). As the COVID-19 pandemic has made abundantly clear, there is great danger in failing to account for low-probability, high-consequence risks. These are just the sorts of risks that NIST should make a central element of its risk management framework. Otherwise, society risks being hit with another unlikely but significant event, this time as the result of increasingly powerful AI systems. By accounting for risks from unintended uses and misuses of AI systems, NIST can help organizations identify and manage catastrophic risks as part of an overall risk management strategy. 

Figure 1. Potential Harms from AI systems

For an example of a catastrophic risk associated with the use of AI, consider autonomous weapons systems. As seen with other technologies, such as drones, there is a risk that autonomous weapons could be used in ways that endanger human rights—for example, by targeting civilians or engaging in indiscriminate attacks. Another area where catastrophic risks may arise is in the use of AI for decision support systems in critical infrastructure sectors, such as energy, transportation, and healthcare. If these systems are compromised by cyberattacks or other malfunctions, they could result in massive damage or loss of life.

NIST plans to release Version 1.0 of the AI RMF in early 2023, with the goal of helping organizations identify, assess, and manage risks associated with artificial intelligence deployments. As it continues developing the RMF, NIST should remember that while AI researchers are making great strides in developing fantastically capable AI systems, they have made very little progress in ensuring that these systems are reliable and safe. We have already seen critical failures in AI, such as self-driving cars malfunctioning and YouTube’s recommendation algorithms automatically creating playlists for child abusers. Without the right attention, even greater problems will arise in the future. In particular, if AI becomes a strategically important weapon (for example, via autonomous code-generating cyberweapons), the U.S. and China may race to deploy their systems first without taking the necessary precautions, potentially leading to catastrophic malfunctions. Past security breaches, such as hackers’ theft of sensitive documents regarding the F-35 jet, will seem quaint by comparison. Investments in AI security need to happen now, before it is too late.

