New framework aims to improve transparency and accountability in AI development and implementation
The National Institute of Standards and Technology (NIST) has released version 1.0 of its long-awaited Artificial Intelligence (AI) Risk Management Framework (RMF). The U.S. Congress directed NIST to lead the effort in 2020, and the framework's development has been a long-term endeavor that included gathering public input, drafting a concept paper, revising the draft twice, and holding three public workshops.
NIST has a history of shaping the technology adoption practices of companies. Its influence in the realm of cybersecurity is particularly notable, with the NIST Cybersecurity Framework widely regarded as the gold standard in the field and adopted by numerous businesses and cybersecurity leaders. Given this track record, it is likely that NIST’s latest initiative on managing AI risks will influence how companies build, test, and deploy AI applications.
Managing AI risks is becoming increasingly important as machine learning models are more widely deployed and powerful generative AI applications like ChatGPT and Stable Diffusion gain popularity. In addition to fostering the responsible design, development, deployment, and use of AI systems, the framework aims to provide organizations and individuals with approaches that enhance the trustworthiness of AI systems. The AI RMF proposes new, shared guidelines to improve transparency and accountability in the development and implementation of AI.
AI teams should operationalize the NIST AI RMF sooner rather than later.
The importance of documentation for managing AI risks has long been a central guideline shared by BNH, the first law firm focused on AI and analytics. That emphasis is echoed in the newly released NIST framework, which stresses that data and AI teams should prioritize documentation from the outset. Other key takeaways include the importance of risk triage in managing AI risks, as well as the need for transparency and accountability in the development and implementation of AI systems. The AI RMF also emphasizes social responsibility and sustainability in the use of AI systems, and the importance of understanding and managing AI risks in order to enhance trustworthiness and cultivate public trust. The framework is voluntary, rights-preserving, and use-case agnostic, giving organizations of all sizes and in all sectors the flexibility to implement it.
Overall, the release of the NIST AI RMF is a significant step forward in addressing the risks associated with AI technology. It gives organizations and individuals a set of guidelines for enhancing the trustworthiness and transparency of AI systems, and it emphasizes documentation, risk triage, and social responsibility in the development and implementation of AI. Given that the NIST Cybersecurity Framework is widely regarded as the gold standard in its field, the AI RMF is likely to have a significant impact on how companies build, test, and deploy AI applications. Its voluntary, flexible nature allows organizations of all sizes and sectors to adopt the approaches it outlines. With the EU AI Act and other AI regulations expected to be enacted in 2023, AI teams should operationalize the NIST AI RMF sooner rather than later.
If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:
Ben Lorica is an advisor to BNH and other companies.