
Understanding and Implementing the NIST AI Risk Management Framework

December 12, 2023


The adoption of artificial intelligence (AI) in business introduces novel risks, and organizations must ensure that their AI implementations are secure, legal, and ethical. Alongside traditional risks such as fraud, AI poses unique challenges of its own. In response, the US National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (AI RMF) to help public and private sector organizations manage these risks effectively, according to an article by Pivot Point Security.

Mandated by the National Artificial Intelligence Initiative Act of 2020, the AI RMF is designed to help organizations using AI systems navigate risk and foster trustworthy, responsible AI practices. It stems from a collaborative process involving diverse stakeholders across the AI landscape and aims to be flexible and applicable to organizations of all sizes and sectors.

The AI RMF focuses on embedding trustworthiness into AI technologies, identifying seven characteristics that make a system trustworthy: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. The framework is designed to adapt as AI technologies and their uses evolve.
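The framework does not prescribe any particular tooling, but organizations still need a concrete way to document how a given system measures up against these characteristics. The Python sketch below is one illustrative approach, not part of the AI RMF itself; the class, field names, and rating scale are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# The seven trustworthiness characteristics named in the AI RMF.
# Representing them as a checklist is purely illustrative; the
# framework prescribes no particular data format.
TRUSTWORTHINESS_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

@dataclass
class TrustworthinessAssessment:
    """Records a rating and supporting evidence per characteristic."""
    system_name: str
    ratings: dict = field(default_factory=dict)   # characteristic -> "low"/"medium"/"high"
    evidence: dict = field(default_factory=dict)  # characteristic -> supporting notes

    def unaddressed(self):
        """Characteristics with no rating yet: gaps to close before deployment."""
        return [c for c in TRUSTWORTHINESS_CHARACTERISTICS if c not in self.ratings]

# Hypothetical usage for a hypothetical system.
assessment = TrustworthinessAssessment(system_name="loan-approval-model")
assessment.ratings["privacy-enhanced"] = "medium"
assessment.evidence["privacy-enhanced"] = "PII minimized; differential privacy not yet applied."
print(assessment.unaddressed())
```

A structure like this makes gaps visible: any characteristic left unrated is an open question to resolve before the system goes into production.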

To complement the AI RMF, NIST has introduced companion documents such as the AI RMF Playbook and the AI RMF Roadmap within its Trustworthy and Responsible AI Resource Center. The framework offers organizations a structured yet flexible approach to assessing, measuring, and monitoring AI risks and communicating about them, aiming to enhance the benefits derived from AI while minimizing negative impacts on individuals and communities.
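In practice, the framework organizes this work around four core functions: Govern, Map, Measure, and Manage. A minimal sketch of a risk register keyed to those functions might look like the following; the entries, owners, and helper function are hypothetical, offered only to make the structure concrete.

```python
from collections import defaultdict

# The AI RMF's four core functions. The register format and entries
# below are illustrative assumptions, not mandated by NIST.
AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

register = defaultdict(list)

def log_activity(function, description, owner):
    """Record a risk-management activity under one of the four functions."""
    if function not in AI_RMF_FUNCTIONS:
        raise ValueError(f"Unknown AI RMF function: {function}")
    register[function].append({"description": description, "owner": owner})

# Hypothetical entries for a customer-facing chatbot.
log_activity("Govern", "Adopt an AI use policy and assign accountability", "CISO")
log_activity("Map", "Inventory the chatbot's context, users, and potential harms", "Product")
log_activity("Measure", "Track hallucination rate against an agreed threshold", "ML team")
log_activity("Manage", "Define rollback and incident-response procedures", "Operations")

for fn in AI_RMF_FUNCTIONS:
    print(fn, "->", [entry["description"] for entry in register[fn]])
```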

By utilizing the AI RMF, organizations can improve processes for governing and documenting AI risks, sharpen awareness of tradeoffs among the trustworthiness characteristics, establish stronger policies for organizational accountability in managing AI risks, and foster a culture that prioritizes risk identification and management. It also equips organizations to perform testing, evaluation, verification, and validation (TEVV) of AI systems, promoting responsible and secure AI adoption.
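As one illustration of what a TEVV-style check can look like, the sketch below compares a hypothetical model's selection rates across two groups against an organization-defined tolerance. The threshold, group labels, and example outputs are all assumptions; the AI RMF leaves the choice of specific tests and metrics to the organization.

```python
# A minimal sketch of one TEVV-style fairness check. All values here
# are illustrative, not prescriptions from the AI RMF.

def selection_rate(predictions):
    """Fraction of positive decisions in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_gap(predictions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(p) for p in predictions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups (1 = approved, 0 = denied).
outputs = {
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 0, 1],
}

GAP_TOLERANCE = 0.20  # illustrative policy threshold

gap = demographic_parity_gap(outputs)
print(f"Demographic parity gap: {gap:.2f}")
if gap > GAP_TOLERANCE:
    print("Fails fairness check: escalate per the organization's AI risk policy.")
```

Automating checks like this one, and recording their outcomes, is one way an organization can build the culture of risk identification and management the framework describes.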

