In January 2023, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF 1.0), providing the first comprehensive U.S. government framework for managing the risks associated with artificial intelligence systems. As AI capabilities become embedded in an ever-growing number of vendor products — from customer service chatbots and fraud detection engines to code generation tools and healthcare diagnostics — organizations must extend their third-party risk management (TPRM) programs to address AI-specific risks that traditional security questionnaires were never designed to capture.
Understanding the NIST AI RMF
The NIST AI RMF 1.0 is a voluntary framework designed to help organizations govern AI risks throughout the AI lifecycle. It is structured around four core functions:
- Govern: Establishes the organizational structures, policies, and processes for AI risk management. This includes defining roles, responsibilities, and accountability for AI systems, as well as fostering a culture of responsible AI use.
- Map: Identifies and contextualizes the risks associated with specific AI systems. This involves understanding the intended purpose of an AI system, who it affects, the data it uses, and the environment in which it operates.
- Measure: Employs quantitative and qualitative methods to analyze, assess, and track identified AI risks. This includes testing for bias, accuracy, robustness, and other trustworthiness characteristics.
- Manage: Addresses and treats identified AI risks through prioritization, response, and monitoring activities. This includes establishing processes for incident response, risk communication, and continuous improvement.
NIST also released the AI RMF Playbook, which provides specific suggested actions and references for each subcategory within the four functions. Together, the framework and playbook give organizations a practical roadmap for AI governance.
AI-Specific Risks in Your Vendor Portfolio
When a vendor embeds AI into a product or service your organization uses, you inherit a set of risks that differ fundamentally from traditional cybersecurity risks. These include:
| AI Risk Category | Description |
|---|---|
| Bias and Discrimination | AI models trained on biased data can produce discriminatory outputs, creating legal and reputational liability for organizations that deploy them |
| Hallucination and Confabulation | Generative AI systems can produce plausible but factually incorrect outputs, leading to flawed decisions if not properly validated |
| Data Poisoning | Adversaries can manipulate training data to cause AI models to behave in unintended ways, a supply chain risk unique to AI systems |
| Model Theft and Extraction | Proprietary AI models can be reverse-engineered or stolen, compromising intellectual property and competitive advantage |
| Privacy and Data Leakage | AI models may inadvertently memorize and reproduce sensitive training data, including personal information or trade secrets |
| Lack of Explainability | Complex AI models may produce outputs that cannot be meaningfully explained, complicating regulatory compliance and accountability |
Incorporating AI Risk Assessments into TPRM
Traditional vendor security questionnaires focus on network security, access controls, encryption, and incident response. While these remain important, they are insufficient for assessing vendors that use AI. Organizations need to extend their TPRM assessment templates to include AI-specific questions aligned with the NIST AI RMF functions:
Govern-Aligned Questions
- Does the vendor have a documented AI governance policy?
- Has the vendor designated roles and responsibilities for AI risk management?
- Does the vendor conduct impact assessments before deploying AI systems?
Map-Aligned Questions
- What AI models are used in the products or services provided to your organization?
- What data is used to train and operate these AI models?
- Who are the intended and potential unintended stakeholders affected by the AI system?
Measure-Aligned Questions
- How does the vendor test for bias, fairness, and accuracy in its AI outputs?
- What metrics does the vendor use to evaluate AI system performance and trustworthiness?
- How frequently are AI models retrained and evaluated?
Manage-Aligned Questions
- What processes exist for addressing identified AI risks, including model rollback?
- How does the vendor communicate AI-related incidents or performance degradation?
- Can the vendor provide audit trails for AI-driven decisions that affect your organization?
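One way to operationalize the question sets above is to store the assessment template as structured data keyed by RMF function, so screenings can be scoped to a subset of functions. The sketch below is illustrative only: the question IDs, structure, and `questions_for` helper are assumptions of this example, not part of the NIST framework or any particular TPRM tool.

```python
# Hypothetical sketch: an AI vendor assessment template keyed by
# NIST AI RMF function. The data structure is illustrative, not
# prescribed by the framework.

AI_ASSESSMENT_TEMPLATE = {
    "Govern": [
        "Does the vendor have a documented AI governance policy?",
        "Has the vendor designated roles and responsibilities for AI risk management?",
        "Does the vendor conduct impact assessments before deploying AI systems?",
    ],
    "Map": [
        "What AI models are used in the products or services provided?",
        "What data is used to train and operate these AI models?",
        "Who are the intended and potential unintended stakeholders?",
    ],
    "Measure": [
        "How does the vendor test for bias, fairness, and accuracy?",
        "What metrics evaluate AI system performance and trustworthiness?",
        "How frequently are AI models retrained and evaluated?",
    ],
    "Manage": [
        "What processes address identified AI risks, including model rollback?",
        "How are AI-related incidents or performance degradation communicated?",
        "Can the vendor provide audit trails for AI-driven decisions?",
    ],
}

def questions_for(functions):
    """Return the flat list of questions for the selected RMF functions."""
    return [q for f in functions for q in AI_ASSESSMENT_TEMPLATE[f]]

# Example: a lightweight first-pass screening might cover only Govern and Map,
# deferring Measure- and Manage-aligned questions to a deeper assessment.
screening = questions_for(["Govern", "Map"])
```

Keeping the function tag on each question also makes it straightforward to report assessment coverage back in the framework's own vocabulary.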
The Regulatory Context
The NIST AI RMF does not exist in isolation. In October 2023, President Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed federal agencies to use the NIST AI RMF as a foundation for AI governance. The European Union's AI Act, which entered into force in August 2024, imposes binding requirements on AI systems based on risk classification. Organizations operating globally will increasingly need to demonstrate structured AI governance across their vendor portfolios.
For TPRM professionals, this regulatory convergence means that AI risk assessment will shift from a best practice to a compliance requirement. Organizations that integrate AI risk assessment into their vendor management programs today will be better positioned to meet these evolving obligations.
Getting Started
Organizations do not need to overhaul their entire TPRM program overnight. A practical starting point is to identify which vendors in your portfolio use AI in the products or services they provide to you, tier those vendors by the criticality and sensitivity of the AI use case, and apply AI-specific assessment questions to your highest-risk AI vendors first. As AI adoption continues to accelerate across every industry, the organizations that treat AI risk as a core component of vendor risk management will be best positioned to realize the benefits of AI while managing its unique challenges.
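The identify-tier-prioritize approach above can be sketched as a simple scoring pass over a vendor inventory. Everything here is an assumption for illustration: the vendor records, field names, score bands, and tier cutoffs would be set by your own program, not by the NIST AI RMF.

```python
# Hypothetical sketch of the tiering step: score each AI-using vendor
# by use-case criticality and data sensitivity, then assess the
# highest-risk tier first. Scales and thresholds are illustrative.

CRITICALITY = {"low": 1, "medium": 2, "high": 3}
SENSITIVITY = {"public": 1, "internal": 2, "regulated": 3}

def ai_risk_tier(vendor):
    """Tier 1 = assess first; Tier 3 = lowest priority; None = no AI in scope."""
    if not vendor["uses_ai"]:
        return None  # out of scope for AI-specific assessment
    score = CRITICALITY[vendor["criticality"]] * SENSITIVITY[vendor["data_sensitivity"]]
    if score >= 6:
        return 1
    if score >= 3:
        return 2
    return 3

# Illustrative vendor inventory (names and attributes are made up).
vendors = [
    {"name": "ChatbotCo", "uses_ai": True, "criticality": "high", "data_sensitivity": "regulated"},
    {"name": "FraudAI", "uses_ai": True, "criticality": "medium", "data_sensitivity": "internal"},
    {"name": "PaperSupply", "uses_ai": False, "criticality": "low", "data_sensitivity": "public"},
]

tiers = {v["name"]: ai_risk_tier(v) for v in vendors}
# ChatbotCo lands in Tier 1, so it receives the AI-specific questionnaire first.
```

The point of the sketch is the ordering, not the arithmetic: any scoring scheme that ranks AI vendors by criticality and data sensitivity lets you apply the deeper AI assessment where it matters most first.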
Sources & References
- AI Risk Management Framework (AI RMF 1.0) - National Institute of Standards and Technology, January 2023
- NIST AI RMF Playbook - NIST AI Resource Center
- Executive Order 14110 on Safe, Secure, and Trustworthy AI - The White House, October 2023
- EU AI Act: First Regulation on Artificial Intelligence - European Parliament
- NIST AI 100-1: Artificial Intelligence Risk Management Framework - NIST Computer Security Resource Center