On August 2, 2026, the EU AI Act's requirements for high-risk AI systems become enforceable. Non-compliance carries penalties of up to EUR 35 million or 7% of global annual turnover -- whichever is higher. That is more than GDPR's maximum fine.
But regulatory compliance is only one reason to maintain an AI risk register. The practical reason is simpler: AI systems fail in ways that traditional software does not, and if you do not track the risks systematically, you will be surprised by them in production.
This article provides a practical AI risk register template based on the NIST AI Risk Management Framework (AI RMF) and the EU AI Act's requirements. It covers what to track, who owns each risk, and how to assess and mitigate the categories that matter most.
What Is an AI Risk Register?
A risk register is a structured document that lists identified risks, their likelihood, their potential impact, their current mitigation status, and their owner. It is a living document that gets updated as the system evolves.
An AI risk register extends the standard software risk register with categories specific to AI: model behavior risks, data risks, fairness risks, and regulatory risks that do not exist in traditional software.
Core fields for each entry:
| Field | Description |
|---|---|
| Risk ID | Unique identifier |
| Category | Model, Data, Infrastructure, Regulatory, Operational |
| Description | What could go wrong |
| Likelihood | Low / Medium / High |
| Impact | Low / Medium / High / Critical |
| Risk Score | Likelihood x Impact (use a 5x5 matrix) |
| Current Controls | What mitigations are in place |
| Residual Risk | Risk level after controls |
| Owner | Specific person responsible |
| Review Cadence | How often this risk is reassessed |
The NIST AI RMF Framework
The NIST AI Risk Management Framework provides four interconnected functions for managing AI risk. Your risk register should map to these functions:
GOVERN
Establish the organizational structures, policies, and culture for AI risk management. This is the foundation -- without governance structures, risk identification and mitigation are ad hoc.What to document:
MAP
Identify and characterize AI risks in the context of your specific system and its deployment environment.What to document:
MEASURE
Assess identified risks using both quantitative and qualitative methods.What to document:
MANAGE
Prioritize and respond to identified risks through mitigation, transfer, acceptance, or avoidance.What to document:
Risk Category 1: Model Behavior Risks
These are risks related to what the AI model produces.
Risk 1.1: Hallucination / Confabulation
Description: The model generates false information presented as fact.
Likelihood: High (all generative models hallucinate to some degree)
Potential Impact: Critical for medical, legal, financial domains. Medium for general productivity tools.
Mitigations:
Metrics to track: Hallucination rate on evaluation set, faithfulness score in production.
Risk 1.2: Bias and Fairness Failures
Description: The model produces outputs that systematically disadvantage certain groups.
Likelihood: Medium to High (depends on training data and application domain)
Potential Impact: Critical for hiring, lending, insurance, or any decision that affects individuals differently.
Mitigations:
Metrics to track: Accuracy parity across groups, demographic distribution of outcomes.
Risk 1.3: Prompt Injection
Description: An attacker crafts input that overrides the model's instructions.
Likelihood: High (OWASP ranks prompt injection as the #1 LLM vulnerability for the second consecutive year)
Potential Impact: High. Can cause data exfiltration, unauthorized actions, or system prompt leakage.
Mitigations:
Risk 1.4: Output Toxicity
Description: The model generates harmful, offensive, or inappropriate content.
Likelihood: Medium (modern models have safety training, but it can be bypassed)
Potential Impact: High. Brand damage, legal liability, user harm.
Mitigations:
Risk Category 2: Data Risks
These are risks related to the data the AI system consumes and produces.
Risk 2.1: Training Data Poisoning
Description: Malicious or corrupted data in the training set causes the model to learn wrong patterns.
Likelihood: Low for third-party models (OpenAI, Anthropic control their training data). Medium for fine-tuned models. High for models trained on user-generated data.
Mitigations:
Risk 2.2: PII Exposure
Description: The AI system exposes personally identifiable information through its outputs or logs.
Likelihood: Medium. Especially relevant for RAG systems that index documents containing PII.
Potential Impact: Critical. GDPR, CCPA, and other privacy regulations impose significant penalties.
Mitigations:
Risk 2.3: Data Leakage Between Tenants
Description: In multi-tenant systems, one customer's data appears in another customer's AI responses.
Likelihood: Medium. Common in shared vector stores or models with conversation memory.
Potential Impact: Critical. Legal liability and loss of customer trust.
Mitigations:
Risk Category 3: Operational Risks
These are risks related to running AI systems in production.
Risk 3.1: Model Availability and Degradation
Description: The AI service becomes unavailable or degrades in quality.
Likelihood: Medium. Third-party APIs have outages. Self-hosted models require infrastructure management.
Mitigations:
Risk 3.2: Cost Overrun
Description: AI inference costs exceed budget due to traffic spikes, prompt injection attacks, or misconfigured systems.
Likelihood: Medium. Token costs are linear with usage. A 10x traffic spike means 10x the cost.
Mitigations:
Risk 3.3: Model Drift
Description: The model's performance degrades over time as input distribution changes or the world changes.
Likelihood: High over long time horizons. The data your model was evaluated on six months ago may not represent current inputs.
Mitigations:
Risk Category 4: Regulatory and Compliance Risks
Risk 4.1: EU AI Act Non-Compliance
Description: Failure to meet the EU AI Act's requirements for high-risk AI systems.
Likelihood: Depends on system classification. Medium if proactively managed.
Potential Impact: Critical. Fines up to EUR 35 million or 7% of global turnover.
Key requirements (effective August 2, 2026):
Mitigations:
Risk 4.2: AI Transparency Failures
Description: Users do not know they are interacting with AI, or do not understand how AI-generated content was produced.
Likelihood: Medium. The EU AI Act requires transparency for AI-generated content and certain AI interactions.
Mitigations:
Maintaining the Register
A risk register is only useful if it is maintained. Here is the operational cadence:
| Activity | Frequency | Owner |
|---|---|---|
| Full risk register review | Quarterly | AI Governance Committee |
| New deployment risk assessment | Per deployment | Engineering Lead |
| Incident-triggered review | Per incident | Incident Owner |
| Regulatory landscape scan | Monthly | Legal/Compliance |
| Risk metric dashboard review | Weekly | ML Engineering |
Most successful 2026 AI governance models use a hybrid approach:
- •Centralized: Policy, risk appetite, and compliance standards
- •Federated: Execution, implementation, and day-to-day risk management by product teams
The AI Governance Committee should include representatives from: Engineering, Legal/Compliance, Product, Data Science, and Security.
Getting Started
If you do not have an AI risk register yet, start with these five steps:
- Inventory all AI systems in production or development
- Classify each system by risk level (unacceptable, high, limited, minimal -- using the EU AI Act's framework)
- Document the top 5 risks for each system using the template above
- Assign an owner to each risk
- Schedule the first quarterly review
The risk register is not paperwork. It is the document that prevents the incident you did not see coming and the fine you did not budget for.
