Security

AI Risk Registers: What to Track Before You Deploy to Production

Clarvia Team
Author
Mar 19, 2026
12 min read
AI Risk Registers: What to Track Before You Deploy to Production

On August 2, 2026, the EU AI Act's requirements for high-risk AI systems become enforceable. Non-compliance carries penalties of up to EUR 35 million or 7% of global annual turnover -- whichever is higher. That is more than GDPR's maximum fine.

But regulatory compliance is only one reason to maintain an AI risk register. The practical reason is simpler: AI systems fail in ways that traditional software does not, and if you do not track the risks systematically, you will be surprised by them in production.

This article provides a practical AI risk register template based on the NIST AI Risk Management Framework (AI RMF) and the EU AI Act's requirements. It covers what to track, who owns each risk, and how to assess and mitigate the categories that matter most.


What Is an AI Risk Register?

A risk register is a structured document that lists identified risks, their likelihood, their potential impact, their current mitigation status, and their owner. It is a living document that gets updated as the system evolves.

An AI risk register extends the standard software risk register with categories specific to AI: model behavior risks, data risks, fairness risks, and regulatory risks that do not exist in traditional software.

Core fields for each entry:

FieldDescription
Risk IDUnique identifier
CategoryModel, Data, Infrastructure, Regulatory, Operational
DescriptionWhat could go wrong
LikelihoodLow / Medium / High
ImpactLow / Medium / High / Critical
Risk ScoreLikelihood x Impact (use a 5x5 matrix)
Current ControlsWhat mitigations are in place
Residual RiskRisk level after controls
OwnerSpecific person responsible
Review CadenceHow often this risk is reassessed
---

The NIST AI RMF Framework

The NIST AI Risk Management Framework provides four interconnected functions for managing AI risk. Your risk register should map to these functions:

GOVERN

Establish the organizational structures, policies, and culture for AI risk management. This is the foundation -- without governance structures, risk identification and mitigation are ad hoc.

What to document:

  • AI governance committee composition and charter
  • Risk appetite statement for AI systems
  • Policies for human oversight requirements
  • Escalation procedures for AI incidents
  • MAP

    Identify and characterize AI risks in the context of your specific system and its deployment environment.

    What to document:

  • System purpose and intended use
  • Stakeholders and affected parties
  • Deployment context and constraints
  • Known limitations of the technology
  • MEASURE

    Assess identified risks using both quantitative and qualitative methods.

    What to document:

  • Evaluation metrics and benchmarks
  • Testing results (accuracy, fairness, robustness)
  • Monitoring data from production systems
  • User feedback and incident reports
  • MANAGE

    Prioritize and respond to identified risks through mitigation, transfer, acceptance, or avoidance.

    What to document:

  • Mitigation strategies for each identified risk
  • Residual risk after mitigation
  • Contingency plans for risk events
  • Improvement actions and timelines

  • Risk Category 1: Model Behavior Risks

    These are risks related to what the AI model produces.

    Risk 1.1: Hallucination / Confabulation

    Description: The model generates false information presented as fact.

    Likelihood: High (all generative models hallucinate to some degree)

    Potential Impact: Critical for medical, legal, financial domains. Medium for general productivity tools.

    Mitigations:

  • RAG with explicit grounding instructions
  • Citation verification pipeline
  • Confidence scoring with user-visible indicators
  • Human review for high-stakes outputs
  • Abstention training (model says "I don't know" when uncertain)
  • Metrics to track: Hallucination rate on evaluation set, faithfulness score in production.

    Risk 1.2: Bias and Fairness Failures

    Description: The model produces outputs that systematically disadvantage certain groups.

    Likelihood: Medium to High (depends on training data and application domain)

    Potential Impact: Critical for hiring, lending, insurance, or any decision that affects individuals differently.

    Mitigations:

  • Pre-deployment bias testing across demographic categories
  • Ongoing monitoring of output distributions
  • Fairness constraints in model configuration
  • Regular human audit of model decisions
  • Documented appeal/override process for affected individuals
  • Metrics to track: Accuracy parity across groups, demographic distribution of outcomes.

    Risk 1.3: Prompt Injection

    Description: An attacker crafts input that overrides the model's instructions.

    Likelihood: High (OWASP ranks prompt injection as the #1 LLM vulnerability for the second consecutive year)

    Potential Impact: High. Can cause data exfiltration, unauthorized actions, or system prompt leakage.

    Mitigations:

  • Input sanitization and validation
  • Separation of system instructions and user input
  • Output filtering for sensitive content
  • Regular red-team testing (see our guide on red-teaming)
  • Rate limiting and anomaly detection
  • Risk 1.4: Output Toxicity

    Description: The model generates harmful, offensive, or inappropriate content.

    Likelihood: Medium (modern models have safety training, but it can be bypassed)

    Potential Impact: High. Brand damage, legal liability, user harm.

    Mitigations:

  • Content filtering on model outputs
  • Safety classifiers as a separate layer
  • Human review for public-facing content
  • Regular testing with adversarial inputs

  • Risk Category 2: Data Risks

    These are risks related to the data the AI system consumes and produces.

    Risk 2.1: Training Data Poisoning

    Description: Malicious or corrupted data in the training set causes the model to learn wrong patterns.

    Likelihood: Low for third-party models (OpenAI, Anthropic control their training data). Medium for fine-tuned models. High for models trained on user-generated data.

    Mitigations:

  • Data provenance tracking
  • Input validation in fine-tuning pipelines
  • Anomaly detection in training data
  • Regular model evaluation against known-good benchmarks
  • Risk 2.2: PII Exposure

    Description: The AI system exposes personally identifiable information through its outputs or logs.

    Likelihood: Medium. Especially relevant for RAG systems that index documents containing PII.

    Potential Impact: Critical. GDPR, CCPA, and other privacy regulations impose significant penalties.

    Mitigations:

  • PII detection and scrubbing before ingestion
  • Access controls on vector stores and document indexes
  • Logging policies that exclude sensitive data
  • Data retention limits with automated deletion
  • DPAs with third-party AI providers
  • Risk 2.3: Data Leakage Between Tenants

    Description: In multi-tenant systems, one customer's data appears in another customer's AI responses.

    Likelihood: Medium. Common in shared vector stores or models with conversation memory.

    Potential Impact: Critical. Legal liability and loss of customer trust.

    Mitigations:

  • Tenant isolation at the vector store level (separate namespaces or separate stores)
  • Context window isolation (no shared conversation history)
  • Regular cross-tenant contamination testing
  • Audit logging of data access patterns

  • Risk Category 3: Operational Risks

    These are risks related to running AI systems in production.

    Risk 3.1: Model Availability and Degradation

    Description: The AI service becomes unavailable or degrades in quality.

    Likelihood: Medium. Third-party APIs have outages. Self-hosted models require infrastructure management.

    Mitigations:

  • Fallback behavior (rule-based, cached responses, or graceful degradation)
  • Multi-provider strategy (primary and fallback LLM providers)
  • Health monitoring with automated alerting
  • SLA-backed infrastructure for self-hosted models
  • Risk 3.2: Cost Overrun

    Description: AI inference costs exceed budget due to traffic spikes, prompt injection attacks, or misconfigured systems.

    Likelihood: Medium. Token costs are linear with usage. A 10x traffic spike means 10x the cost.

    Mitigations:

  • Per-user and per-endpoint rate limiting
  • Cost monitoring with daily alerts
  • Budget caps with automatic circuit-breaking
  • Caching to reduce redundant API calls
  • Risk 3.3: Model Drift

    Description: The model's performance degrades over time as input distribution changes or the world changes.

    Likelihood: High over long time horizons. The data your model was evaluated on six months ago may not represent current inputs.

    Mitigations:

  • Continuous evaluation against a golden test set
  • Drift detection on input and output distributions
  • Scheduled model review and refresh cadence
  • User feedback integration

  • Risk Category 4: Regulatory and Compliance Risks

    Risk 4.1: EU AI Act Non-Compliance

    Description: Failure to meet the EU AI Act's requirements for high-risk AI systems.

    Likelihood: Depends on system classification. Medium if proactively managed.

    Potential Impact: Critical. Fines up to EUR 35 million or 7% of global turnover.

    Key requirements (effective August 2, 2026):

  • Risk management system (this risk register)
  • Data governance and management
  • Technical documentation
  • Record-keeping and logging
  • Transparency and provision of information to users
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity requirements
  • Conformity assessment and CE marking
  • Mitigations:

  • Classify all AI systems by risk level now
  • Implement documentation requirements
  • Build human oversight mechanisms
  • Conduct conformity assessments
  • Register high-risk systems in the EU database
  • Risk 4.2: AI Transparency Failures

    Description: Users do not know they are interacting with AI, or do not understand how AI-generated content was produced.

    Likelihood: Medium. The EU AI Act requires transparency for AI-generated content and certain AI interactions.

    Mitigations:

  • Clear AI disclosure in all user-facing interfaces
  • Explainability features for consequential decisions
  • Documentation of model capabilities and limitations

  • Maintaining the Register

    A risk register is only useful if it is maintained. Here is the operational cadence:

    ActivityFrequencyOwner
    Full risk register reviewQuarterlyAI Governance Committee
    New deployment risk assessmentPer deploymentEngineering Lead
    Incident-triggered reviewPer incidentIncident Owner
    Regulatory landscape scanMonthlyLegal/Compliance
    Risk metric dashboard reviewWeeklyML Engineering
    ### Governance Structure

    Most successful 2026 AI governance models use a hybrid approach:

    • Centralized: Policy, risk appetite, and compliance standards
    • Federated: Execution, implementation, and day-to-day risk management by product teams

    The AI Governance Committee should include representatives from: Engineering, Legal/Compliance, Product, Data Science, and Security.


    Getting Started

    If you do not have an AI risk register yet, start with these five steps:

    1. Inventory all AI systems in production or development
    2. Classify each system by risk level (unacceptable, high, limited, minimal -- using the EU AI Act's framework)
    3. Document the top 5 risks for each system using the template above
    4. Assign an owner to each risk
    5. Schedule the first quarterly review

    The risk register is not paperwork. It is the document that prevents the incident you did not see coming and the fine you did not budget for.

    AI risk registerAI governanceAI risk managementEU AI Act compliance

    Ready to Transform Your Development?

    Let's discuss how AI-first development can accelerate your next project.

    Book a Consultation

    Cookie Preferences

    We use cookies to enhance your experience. By continuing, you agree to our use of cookies.