AI Readiness Checklist for Business Leaders

AI projects fail because the organisation is not ready, not because the technology is not capable. This checklist walks through the five dimensions of readiness that actually predict whether an AI project will land.

11 min read · Updated 2026-05-04 · By Clarvia Team
TL;DR

AI readiness is not a single thing. It is five things: data, process, team, compliance, and vendor. Most failed AI projects had a strong score on three or four of these and a hidden weakness on the other one or two. This checklist surfaces those weaknesses before you commit budget, so you can address them or scope around them.

What 'ready' actually means

Readiness is not maturity. A company can be highly mature in conventional engineering and unready for AI; another company can be early-stage in everything else and well-positioned to ship an AI feature this quarter. The difference is whether the specific conditions an AI project depends on are present, not whether the organisation is generally sophisticated.

The five dimensions below are conditions that, if absent, make AI delivery substantially harder or impossible. Strength in one dimension cannot compensate for weakness in another. A team with great data and weak process will ship a feature nobody uses. A team with strong process and weak data will ship a feature that does not work.

The checklist is diagnostic, not prescriptive. The point is not to score yourself; the point is to find the dimension that is going to bite you and address it before you start.

Dimension one: data readiness

AI features run on data. Without the right data, accessible to the right people, in a usable form, the project does not start. The check has three parts.

First, do you have the data the feature needs? This is more often a problem than people expect. Companies often assume they have data they do not have, hold data in a system nobody can extract from, or hold data with known gaps that block the use case.

Second, is the data accessible? Many companies have the data but cannot release it to a build team for legal, security, or political reasons that nobody mentioned in the kickoff meeting. The right time to discover this is now, not in week 4 of a build.

Third, is the data usable? Usability covers freshness, structure, labelling, and quality. A spreadsheet last updated three years ago is data; it is not usable data for a production AI feature.
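A quick way to turn these three questions into numbers is a small profiling script. The sketch below is a minimal example, assuming a pandas-readable CSV export with a timestamp column; the file name (`crm_export.csv`) and column name (`updated_at`) are hypothetical placeholders, not a prescription.

```python
# Minimal data-usability baseline. The file and column names are
# hypothetical; substitute your own export.
import pandas as pd

df = pd.read_csv("crm_export.csv")

# Freshness: days since the most recent record was touched.
updated = pd.to_datetime(df["updated_at"], errors="coerce")
staleness_days = (pd.Timestamp.now() - updated.max()).days

# Quality baseline: null rate per column, duplicate rate overall.
null_rates = df.isna().mean().sort_values(ascending=False)
duplicate_rate = df.duplicated().mean()

print(f"rows: {len(df)}")
print(f"days since last update: {staleness_days}")
print(f"duplicate rate: {duplicate_rate:.1%}")
print("worst null rates:")
print(null_rates.head(5))
```

Ten lines of profiling like this, run before kickoff, is often enough to confirm the data exists, is fresh, and is clean enough to build on, or to surface the gap before budget is committed.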

Diagnostic checks
  • We have the data the feature requires (or a defensible plan to obtain it)
  • A named person has the authority to grant the build team access to the data
  • The data is fresh enough to reflect current operations (not stale)
  • We have a baseline for data quality (sample size, error rate, coverage)
  • Personally identifiable information has been classified and a handling plan exists

Dimension two: process readiness

AI features automate or augment a workflow. If the workflow itself is unstable, the AI feature inherits the instability. Process readiness is not the same as process maturity. A workflow can be informal but stable (consistent across people and time) and that is enough.

The diagnostic is whether two competent operators of the workflow would handle the same input the same way. If yes, the workflow is stable enough to define an evaluation set against, and an AI feature can be built. If no, the workflow needs definition before automation will produce consistent outputs.
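You can run that diagnostic as a concrete experiment: give the same sample of inputs to two operators independently and measure how often their outputs match. A minimal sketch, assuming outputs can be compared for equality; the case IDs and decision labels are hypothetical placeholders.

```python
# Workflow stability as an agreement test: two operators handle the same
# sampled inputs independently, and we measure how often they agree.
# Case IDs and decision labels are hypothetical.
operator_a = {"case-001": "approve", "case-002": "escalate", "case-003": "reject"}
operator_b = {"case-001": "approve", "case-002": "approve", "case-003": "reject"}

shared = operator_a.keys() & operator_b.keys()
matches = sum(operator_a[c] == operator_b[c] for c in shared)
agreement = matches / len(shared)

print(f"agreement on {len(shared)} cases: {agreement:.0%}")
# A low score on a decent sample means the workflow, not the AI,
# needs attention first.
```

On a real sample of 30 to 50 cases, an agreement rate well below 100% is the signal to stabilise the workflow before automating it.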

Trying to automate an unstable workflow with AI usually results in the AI being blamed for inconsistencies that the workflow itself produces. We have seen this kill projects whose technology worked perfectly. The correct sequence is: stabilise the workflow, define what 'right' looks like, then automate.

Diagnostic checks
  • Two operators handling the same input would produce the same output
  • The workflow has a documented decision logic (even if informal)
  • Edge cases and exceptions are handled consistently across the team
  • The team can articulate what 'a good outcome' looks like for the workflow
  • Volume and seasonality are understood (not surprised by month-end or peak periods)

Dimension three: team readiness

AI features need an owner. Not a steering committee, not a project board: a named person with the authority to make decisions during the build and the accountability to operate the feature after launch. Without an owner, decisions stall and operating responsibility drifts.

The team also needs change capacity. AI features change how people work, even when they augment rather than replace. If the operations team does not have the bandwidth to learn a new workflow, you will ship a feature that nobody uses.

Buy-in matters more than enthusiasm. The user base does not need to be excited about AI; they need to not be hostile, and the leadership team needs to actively reinforce the project's importance for as long as it takes to land. Projects with leadership buy-in but no user buy-in struggle through adoption; projects with neither rarely succeed at all.

Diagnostic checks
  • A single named person is accountable for project decisions during the build
  • A single named person (often the same one) is accountable for the feature after launch
  • The operations team has bandwidth to participate in the week-5 pilot and in post-launch operation
  • Leadership has explicitly endorsed the project, not just approved it
  • The end users are not actively hostile to the change (passive resistance is fine)

Dimension four: compliance readiness

Compliance readiness is about whether the AI feature can pass the reviews it will need to pass. The reviews are: data privacy (GDPR, CCPA, sectoral rules), security (SOC 2, ISO 27001, your own stack), AI-specific regulation (EU AI Act, sectoral AI rules), and contractual constraints (vendor agreements, customer contracts, partner agreements).

The review you forgot is the one that will block you. We have seen a build that was complete and ready for production stall for two months because nobody had asked the legal team about a customer contract clause prohibiting subcontractor data processing. The right time to ask is week 1, not week 6.

Compliance readiness is not a yes-or-no question; it is a 'can we satisfy each review with the time and budget we have' question. Sometimes the answer is 'yes, with two weeks of additional work' and that is fine if you scope it in. Sometimes the answer is 'no, this is incompatible with our SOC 2 boundary' and the project changes shape.

Diagnostic checks
  • We have classified the data the feature uses against our privacy framework
  • Security review (internal or external) understands the planned architecture
  • Customer or partner contracts have been checked for AI or subcontractor clauses
  • EU AI Act risk classification has been determined (where applicable)
  • The audit trail and logging requirements have been agreed before build starts

Dimension five: vendor readiness

AI features almost always involve external vendors: a model provider, a vector database, an evaluation platform, a monitoring service. Vendor readiness is about whether your procurement, security, and contract teams can move at the pace the project needs.

The most common surprise is that procurement timelines for AI vendors are slower than for conventional SaaS, because security teams are still building their AI vendor evaluation playbooks. A vendor that signs in two weeks for conventional software might take two months for AI services. Plan accordingly.

The second surprise is that AI vendors lock you in differently. Switching from one model provider to another is harder than switching SaaS vendors because prompts, fine-tunes, and evaluation sets are tuned to specific model behaviour. Build for portability where it matters and accept lock-in where it does not.
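One way to build for portability where it matters is to keep prompts and evaluation sets behind a thin interface you own, so switching providers means rewriting one adapter rather than the whole feature. The sketch below is illustrative only; the class and method names are hypothetical and the vendor calls are elided.

```python
# A thin provider abstraction: the feature and its evaluation set depend
# on this interface, never on a vendor SDK directly. Names are hypothetical.
from typing import Protocol


class ModelClient(Protocol):
    def complete(self, prompt: str) -> str: ...


class ProviderAClient:
    """Adapter around vendor A's SDK (actual call elided)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap vendor A's API call here")


class ProviderBClient:
    """Adapter around vendor B's SDK (actual call elided)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap vendor B's API call here")


def summarise_ticket(client: ModelClient, ticket_text: str) -> str:
    # Prompts live in your code, not in vendor-specific config, so they
    # travel with you when the adapter changes.
    return client.complete(f"Summarise this support ticket:\n{ticket_text}")
```

The abstraction does not remove lock-in (prompts and evals still need re-tuning against a new model's behaviour), but it confines the switching cost to one seam instead of spreading it through the codebase.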

Diagnostic checks
  • Procurement understands the AI vendors required and has a path to onboard them
  • Security has reviewed (or has a process to review) the planned model provider
  • Data processing agreements with model providers are in place or in progress
  • The cost model for AI vendors at production volume has been built and reviewed (see the sketch after this list)
  • Vendor lock-in has been discussed and an exit plan exists for the components that matter
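For the cost-model check above, even a back-of-the-envelope calculation beats a blank row. A minimal sketch; every figure is a placeholder to replace with your own volumes and your vendor's current price sheet.

```python
# Back-of-the-envelope production cost model for a token-priced model API.
# All numbers are placeholders; substitute your own volumes and prices.
requests_per_day = 20_000
input_tokens_per_request = 1_500    # prompt plus retrieved context
output_tokens_per_request = 300

price_per_1m_input = 3.00           # USD per million input tokens
price_per_1m_output = 15.00         # USD per million output tokens

daily_cost = (
    requests_per_day * input_tokens_per_request / 1_000_000 * price_per_1m_input
    + requests_per_day * output_tokens_per_request / 1_000_000 * price_per_1m_output
)

print(f"daily: ${daily_cost:,.0f}   monthly (30d): ${daily_cost * 30:,.0f}")
# With these placeholders: 30M input tokens/day ($90) + 6M output
# tokens/day ($90) = ~$180/day, ~$5,400/month.
```

The point is not precision; it is discovering, before procurement signs anything, whether production volume puts you at $500 a month or $50,000.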

What to do when the checklist surfaces red flags

Red flags do not mean 'do not do the project'. They mean 'address this dimension before the project'. The right response depends on which dimension and how severe the gap is.

Data gaps usually mean a discovery phase to confirm what you actually have. Process gaps usually mean a workflow stabilisation effort before automation. Team gaps usually mean naming an owner and clearing capacity. Compliance gaps usually mean engaging legal early. Vendor gaps usually mean kicking off procurement before you need it.

The fatal mistake is acknowledging a red flag and proceeding anyway. AI projects that ignore a red flag almost always trip over it later, at a point when the cost of addressing it has multiplied. Address it now or scope around it now; do not assume it will resolve itself.

AI Readiness Diagnostic Worksheet

A printable worksheet covering the five readiness dimensions and 25 specific checks. Designed for a leadership team to fill in together in under an hour, surfacing the dimension that needs work before any AI build begins.

Common questions

What is the difference between this checklist and your /ai-readiness tool?

The interactive tool at /ai-readiness gives you a quick score across five dimensions in about ten minutes, suitable for a single decision-maker. This playbook is the longer-form version designed for a leadership team to work through together, with specific diagnostic questions per dimension. The tool tells you the result; the playbook tells you why and what to do about it.

How many of the five dimensions need to be strong for the project to succeed?

All five need to be at least adequate. The pattern we see is that one weak dimension is enough to derail an otherwise well-prepared project. The good news is that adequate is a much lower bar than excellent; most teams are stronger than they think on most dimensions.

We are early-stage. Are we too small for this?

No, but the diagnostic shifts. Early-stage companies often have weak data and weak process but very strong team and compliance flexibility. The right scope for an early-stage AI project is usually narrower than for an enterprise: a single workflow, a tight feedback loop, a fast iteration cycle.

What if we run the diagnostic and decide we are not ready?

Good. That is the diagnostic working. The right next step is to identify which one or two dimensions are weakest and address them, then re-run the diagnostic in a quarter. Many of our most successful client engagements began with a 'not yet' six months earlier.

Can we hire our way to readiness?

Partially. Team readiness can be addressed by hiring or partnering. The other four dimensions are more structural: data depends on what your business has been doing; process on how it operates; compliance on your legal and contractual landscape; vendor readiness on your procurement function. Hiring helps; it does not replace the structural work.

Find the dimension that will derail your AI project.

Book a free 15-minute call. We will run the diagnostic with you and surface the readiness gap that matters most.
