What 'ready' actually means
Readiness is not maturity. A company can be highly mature in conventional engineering and unready for AI; another company can be early-stage in everything else and well-positioned to ship an AI feature this quarter. The difference is whether the specific conditions an AI project depends on are present, not whether the organisation is generally sophisticated.
The five dimensions below are conditions that, if absent, make AI delivery substantially harder or impossible. Strength in one dimension cannot compensate for weakness in another. A team with great data and weak process will ship a feature nobody uses. A team with strong process and weak data will ship a feature that does not work.
The checklist is diagnostic, not prescriptive. The point is not to score yourself; the point is to find the dimension that is going to bite you and address it before you start.
Dimension one: data readiness
AI features run on data. Without the right data, accessible to the right people, in a usable form, the project does not start. The check has three parts.
First, do you have the data the feature needs? This is a problem more often than people expect. Companies often assume they have data they do not have, or have data in a system nobody can extract from, or have data with known gaps that block the use case.
Second, is the data accessible? Many companies have the data but cannot release it to a build team for legal, security, or political reasons that nobody mentioned in the kickoff meeting. The right time to discover this is now, not in week 4 of a build.
Third, is the data usable? Usability covers freshness, structure, labelling, and quality. A spreadsheet last updated three years ago is data; it is not usable data for a production AI feature.
- We have the data the feature requires (or a defensible plan to obtain it)
- A named person has the authority to grant the build team access to the data
- The data is fresh enough to reflect current operations (not stale)
- We have a baseline for data quality (sample size, error rate, coverage)
- Personally identifiable information has been classified and a handling plan exists
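The data-quality baseline in the list above can be established with a short script over a random sample. A minimal sketch, assuming records arrive as dictionaries; the field names (`customer_id`, `order_total`, `updated_at`) and the 90-day freshness window are illustrative assumptions, not recommendations:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical required fields for the feature; substitute your own.
REQUIRED_FIELDS = ["customer_id", "order_total", "updated_at"]
FRESHNESS_WINDOW = timedelta(days=90)  # assumption: 90 days counts as fresh

def baseline(records, now=None):
    """Compute sample size, coverage, error rate, and freshness for a sample."""
    now = now or datetime.now(timezone.utc)
    n = len(records)
    complete = sum(
        all(r.get(f) is not None for f in REQUIRED_FIELDS) for r in records
    )
    fresh = sum(
        1 for r in records
        if r.get("updated_at") and now - r["updated_at"] <= FRESHNESS_WINDOW
    )
    return {
        "sample_size": n,
        "coverage": complete / n,        # share of records with all fields present
        "error_rate": 1 - complete / n,  # share with missing or null fields
        "fresh_share": fresh / n,        # share updated within the window
    }
```

The point of the baseline is not the exact numbers; it is having numbers at all, so that 'the data is fine' becomes a claim someone can check.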
Dimension two: process readiness
AI features automate or augment a workflow. If the workflow itself is unstable, the AI feature inherits the instability. Process readiness is not the same as process maturity. A workflow can be informal but stable (consistent across people and time) and that is enough.
The diagnostic is whether two competent operators of the workflow would handle the same input the same way. If yes, the workflow is stable enough to define an evaluation set against, and an AI feature can be built. If no, the workflow needs definition before automation will produce consistent outputs.
Trying to automate an unstable workflow with AI usually results in the AI being blamed for inconsistencies that the workflow itself produces. We have seen this kill projects whose technology worked perfectly. The correct sequence is: stabilise the workflow, define what 'right' looks like, then automate.
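The two-operator diagnostic can be made concrete: sample a set of inputs, have each operator handle them independently, and measure how often they agree. A minimal sketch, assuming outcomes can be compared as labels; the 0.8 threshold is an illustrative assumption, not a standard:

```python
def agreement_rate(outcomes_a, outcomes_b):
    """Share of inputs on which two operators produced the same outcome."""
    if len(outcomes_a) != len(outcomes_b):
        raise ValueError("both operators must handle the same inputs")
    matches = sum(a == b for a, b in zip(outcomes_a, outcomes_b))
    return matches / len(outcomes_a)

# Example: each list is one operator's outcome per shared input.
op_a = ["approve", "reject", "approve", "escalate", "approve"]
op_b = ["approve", "reject", "reject", "escalate", "approve"]

rate = agreement_rate(op_a, op_b)  # 4 of 5 inputs match
# Assumption: below roughly 0.8, treat the workflow as unstable and
# define it before automating.
stable_enough = rate >= 0.8
```

The disagreement cases are as valuable as the rate: they are exactly the inputs where 'right' needs defining before an evaluation set can exist.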
- Two operators handling the same input would produce the same output
- The workflow has a documented decision logic (even if informal)
- Edge cases and exceptions are handled consistently across the team
- The team can articulate what 'a good outcome' looks like for the workflow
- Volume and seasonality are understood (not surprised by month-end or peak periods)
Dimension three: team readiness
AI features need an owner. Not a steering committee, not a project board: a named person with the authority to make decisions during the build and the accountability to operate the feature after launch. Without an owner, decisions stall and operating responsibility drifts.
The team also needs change capacity. AI features change how people work, even when they augment rather than replace. If the operations team does not have the bandwidth to learn a new workflow, you will ship a feature that nobody uses.
Buy-in matters more than enthusiasm. The user base does not need to be excited about AI; they need to not be hostile, and the leadership team needs to actively reinforce the project's importance for as long as it takes to land. Projects with leadership buy-in but no user buy-in struggle through adoption; projects with neither rarely succeed at all.
- A single named person is accountable for project decisions during the build
- A single named person (often the same one) is accountable for the feature after launch
- The operations team has bandwidth to participate in the week 5 pilot and in post-launch operation

- Leadership has explicitly endorsed the project, not just approved it
- The end users are not actively hostile to the change (passive resistance is fine)
Dimension four: compliance readiness
Compliance readiness is about whether the AI feature can pass the reviews it will need to pass. The reviews are: data privacy (GDPR, CCPA, sectoral rules), security (SOC 2, ISO 27001, your own stack), AI-specific regulation (EU AI Act, sectoral AI rules), and contractual constraints (vendor agreements, customer contracts, partner agreements).
The review you forgot is the one that will block you. We have seen a build, complete and ready for production, stall for two months because nobody asked the legal team about the customer contract clause that prohibited subcontractor data processing. The right time to ask is week 1, not week 6.
Compliance readiness is not a yes-or-no question; it is a 'can we satisfy each review with the time and budget we have' question. Sometimes the answer is 'yes, with two weeks of additional work' and that is fine if you scope it in. Sometimes the answer is 'no, this is incompatible with our SOC 2 boundary' and the project changes shape.
- We have classified the data the feature uses against our privacy framework
- Security review (internal or external) understands the planned architecture
- Customer or partner contracts have been checked for AI or subcontractor clauses
- EU AI Act risk classification has been determined (where applicable)
- The audit trail and logging requirements have been agreed before build starts
Dimension five: vendor readiness
AI features almost always involve external vendors: a model provider, a vector database, an evaluation platform, a monitoring service. Vendor readiness is about whether your procurement, security, and contract teams can move at the pace the project needs.
The most common surprise is that procurement timelines for AI vendors are slower than for conventional SaaS, because security teams are still building their AI vendor evaluation playbooks. A vendor that signs in two weeks for conventional software might take two months for AI services. Plan accordingly.
The second surprise is that AI vendors lock you in differently. Switching from one model provider to another is harder than switching SaaS vendors because prompts, fine-tunes, and evaluation sets are tuned to specific model behaviour. Build for portability where it matters and accept lock-in where it does not.
- Procurement understands the AI vendors required and has a path to onboard them
- Security has reviewed (or has a process to review) the planned model provider
- Data processing agreements with model providers are in place or in progress
- The cost model for AI vendors at production volume has been built and reviewed
- Vendor lock-in has been discussed and an exit plan exists for the components that matter
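The production-volume cost check in the list above is simple arithmetic that is worth writing down before procurement, not after. A minimal sketch; the function, the per-token prices, and the traffic numbers are placeholders, not any provider's actual rates:

```python
def monthly_model_cost(
    requests_per_month: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    input_price_per_1k: float,   # placeholder: USD per 1k input tokens
    output_price_per_1k: float,  # placeholder: USD per 1k output tokens
) -> float:
    """Estimated monthly spend on a model provider at a given volume."""
    per_request = (
        avg_input_tokens / 1000 * input_price_per_1k
        + avg_output_tokens / 1000 * output_price_per_1k
    )
    return requests_per_month * per_request

# Placeholder traffic: 500k requests/month, 1,200 input and 300 output
# tokens per request, at assumed prices of $0.01 and $0.03 per 1k tokens.
cost = monthly_model_cost(500_000, 1200, 300, 0.01, 0.03)
```

Run the same arithmetic at two or three times the expected volume: the number that surprises the finance team at production scale should surprise them in week 1, not in the first invoice.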
What to do when the checklist surfaces red flags
Red flags do not mean 'do not do the project'. They mean 'address this dimension before the project'. The right response depends on which dimension and how severe the gap is.
Data gaps usually mean a discovery phase to confirm what you actually have. Process gaps usually mean a workflow stabilisation effort before automation. Team gaps usually mean naming an owner and clearing capacity. Compliance gaps usually mean engaging legal early. Vendor gaps usually mean kicking off procurement before you need it.
The fatal mistake is acknowledging a red flag and proceeding anyway. AI projects that ignore a red flag almost always trip over it later, at a point when the cost of addressing it has multiplied. Address it now or scope around it now; do not assume it will resolve itself.