Intermediate · 35 min · Module 4 of 5

Building AI Teams

Hiring, upskilling, center of excellence models, change management, AI OKRs.

Prerequisites: AI Strategy Frameworks

AI strategy is only as good as the team executing it. Building AI capability means hiring the right specialists, upskilling your existing workforce, choosing the right organizational model, and managing the cultural change that comes with AI adoption. This module covers how to assemble, structure, and lead AI teams that deliver real business outcomes — not just impressive demos.

Hiring for AI Roles

The AI talent landscape has evolved significantly. In 2024, companies competed fiercely for a small pool of ML researchers. By 2026, the talent need has diversified — you still need specialists, but you also need a much broader set of AI-literate roles.

Core AI Roles

| Role | Responsibilities | Key Skills |
| --- | --- | --- |
| ML Engineer | Building, training, and deploying machine learning models. Designing ML pipelines and infrastructure. | Python, PyTorch/JAX, MLOps, distributed training, model optimization |
| AI/ML Platform Engineer | Building the infrastructure that ML engineers use — training clusters, model serving, feature stores, experiment tracking. | Kubernetes, cloud infrastructure, GPU management, CI/CD for ML |
| Data Scientist | Analyzing data to find patterns, building predictive models, designing experiments, measuring AI impact. | Statistics, SQL, Python/R, experimental design, business acumen |
| AI Product Manager | Defining AI product vision, prioritizing use cases, managing the unique uncertainties of AI projects, bridging technical and business teams. | Product strategy, technical literacy, user research, AI limitations awareness |
| AI Application Developer | Building applications that integrate AI models via APIs. Implementing RAG systems, agent workflows, and AI-powered features. | Full-stack development, prompt engineering, API integration, MCP, evaluation frameworks |
| Prompt Engineer / AI Designer | Crafting and optimizing system prompts, designing AI interactions, building evaluation suites, maintaining prompt libraries. | Deep model knowledge, evaluation design, systematic testing, UX sensibility |

The Role Evolution
The "prompt engineer" title is evolving. In 2024, it was a standalone role. By 2026, prompt engineering is increasingly a skill embedded in other roles (product managers write system prompts, developers design agent workflows). The new frontier is "AI application developer" — engineers who may not train models but are expert at building production systems with LLM APIs, RAG, agents, and MCP integrations.

Hiring Strategies

  • Prioritize AI application developers: Unless you are training custom models, you likely need more people who can build with AI APIs than people who can build AI from scratch. Focus on engineers who have shipped AI-powered features in production.
  • Hire for learning velocity: AI tools and best practices change quarterly. Someone who learned prompt engineering six months ago and hasn't updated their approach is already behind. Hire people who demonstrate continuous learning and adaptability.
  • Value domain expertise: An ML engineer who understands healthcare, finance, or your specific industry will outperform a technically stronger engineer who doesn't understand the domain. Domain knowledge is the hardest thing to teach.
  • Test with real projects: AI skills are best assessed through practical projects, not whiteboard interviews. Give candidates a real problem from your business and evaluate their approach to solving it with AI — from problem framing to implementation to evaluation.

Upskilling Existing Teams

Hiring alone cannot fill the AI skills gap. Most organizations need to systematically upskill their existing workforce to be effective with AI tools and to collaborate with AI specialists.

The AI Literacy Pyramid

Level 1: AI Awareness (All Employees)

Everyone in the organization should understand what AI can and cannot do, how your organization uses AI, what the AI acceptable use policy says, and basic prompt skills for approved tools. This level is about literacy and responsible use, not technical depth.

Training investment: 2-4 hours of initial training plus quarterly updates. Delivered via e-learning modules, lunch-and-learns, and internal documentation.

Level 2: AI Power Users (Knowledge Workers)

Employees in roles that benefit directly from AI tools — writers, analysts, marketers, project managers, customer success teams — should develop proficiency with AI for their specific workflows. This includes advanced prompting, building custom GPTs or Claude Projects, using AI for data analysis, and integrating AI tools with their daily work.

Training investment: 1-2 days of structured training plus ongoing coaching. Role-specific workshops showing how AI applies to their exact workflows.

Level 3: AI Builders (Technical Teams)

Developers, data analysts, and technical staff should learn to build AI-powered features — API integration, RAG systems, agent development, MCP servers, evaluation frameworks, and production deployment patterns. This turns your engineering team into an AI-capable engineering team.

Training investment: Structured learning programs (weeks to months), hands-on projects with real business data, mentorship from AI specialists, and access to courses from platforms like DeepLearning.AI, fast.ai, or cloud provider training programs.

Level 4: AI Specialists (Dedicated AI Team)

Your core AI team needs deep expertise — model fine-tuning, custom training, advanced evaluation, ML infrastructure, and research-level understanding of model capabilities and limitations. These roles typically require hiring or dedicated advanced training programs.

Training investment: Conference attendance, research paper reading groups, sabbatical-style deep learning programs, and collaboration with academic or research partners.

The AI Adoption Benchmark
As of early 2026, surveys show that over 80% of professional developers report using AI coding tools on a weekly basis. If your engineering team isn't at this level of adoption, you are leaving significant productivity on the table. Make AI tool adoption a managed initiative, not an organic hope.

AI Center of Excellence Models

How you organize AI capability within your company structure significantly impacts speed of adoption, quality of outputs, and long-term scalability. Three models dominate:

Model Comparison

| Model | Structure | Strengths | Weaknesses |
| --- | --- | --- | --- |
| Centralized | A dedicated AI team that all business units request work from | Deep expertise concentration, consistent standards, efficient resource use | Bottleneck risk, slow responsiveness, disconnect from business context |
| Embedded | AI specialists placed directly within each business unit or product team | Close to business problems, fast iteration, deep domain context | Inconsistent practices, duplicated effort, isolation from AI community |
| Hub-and-Spoke (Hybrid) | Central AI team sets standards, tools, and platforms; embedded AI practitioners execute within business units | Best of both — shared standards with business-unit responsiveness | Requires strong coordination, dual reporting can create tension |

The hub-and-spoke model has become the preferred approach for organizations with more than 50 employees. The central hub owns AI infrastructure (model access, shared tools, evaluation frameworks, governance policies), while embedded practitioners own execution within their teams.

The Hub Team's Core Functions
A central AI hub typically owns: (1) model vendor relationships and API management, (2) shared prompt libraries and best practices, (3) evaluation and testing frameworks, (4) governance and compliance, (5) training and upskilling programs, and (6) reusable AI infrastructure (RAG pipelines, agent frameworks, MCP servers). Everything else is better owned by the teams closest to the business problem.
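To make function (3) concrete, a hub-owned evaluation framework can start very small: a shared table of test cases and a pass-rate report that any team can contribute to. The sketch below is purely illustrative — `run_model` is a stand-in (stubbed here with canned answers) for whatever approved model API your hub standardizes on, and the test cases are invented.

```python
# Minimal sketch of a hub-owned evaluation harness (hypothetical).
# `run_model` is a placeholder for your organization's standardized model API.

def run_model(prompt: str) -> str:
    # Stub: in practice this would call the hub's approved model endpoint.
    canned = {
        "Summarize: refund policy": "Refunds are issued within 30 days.",
        "Classify: 'I want my money back'": "refund_request",
    }
    return canned.get(prompt, "")

# Shared test cases: (prompt, check) pairs contributed by business-unit teams.
TEST_CASES = [
    ("Summarize: refund policy", lambda out: "30 days" in out),
    ("Classify: 'I want my money back'", lambda out: out == "refund_request"),
]

def pass_rate(cases) -> float:
    """Run every case through the model and return the fraction that pass."""
    passed = sum(1 for prompt, check in cases if check(run_model(prompt)))
    return passed / len(cases)

print(f"pass rate: {pass_rate(TEST_CASES):.0%}")  # prints "pass rate: 100%"
```

Even a harness this simple lets the hub track regressions when prompts, models, or vendors change, which is the point of owning it centrally.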

Change Management for AI Adoption

Technology deployment without change management leads to expensive shelfware. AI adoption is particularly challenging because it changes how people do their daily work, raises fears about job displacement, and requires new mental models for human-AI collaboration.

The Change Management Playbook

  • Start with leadership sponsorship: Visible executive commitment signals that AI adoption is a priority, not a fad. Leaders should use AI tools themselves and share their experiences openly — including failures and learning moments.
  • Address fear directly: Acknowledge that AI changes roles. Frame AI as augmentation (making people more effective) rather than replacement. Be honest about which tasks will be automated and how roles will evolve. Provide clear reskilling paths.
  • Identify and empower champions: Find early adopters in every team who are enthusiastic about AI tools. Give them extra training, recognition, and a platform to share their wins. Peer influence is more powerful than top-down mandates.
  • Redesign workflows, not just add tools: Dropping an AI tool into an existing workflow often yields marginal gains. Redesign the workflow around AI capabilities — what can be automated entirely? What benefits from human-AI collaboration? What should remain purely human?
  • Measure and celebrate wins: Track adoption metrics (usage rates, time saved, quality improvements) and celebrate them publicly. Nothing drives adoption like seeing concrete results from peers.
  • Iterate based on feedback: Create channels for employees to report what works, what doesn't, and what they need. Use this feedback to refine tools, training, and policies.
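The "measure and celebrate wins" step implies instrumenting real usage rather than relying on self-reports. As one hedged illustration, weekly tool adoption can be computed from usage logs — the `(user, date)` event schema below is an assumption, not a prescribed format.

```python
# Sketch: computing weekly AI-tool adoption from usage logs.
# The event schema (user, date-of-use) is hypothetical.
from datetime import date

usage_events = [
    ("alice", date(2026, 1, 5)),
    ("alice", date(2026, 1, 7)),
    ("bob",   date(2026, 1, 6)),
]
team = ["alice", "bob", "carol", "dan"]

def weekly_adoption(events, team, week_start, week_end):
    """Fraction of the team with at least one tool event in the week."""
    active = {user for user, day in events if week_start <= day <= week_end}
    return len(active & set(team)) / len(team)

rate = weekly_adoption(usage_events, team, date(2026, 1, 5), date(2026, 1, 11))
print(f"weekly adoption: {rate:.0%}")  # prints "weekly adoption: 50%"
```

Reporting this number per team, week over week, is what turns adoption from an "organic hope" into a managed initiative.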

OKRs for AI Initiatives

AI initiatives need measurable objectives. Vague goals like "become an AI-first company" are not actionable. Here are example OKR frameworks for different stages of AI maturity:

Example OKRs by Maturity Stage

Stage 1: Foundation (First 6 Months)

Objective: Establish AI capability and drive initial adoption across the organization.

KR1: 90% of developers using AI coding tools weekly (measured by tool analytics).

KR2: 100% of employees complete AI literacy training.

KR3: Launch 3 AI-powered internal tools with measurable productivity impact.

KR4: Publish and distribute AI acceptable use policy with 100% employee acknowledgment.

Stage 2: Scale (6-18 Months)

Objective: Embed AI into core business processes with measurable business impact.

KR1: Reduce customer support response time by 40% through AI-assisted triage and resolution.

KR2: Deploy AI features in 2 customer-facing products with positive NPS impact.

KR3: Establish AI governance framework with risk assessments completed for all Tier 3+ systems.

KR4: Achieve 50% reduction in time-to-first-draft for marketing content through AI tools.

Stage 3: Transform (18+ Months)

Objective: Achieve AI-driven competitive advantage in core business areas.

KR1: Generate measurable new revenue from AI-native product features.

KR2: Automate 60% of routine operational workflows with human oversight at critical checkpoints.

KR3: Deploy agent-to-agent capabilities for 2 key business processes (procurement, customer onboarding).

KR4: Achieve top-quartile AI maturity rating in industry benchmarking.
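A common convention for tracking OKRs like the ones above is to score each key result as fraction-of-target and roll KRs up into an unweighted average per objective. The sketch below uses that convention with invented numbers; the scoring scheme is a widely used default, not something the module prescribes.

```python
# Sketch: scoring OKR progress as fraction-of-target (illustrative values).

key_results = [
    # (description, current value, target value)
    ("Developers using AI tools weekly (%)", 72, 90),
    ("Employees completed AI literacy training (%)", 100, 100),
    ("AI-powered internal tools launched", 2, 3),
]

def kr_progress(current, target):
    """Progress toward a single KR, capped at 1.0."""
    return min(current / target, 1.0)

def objective_progress(krs):
    """Unweighted average of KR progress — a simple default rollup."""
    return sum(kr_progress(current, target) for _, current, target in krs) / len(krs)

print(f"objective progress: {objective_progress(key_results):.0%}")
```

Weighting KRs differently (or scoring binary KRs as 0/1) is a reasonable variation; the important part is that every KR has a numeric target to score against.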

Building an AI-First Culture

An AI-first culture is not about using AI for everything — it's about habitually asking "could AI make this better?" before building or doing things the old way. It's a mindset shift, not a technology mandate.

  • Default to AI-assisted: Encourage teams to start every new project by exploring how AI could help — whether that's code generation, research synthesis, design exploration, or data analysis. Make "did you try AI?" a standard question in project reviews.
  • Share learnings openly: Create internal channels (Slack channels, wikis, show-and-tell sessions) where employees share AI tips, prompts that worked, and lessons from failures. Knowledge compounds when shared.
  • Invest in experimentation: Give teams time and permission to experiment with new AI tools and techniques. Google's 20% time concept applies well here — even 10% dedicated to AI exploration yields outsized returns.
  • Reward outcomes, not AI usage: The goal is better business outcomes, not higher AI utilization metrics. Celebrate teams that achieve great results, whether they used AI or not. AI is a means, not an end.

Avoid the AI Theater Trap
Some organizations fall into "AI theater" — visibly using AI for its own sake without genuine impact. Press releases about AI initiatives, AI-themed town halls, and mandatory AI tool usage metrics can create the appearance of transformation without the substance. Focus on outcomes: faster delivery, better quality, lower cost, happier customers. If AI helps achieve those, great. If not, do something else.

Key Takeaways

  1. The AI talent need has diversified beyond ML researchers — AI application developers who build with APIs, RAG, and agents are now the highest-demand role for most organizations.
  2. Hire for learning velocity over current skills. AI tools and practices change quarterly, so adaptability matters more than today's specific expertise.
  3. Upskilling follows a pyramid: AI awareness for all employees, power user skills for knowledge workers, builder skills for technical teams, and specialist depth for the core AI team.
  4. Over 80% of developers now use AI coding tools weekly. If your engineering team isn't there, treat AI tool adoption as a managed initiative with training and measurement.
  5. The hub-and-spoke (hybrid) organizational model works best for most companies — a central team owns standards and infrastructure while embedded practitioners own execution.
  6. Change management is 70% of AI adoption success. Address job displacement fears directly, identify team champions, redesign workflows (don't just add tools), and celebrate measurable wins.
  7. Avoid AI theater — using AI visibly without genuine impact. Focus OKRs on business outcomes (faster, better, cheaper, happier customers) rather than AI usage metrics.

Test Your Understanding

Module Assessment

5 questions · Score 70% or higher to complete this module

