Five phases. Scoped to your organization. One complete picture.

The engagement is designed to move with urgency without cutting corners. From initial discovery to final findings, the pace is shaped by your organization's complexity and stakeholder availability, not an arbitrary clock.

01
30 min · First step

Discovery & Scoping

An initial conversation to understand your organization, your current AI state, and what you're hoping to achieve. We confirm scope, identify key stakeholders, and schedule the engagement. If there's not a strong fit, I'll tell you directly.

  • Understand your organizational context and current AI posture
  • Identify the right stakeholders for interviews
  • Agree on scope, timeline, and what "success" looks like
02
Async · Early phase

Document Review

Before any interviews, I review existing documentation: architecture diagrams, data governance policies, org charts, current AI initiatives, vendor contracts, and any prior AI audit or strategy work. This grounds the interviews in evidence rather than impressions.

  • Technical architecture and infrastructure documentation
  • Data governance and quality policies
  • Existing AI/ML initiatives, tools, and vendor relationships
  • Organizational structure and relevant team compositions
03
4–6 sessions · 45 min each · Core phase

Stakeholder Interviews

Structured 45-minute interviews with key stakeholders across technical and non-technical functions. I ask the same core questions across roles to triangulate honest answers: what leadership says about AI literacy often differs sharply from what the team experiences.

  • CTO / CIO / Head of Engineering: technical foundation and infrastructure readiness
  • Head of Data / Data Engineering: data maturity and pipeline reliability
  • Lead Engineers / Senior ICs: ground-level capability and tooling reality
  • Business Unit Leaders: AI adoption, use case clarity, and change readiness
  • CISO / Legal / Compliance: governance, risk, and regulatory posture
04
Following interviews

Scoring & Analysis

Each of the eight dimensions is scored 0–100 based on the evidence collected, not self-reported answers alone. I weight qualitative and quantitative signals to produce a composite score that reflects organizational reality, not optimistic estimates.

  • Per-dimension scoring with documented evidence for each rating
  • Gap identification: what's missing and how critical it is
  • Blocker analysis: what will stop progress if left unaddressed
  • Effort-to-impact assessment for each identified gap
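As an illustration only, the composite can be thought of as a weighted average of the per-dimension scores. Everything below is hypothetical: the example scores, the equal weights, and the function name are invented for the sketch, and the actual weighting is scoped per engagement.

```python
# Hypothetical sketch of a weighted composite readiness score.
# Dimension names match the assessment; scores and weights are invented.

DIMENSION_SCORES = {            # per-dimension scores, 0-100, evidence-based
    "Data Infrastructure": 62,
    "Technology & Cloud": 55,
    "Engineering Capability": 48,
    "AI Literacy": 40,
    "Strategy & Sponsorship": 70,
    "Use Case Clarity": 35,
    "Governance & Risk": 30,
    "Culture & Change": 50,
}

# Equal weights as a placeholder; a real engagement would tune these.
WEIGHTS = {name: 1.0 for name in DIMENSION_SCORES}

def composite_score(scores, weights):
    """Weighted average of per-dimension scores, rounded to a 0-100 integer."""
    total_weight = sum(weights[name] for name in scores)
    weighted_sum = sum(scores[name] * weights[name] for name in scores)
    return round(weighted_sum / total_weight)

print(composite_score(DIMENSION_SCORES, WEIGHTS))  # prints 49 for these sample scores
```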
05
Findings delivery · Final phase

Findings, Guidance & Strategy Session

Once analysis is complete, the findings are delivered scored, documented, and structured around what your organization needs to understand and act on. The format is calibrated to your situation: some organizations need a detailed written report; others need a tighter executive presentation. We agree on the right shape during scoping. We then walk through the findings together in a strategy session with your leadership team, focused on what to do next.

  • AI Readiness Score with per-dimension breakdown and tier classification
  • Written findings with evidence citations and gap analysis
  • Strategic guidance on where to move first, sequenced by effort and impact
  • Executive-ready summary of findings
  • Leadership walkthrough session
  • Follow-up access as you begin acting on findings

What gets measured, and why it matters.

The eight dimensions were developed from direct experience leading AI engineering teams, not from academic literature or vendor frameworks. They represent the categories in which AI efforts consistently succeed or fail.

01

Data Infrastructure

The foundation of everything. AI systems are only as reliable as the data that powers them. Organizations routinely overestimate their data readiness.

We evaluate
  • Data accessibility and discoverability
  • Data quality, freshness, and completeness
  • Pipeline reliability and observability
  • Data governance and cataloging maturity
02

Technology & Cloud

Your stack determines which AI patterns are feasible without major infrastructure investment. Many organizations have significant hidden rework ahead.

We evaluate
  • Cloud platform maturity and strategy
  • API-first architecture readiness
  • MLOps and model deployment infrastructure
  • Compute capacity and scaling capability
03

Engineering Capability

Building a proof of concept is not the same as productionizing AI. Most teams can prototype; far fewer can maintain, monitor, and improve models in production.

We evaluate
  • ML/AI skills depth across the engineering team
  • Ability to ship AI features reliably
  • Model monitoring and maintenance capability
  • Speed from experiment to production
04

AI Literacy

Non-technical staff who can't use AI tools confidently are a drag on transformation. Adoption is the last mile, and organizations consistently underinvest in it.

We evaluate
  • AI tool usage across non-technical functions
  • Prompt engineering and workflow integration skills
  • Trust, confidence, and feedback quality
  • AI training and onboarding programs
05

Strategy & Sponsorship

Without clear executive alignment and dedicated ownership, AI initiatives stall at the first meaningful obstacle. Every initiative hits one.

We evaluate
  • Executive alignment and AI ownership clarity
  • Dedicated AI budget and investment horizon
  • Existence and quality of an AI strategy or roadmap
  • Accountability structures for AI outcomes
06

Use Case Clarity

AI investments without prioritized, validated use cases produce sprawl, not results. Organizations need a framework for knowing where to start and how to evaluate return.

We evaluate
  • Identified and validated AI use case pipeline
  • ROI framework and measurement methodology
  • Prioritization criteria and decision process
  • Active pilots and their evaluation rigor
07

Governance & Risk

Organizations routinely deploy AI without policies in place. A single public incident around bias, data use, or model behavior can do lasting damage.

We evaluate
  • AI policy and responsible use framework
  • Bias detection and mitigation practices
  • Data privacy and compliance controls
  • AI security and adversarial risk awareness
08

Culture & Change

AI transformation requires people to change how they work. Organizations that underestimate the change management dimension consistently struggle to realize AI's potential, even after strong technical foundations are in place.

We evaluate
  • Organizational appetite and trust for AI change
  • Experimentation culture and failure tolerance
  • Change management capability and track record
  • Leadership communication on AI vision

What this is not.

There are a lot of AI assessments in the market. Most of them share the same limitations.

Not a self-service quiz: I interview your actual stakeholders and review real documentation. The score reflects organizational reality, not how a leadership team describes itself in a multiple-choice survey.
Not a vendor pitch: No preferred tools. No affiliate relationships. No kickbacks. Every recommendation is made because it's the right call for your situation, not because someone is paying for placement.
Not a theoretical framework: Scored on evidence, not aspiration. What you've actually built, what your team can actually do, and what your data actually looks like, not what your strategy deck says.
Not a months-long engagement: The value is in the clarity of the findings and the quality of the guidance, not in billed hours. The goal is a picture clear enough to act on, quickly.

Practical details.

🗓️

Timeline

The engagement is designed to move with urgency. Actual timeline depends on your organization's size, stakeholder availability, and the depth of documentation review required. We'll establish a realistic schedule during the discovery call.

👥

Your Team's Time

Each stakeholder needs 45–60 minutes for their interview. One person should be available to coordinate document sharing and scheduling. Total ask on your team: 5–8 hours across 5–6 people.

🌐

Format

Conducted entirely virtually. Interviews are one-on-one over video, document review is handled via secure file share, and the strategy session includes your full leadership team.

A complete picture of your AI readiness, designed to move fast.

The assessment is structured to deliver clarity quickly, without cutting corners on what actually matters. You'll have the findings and a clear direction before most organizations have finished debating whether to start.

Book an Assessment →

Pricing is scoped to the size and complexity of your organization.