Most AI readiness assessments come from people who have read about AI transformation. Mine comes from someone who builds it.

I've led engineering teams that have shipped AI products used at scale. I've managed the hiring, tooling, infrastructure, and culture decisions that determine whether an AI initiative actually lands in production or lives forever as a promising pilot that never makes it to the real world.

I've seen what good AI readiness looks like from the inside: clean data foundations, engineers who understand the full lifecycle, executives who ask the right questions and give teams room to fail fast. I've also seen what the gap looks like: organizations pouring resources into AI while sitting on brittle data pipelines, teams that can prototype but can't productionize, and leadership that mistakes vendor demos for capability.

The AI Readiness assessment is how I'd evaluate my own team's ability to execute an AI initiative. I apply the same lens I'd use internally: the same questions, the same evidence standards, the same directness about what I find.

I work with a limited number of organizations each month. Not because of artificial scarcity, but because this work requires genuine attention. I read your documentation, I interview your people, and I write up the findings myself. You get my thinking, not a framework that a junior analyst completed with your name on it.

"The organizations that will lead on AI aren't the ones who started first. They're the ones who started with the most honest picture of where they actually were."

If you were pointed here by someone who knows me, or if you came through the AI executive training program, I'd like to hear from you. Introductions and referrals are welcome.

Connect on LinkedIn →

🏗️ Practitioner

Building AI in Production, Daily

Not advising from a distance. Not running workshops on theoretical frameworks. I lead engineering teams that ship AI products and manage the infrastructure, culture, and capability decisions that determine whether they succeed.

🔬 Evidence-Driven

Scored on What I Find, Not What You Report

The score is grounded in documentation review and stakeholder interviews, not a self-assessment survey. What I find in your architecture diagrams, your data pipelines, and your team conversations is what gets scored.

🎯 Independent

No Vendor Relationships

No preferred tools. No affiliate arrangements. No consulting arm that benefits from recommending a particular platform. Every recommendation is made because it's the honest call for your situation.

📋 Accountable

I'll Tell You the Truth

If your organization isn't ready, you'll know exactly why and what it will take to change that. I don't soften findings to protect a relationship. You're paying for clarity, not reassurance.

Why a 30-year engineering practitioner, not a consultant.

The most common AI readiness assessments come from management consulting firms, research analysts, or AI vendors with something to sell. None of them has the view from inside the engineering work.

  • I know what "good" actually looks like in engineering
    I've seen high-performing AI engineering teams from the inside. I know what real ML/AI capability looks like versus what a team describes itself as capable of. There's often a significant gap between the two, and I know how to surface it.
  • I understand the production gap
    Most assessors can evaluate whether a team has tried AI. I can evaluate whether they can sustain it. The difference between a successful pilot and a scalable AI practice is an engineering problem, and I assess it like one.
  • I have no incentive to make you sound better than you are
    A consulting firm's next engagement often depends on the relationship they build in the first one. I'm not building an ongoing retainer. My incentive is for the findings to be accurate and the guidance to be executable, so that when you act on it, it works.
  • I write the guidance the way I'd actually execute it
    The strategic guidance isn't generated from a template. It's built the same way I'd prioritize work for my own team: by effort, by impact, by what unlocks the most downstream value, and by what's actually achievable given your constraints.

Also: AI Executive Training

For organizations where the readiness gaps are partly about leadership AI fluency, or for executives who want to build the judgment to lead AI transformation from the front, I offer a separate private executive training session through The AI Briefing.

Many organizations pair the assessment with executive training: the assessment identifies the organizational gaps, and the training builds the leadership capability to address them. The two are designed to work together, run by the same practitioner, through the same real-world lens.

Learn about AI Executive Training →

Ready to get an honest picture?

If you were referred here, or if what you've read resonates, I'd like to hear from you. I respond to every inquiry personally within 48 hours.

Request an Assessment →

By referral or direct inquiry. Every response is personal.