What Makes a Great AI Consultant (Beyond Technical Skills)

The most dangerous person in an AI consulting engagement is the one who knows how the model works but has never sat in a credit committee.

This isn’t a slight against technical depth — it matters enormously, and its absence is disqualifying in most meaningful AI engagements. But technical depth alone produces a particular failure mode: recommendations that are architecturally sound and organisationally stranded. Systems that could work, in an organisation that doesn’t exist. Proposals that answer the technical question while misreading the room so completely that they never make it past the first stakeholder review.

The skills that separate useful AI consultants from merely expensive ones have little to do with AI specifically.

The first is learning to read silence. Every organisation that commissions an AI engagement has people in it who have tried something similar before and failed. They’re usually in the room. They’re usually quiet — not apathetic, but cautious, in that particular way that comes from having invested in something that didn’t work and then watched someone new get excited about the same thing. Their silence carries specific information: what the organisation tried, where it broke down, what the political cost of the failure was. Engaging that silence early — not by pressing, but by creating enough space that there’s somewhere for it to go — changes the shape of everything that follows.

The second is mapping the real approval chain rather than the org chart version. Most AI initiatives fail not because of technical limitations but because they run out of momentum somewhere in the approval process. A project that can’t find a named owner — someone whose performance review is tied to its outcome — won’t survive the first prioritisation cycle. An initiative that requires sign-off from a function that wasn’t consulted during design will stall when it surfaces that function’s concerns for the first time at the approval stage. The org chart tells you the formal hierarchy. The real approval chain — who actually needs to be aligned, in what sequence, with what framing — is discovered by asking different questions, usually in one-on-one conversations that don’t happen in the official project kickoff.

The third is understanding the client’s relationship with failed projects. In some organisations, a visible failure ends careers. In others, a pilot that didn’t scale is unremarkable — a routine part of how the organisation learns. The risk appetite for AI isn’t just a formal policy statement. It’s embedded in stories people tell about what happened to the last person who launched something ambitious that didn’t work. Getting that calibration wrong produces either recommendations that are too cautious to be useful, or recommendations that are bold enough to be interesting and threatening enough to be killed.

An AI architecture recommendation that ignores the organisational context it has to live in isn’t a recommendation. It’s a slide. It might be a good slide. But it won’t get implemented.

The best AI consulting work I’ve seen — and the common thread in the engagements that actually deliver — involves spending the first two weeks asking questions that aren’t in the brief. Not primarily about the technical problem, though that matters. About the organisation. Who are the people with the most accurate understanding of why things succeed or fail here? What initiatives have looked similar to this one from the outside but felt different from the inside? What does this organisation do when it discovers that a project is harder than expected — does it resource up, scope down, or quietly deprioritise?

These questions produce answers that don’t fit neatly into a discovery report. But they determine whether the architecture recommendation is implementable, whether the roadmap is credible, and whether the client will still be engaging with the work six months after the kickoff.

Technical knowledge gets you in the room. Organisational intelligence determines whether the work lands.

P.S. The most useful question I’ve found for calibrating an organisation’s AI readiness has nothing to do with AI: “Walk me through what happened to the last major initiative that didn’t go as planned.” The answer tells you more about how the organisation will behave under pressure than any maturity assessment.