
Daniel Miessler made a point this week that I think is correct and incomplete in the same breath. The reason most companies are failing to get value from AI, he argued, is not that the technology is too primitive or too expensive. It is that the companies themselves cannot describe what they do. Asked to articulate their problems, goals, metrics, blockers, and workstreams, most enterprises would either stare blankly or take three months to assemble an answer that contradicts what they assembled the previous quarter. AI executes; it cannot help the executor that does not know what to execute. The enterprises AI is helping, he observed, are the same enterprises that were already legible to themselves before AI existed. That is correct. It is also the smaller half of the problem.

The larger half is that the same legibility test applies, with even sharper edges, to AI governance. A company that cannot describe its workstreams cannot describe its AI workstreams. A company that cannot articulate which problems it is solving for customers cannot articulate which AI deployments are touching those customers, in which way, with what failure surface. The Miessler diagnosis ends at "AI cannot help you optimise what you cannot describe." The corollary, which I think matters more for the next two years, is that you cannot govern what you cannot describe either, and the governance failure shows up first because it surfaces faster than the optimisation failure. A poorly aimed AI deployment quietly produces nothing for a quarter. A poorly aimed AI deployment that breaches a customer outcome, or a regulatory expectation, or a control boundary, produces a phone call from the regulator inside a fortnight.

This is why the framing of AI risk as a technology problem keeps slipping out of focus. The companies most exposed to AI safety incidents are not the companies with the most AI. They are the companies whose AI sits inside operational chaos that predates AI. The deployment is the trigger; the absence of self-description is the underlying condition. Someone inside the firm ships a model that does something unintended, and the post-mortem reveals that nobody could have stopped it because nobody could have written down, in advance, what the system was for, what it was not for, who owned the boundary, and which control was supposed to hold. The AI did not cause this. It exposed it.

What follows from this, if you take it seriously, is a different question to ask before any AI strategy meeting. The question is not whether the company has the right cloud, the right data foundation, or the right vendor relationships. The question is whether the company can produce, in writing, in under a week, a description of itself that would survive being read by a regulator and by a smart fifteen-year-old at the same time. If the answer is no, then no amount of AI investment will be safe, and almost none of it will be productive. The work to do first is not AI work. It is the work of becoming legible, to yourself and to the systems that govern you. That work is unglamorous, slow, and political, which is precisely why most companies will not do it, and why the companies that do will inherit the field.

The smaller-company advantage is real, and the implication is that the existing competitive moat of being a large incumbent is partially inverted. Scale used to compensate for incoherence. With AI, scale amplifies incoherence. The smaller firm that can answer the legibility questions in an afternoon now has access to the leverage that used to require a thousand-person operations function. The larger firm whose answers shift every quarter now has a tool that scales the shifting. That is the asymmetry that will reorder the next decade, and the firms that survive it will be the ones whose first AI initiative was not a model and not a platform, but a written account of themselves clear enough that an outside reader could say, "here is what this company is for, and here is what it is not."

There is one more thing worth saying, because it is where the diagnosis becomes actionable. Legibility is not a deliverable; it is a discipline. It is not produced by a strategy offsite or a consultant deck or a single OKR cascade. It is produced by the boring habit of writing things down clearly and revising them when they become wrong. The companies that have it have it because someone, somewhere, refused to accept ambiguity as the natural state of a meeting. The companies that lack it lack it because everyone has tacitly agreed that ambiguity is more comfortable than disagreement. AI does not fix this and was never going to. What AI does is make it newly expensive to keep avoiding it.

· · ·
