I can now extract from an LLM, in two minutes, a structured decision framework that would have taken me a day of reading to assemble myself: five distinct mechanisms for a problem, the failure modes, the key distinctions, a developmental timeline. The kind of thing a subject matter expert carries in their head after years in the field.
This is genuinely useful. It’s also a problem for anyone whose job is “I know things you don’t.”
The 80% or so of domain expertise that lives in textbooks, papers, and accumulated professional literature is already in the model’s weights. Extractable by anyone who knows how to interview it properly. The value of having read everything — of being the person in the room who’s studied the field — is collapsing toward zero.
So what’s left?
Field experience. “I’ve seen this fail at three banks and here’s the pattern the literature doesn’t capture.” The model knows what the research says about change management. It doesn’t know that your specific CFO will sabotage any initiative that wasn’t his idea, or that the last two data governance programmes at this bank died because middle management treated them as compliance theatre. Field experience is the residue of having been wrong in enough contexts to recognise the shape of failure before it arrives.
Judgment under ambiguity. The framework says X. But this situation has a wrinkle — a political constraint, a timing issue, a cultural factor — that means X will backfire. Knowing when the framework doesn’t apply is harder than knowing the framework. It requires the kind of pattern recognition that comes from having lived through the consequences of following the framework when you shouldn’t have.
Accountability. Someone has to own the decision, not just the analysis. A model can give you five options with trade-offs. It can’t sit in the room when the option you chose goes wrong and explain why you chose it. The human job isn’t just selecting — it’s standing behind the selection with your reputation and your career.
Taste. A framework gives you twenty heuristics, but only three matter for this client, this quarter, this political moment. Knowing which three is not analysis. It’s judgment shaped by context that no prompt can fully convey.
The anxiety about AI replacing knowledge workers is usually framed as “can the model do my job?” Wrong question. The right question is: “which part of my job is already in the weights?”
If the answer is “most of it” — if your value proposition is having read the literature, knowing the frameworks, being able to structure thinking about a domain — then yes, that part is increasingly extractable without you. Not perfectly. Not without risk of hallucination. But good enough that paying you for it is hard to justify.
The part that isn’t in the weights is the scar tissue. The pattern recognition from having been wrong. The political awareness from having watched good ideas die for bad reasons. The conviction to make a call when the data is ambiguous and own it when it doesn’t work out.
That’s the human job description going forward. Not “I know things.” The model knows things. Not “I can structure thinking.” The model structures thinking. But “I’ve been wrong enough times in enough contexts to know when the structured thinking doesn’t apply.”
The uncomfortable corollary: if you’ve spent your career accumulating knowledge without accumulating judgment — reading without doing, studying without deciding, advising without owning — the weights already have everything you’ve got.
This changes what’s worth learning. Stop optimising for recall — the model recalls better than you. Optimise for the ability to reconstruct a framework from the problem it solves. Memorising “AI governance has six dimensions” is retrieval. Understanding why a bank needs those six answers, so you can rebuild the framework in context and adapt it when challenged — that’s reconstruction. The model has the bullet points. You need the understanding that tells you which bullets matter here.