Building My Own Consulting Toolkit Before Day One

Most people starting a new role spend their first few weeks learning the tools from colleagues — what the firm uses, how it’s organised, what the unwritten conventions are. I spent the month before my start date building a personal toolkit instead.

Not to avoid the onboarding. To show up ready to contribute to it.

The gap I was trying to close is specific to the consulting context. Consulting firms have strong institutional knowledge about their client domains and their internal methodologies. They have frameworks, templates, knowledge repositories, and colleagues who’ve seen variations of most problems. What they don’t have, usually, is a way to help a new person develop the personal operational fluency that determines whether you’re effective in the room — the quick mental models, the reference knowledge, the ability to synthesise under time pressure without looking like you’re synthesising under time pressure.

That’s the toolkit I was building: an AI-augmented system for thinking, not for working.

The structure is simple in principle and depends on specific tools in practice. A local knowledge base seeded with everything I could legitimately export from my previous role — anonymised case patterns, frameworks I’d developed, lessons from projects that failed, reference material for the domain knowledge that would be relevant. An AI assistant with access to that base through MCP integration, so questions I ask during preparation draw on accumulated context rather than starting cold. A set of skill files that define how I want the assistant to approach specific tasks — building briefing documents, stress-testing analyses, generating the “awkward question” someone in the room might ask.
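The loop between assistant and knowledge base can be sketched roughly like this. This is a toy in-memory version: in the real setup the base lives on disk and the assistant reaches it through an MCP server, and the names here (`KnowledgeBase`, `Note`, the keyword-overlap scoring) are invented illustrations, not the actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Note:
    """One seeded item: an anonymised pattern, framework, or lesson."""
    title: str
    text: str

class KnowledgeBase:
    """Toy local knowledge base: keyword-overlap scoring over seeded notes."""

    def __init__(self, notes):
        self.notes = notes

    def search(self, query, top_k=3):
        # Score each note by how many query words it shares, punctuation stripped.
        q = set(w.strip(".,;:") for w in query.lower().split())
        scored = []
        for note in self.notes:
            words = set(w.strip(".,;:")
                        for w in (note.title + " " + note.text).lower().split())
            overlap = len(q & words)
            if overlap:
                scored.append((overlap, note))
        scored.sort(key=lambda pair: -pair[0])
        return [note for _, note in scored[:top_k]]

kb = KnowledgeBase([
    Note("data governance ownership",
         "Check whether the client actually owns the governance problem."),
    Note("failed migration lessons",
         "Scope creep sank the migration; freeze scope before estimating."),
])

# A preparation question draws on accumulated context rather than starting cold.
hits = kb.search("who owns the data governance problem")
```

The point of the sketch is the shape, not the retrieval method: questions asked during preparation hit the seeded base first, so the answer starts from accumulated context.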

The seeding process was more valuable than the system it produced. Going through years of work and extracting what was worth carrying forward forced clarity about what had actually been useful versus what had just been done. The patterns that had worked across different problems. The mental models that had predicted outcomes correctly. The questions that, when asked early, had consistently shifted the direction of the work productively. Writing these down, in enough detail to serve as prompts for an AI assistant, made them explicit in a way that years of experience hadn’t.

There’s something specific about articulating knowledge for an AI interlocutor that forces precision. A vague intuition — “check whether the client actually owns the data governance problem they’re asking you to solve” — becomes specific when you have to write it as a skill: what does “actually owns” mean in practice, what signals indicate ownership versus nominal responsibility, what questions surface the distinction. The articulation is the value, regardless of whether the skill file ever gets used.
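Written down, the skill ends up looking something like this. This is a hypothetical Python rendering of that articulation step — the actual skill files are prose prompts, and every signal name and the two-outcome threshold here are invented for illustration:

```python
# Hypothetical, illustrative rendering of the "actually owns" intuition made
# explicit: each vague phrase becomes a named signal with a surfacing question.
OWNERSHIP_SIGNALS = {
    "controls_budget": "Can they fund remediation without another sign-off?",
    "sets_policy": "Do they write the data-handling rules, or inherit them?",
    "feels_consequences": "Are they the ones paged when governance fails?",
}

def assess_ownership(observed: dict) -> str:
    """Classify ownership from observed yes/no signals (illustrative thresholds)."""
    yes = sum(1 for signal in OWNERSHIP_SIGNALS if observed.get(signal))
    if yes == len(OWNERSHIP_SIGNALS):
        return "actual ownership"
    if yes == 0:
        return "nominal responsibility"
    return "unclear -- ask the surfacing questions"

verdict = assess_ownership({"controls_budget": True,
                            "sets_policy": True,
                            "feels_consequences": True})
```

Whether the check ever runs is beside the point; naming the signals and the questions that surface them is the articulation the paragraph describes.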

The first few weeks confirmed that this was worth doing. Not because I arrived with all the answers — the specific institutional context is always new, always requires learning from colleagues — but because the time that would otherwise go to recovering knowledge I already had could go to learning what was genuinely new. The onboarding questions that matter most are the ones about context you don’t have. The ones about frameworks and domain knowledge you bring with you shouldn’t take the same time.

The meta-lesson is about the nature of knowledge transfer across roles. The institutional knowledge of a new firm is genuinely new and has to be learned through the firm. The personal expertise accumulated over years is portable and has to be actively carried. Most people do the first well and the second badly — arriving at the new role with years of useful knowledge that sits latent because it wasn’t externalised in a form that survives the context switch.

Building the toolkit before day one is how you carry the knowledge across.
P.S. The most useful thing in the toolkit turned out to be the log of questions that had changed the direction of work when asked early enough. Not the answers — the questions. The answers depend on context. The questions that surface the important context are more transferable.