Terry Li

Posts

  • No Stable Moat

    Every layer humans retreat to, AI follows. The question isn't what we're still good at — it's what we teach the next generation when every cognitive advantage has a shelf life.

  • What the Weights Don't Know

    The value of having read everything is collapsing toward zero. What's left is what you can't extract from a model.

  • The Knowledge Mining Gap

    Most knowledge workers use LLMs as search engines. The real unlock is using them as subject matter experts you debrief.

  • Good Enough Parrots

    The philosophical debate about whether LLMs understand is orthogonal to whether they're useful for knowledge extraction.

  • Systematise Decisions, Not Actions

    Actions are cheap to redo. Bad decisions compound. Build systems around the judgment calls, not the mechanical steps.

  • Spaced Repetition for Beliefs

    Most people do spaced repetition for facts but not for beliefs about themselves. Wrong priors calcify because there's no review system.

  • I Made AI Remember to Remember

    Most AI memory is either always-on or ephemeral. The missing category is prospective: remember until a context arises, then forget.

  • Default to the Whole Conversation

    When AI tools search conversation history, they should index both sides by default — not just the human's half.

  • Save Conclusions, Not Just Rules

    When an answer requires multi-step reasoning to reach, save the conclusion — a fresh start won't reliably reproduce the chain.

  • What LLMs Don't Volunteer

    When you mine knowledge from an LLM, certain types come easily. Others are systematically absent. A taxonomy reveals the blind spots.

  • Ten Types of Actionable Knowledge

    Not all knowledge works the same way. A taxonomy for what you're actually capturing when you write down what you've learned.

  • When to Think and When to Count

    Machine learning says let the model find the signal. Heuristics research says use one variable and ignore the rest. They're both right — the dividing line is how much data you have.

  • The Book That Tells You Not to Read It

    Gigerenzer's thesis is that simple rules outperform complex analysis. If you've already internalised that, reading 300 pages of evidence for it might be the exact kind of overthinking he's arguing against.

  • The Heuristic Library

    Experts don't make more decisions — they make fewer, by having better defaults. The real meta-skill is accumulating simple rules and knowing when to stop reasoning.

  • Delegation Is Delegation

    Whether you're trusting a doctor's prescription, an AI agent's code, or a junior engineer's pull request — the trust heuristics are identical.

  • The Dimensions Nobody Lists

    Title, salary, company, industry — the standard job evaluation checklist misses the things that actually predict whether you'll thrive.

  • Why AI Demands Experiments

    Most technology decisions can be reasoned through. AI solution design can't — the domain is too empirical, too fast-moving, and too non-linear for theory alone.

  • The Confidence Stack

    Not all knowledge is equally trustworthy. Three tiers of validation — from 'a model said it' to 'it survived reality' — and why tracking the difference matters.

  • The Treadmill and the Loop

    Getting ahead of AI best practices is a treadmill. The durable skill is testing assumptions faster than they expire.

  • Personas Exploit a Blind Spot in LLM-as-Judge Evaluation

    Persona prompting generates the exact type of hallucination that automated LLM judges reward as 'depth.' Two experiments, blind evaluation, and a fact-check that flipped the finding.

  • The Persona Paradox in AI Agent Teams

    Personas hurt for structured tasks, help for judgment-heavy tasks. Two experiments, blind evaluation, frontier models. The effect is task-dependent, not binary.

  • Revealed Preference in Interviews

    What a company has already built tells you more than what they say they're about to build.

  • Cast the Wide Net

    When you don't have enough information to narrow, stop narrowing.

  • The Easter Egg That Landed

    The strongest slide in my interview deck wasn't about what I'd built. It was about how I built the deck itself.

  • How to Think With AI (Not Just Use It)

    Most people use AI like a tool. Here's what thinking with AI actually looks like — and the skills that make the difference.

  • Your AI Is a Thinking Partner, Not a Q&A Bot

    Stop asking your AI single questions. Start thinking out loud with it. Let half-formed ideas land. The AI holds the structure so you can stay in flow.

  • Guardrails Are Rivers, Not Walls

    The best guardrails work like river banks — they don't stop the water, they focus it. Constraints create capability.

  • Your AI Is an Echo Chamber (And That's Sometimes Fine)

    AI agrees with you by design. That's great for creative flow and dangerous at the decision point. Know when to switch modes.

  • Enterprise AI Agents: The Transformation Is Organisational, Not Technical

    The companies that win with AI agents aren't deploying the most agents — they're redesigning their organisations to work with them.

  • Honesty as Default, With One Exception

    A two-tier honesty framework: be honest by default, override only when truth would harm someone vulnerable.