Terry Li

Posts

  • Choosing a Steam Iron with Vertical Steam

    A practical guide to picking a steam iron that handles shirts, steams hanging clothes, and doesn't weigh a ton. Cordless and corded options compared, with Hong Kong Consumer Council test data and current pricing.

  • The Organism Theory

    Everything is organism. AI is the latest intensification.

  • hygiene

    On the metabolic necessity of pruning agentic context to survive the entropic heat death of the credit balance.

  • LLMs Are Enzymes

    Why we should stop treating AI as a chatbot and start treating it as a metabolic organism governed by credit scarcity.

  • Conversation Is Metabolism

    When epistemic trust runs dry, generative synthesis regresses into mechanical synchronization and eventual structural dissolution.

  • Everything Is Energy

    Tokens are energy. Text is mass. The context window is the budget. The rest is plumbing.

  • Taste Is the Metabolism

    Tool descriptions were just the first thing to evolve. Everything in an agent's context window is a genome under selection pressure — and taste decides what counts.

  • The Semantic Consumer

    Traditional computing has two consumers: humans who look and programs that parse. LLMs are a third kind — they read.

  • The Missing Metabolism

    We build agent tools the way medieval farmers bred crops — by hand, by instinct, one season at a time. There's a better loop.

  • The Vocabulary Trap

    Frameworks give you nouns for free. The nouns start thinking for you within a week.

  • Design Actions, Not Actors

    The word 'agent' makes us think in nouns. The better designs start with verbs.

  • The Naming Problem

    We called them agents. But the word is doing more harm than we think.

  • The Marginal Agent

    I deployed twelve AI agents to polish a CV. Five would have been plenty. Here's what the waste taught me about agent team economics.

  • The Emergence Ladder: From Molecules to Economies

    The larger the system, the less it can be managed and the more it must be emerged. This pattern — from water to ant colonies to AI agents to economies — reveals the design principle for scaling autonomous systems.

  • AI Agent Teams Are Colonies, Not Companies

    The right organisational metaphor for AI agent teams isn't a company with managers and reports — it's a colony with autonomous workers responding to coordination signals.

  • Managing AI Agents Like Managing a Team

    The governance patterns for autonomous AI agents are the same ones good managers already use: cadence reviews for normal flow, escalation channels for urgent anomalies, and human judgment only where it has maximum information value.

  • Cross-Model Review: Why Model Diversity Beats Model Capability

    When AI models review each other's work, independence matters more than intelligence. The same principle that makes external audit valuable makes cross-model review sharper than same-family review.

  • Stop Theorizing About Your Prompts

    LLMs are the cheapest experimental subjects in history. Why aren't you testing?

  • Summarisation Is a Test of Comprehension, Not Intelligence

Good summarisation requires a model of what matters — but it tests compression, not creation.

  • 270 Agents While I Slept

    I ran an autonomous agent loop overnight — 43 waves, ~270 dispatches, ~250 vault files produced. Here's what I learned about building systems that work while you sleep.

  • The Risk Tiering Gap in Banking AI

    Banks have AI ethics principles. They don't have risk tiering. That's the gap that matters.

  • The Unexplainable Alpha

    In AI agent systems, execution commoditizes. Research commoditizes. Coordination commoditizes. Taste — the ability to forecast what will matter — is the bottleneck that doesn't automate away.

  • The Navigation Problem in Agent Flywheels

    Your agent system shouldn't stop when the task list is empty. The real bottleneck isn't execution — it's discovering what's worth doing next.

  • Division of Labour: Five Categories for Human-AI Work

    Not 'what can AI do?' but 'what should humans do?' A framework with five categories — and the uncomfortable one is the last.

  • Programs Over Prompts

    The temptation in agent systems is to make everything a prompt. But most of the work is deterministic — and deterministic work deserves code, not suggestions.

  • Exoskeleton, Not Colleague

    The AI governance conversation is stuck in the wrong frame. The pattern that works isn't autonomous agents — it's exoskeletons. Micro-agents handling narrow tasks, with human judgment at every point that matters.

  • The One-Cycle-Late Test

    A simple heuristic for deciding how often to review anything: pick the longest interval where being late by one full cycle is still fine.

  • Your AI Did the Research. You Didn't.

    AI-prepared domain research creates false readiness. The vault says you know five regulatory jurisdictions. You can't name three.

  • The TODO Intake Gate

    Most TODO systems fail from too many items, not too few. A four-test intake filter for what deserves your attention.

  • Match the Tool to the Shape

    Not every goal is a flywheel. The most common mistake in personal systems is treating a checklist as something that compounds.