Posts
- Compounding: The Only Mental Model
If you could only keep one mental model, keep compounding. It applies to skills, reputation, writing, and tools.
- Over-Capture, Then Cull
Don't filter during capture. Capture is cheap. Ideas are expensive. The cull is where quality happens.
- Hooks Are Life Infrastructure
Event-driven hooks in AI coding tools aren't just for linting — they're programmable triggers for life routines, habits, and systems.
- Play for You, Work for Others
Naval's edge: find the thing that feels like play to you but looks like work to others.
- Composure Is a Skill
Rushing is a habit, not a response to reality. You break it by deliberately not rushing when you could.
- Your AI Tools Should Watch You Fumble
The best time to improve a CLI isn't when it breaks — it's when you review the breakage log at the end of a work session.
- Mining Your LLM
Your AI already knows things that would make it better at helping you. The trick is extracting that knowledge and making it permanent.
- Building a Bus Alert System in One Session
How a real need on a Hong Kong bus turned into a GPS-powered alert system in under two hours.
- The Debate Round Is Where Value Lives
Independent parallel reviews produce overlapping findings. The cross-critique round produces resolution. That's where multi-agent value actually emerges.
- The Confidence Trap
Why the thinkers who make you feel like hard questions are resolved deserve the most scrutiny.
- When LangGraph Earns Its Keep
LangGraph is the SAP of agent orchestration — powerful at scale, overkill for most. Here's the line.
- Your AI Pipeline Is Probably MapReduce
Most AI workflows are parallel-then-aggregate, not agent graphs. Knowing the difference saves you from framework theatre.
- The Expert Illusion
Why 'you are an expert' is the most popular and least useful prompt engineering technique.
- Planning Needs Eyes
A 3-pass AI planning pipeline caught 0 out of 6 design issues. The same planning done in-session with tool access caught 2.5. Planning isn't a prompt problem — it's a tools problem.
- What If Your Vault Had Residents?
Not tools that search your notes — personalities that live in them, form opinions, and disagree with each other.
- Put the Rule Where It Fires
Documenting a rule is half a loop. The rule only works when it fires at the moment of decision — not when it sits in a file nobody reads.
- What Human Memory Teaches AI Agents (and What It Doesn't)
A calculator doesn't simulate forgetting — it manages its context budget. What to cherry-pick from cognitive science for AI agent memory, and what to leave behind.
- When to Make Your Pipeline Agentic
Most LLM pipelines don't need agents. The ones that do share a specific pattern — the step needs to decide what to do next, not just process what it's given.
- The MTEB Leader Barely Beats a Free Model on Agent Memory
I benchmarked 10 memory backends and multiple embedding models on actual agent memory retrieval. The results challenge common assumptions about what matters.
- China's AI Stack Is Now Hardware-Deep
DeepSeek V4 launching on Huawei Ascend NPUs signals that China's AI ecosystem is decoupling at the silicon layer — deeper and more durable than model-level divergence.
- AI Vendors Are Not Neutral Infrastructure
The DoD-Anthropic dispute reveals a new category of operational risk: foundation model vendors can unilaterally revoke access based on their own values, not just SLA violations.
- Three APAC Regulators Are Converging on AI Governance — Banks Should Build One Framework
MAS, PBOC, and HKMA are independently arriving at similar AI governance requirements. Banks regulated by all three have a narrow window to build one superset framework instead of three silos.
- The Agent Governance Gap Is Already Here
Agentic AI isn't a future governance problem — it arrived ungoverned, and this week saw the first enforcement action.
- Your Agent Pays the Cold-Start Tax Every Morning
Agent memory isn't knowledge management. It's performance infrastructure — and the gap between a stateless agent and one that accumulates context is measurable.
- The Knowledge That Disappears When You Try to Capture It
Enterprise AI keeps promising to capture institutional knowledge. The most valuable kind resists capture by design.
- What AlphaSense Charges Ten Thousand Dollars For
I built an AI landscape intelligence pipeline for zero marginal cost. Here's what it does and what it can't.
- The Eval Gap
The scarce AI skill isn't building — it's knowing if what you built actually works.
- The CLI Boundary
Which parts of an AI dev workflow can be wrapped in a CLI, and which can't — learned the hard way by building the wrong thing and measuring it.
- Progressive Trust: How to Give AI Agents Autonomy Without Gambling
The debate about AI agent autonomy is wrong. It's not a binary choice — it's a graduated trust system with observability.
- Don't Be Impressed by Fluency
AI can reproduce smart arguments on demand. I'm not sure that's different from thinking. But the uncertainty itself is worth sitting with.