Posts about AI
-
Human Oversight Doesn't Scale
Every AI governance framework demands human-in-the-loop. Nobody does the maths on what that means at enterprise scale.
-
The Maker-Checker Trap
Most AI maker-checker implementations capture the correction but not the reason. That's a feedback loop with no signal.
-
The Global Minimum of Governance
Governance isn't about catching every failure — it's about proving your process was reasonable when one happens. The real skill is knowing what to deliberately not monitor.
-
The Annotation Model: What AI Journaling Gets Right
Most AI writing tools want to chat with you. The better model is annotation — AI that reads what you wrote and leaves margin notes.
-
Agentic Search Ate RAG
When AI agents can grep, read, and reason iteratively, most RAG infrastructure becomes unnecessary middleware.
-
The AI Trading System You Should Build But Never Use
The best use of AI in investing isn't picking stocks — it's building the pipeline that teaches you why you can't.
-
The Interlocutor Mode
Most people use AI transactionally. The real unlock is conversational — thinking with the model, not through it.
-
Reconstruction Over Retrieval
In a world where AI has perfect recall, the skill that matters is rebuilding frameworks from first principles — not memorising them.
-
No Stable Moat
Every layer humans retreat to, AI follows. The question isn't what we're still good at — it's what we teach the next generation when every cognitive advantage has a shelf life.
-
What the Weights Don't Know
The value of having read everything is collapsing toward zero. What's left is what you can't extract from a model.
-
The Knowledge Mining Gap
Most knowledge workers use LLMs as search engines. The real unlock is using them as subject matter experts you debrief.
-
Good Enough Parrots
The philosophical debate about whether LLMs understand is orthogonal to whether they're useful for knowledge extraction.
-
What LLMs Don't Volunteer
When you mine knowledge from an LLM, certain types come easily. Others are systematically absent. A taxonomy reveals the blind spots.
-
When to Think and When to Count
Machine learning says let the model find the signal. Heuristics research says use one variable and ignore the rest. They're both right — the dividing line is how much data you have.
-
The Heuristic Library
Experts don't make more decisions — they make fewer, by having better defaults. The real meta-skill is accumulating simple rules and knowing when to stop reasoning.
-
Delegation Is Delegation
Whether you're trusting a doctor's prescription, an AI agent's code, or a junior engineer's pull request — the trust heuristics are identical.
-
Why AI Demands Experiments
Most technology decisions can be reasoned through. AI solution design can't — the domain is too empirical, too fast-moving, and too non-linear for theory alone.
-
The Confidence Stack
Not all knowledge is equally trustworthy. Three tiers of validation — from 'a model said it' to 'it survived reality' — and why tracking the difference matters.
-
How to Think With AI (Not Just Use It)
Most people use AI like a tool. Here's what thinking with AI actually looks like — and the skills that make the difference.
-
Your AI Is a Thinking Partner, Not a Q&A Bot
Stop asking your AI single questions. Start thinking out loud with it. Let half-formed ideas land. The AI holds the structure so you can stay in flow.
-
Guardrails Are Rivers, Not Walls
The best guardrails work like river banks — they don't stop the water, they focus it. Constraints create capability.
-
Your AI Is an Echo Chamber (And That's Sometimes Fine)
AI agrees with you by design. That's great for creative flow and dangerous at the decision point. Know when to switch modes.
-
Enterprise AI Agents: The Transformation Is Organisational, Not Technical
The companies that win with AI agents aren't deploying the most agents — they're redesigning their organisations to work with them.
-
Mining Your LLM
Your AI already knows things that would make it better at helping you. The trick is extracting that knowledge and making it permanent.
-
When LangGraph Earns Its Keep
LangGraph is the SAP of agent orchestration — powerful at scale, overkill for most. Here's the line.
-
Your AI Pipeline Is Probably MapReduce
Most AI workflows are parallel-then-aggregate, not agent graphs. Knowing the difference saves you from framework theatre.
-
The Expert Illusion
Why 'you are an expert' is the most popular and least useful prompt engineering technique.
-
What If Your Vault Had Residents?
Not tools that search your notes — personalities that live in them, form opinions, and disagree with each other.
-
The MTEB Leader Barely Beats a Free Model on Agent Memory
I benchmarked 10 memory backends and multiple embedding models on actual agent memory retrieval. The results challenge common assumptions about what matters.
-
Your Agent Pays the Cold-Start Tax Every Morning
Agent memory isn't knowledge management. It's performance infrastructure — and the gap between a stateless agent and one that accumulates context is measurable.