Posts
- Your AI Risk Tier Is Probably Wrong
List-based and process-based approaches to AI risk classification both fail in predictable ways. The failure mode depends on which you chose.
- Human Oversight Doesn't Scale
Every AI governance framework demands human-in-the-loop. Nobody does the maths on what that means at enterprise scale.
- The Maker-Checker Trap
Most AI maker-checker implementations capture the correction but not the reason. That's a feedback loop with no signal.
- Governance Is a Tax
The most useful reframe I've found for AI governance in financial services.
- The Due Test
The difference between protecting a commitment and hoarding optionality.
- Your Ground Truth Is Someone Else's Process Outcome
When your model's labels come from human decisions rather than reality, you're not measuring what you think you're measuring.
- The Global Minimum of Governance
Governance isn't about catching every failure — it's about proving your process was reasonable when one happens. The real skill is knowing what to deliberately not monitor.
- Human-in-the-Loop Is an Architecture Decision
It's not enough to say humans are in the loop. You need to show the loop is in the system.
- Impossibility Theorems as Consulting Tools
Mathematical impossibility results are the best meeting-room weapons I know.
- Local-First Embeddings for Regulated Industries
24MB download, <100ms per batch, nothing leaves the machine. For banks with air-gapped environments, this changes the conversation.
- Model Routing Is a Design Decision
Your AI budget question isn't which model — it's which phase of the workflow needs depth, and which just needs speed.
- When Your AI Advisor Is Also Your AI Vendor's Partner
What does the Frontier Alliance actually mean for advice quality?
- Progressive Disclosure for AI Agents
Search returns summaries. Get returns detail. The model decides what to expand. 75% context savings.
- Shadow Agents Are Coming for Your Org
Open-source agent adoption can outpace enterprise security controls by weeks. Governance teams need a policy before the agents arrive uninvited.
- The Failures That Look Like Success
The most dangerous AI failures are the ones that look fine on the surface.
- The Fairness Impossibility Is Not a Bug
Every AI fairness debate is secretly a values debate disguised as a technical question.
- The Four Layers of Every AI Agent
Interaction, inference, orchestration, tooling. The boundaries between them must be enforcement points, not design principles.
- The Integration Layer Is the Moat
MCP decouples the tool from the model. Once that happens, the durable asset isn't the model — it's which systems you've exposed.
- The Production Gap: Why AI Pilots Fail
The consulting question isn't how to build AI — it's how to get it past the 62% graveyard.
- The Specificity Trap
Adding detail to a deliverable doesn't fix credibility — it creates new interrogation targets.
- What Chinese AI Labs See That Western Ones Don't
A strand of multi-agent research — latent-space inter-agent communication — is thriving in Chinese labs and almost invisible in the West.
- Your AI Roadmap Is Already Obsolete
A 3-year AI roadmap designed around today's model capabilities may be solving last year's problem by year 2.
- The Pipeline Paradox
Monitoring systems need consumers before they need features.
- The Annotation Model: What AI Journaling Gets Right
Most AI writing tools want to chat with you. The better model is annotation — AI that reads what you wrote and leaves margin notes.
- When Your Life OS Becomes the Life
The real risk of building a personal AI operating system isn't that a better tool appears — it's that your system's complexity becomes the thing you maintain instead of the thing that maintains you.
- Agentic Search Ate RAG
When AI agents can grep, read, and reason iteratively, most RAG infrastructure becomes unnecessary middleware.
- Don't Optimise for the Proxy
When you have both a credential and real work in the same domain, route effort through the real work.
- The AI Trading System You Should Build But Never Use
The best use of AI in investing isn't picking stocks — it's building the pipeline that teaches you why you can't.
- The Interlocutor Mode
Most people use AI transactionally. The real unlock is conversational — thinking with the model, not through it.
- Reconstruction Over Retrieval
In a world where AI has perfect recall, the skill that matters is rebuilding frameworks from first principles — not memorising them.