Posts about governance
-
Governance Is a Design Problem
Compliance-first governance produces paperwork. Design-first governance produces systems you can actually explain to a regulator.
-
Managing AI Agents Like Managing a Team
The governance patterns for autonomous AI agents are the same ones good managers already use: cadence reviews for normal flow, escalation channels for urgent anomalies, and human judgment only where it has maximum information value.
-
Inference Cost Collapse Is a Governance Liability
When AI agent calls approach zero cost, the natural rate-limiter on decision volume disappears — and oversight frameworks designed for prediction models break.
-
The AI/DLT Conflation Trap in HKMA's March 2026 Strategic Review Mandate
HKMA's new strategic review circular bundles AI inference risk and smart contract risk into one workstream — a governance design flaw that will cause banks to under-govern both.
-
Model Risk Management Was Not Built for This
SR 11-7 assumes models are tools that produce outputs for human review. AI agents are actors that take actions autonomously. Every assumption breaks.
-
Your AI Risk Tier Is Probably Wrong
List-based and process-based approaches to AI risk classification both fail in predictable ways. Which failure mode you get depends on which approach you chose.
-
Human Oversight Doesn't Scale
Every AI governance framework demands human-in-the-loop. Nobody does the maths on what that means at enterprise scale.
-
The Maker-Checker Trap
Most AI maker-checker implementations capture the correction but not the reason. That's a feedback loop with no signal.
-
Your Ground Truth Is Someone Else's Process Outcome
When your model's labels come from human decisions rather than reality, you're not measuring what you think you're measuring.
-
The Global Minimum of Governance
Governance isn't about catching every failure; it's about proving your process was reasonable when one happens. The real skill is deciding, deliberately, what not to monitor.
-
The Agent Governance Gap Is Already Here
Agentic AI isn't a future governance problem — it arrived ungoverned, and this week saw the first enforcement action.
-
Why AI Assistants Make Us Dumber (And What Governance Should Do About It)
The cognitive offloading problem is real. The governance response mostly isn't. There's a specific mechanism at work, and it has a specific fix.
-
Don't Ask Your AI to Find Problems
Ask for bugs and you'll get bugs — whether they exist or not. Sycophancy is a design feature, and the fix isn't better prompting.