Posts
- Philosophy Isn't the Opposite of Practical
The people who examine the system they're inside tend to make better decisions within it.
- The Market Prices Leverage, Not Value
After a decade in financial services, I've stopped believing that what you earn reflects what you contribute.
- What Is Understanding?
I use AI every day. I genuinely can't tell if it understands anything. That question is harder than it looks.
- Where Gen AI Is Actually Transformative (And Where It Isn't)
I work in AI in financial services. The honest list of where gen AI is real is shorter than the industry wants you to think.
- You Can Know the Game Is Unfair and Still Play It
Supporting a family in a system you see clearly isn't selling out. It's the most honest position there is.
- You Can't A/B Test Your Life
My career looks like a plan in retrospect. It wasn't. It was a series of pushes, wrong calls, and adjustments.
- Your Wage Reflects Your Scarcity, Not Your Worth
The most successful piece of propaganda in modern economics is the idea that what you earn is what you deserve.
- The Assistant Is a Character
People confuse the LLM with the helpful AI assistant. They're not the same thing. The LLM is a prediction engine. The assistant is a role it's playing. The distinction changes how you use it.
- The Black Box That Responds to Role Play
An LLM can't feel accountability pressure. But structured role-play — simulated rejection, persona assignment, adversarial review — produces measurably better output. The mechanism is opaque; the effect is real.
- The 'Are You Sure?' Loop
AI's first 'I'm done' is almost never its best work. Simulated accountability pressure — just asking 'are you sure?' — surfaces blind spots that self-review misses.
- Redundancy Is the Only Honest AI Research Strategy
I ran the same question through 6 AI tools and scored them against peer-reviewed evidence. Every tool got something wrong that another got right.
- The Calculator Analogy
Nobody practises arithmetic speed anymore. The same thing is happening to prose, research, and analysis — and it changes what humans should get good at.
- What Feels Like Play
Naval's famous line is easy to nod at. The hard part is actually identifying yours — and being honest about what isn't.
- Your Body Doesn't Care What You're Thinking About
30 days of Oura data showed activity type doesn't predict stress. Meetings do.
- The Thirty-Year Gap Between Faking and Understanding Natural Language
From AppleScript's rigid English-like syntax to LLM tool-calling — what changes when the computer actually understands you.
- Not Every Cron Job Is a Feedback Loop
Automation that collects without learning is just a cron job. The difference is a feedback signal — a number that goes up or down.
- The Loop Is the Product
Karpathy's autoresearch and every useful AI tool share the same pattern: the code is trivial, the feedback loop is the product.
- The Bootstrap Problem in AI Tooling
You need the tool to build the tool. The answer is: build the dumb version first, use it once, then have it build its replacement.
- Why Nobody Builds Cross-Vendor AI Orchestration
Every AI lab builds single-vendor orchestration. The cross-vendor layer is a gap — and it's a gap for a reason.
- The Orchestration Layer Is Knowledge, Not Code
Multi-agent AI orchestration frameworks are commodity. The competitive advantage is knowing which agent to use when, what breaks, and how to recover.
- Is Insight an Illusion
When pattern-matching feels like wisdom, what are we actually experiencing?
- The Grey Areas Are the Whole Thing
Ethics isn't about knowing the answer — it's about feeling the tension.
- Why Be Nice
The question I can't fully answer for my son.
- The Fluency Trap
When AI conversations feel insightful because the language model is good at producing insight-shaped text.
- Why Nobody Benchmarks Memory
The things that matter most in production are the things that get benchmarked least.
- The Byproduct Trap
When the paper becomes more interesting than the answer you set out to find.
- AI Agents Need Notebooks, Not Just Memories
The missing layer in enterprise AI isn't smarter models — it's structured memory that humans can actually review.
- Guardrails Beat Guidance
Prompt instructions are suggestions. Hooks are constraints. One survives a model swap.
- Taste Works for Small Bets
The 'ship and calibrate' loop works beautifully for reversible decisions. For the big ones, you're mostly guessing and then making the guess true.
- Your Output Is Your Selections
AI commoditises execution. What remains is taste — the 'that's the one' reflex. And the only way to sharpen it is to ship and see what reality says back.