Posts about AI
-
The Knowledge That Disappears When You Try to Capture It
Enterprise AI keeps promising to capture institutional knowledge. The most valuable kind resists capture by design.
-
What AlphaSense Charges Ten Thousand Dollars For
I built an AI landscape intelligence pipeline for zero marginal cost. Here's what it does and what it can't.
-
The Eval Gap
The scarce AI skill isn't building — it's knowing if what you built actually works.
-
Don't Be Impressed by Fluency
AI can reproduce smart arguments on demand. I'm not sure that's different from thinking. But the uncertainty itself is worth sitting with.
-
Philosophy Isn't the Opposite of Practical
The people who examine the system they're inside tend to make better decisions within it.
-
What Is Understanding?
I use AI every day. I genuinely can't tell if it understands anything. That question is harder than it looks.
-
Where Gen AI Is Actually Transformative (And Where It Isn't)
I work in AI in financial services. The honest list of where gen AI is real is shorter than the industry wants you to think.
-
You Can't A/B Test Your Life
My career looks like a plan in retrospect. It wasn't. It was a series of pushes, wrong calls, and adjustments.
-
Your Wage Reflects Your Scarcity, Not Your Worth
The most successful piece of propaganda in modern economics is the idea that what you earn is what you deserve.
-
The Assistant Is a Character
People confuse the LLM with the helpful AI assistant. They're not the same thing. The LLM is a prediction engine. The assistant is a role it's playing. The distinction changes how you use it.
-
The Black Box That Responds to Role Play
An LLM can't feel accountability pressure. But structured role-play — simulated rejection, persona assignment, adversarial review — produces measurably better output. The mechanism is opaque; the effect is real.
-
The 'Are You Sure?' Loop
AI's first 'I'm done' is almost never its best work. Simulated accountability pressure — just asking 'are you sure?' — surfaces blind spots that self-review misses.
-
Redundancy Is the Only Honest AI Research Strategy
I ran the same question through six AI tools and scored them against peer-reviewed evidence. Every tool got something wrong that another got right.
-
The Calculator Analogy
Nobody practises arithmetic speed anymore. The same thing is happening to prose, research, and analysis — and it changes what humans should get good at.
-
The Thirty-Year Gap Between Faking and Understanding Natural Language
From AppleScript's rigid English-like syntax to LLM tool-calling — what changes when the computer actually understands you.
-
Not Every Cron Job Is a Feedback Loop
Automation that collects without learning is just a cron job. The difference is a feedback signal — a number that goes up or down.
-
The Loop Is the Product
Karpathy's autoresearch and every useful AI tool share the same pattern: the code is trivial, the feedback loop is the product.
-
The Bootstrap Problem in AI Tooling
You need the tool to build the tool. The answer is: build the dumb version first, use it once, then have it build its replacement.
-
Why Nobody Builds Cross-Vendor AI Orchestration
Every AI lab builds single-vendor orchestration. The cross-vendor layer is a gap — and it's a gap for a reason.
-
The Orchestration Layer Is Knowledge, Not Code
Multi-agent AI orchestration frameworks are commodity. The competitive advantage is knowing which agent to use when, what breaks, and how to recover.
-
Is Insight an Illusion?
When pattern-matching feels like wisdom, what are we actually experiencing?
-
The Fluency Trap
When AI conversations feel insightful because the language model is good at producing insight-shaped text.
-
Why Nobody Benchmarks Memory
The things that matter most in production are the things that get benchmarked least.
-
The Byproduct Trap
When the paper becomes more interesting than the answer you set out to find.
-
AI Agents Need Notebooks, Not Just Memories
The missing layer in enterprise AI isn't smarter models — it's structured memory that humans can actually review.
-
Cross-Cutting Is Just Another Word for Optional
In AI agent architecture, calling something a 'cross-cutting concern' without naming an owner and a gate is just a polite way of saying nobody owns it.
-
The Second Pass Finds More
When red-teaming a document with multiple AI models, the second review — run on the edited version — consistently finds more than the first. Here's why, and what it means for how many rounds to run.
-
RAG Solved the Wrong Problem
The retrieval pipeline was built for systems that couldn't reason about their own information needs. Agents can.
-
This Year's DeepSeek
An open-source AI agent framework became the fastest-growing project in GitHub history — mostly in China. The pattern is the same as last year. So is the security panic.
-
The Accidental Life OS
I spent an afternoon researching AI tools for personal life management. The conclusion was that I should stop looking.