I started with a practical question: why does financial services pay so much? I followed it honestly and ended up somewhere I didn’t expect — asking what “understanding” means, whether AI can have it, and whether my salary reflects anything real about my contribution to the world.
That sounds like a detour. It wasn’t.
Most people in my industry never ask these questions. They optimise, they earn, they don’t examine the system they’re operating inside. That works. You can have a perfectly successful career without ever wondering whether your compensation reflects your value or just your leverage. But I’ve noticed something about the people who do ask — they tend to be sharper. Harder to bullshit. Better at seeing through vendor hype, institutional theatre, and their own self-deception.
In AI specifically, this habit of questioning matters more than it should. The field is drowning in confident claims. Every vendor has a “transformative” solution. Every consultancy has an “AI strategy.” Most of it is sophisticated pattern-matching — not by the AI, but by the people selling it. They’ve learned what sounds impressive and they reproduce it fluently.
The person who has genuinely thought about what “understanding” means — who has sat with the discomfort of not knowing whether the tool they use every day actually comprehends anything — that person will evaluate AI use cases differently. Not because philosophy gives you a decision framework, but because the habit of questioning gives you a better detector for when something is real versus when it merely sounds real.
This is the same skill, applied at different altitudes. “Does this LLM understand regulatory risk?” is a practical question. “What is understanding?” is a philosophical one. But the second question makes you better at answering the first.
The bad version is using philosophy to avoid action — endless “but what does it really mean?” as procrastination. The good version is following a practical question until you hit something genuinely hard, sitting with it long enough to sharpen your thinking, and then going back to work with better judgment.
I don’t have answers to the big questions. I don’t know what understanding is. I don’t know if AI will ever have it. I don’t know whether the market’s valuation of my work reflects anything meaningful. But I’m better at my job for having asked — and I suspect most people would be too, if they gave themselves permission to think about things that don’t have immediate practical payoff.
The most practical thing you can do is occasionally be impractical.