Terry Li

The Lens Trick: Why One AI Review Isn't Enough

/ 2 min read

I spent an evening stress-testing an RFP through multiple AI models. Five rounds of the same question produced diminishing returns by round three. But then I changed the question.

Instead of asking the models to review the document again, I asked them to review it as someone else. A procurement officer. Then an AI safety testing expert. Then a vendor reading it for the first time. Then a Second Line assurance reviewer. Then a regulatory affairs specialist.

Each persona found things the others missed entirely. The procurement lens caught Saturday submission deadlines and undefined pricing phases. The AI expert caught that we needed a machine-readable regression test corpus, not just a PDF report. The vendor lens revealed that our IP clause was ambiguous enough to scare off top bidders. The assurance reviewer found we had no mechanism for independent challenge sessions. The regulatory specialist noticed we had forgotten to mention fairness and consumer outcomes, which are non-negotiable under FCA Consumer Duty.

The overlap between lenses was almost zero. Nine rounds in total, and the persona rounds surfaced as much as all the repeat-question rounds combined, because they were asking fundamentally different questions of the same text.

The reason this works is that each persona has a different loss function. A procurement officer optimises for process integrity and bid comparability. A vendor optimises for commercial risk and deliverability. A regulator optimises for supervisory defensibility. Each lens is blind to whatever falls outside its own evaluation criteria, and that excluded territory is exactly what another lens is built to see.

This feels like a general pattern. Any document that will be read by multiple audiences with different concerns should be reviewed through each of those lenses separately. The compound review catches things that no single perspective, no matter how expert, would surface on its own.
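The loop itself is simple enough to sketch. Here is a minimal Python version, assuming a generic `ask` callable standing in for whatever model API you use; the persona list comes from the experiment above, but the prompt wording and function names are illustrative, not what I actually typed.

```python
# Sketch of a compound review: one pass per persona, findings merged at the end.
# `ask` is any callable mapping a prompt string to a list of findings
# (e.g. a thin wrapper around a model API) -- it is deliberately abstract here.

PERSONAS = [
    "a procurement officer",
    "an AI safety testing expert",
    "a vendor reading it for the first time",
    "a Second Line assurance reviewer",
    "a regulatory affairs specialist",
]


def lens_prompt(persona: str, document: str) -> str:
    """Frame the same document as a review task for a different evaluator."""
    return (
        f"Review the following document as {persona}. "
        "List only the issues that matter under that role's "
        f"own evaluation criteria.\n\n{document}"
    )


def compound_review(document: str, ask, personas=PERSONAS) -> dict:
    """Run one review per persona and return findings keyed by lens."""
    return {persona: ask(lens_prompt(persona, document)) for persona in personas}
```

The point of keeping `ask` abstract is that the trick is model-agnostic: the gain comes from varying the evaluator's objective, not from varying the model.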

· · ·
