Terry Li

Most AI governance frameworks draw their perimeter around the model. Is it biased? Is it hallucinating? Is it leaking training data? They do not draw the perimeter around the toolchain that surrounds the model — the evaluation platforms, the observability services, the prompt registries — and that is where Vercel got breached.

On April 19, Vercel disclosed that attackers had accessed customer environment variables through a compromise that started not at Vercel but at Context.ai, a third-party AI evaluation tool. A Context.ai employee had been infected with infostealer malware months earlier. The attackers used stolen OAuth tokens to reach a Vercel employee’s Google Workspace without triggering MFA — a valid OAuth token represents an authentication that has already happened, so presenting one never prompts a fresh MFA challenge. From there they pivoted to internal systems and extracted API keys, credentials, and configuration data belonging to Vercel’s customers.

The malware and the lateral movement are ordinary. The shape of the attack surface is not. One employee connected one AI tool. That tool had broad OAuth scopes to Google Workspace. The tool’s own employee got compromised. And the blast radius reached Vercel’s customers. Every access control, every MFA gate, every monitoring dashboard that Vercel maintained was irrelevant because the breach entered through a trust relationship that sat outside the governance perimeter entirely.

This matters for banks because the AI adoption pattern in banks makes this exact attack surface inevitable. A team gets approval to use an LLM. Evaluation, observability, and prompt management follow. Each connection involves an OAuth grant or an API key with scopes that were reviewed once during onboarding and never revisited. The AI governance team reviews the model. Nobody governs the constellation of tools that orbit it, each with its own employees, its own security posture, and its own OAuth tokens sitting quietly in someone’s Google Workspace.

The organisations that will avoid this are not the ones that ban third-party AI tools. They are the ones that govern connections with the same rigour they govern models. OAuth grants to AI tools treated like vendor data-sharing agreements: scoped, time-limited, monitored, and revocable. A register of AI tool integrations maintained alongside the register of AI use cases. The use case register tells you what models you are running. The integration register tells you who else can see your data.

Vercel’s CEO advised customers to rotate their non-sensitive keys. That is the remediation. The lesson is upstream: the governance surface for AI is not the model. It is every system that touches the model, and every system that touches those systems.

· · ·
