The pattern has played out enough times now that it has a recognisable shape.
An open model or framework drops. Inference costs are near zero — either because the weights are free, or because someone bundled them with a cheap backend. An enthusiast community forms almost immediately, because the activation energy is low enough that anyone with a laptop and an afternoon can get something running. Forks proliferate. Then a government somewhere endorses it — sometimes formally, sometimes just by not blocking it. And then, with a delay of weeks rather than months, the security reckoning arrives: tens of thousands of exposed instances, prompt injection advisories, regulatory warnings, incident reports.
I watched this happen with reasoning models. I’m watching it happen with open agent frameworks. It will happen again with whatever comes next.
The structure is always the same, and it’s worth naming why. When inference costs drop toward zero, the barrier to deployment is no longer cost — it’s knowledge. And knowledge diffuses faster in an open community than security controls can be written, reviewed, approved, and rolled out through an enterprise change management process. The fork wave isn’t enthusiasm for the technology. It’s economics: once something is cheap enough to run, every team with a use case and a weekend has an incentive to build. The enterprise’s governance process wasn’t designed for that clock speed.
What this means practically: by the time a security advisory lands in your inbox, there are already people inside your organisation running the thing it’s about. Not maliciously. They weren’t waiting for approval. They saw a tool, the tool was free, the use case was obvious, and they shipped.
This is what I’d call shadow agent adoption, and it’s the next iteration of shadow IT — except agents have broader reach. A shadow spreadsheet lives on someone’s desktop. A shadow agent can call APIs, read inboxes, write to shared drives, and make decisions. The blast radius of an unmanaged agent deployment is larger than an unmanaged data extract.
The governance question isn’t whether this is happening in your organisation. Assume it is. The question is whether you have a policy for it before the next advisory arrives.
A useful policy has three parts. First, an inventory posture: some mechanism — a questionnaire, a lightweight approval workflow, a declared amnesty period — that surfaces deployments that already exist. You can’t govern what you can’t see, and the goal at this stage is visibility, not prosecution. Second, a risk threshold: a clear statement of what can run without a security review, and what can’t. Agents with read-only access to non-sensitive data are a different risk profile from agents that write to production systems or process customer data. The line doesn’t have to be perfect; it has to exist. Third, a fast lane: if your security review process takes eight weeks, teams will route around it. A lightweight track for low-risk, internally hosted agent deployments — with sensible defaults on scope, logging, and data access — is how you keep governance relevant. A policy that’s too slow to use isn’t a policy; it’s a ritual.
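The risk threshold in particular benefits from being written down as an explicit rule rather than a judgment call. A minimal sketch of what that might look like, in Python — the deployment fields and track names here are illustrative assumptions, not part of any real framework:

```python
from dataclasses import dataclass

# Hypothetical descriptor for a declared agent deployment. Field names
# are illustrative; a real intake form would capture more nuance.
@dataclass
class AgentDeployment:
    writes_to_production: bool
    touches_customer_data: bool
    read_only: bool
    internally_hosted: bool

def review_track(agent: AgentDeployment) -> str:
    """Assign a review track using the thresholds sketched above.

    High-risk deployments (production writes, customer data) always
    get a full security review. Read-only, internally hosted agents
    qualify for the fast lane. Anything ambiguous defaults to the
    slower track -- the safe failure mode.
    """
    if agent.writes_to_production or agent.touches_customer_data:
        return "full-review"
    if agent.read_only and agent.internally_hosted:
        return "fast-lane"
    return "full-review"
```

The point isn’t the ten lines of code; it’s that the rule is inspectable, arguable, and cheap to apply at intake time — which is what keeps the fast lane fast.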
None of this requires predicting which specific tool will arrive next. The pattern is the thing to respond to, not the tool. Open weights, cheap inference, government tailwind, fork wave, security reckoning — the sequence is consistent enough that you can build for it generically.
The teams that are in the best position right now didn’t wait for a specific threat. They built a posture: a classification framework for agent risk, a lightweight intake process that doesn’t take months, and an expectation that some agents will arrive uninvited. They treat shadow agent adoption as a managed phenomenon rather than a surprise.
The teams that aren’t ready are the ones waiting for the advisory before they write the policy. By then, the agents are already inside.