The breach didn’t break in. It walked in through a door you opened.
Vercel's breach didn't open a new attack path — it walked through an approved one. When AI vendors are compromised, good and bad access look identical.
In our companion piece on OpenAI Atlas, we wrote about what happens when the AI you deploy turns into a shadow identity on your network — 40,000+ exposed agents, OAuth tokens leaking into the open internet, and no playbook to revoke access that was never governed in the first place.
That piece was about the AI you deploy.
This one is about the AI your vendor deploys.
And it took less than a week for the market to hand us the headline proof point.
What happened at Vercel
On April 19, Vercel — the cloud platform behind Next.js and millions of production web deployments — confirmed a breach of internal systems. The root cause, per Vercel’s own CEO: an employee’s Google Workspace account was compromised through Context.ai, a third-party AI platform Vercel had integrated into its environment.
Context.ai had been granted Google Workspace OAuth scopes — not through a procurement-sanctioned enterprise agreement, but through an individual employee who signed up for a consumer extension with their corporate account and approved an "Allow All" OAuth prompt. A single click, no central visibility, no procurement review. Classic shadow AI. When Context.ai itself was breached, the attacker didn't have to find a new way in — they used the way Vercel had unknowingly left open. From there, the attacker enumerated Vercel's environments, and access keys, source code, NPM tokens, and GitHub tokens are now reportedly being shopped on a hacking forum for $2M.
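For a sense of what one "Allow All" click actually hands over, here is a minimal sketch that tiers an OAuth grant by the scopes it carries. The scope strings are real Google Workspace scopes, but the tiering and the `risk_tier` helper are illustrative assumptions, not an official taxonomy:

```python
# Hypothetical helper: coarse blast-radius tiering for an OAuth scope list.
# The scopes below are real Google Workspace scopes; the tiering itself
# is an illustrative assumption.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://www.googleapis.com/auth/admin.directory.user",  # directory admin
}

def risk_tier(scopes: list[str]) -> str:
    """Return "high" for broad read/write scopes, "medium" for read-only."""
    if any(s in BROAD_SCOPES for s in scopes):
        return "high"
    if any(s.endswith(".readonly") for s in scopes):
        return "medium"
    return "low"

# An "Allow All" consent typically bundles broad scopes like these:
print(risk_tier(["https://mail.google.com/",
                 "https://www.googleapis.com/auth/drive"]))  # -> high
```

In practice the scope list comes from your identity provider's token audit export; anything tiered high deserves the same scrutiny as a privileged service account.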
This is the part that deserves more attention than it’s getting.
The breach wasn’t a new attack path. It was an unmonitored one.
Most breach narratives are about attackers finding a way in — a missing patch, an exposed endpoint, a phish. Those are different access paths than the legitimate ones, and with enough telemetry you can usually tell them apart.
This wasn’t that.
The attacker didn’t open a door. They walked through a door an employee had unlocked for Context.ai — without security, IT, or identity teams ever seeing the key change hands. The OAuth token Context.ai used to do its legitimate work is the same OAuth token the attacker used to enumerate Vercel’s environment. Same scopes. Same endpoints. Same consent. Same destination.
When your approved path and your attack path use the same credentials, the permission layer cannot tell good from bad. Without visibility into how that access is actually being used, nothing can.
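To make that concrete, here is a deliberately trivial illustration: two audit records for the same token, one the vendor's legitimate sync and one the attacker's enumeration. The field names are hypothetical, but the point holds for any permission-level filter:

```python
# Every field a permission-layer rule can match on is identical, because
# the attacker presents the very same token under the very same consent.
# All field values here are hypothetical.
legit = {
    "principal": "context-ai-integration",
    "scope": "https://www.googleapis.com/auth/drive",
    "endpoint": "files.list",
    "consent_id": "c-8841",
}
attack = dict(legit)  # the attacker's request carries the same credentials

print(legit == attack)  # -> True: nothing here tells good from bad
```

The only signals that differ are the ones most audit pipelines never baseline: which resources were listed, how many, and how fast.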
This is the uncomfortable new reality of the AI era. We are granting AI vendors deep, durable, privileged access so they can do useful work on our behalf. And when those vendors get breached, the attacker inherits the exact same trusted posture the vendor had.
Policy helps — a strict Google Workspace OAuth allowlist would have blocked this specific grant. But policy is only as strong as the exceptions you grant and the shadow IT you don't see. You can’t out-SSO this. You can’t even out-MFA this, because the compromise isn’t happening at your authentication boundary — it’s happening at the vendor’s or at the employee's desk. What you can do is change what you’re watching. You can stop watching permissions and start watching usage — because visibility is the safety net that catches what policy misses.
AI companies are losing their own critical assets
Context.ai is not an outlier. It’s the latest entry in a parade of AI-adjacent incidents we have watched roll in over the last twelve months:
- OpenAI’s macOS signing certificate was put at risk by a poisoned npm package in a GitHub workflow.
- OpenAI Atlas exposed Mojo IPC to all *.openai.com origins, letting researchers capture OAuth tokens across sites.
- Moltbook, the social layer built for OpenClaw agents, leaked 1.5 million API tokens and 35,000 email addresses from a misconfigured database.
- Over 40,000 OpenClaw instances ended up exposed to the public internet, many with trivial or no authentication.
- And now, Context.ai — an enterprise AI platform designed to be trained on a customer’s institutional knowledge — becomes the initial access vector into one of the most strategically important developer platforms on the internet.
Each of these is a reminder that AI vendors are not inherently more secure than anyone else. Many of them are startups shipping at the speed the market demands, with security postures that haven’t caught up to the privileged access they’re asking for.
And yet they’re being granted deployment-level OAuth scopes, Google Workspace domain-wide delegation, GitHub org-level tokens, CI/CD access, and long-lived credentials to do their job.
The uncomfortable truth: AI companies that keep losing their own critical assets will eventually lose yours too.
Why visibility is the whole game
If you cannot see the difference between Context.ai’s normal behavior and an attacker wearing Context.ai’s credentials, you do not have a detection problem — you have a visibility problem. No amount of alerting on the permission layer will fix it, because the permissions on both sides are the same.
Three things have to be true for identity teams to even have a chance here, and almost no organization has all three today:
1. You know every third-party AI vendor holding OAuth scopes into your environment. Not the ones procurement knows about. All of them. Including the one an employee OAuth'd into their corporate account last Tuesday — and the one a well-meaning engineer approved for a pilot nine months ago that nobody has touched since.
2. You know what each of them is actually doing with that access. Not the permissions they were granted. The resources they are touching, the frequency they touch them, and the patterns that constitute “normal” for that specific integration. Without a baseline, you have nothing to compare an anomaly against.
3. You can answer “who accessed what, when, and how” at the resource and activity level — in seconds. Because when the next vendor calls you to say they’ve been breached, you have somewhere between hours and days to figure out what they touched inside your environment. That is not a question permission-level tooling can answer.
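The baseline in point 2 doesn't need to be exotic to be useful. A minimal sketch, assuming you can count a vendor's accesses per resource per time window (the resource names and thresholds are illustrative assumptions, not tuned values):

```python
from collections import Counter

def is_anomalous(baseline: Counter, window: Counter,
                 novel_threshold: float = 0.5) -> bool:
    """Flag a usage window whose resource mix diverges from the baseline.

    Both arguments count accesses per resource. A window dominated by
    never-before-seen resources, or a >10x volume spike, looks like
    enumeration rather than the integration's normal work.
    """
    total = sum(window.values())
    if total == 0:
        return False
    novel = sum(n for r, n in window.items() if r not in baseline)
    spike = total > 10 * max(sum(baseline.values()), 1)
    return novel / total > novel_threshold or spike

# Normal week: the integration touches the same few resources.
baseline = Counter({"drive:design-docs": 40, "gmail:support-inbox": 12})
# Compromise: the same token suddenly walks dozens of new resources.
window = Counter({f"drive:folder-{i}": 1 for i in range(30)})
print(is_anomalous(baseline, window))  # -> True
```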
This is exactly the gap Oleria’s Identity Context Graph was built to close. Human, non-human, and AI identities in one composite graph. Permissions correlated with actual usage. Resource-level visibility down to the individual OAuth scope on the individual app. The ability to answer “who accessed what, when, and how” — in seconds, not days.
What identity and security leaders should do this week
If you’re a CISO, a security engineer, or an IAM owner reading this in the wake of the Vercel disclosure, here is the work worth doing in the next 48 hours:
- Inventory every third-party AI tool, copilot, agent platform, and “small but useful” integration that holds an OAuth grant into your identity provider, your code repositories, your cloud control planes, and your collaboration suite. If you can’t produce that inventory in under an hour, that is your first finding.
- For each one, capture the scopes, the grantor, the grant date, the last time the scope was used, and the resource footprint. Flag anything with deployment-level, domain-wide, or *:write scope as a high priority.
- Force a re-consent or rotation on any AI vendor that is no longer in active use, or that cannot articulate how its OAuth tokens are stored, rotated, and monitored on their side.
- Make sure your detection and response playbook has a path for “third-party AI vendor compromise” that doesn’t require the vendor to notify you first. When the approved path and the attack path look identical, the only early signal you’ll get is a shift in the usage pattern. Your team should know what that looks like in their telemetry.
- And longer term — stop treating AI vendors as SaaS. They aren’t. A SaaS vendor stores your data. An AI vendor operates on your behalf, with your scopes, inside your environment. The governance model has to match that reality.
The bigger pattern
Every AI incident of the last year has pointed at the same thing: identity is the attack surface for the AI era. Not the model. Not the prompt. The identity the AI is operating under — and whether you can tell, in real time, when that identity is being used by someone other than who you approved.
Context.ai’s breach is not the last time this will happen. It’s the first time it has happened at this scale, to a vendor this visible, with customer impact this traceable. More are coming. The question every identity leader should be sitting with this week is simple:
If my most-privileged AI vendor were compromised tonight, could I tell the difference between their normal activity and an attacker using their credentials — and could I act on it before it mattered?
If the answer is anything other than “yes, in minutes,” we should talk.


