I spent RSAC 2026 doing what I do every year: walking the floor, talking to vendors, and — more importantly — listening to the security leaders who stopped by the Kiteworks booth.
What struck me this year wasn’t the volume of announcements. It was the consensus. Vendor after vendor, conversation after conversation, the same word kept surfacing: agents.
- Cisco announced MCP policy enforcement and agent discovery.
- CrowdStrike launched AI agent discovery across endpoints, SaaS, and cloud.
- Palo Alto Networks introduced Prisma AIRS 3.0 to secure the full agentic AI lifecycle.
- BeyondTrust rolled out endpoint privilege enforcement for AI coworkers.
- The Cloud Security Alliance established an entirely new foundation — CSAI — with a stated mission of securing the agentic control plane.
- Even Nvidia weighed in, explaining that its OpenShell runtime enforces constraints at the infrastructure level rather than at the model layer.
The industry has arrived at a shared diagnosis. The question that kept coming up in our booth conversations was sharper: Where does governance actually belong?
The floor confirmed what our research already showed
When we published the Kiteworks 2026 Data Security, Compliance & Risk Forecast Report last December, the headline finding felt almost too stark: 100% of organizations surveyed have agentic AI on their roadmap. Zero exceptions.
Walking the RSAC floor, that number no longer surprises anyone. What surprised the people I spoke with were the numbers underneath it:
- 63% of organizations cannot enforce purpose limitations on their AI agents.
- 60% cannot terminate an agent that’s misbehaving.
- 55% cannot isolate AI systems from their broader networks.
These aren’t obscure technical gaps — they’re the basic containment controls that prevent an autonomous system from exceeding its authorized scope. And yet, 33% of organizations are already planning autonomous workflow agents that act without human approval, with another 24% building decision-making agents that will access sensitive data independently.
That’s the gap I kept hearing practitioners describe in different words at the booth: “we can observe our agents, but we can’t stop them.” Our Forecast quantifies it as a 15–20 point gap between governance controls (monitoring, human-in-the-loop) and containment controls (purpose binding, kill switches, network isolation).
The industry has invested in watching. It hasn’t invested in stopping.
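To make “containment” concrete: a containment control is an enforceable denial, not a dashboard. Here is a minimal Python sketch, using hypothetical names (PurposePolicy, AgentHandle) rather than any vendor’s API, of what purpose binding and a kill switch reduce to:

```python
# Illustrative sketch only: PurposePolicy and AgentHandle are hypothetical
# names, not any vendor's API.
from dataclasses import dataclass


@dataclass(frozen=True)
class PurposePolicy:
    """Binds an agent to an explicit, enumerable purpose."""
    agent_id: str
    allowed_actions: frozenset[str]   # e.g. {"read:invoices"}
    allowed_networks: frozenset[str]  # the isolation boundary


@dataclass
class AgentHandle:
    policy: PurposePolicy
    terminated: bool = False          # the kill-switch state

    def kill(self) -> None:
        """Hard stop: nothing is authorized after this."""
        self.terminated = True

    def authorize(self, action: str, network: str) -> bool:
        """Deny anything outside the bound purpose, or after termination."""
        if self.terminated:
            return False
        return (action in self.policy.allowed_actions
                and network in self.policy.allowed_networks)


handle = AgentHandle(PurposePolicy(
    "inv-bot", frozenset({"read:invoices"}), frozenset({"finance-vlan"})))
assert handle.authorize("read:invoices", "finance-vlan")
assert not handle.authorize("read:payroll", "finance-vlan")   # purpose binding
handle.kill()
assert not handle.authorize("read:invoices", "finance-vlan")  # kill switch
```

The point of the sketch is the ordering: authorization consults the kill switch and the bound purpose on every action, so revocation is immediate rather than advisory.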
Discovery is necessary — it isn’t sufficient
Several of the strongest RSAC announcements targeted the discovery problem.
Astrix introduced four-method AI agent discovery. CrowdStrike extended shadow AI detection from endpoints to SaaS and cloud. Nudge Security announced AI agent discovery at the point of creation. Snyk launched Agent Security to surface shadow AI across development pipelines. BeyondTrust’s Phantom Labs published research showing that most enterprises run shadow AI agents with privileged access invisible to security teams.
This matters. You cannot govern what you cannot see. But discovery alone doesn’t close the governance gap — it illuminates it.
Our Forecast found that shadow AI ranks as a top-five security concern at 23%, yet few organizations have the discovery tools to even identify unauthorized usage. The vendors launching discovery capabilities at RSAC are addressing a real and urgent need. The question is what happens after discovery: once you find the agents, how do you enforce policy on the data they access?
That’s where the conversations at our booth got specific. CISOs weren’t asking whether agents are a risk. They were asking how to govern what agents do with regulated data — across HIPAA, CMMC, PCI, SOX — without building a separate governance stack for every AI platform they adopt.
Only 43% of organizations have a centralized AI data gateway today, according to our research. The remaining 57% are fragmented, partial, or flying blind. Several of the CISOs I spoke with described exactly that fragmentation: different controls for different AI tools, no unified audit trail, no way to produce evidence that satisfies an auditor.
Audit trails: The infrastructure nobody talks about on stage
Here’s something you won’t find in the RSAC keynotes: 33% of organizations lack evidence-quality audit trails entirely, and 61% have fragmented logs scattered across disconnected systems.
Our research consistently shows that audit trail quality is the single strongest predictor of AI governance maturity. Organizations without audit trails are half as likely to have AI training data recovery, 20 points behind on purpose binding, and 26 points behind on human-in-the-loop controls.
The audit trail isn’t a compliance artifact. It’s the foundation on which the rest of the governance architecture is built.
This is what I kept emphasizing at the booth: every AI agent interaction with regulated data needs to be authenticated, policy-governed, encrypted, and logged in a tamper-evident trail that feeds your SIEM — regardless of which model or agent framework is doing the asking.
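Since “tamper-evident” carries the weight in that sentence, it’s worth a sketch. The standard technique is a hash chain: each record commits to its predecessor, so altering any earlier entry breaks verification of everything after it. A minimal version follows, with assumed field names and SHA-256 standing in for the signing and trusted timestamps a production trail would add:

```python
# Illustrative hash-chained audit trail; field names are assumptions,
# not a real schema.
import hashlib
import json
import time


def append_entry(log: list[dict], event: dict) -> None:
    """Chain each entry to its predecessor's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "prev": prev_hash, **event}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})


def verify(log: list[dict]) -> bool:
    """Recompute the chain; False means the trail was altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True


trail: list[dict] = []
append_entry(trail, {"actor": "agent:claims-bot", "action": "read",
                     "resource": "phi/record-123", "decision": "allow"})
append_entry(trail, {"actor": "agent:claims-bot", "action": "read",
                     "resource": "cui/doc-7", "decision": "deny"})
assert verify(trail)
trail[0]["decision"] = "deny"   # tampering with history...
assert not verify(trail)        # ...is detected downstream
```

A SIEM ingesting such a trail can re-verify the chain on receipt, which is what turns a log into evidence.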
Regulators don’t distinguish between a human analyst and an autonomous agent accessing protected health information or controlled unclassified information. The compliance obligation is identical. The evidence standard is identical. And 33% of organizations can’t meet it today.
The architectural bet: Data layer, not model layer
The RSAC announcements revealed a strategic fork in the industry’s approach to AI governance.
Some vendors are securing at the model or runtime layer — through prompt filtering, agent sandboxing, and behavioral guardrails. Others, including Kiteworks, are enforcing governance at the data layer. Nvidia’s description of OpenShell — applying security at the environment level rather than the model or application layer — signals that this architectural principle is gaining traction beyond our own positioning.
Our bet is that data-layer governance will prove more durable. Model prompts can be bypassed. Agent runtimes will evolve. But data access controls — identity verification, ABAC policy enforcement, FIPS 140-3 encryption, and tamper-evident audit logging — operate independently of whatever model or framework is making the request.
That’s why Kiteworks Compliant AI enforces all four checkpoints at the data access layer via the open Model Context Protocol standard, ensuring governance remains consistent regardless of which AI platform an organization adopts today or migrates to tomorrow.
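What “enforces at the data access layer” means architecturally is easiest to show. The following is an illustrative ABAC sketch, not Kiteworks code: the Request and permit names are assumptions, and the only point is that the decision reads attributes of the verified caller and the data’s regulatory labels, never the identity of the model or framework asking:

```python
# Illustrative ABAC sketch; Request and permit are assumed names,
# not Kiteworks or MCP APIs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    principal: str                # verified identity: human or agent
    clearances: frozenset[str]    # e.g. {"hipaa", "cmmc"}
    action: str                   # e.g. "read"
    data_labels: frozenset[str]   # regulatory labels on the resource


def permit(req: Request) -> bool:
    """Every regulatory label on the data must be covered by a clearance
    held by the principal. Nothing here depends on which model asked."""
    return req.data_labels <= req.clearances


# Swapping the AI platform changes the caller, not the decision.
assert permit(Request("agent:claims-bot", frozenset({"hipaa"}),
                      "read", frozenset({"hipaa"})))
assert not permit(Request("agent:claims-bot", frozenset(),
                          "read", frozenset({"hipaa"})))
```

Identity verification sits before that decision, encryption and the audit write sketched earlier sit after it; together they are the four checkpoints, and none of them moves when the platform does.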
The practitioners I spoke with at RSAC understand this intuitively. They’re not looking for an AI security product for each AI tool. They’re looking for a governed data layer that works across all of them. Third-party AI vendor data handling is the number-one security concern in our research at 30%, yet only 36% have visibility into how partners handle data in AI systems.
When the AI platform changes — and it will — the governance must persist. That only works if governance lives at the data layer.
What I’m taking home from San Francisco
RSAC 2026 confirmed three things.
First, the industry has reached consensus that agentic AI governance is an urgent, unsolved problem — the sheer density of agent-focused announcements from Cisco, CrowdStrike, Palo Alto, BeyondTrust, Wiz, and dozens of others makes that unmistakable.
Second, discovery and runtime protection are outpacing the foundational infrastructure — audit trails, centralized gateways, and containment controls — that make governance enforceable and auditable.
Third, the security leaders I talked with at the booth aren’t waiting for the market to sort itself out. They’re making architectural decisions now about how AI agents access regulated data, and those decisions will lock in governance models — or governance gaps — for years.
The window is open. The question is whether your organization will govern the data before agents make decisions for you.