Here’s a question I’ve been asking CISOs over the past few weeks. Have you scanned your environment for OpenClaw?
Most of the time, I get a pause. Then something like, "We haven't deployed OpenClaw." That's an answer to a question I didn't ask. I didn't ask whether IT deployed it. I asked whether it's running in the environment. Those are very different things.
OpenClaw is an open-source AI agent that runs locally on a laptop. It doesn’t require administrator privileges to install. It doesn’t phone home to a central server that your network monitoring would flag. It connects to email, Slack, Teams, WhatsApp, calendars, developer tools, and file systems through standard integrations. And it has persistent memory, meaning it accumulates access and context across sessions.
When Jensen Huang stood on stage at Nvidia’s GTC 2026 and called OpenClaw “the most important software release ever,” he wasn’t making a prediction. He was describing something that has already happened. OpenClaw surpassed Linux’s 30-year adoption curve in three weeks. It is the most downloaded open-source project in GitHub history.
Your developers almost certainly know about it. Many of them are probably running it.
This is shadow IT on a completely different scale
Security teams have spent the last decade building playbooks for shadow IT. Employees adopt a new SaaS tool, someone notices, the tool gets evaluated, and eventually it’s either sanctioned or blocked. The cycle takes weeks or months, and the blast radius is usually limited to the data within that specific application.
Shadow AI agents break that model in three ways.
The scope of access is fundamentally different. A shadow SaaS tool contains its own data silo. A shadow AI agent connects to everything the employee has access to — email, file shares, calendars, messaging platforms, and developer tools. It's not a new silo. It's a new access path into every existing silo.
The persistence is different. A SaaS tool session ends when the browser closes. An OpenClaw agent runs continuously, building persistent memory across sessions. Every day it runs, it accumulates more context, more access patterns, and more organizational knowledge. If that agent is compromised, the attacker inherits all of it.
The visibility is different. Your endpoint security sees processes running but doesn’t understand agent behavior. Your network monitoring sees API calls but can’t distinguish legitimate agent automation from a compromised agent executing attacker instructions. Your identity systems see OAuth grants but don’t flag AI agent connections as unusual. Traditional security tooling is nearly blind to this category of risk.
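To make the visibility gap concrete, here is a minimal sketch of the kind of endpoint sweep a security team might script while waiting for vendor signatures. The process names and config-directory names below are illustrative assumptions, not published indicators of compromise; a real sweep would pull process lists from EDR telemetry rather than a hardcoded list.

```python
# Minimal sketch: flag likely shadow-agent artifacts on an endpoint.
# AGENT_PROCESS_NAMES and AGENT_CONFIG_DIRS are assumed, illustrative
# indicators -- substitute real IOCs from your security vendor.
from pathlib import Path

AGENT_PROCESS_NAMES = {"openclaw", "openclaw-agent"}   # assumed names
AGENT_CONFIG_DIRS = {".openclaw", ".clawhub"}          # assumed paths


def flag_processes(process_names):
    """Return the process names that match known agent indicators."""
    return sorted(n for n in process_names
                  if n.lower() in AGENT_PROCESS_NAMES)


def flag_config_dirs(home: Path):
    """Return agent config directories present under a user's home dir."""
    return sorted(d.name for d in home.iterdir()
                  if d.is_dir() and d.name in AGENT_CONFIG_DIRS)


if __name__ == "__main__":
    # In practice, process names would come from EDR or `ps` output.
    print(flag_processes(["chrome", "openclaw", "slack"]))
```

Even a crude sweep like this answers the question most CISOs can't: is the agent running here at all?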
Five major security vendors independently sounded the alarm
Within weeks of OpenClaw going viral, CrowdStrike published a detailed risk analysis and released an enterprise-wide search-and-removal content pack through Falcon for IT. Microsoft’s security team published guidance recommending that OpenClaw be treated as “untrusted code execution with persistent credentials” and deployed only in fully isolated environments.
Cisco used OpenClaw as its primary case study for AI agent security risks, calling it “an absolute nightmare” from a security perspective. Sophos classified it as a potentially unwanted application and released detection signatures. Trend Micro published a research paper documenting how the same architectural features that make OpenClaw useful make it fundamentally dangerous in enterprise environments.
That level of coordinated response from competing security vendors doesn’t happen for hypothetical concerns. It happens when the threat is real, present, and spreading faster than traditional security processes can contain.
The numbers tell a story your endpoint logs won’t
Bitsight researchers found over 30,000 OpenClaw instances exposed on the public internet, leaking API keys, chat histories, and account credentials. Koi Security discovered that 12% of all skills on ClawHub — OpenClaw’s public marketplace — were confirmed malicious, distributing keyloggers on Windows and Atomic Stealer malware on macOS.
The Moltbook platform, a social network built for AI agents, was discovered to have an unsecured database exposing 35,000 email addresses and 1.5 million agent API tokens.
Meanwhile, seven CVEs were disclosed in rapid succession — ranging from one-click remote code execution to command injection, SSRF, authentication bypass, and path traversal. The attack chain for the most severe vulnerability can complete within milliseconds of a victim visiting a single malicious webpage.
These are not vulnerabilities in a niche tool used by a handful of developers. This is the most popular open-source project in the world, running on employee machines across every industry, connecting to enterprise systems that contain your most sensitive data.
Banning won’t work. Governing will.
The first instinct for many security teams will be to ban OpenClaw.
I understand the impulse, but I’ve seen this movie before with cloud, with mobile devices, and with every other technology that employees adopted before IT was ready. Bans don’t eliminate the technology. They eliminate your visibility into it.
The employees running OpenClaw aren’t doing it out of malice. They’re doing it because it saves them hours of work every day. Block it on managed devices, and they’ll run it on personal laptops connected to the same email and the same Slack workspace. The productivity incentive is too strong for a ban to hold.
The approach that works is the same one that eventually worked for cloud and mobile. Don’t try to control the agent. Control the data the agent can access.
This means governing at the data layer, independent of the agent, the model, and the device.
Every request an agent makes for sensitive data should be authenticated — not just the agent, but the human who authorized it. Access should be evaluated against policies that account for the data’s classification, the purpose of the request, and the specific operation. Data should be encrypted with validated cryptography. And every interaction should be logged in a record that your security operations team can monitor and your compliance team can produce on demand.
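The checks described above can be sketched as a simple policy gate in code. This is a toy illustration, not a real product: the field names, policy rules, and classification labels are all assumptions made for the example, and a production gateway would back the audit log with tamper-evident storage rather than an in-memory list.

```python
# Minimal sketch of a data-layer policy gate for agent requests.
# All field names and rules are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgentRequest:
    agent_id: str
    human_principal: Optional[str]  # the human who authorized the agent
    classification: str             # e.g. "public", "internal", "restricted"
    purpose: str                    # declared purpose of the request
    operation: str                  # e.g. "read", "write", "delete"


AUDIT_LOG: list = []  # in production: append-only, monitored storage


def authorize(req: AgentRequest) -> bool:
    """Allow only authenticated, policy-compliant requests; log everything."""
    allowed = (
        req.human_principal is not None   # a human must be on record
        and not (req.classification == "restricted"
                 and req.operation != "read")  # restricted data is read-only
        and req.purpose != ""             # purpose must be declared
    )
    # Every interaction is logged, allowed or not, for SecOps and compliance.
    AUDIT_LOG.append({"agent": req.agent_id, "human": req.human_principal,
                      "class": req.classification, "op": req.operation,
                      "allowed": allowed})
    return allowed
```

The point of the sketch is the shape of the control: the decision keys on the data's classification and the request's purpose and operation, not on which agent or model made the call — which is what makes it agent-agnostic.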
The Kiteworks 2026 Forecast found that 57% of organizations lack a centralized gateway for AI data governance. That gap is the opportunity — and the risk. Close it, and you become the CISO who safely enabled AI adoption. Leave it open, and you’re the CISO who missed the biggest shadow deployment in your organization’s history.
The CISO’s real OpenClaw strategy
The organizations getting this right are treating AI agent governance the same way they treat employee onboarding. They’re not trying to make the agent smarter or the model safer. They’re governing what the agent can touch, under what rules, with what evidence trail.
That’s a CISO problem, not a data science problem. And the CISOs who solve it — who build the governance layer that lets AI adoption happen safely — are the ones who earn a seat at the AI strategy table. The ones who just say no will be bypassed, just as they were during cloud and mobile.
Jensen Huang told every company to build an OpenClaw strategy. Your employees already did. The question is whether you’re going to govern it or pretend it isn’t happening.

