Every coding agent running on your machine right now has access to your terminal, your files, your SSH keys, and your network. Most developers know this. Most choose not to think about it too hard.
NVIDIA forced the conversation at GTC 2026 last week.
Jensen Huang announced OpenShell, an open-source runtime that sandboxes AI agents with kernel-level enforcement. Two commands install it. Your agent runs inside a locked container where the filesystem is frozen at creation, the network is blocked by default, and API keys never touch disk. Security policies are defined in YAML and enforced at the infrastructure layer, not by the agent itself.
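To make that concrete, here is what a policy file in this style might look like. This is a hypothetical sketch, not OpenShell's documented schema; the field names are assumptions that illustrate the frozen-filesystem, deny-by-default-network, keys-off-disk model described above.

```yaml
# Hypothetical policy sketch — the real OpenShell schema may differ.
version: 1
filesystem:
  mode: frozen            # no writes after container creation
network:
  default: deny           # all outbound traffic blocked unless listed
  allow:
    - host: api.internal.example.com
      port: 443
secrets:
  storage: memory-only    # API keys injected at runtime, never written to disk
```

The key property is that this file is read by the infrastructure layer, not by the agent, so the agent cannot loosen its own rules.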
That last distinction matters more than it sounds.
Current agent frameworks enforce their own permissions. The agent checks whether it should do something, then decides whether to do it. If a malicious prompt or compromised skill hijacks the agent process, those permission checks become meaningless. The fox is guarding the henhouse.
OpenShell moves enforcement outside the agent’s address space entirely. A separate process evaluates every action against four criteria: binary, destination, method, and path. The agent cannot access, modify, or kill this process. Even a fully compromised agent hits a wall at the sandbox boundary.
The practical architecture is straightforward. OpenShell runs a K3s Kubernetes cluster inside a single Docker container. Every outbound connection from the agent hits the policy engine, which does one of three things: allow the request, route it through a privacy layer that strips credentials, or deny it and log the attempt.
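The evaluation logic described in the last two paragraphs can be sketched as a small decision function. The rule tables and names here are illustrative assumptions; the point is the shape of the check, which runs in a separate process the agent cannot touch: match an action on its four criteria, then return one of the three verdicts.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ROUTE_PRIVACY = "route-privacy"   # forward through the credential-stripping layer
    DENY = "deny"                     # blocked, and the attempt is logged

@dataclass(frozen=True)
class Action:
    binary: str        # the executable making the call, e.g. "curl"
    destination: str   # target host
    method: str        # e.g. "GET", "connect"
    path: str          # filesystem or URL path

# Hypothetical rule tables standing in for the YAML-defined policy.
ALLOWED = {("curl", "api.internal", "GET", "/v1/status")}
PRIVACY_HOSTS = {"api.cloud-model.example"}

def evaluate(action: Action) -> Verdict:
    """Check an action against policy, outside the agent's address space."""
    if (action.binary, action.destination, action.method, action.path) in ALLOWED:
        return Verdict.ALLOW
    if action.destination in PRIVACY_HOSTS:
        return Verdict.ROUTE_PRIVACY
    return Verdict.DENY   # deny-by-default for anything unlisted
```

Because the default branch is `DENY`, a compromised agent inventing novel requests gains nothing: only explicitly allowed tuples pass.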
NVIDIA paired this with a Privacy Router that solves a problem we hear constantly from enterprise clients. How do you use powerful cloud models without sending sensitive data to third-party APIs? The router intercepts every inference call. Sensitive context gets routed to local models like Nemotron running on-device. Complex reasoning tasks go to cloud models. The agent never makes direct outbound API calls.
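A minimal sketch of that routing decision, under stated assumptions: the classifier here is a crude regex list and the backend names are invented, whereas the real router's classification is policy-defined. What the sketch shows is the control flow, in which sensitive prompts never leave the local backend.

```python
import re

# Hypothetical sensitivity signals; a production policy would be far richer.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like identifier
    re.compile(r"\bpatient\b", re.IGNORECASE),  # crude PHI keyword
]

def route(prompt: str) -> str:
    """Pick an inference backend: sensitive context stays on-device."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "local:nemotron"       # on-device model handles sensitive data
    return "cloud:frontier-model"     # general reasoning goes to the cloud
```

The agent calls the router, never the API; every decision is logged, which is what makes the split auditable.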
For a healthcare company, that means patient data stays on local hardware while general reasoning tasks use Claude or GPT-5.4. For a law firm, privileged communications never leave the building. The routing is policy-defined and fully auditable.
The partner list tells you where NVIDIA thinks this is heading. Adobe, Atlassian, SAP, Salesforce, ServiceNow, Siemens, Cisco, CrowdStrike, and Red Hat are all integrating Agent Toolkit components. IQVIA has already deployed over 150 agents across internal teams and 19 of the top 20 pharma companies. Salesforce built a reference architecture where Slack becomes the orchestration layer for its Agentforce agents, pulling from both on-premises and cloud data stores.
The agentic AI market is projected to grow from $9.14 billion to $139 billion by 2034, a 40.5% compound annual growth rate. But those numbers only materialise if enterprises actually deploy agents in production. And procurement teams, legal departments, and CISOs have been saying the same thing for months: show us the governance layer.
Now it exists.
For businesses evaluating AI agent deployment, the implication is concrete. Stop treating agent security as an application-level concern. It is an infrastructure problem, and it needs infrastructure solutions. If your agents run with the same permissions as your user account, you have a blast radius problem. OpenShell is open-source and framework-agnostic: it works with Claude Code, Codex, and any agent built on common frameworks.
The companies that deploy agents first with proper security will capture the productivity gains while everyone else waits for their compliance team to say yes. The governance layer is no longer the bottleneck. The question now is whether your organisation can move fast enough to use it.