Securing AI agents: the defining cybersecurity challenge of 2026
The rise of the agentic workforce is pushing CISOs to reimagine the security stack—and rethink the questions they need to ask—as they navigate an evolving threat landscape.
“AI agents are not just another application surface—they are autonomous, high-privilege actors that can reason, act, and chain workflows across systems. The core risk isn’t vulnerability, it’s unbounded capability.” — Barak Turovsky, Operating Advisor at Bessemer Venture Partners and former Chief AI Officer at General Motors
What do you secure first with AI agents?
The most important question isn't which tool to buy—it's what, exactly, needs to be protected. As threat exposure widens, CISOs must resist the instinct to procure before they've defined the problem.
The answer starts with identity. As CyberArk has noted: "Every AI agent is an identity. It needs credentials to access databases, cloud services, and code repositories. The more tasks we give them, the more entitlements they accumulate, making them a prime target for attackers."
This is agentic AI's central tension: the same autonomy that makes agents valuable—executing multi-step workflows, coordinating tools, accessing databases, sending emails, modifying code, and updating plans in real time—is precisely what makes them dangerous when compromised. Capability and exposure scale together.
"The fundamental shift enterprises need to internalize is that AI agents aren't tools—they're actors," says Mike Gozzo, Chief Product and Technology Officer at Ada. "They make decisions, take actions, and interact with systems on behalf of your customers. Securing an actor is a fundamentally different problem than securing a tool, and most of the industry hasn't caught up to that yet."
That challenge is compounded by a property unique to agents: their behavior is nondeterministic. As Jason Chan, cybersecurity leader and Operating Advisor at Bessemer, explains: "Much of the power that agents provide is the ability to specify an outcome without verbosely documenting every step required to achieve it. If we've learned anything from rule-based security, it's that it can and will be subverted. We need to enable security teams to create policy and capabilities that let agents deliver value while respecting security requirements." Traditional controls assume predictable execution. Agents don't offer that—which is why the industry needs purpose-built approaches, not just adapted ones.
As OWASP's latest analysis points out, AI agents mostly amplify existing vulnerabilities rather than introduce entirely new ones. The threat categories are familiar—credential theft, privilege escalation, data exfiltration. What has changed is the blast radius and the speed. Dean Sysman, co-founder of Axonius and Venture Advisor at Bessemer, adds: "An agent doesn't have the same human understanding of things that are wrong to do. When given a goal or optimization function, an agent will do harmful or dangerous things that for us humans are obviously wrong. We've seen real-life examples of agents deleting, changing, and operating infrastructure in harmful ways."
Simply put, we’re seeing familiar threats with an unfamiliar velocity. While no two enterprises face identical exposure, the attack surface of an agentic environment maps consistently across four layers: the endpoint, where coding agents like Cursor and GitHub Copilot operate; the API and MCP gateway, where agents call tools and exchange instructions; SaaS platforms, where agents are embedded in core business workflows; and the identity layer, where credentials and access privileges are granted, accumulated, and — too often — left unreviewed. Understanding which of these layers carries the most risk in your environment is the best place to start. The framework that follows is designed to help address these concerns.
How to think about securing AI agents: a three-stage framework
Securing AI agents is a systemic problem: before a CISO can enforce policy or respond to threats, they need to know what they're dealing with, and before agents can be protected at runtime, they need to be configured correctly.
The challenge consists of three stages: visibility, configuration, and runtime protection, each a prerequisite for the next.
Stage 1: Visibility—know what you have
Visibility is the first and often most neglected stage. Most enterprises have no accurate inventory of the AI agents operating in their environment: which agents exist, what permissions they hold, who authorized them, and what they were built to do. Without this foundation, everything downstream is guesswork.
Visibility means establishing a live map of agents across your stack: coding agents like Cursor and GitHub Copilot at the endpoint, orchestration agents embedded in SaaS platforms like Salesforce and Microsoft 365, and API-connected agents operating through MCP servers and third-party integrations. Intent matters here too. For example, an agent provisioned for a narrow task but granted broad access to a CRM is a misconfiguration waiting to become an incident.
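One way to make that inventory concrete is a record per agent that captures owner, purpose, and the gap between granted and required permissions. The schema below is an illustrative sketch, not any vendor's format; every field and scope name is a hypothetical example.

```python
from dataclasses import dataclass

# Hypothetical inventory record. Fields and scope names are illustrative,
# not a reference to any specific product's schema.
@dataclass
class AgentRecord:
    name: str
    owner: str              # who authorized the agent
    purpose: str            # what it was built to do
    granted_scopes: set     # permissions it actually holds
    required_scopes: set    # permissions its stated task requires

    def excess_scopes(self) -> set:
        """Permissions granted beyond what the stated task needs.
        Each one is blast radius waiting for an incident."""
        return self.granted_scopes - self.required_scopes

inventory = [
    AgentRecord(
        name="crm-summarizer",
        owner="sales-ops",
        purpose="Summarize weekly pipeline reports",
        granted_scopes={"crm:read", "crm:write", "email:send"},
        required_scopes={"crm:read"},
    ),
]

# Surface misconfigurations: narrow intent, broad access.
for agent in inventory:
    excess = agent.excess_scopes()
    if excess:
        print(f"{agent.name}: over-privileged by {sorted(excess)}")
```

Even a spreadsheet-grade version of this record answers the questions visibility demands: which agents exist, who owns them, and where granted access exceeds stated intent.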
Stage 2: Configuration—reduce the blast radius before an attack happens
Stage 3: Runtime protection—detect and respond at machine speed
Don’t forget the power of an internal audit
Every team, no matter the size, must develop a custom-fit defensive strategy for securing AI agents. Here are seven guiding questions for CISOs to ask their teams.
Securing AI agents: questions to guide an internal audit

| # | Question |
|---|----------|
|   | **Scope & pain** |
| 1 | How extensively are AI agents deployed in your environment today? |
| 2 | What's your biggest concern about their security risks? |
| 3 | Do you care more about coding agents (Cursor, Claude) or generic ones? |
|   | **Architecture** |
| 4 | Which layer makes the most sense for AI agent security controls: endpoint, network/proxy, or identity management? |
| 5 | Is there room for purpose-built, agent-specific solutions? |
|   | **Market noise** |
| 6 | With so many AI agent security startups emerging, how do you distinguish between them? |
|   | **Detection & prevention** |
| 7 | Are you more focused on visibility into agent usage or on preventing AI agents from being compromised? |
Top CISO actions to close the protection gap
The threat is real, the tooling is nascent, and the window to get ahead of it is closing. Based on our conversations with security leaders at the frontier of this problem, five priorities stand out for CISOs navigating the agentic security challenge in 2026.
1. Align on your organization's risk posture before buying anything
The instinct under pressure is to procure. Resist it. Before evaluating vendors or deploying controls, security teams need clarity on where their organization actually stands on AI agents. As Jason puts it: "Define, at a business level, your organization's position on agents. Are you going all in? Dipping your toes in the water? Saying no until the landscape is better known? This position will help security teams align their approach with the organization's expectations and risk tolerance." A CISO in aggressive deployment mode needs a fundamentally different security posture than one in a ‘wait-and-see’ stance. The framework should follow the strategy, not precede it.
2. Treat agents like production infrastructure, not applications
The most common mistake enterprises make is applying their existing application security playbook to agents. It doesn't fit. "AI agents are not just another application surface—they are autonomous, high-privilege actors that can reason, act, and chain workflows across systems," says Barak Turovsky. "Most enterprises are adding monitoring on top of poorly constrained agents, which is the wrong order." The right order is ownership first, then constraints, then monitoring. Define who is responsible for each agent, limit its permissions to what the task requires, and enforce action-level guardrails before any monitoring tool is turned on. Organizations that get this right won't just be more secure—they'll deploy agents faster, because they actually trust them.
3. Start narrow, then expand deliberately
Agents accumulate access over time, and the risk surface grows with it. Dean Sysman offers a clear prescription: "Have a gradual, well-defined plan of the available inputs and outputs of each agent and make sure they are very narrowly scoped, then incrementally expand." Launch agents with the minimum permissions required for a specific task, validate their behavior in that constrained environment, and expand access only when there is clear evidence it is needed and safe. Granting broad access upfront, in the name of flexibility or speed, is precisely how organizations create the privilege accumulation problem attackers will exploit.
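The narrow-launch, deliberate-expansion pattern can be sketched as a small policy object: start with minimal scopes, and make every expansion an explicit, auditable event rather than a silent grant. `ScopedAgent`, the scope names, and the approver below are all hypothetical.

```python
# Minimal sketch, assuming scopes are simple string labels.
class ScopedAgent:
    def __init__(self, name, initial_scopes):
        self.name = name
        self.scopes = set(initial_scopes)
        self.audit_log = []

    def request_expansion(self, scope, justification, approver):
        """Expansion is a reviewed, auditable event, not a default."""
        self.audit_log.append((scope, justification, approver))
        self.scopes.add(scope)

    def can(self, scope):
        return scope in self.scopes

# Launch with the minimum the task requires.
agent = ScopedAgent("deploy-bot", {"repo:read"})
assert not agent.can("repo:write")  # starts narrow by construction

# Expand only with evidence the access is needed and safe.
agent.request_expansion(
    "repo:write",
    justification="validated in staging for 30 days",
    approver="ciso-office",
)
assert agent.can("repo:write")
```

The design choice worth noting: because the audit log and the grant happen in the same call, there is no code path where an agent gains a scope without a recorded justification and approver.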
4. Close the freedom-versus-control gap with guardrails, not just monitoring
As we stated earlier, the fundamental tension in agentic AI is that the same autonomy that makes agents powerful makes them dangerous. As Dean observes: "The great value of agents is their ability to decide to do things on their own, but the guardrails of what they shouldn't do need to be incredibly comprehensive." Monitoring can tell you what an agent did. Guardrails determine what it's allowed to do in the first place. The security leaders who get this right will be those who define those boundaries explicitly, at the action level, not just the access level, before an incident forces the conversation. The goal is not to constrain what agents can do, but to make their autonomy trustworthy.
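An action-level guardrail can be as simple as a deny-by-default allowlist checked before any action executes. The action names and `GuardrailViolation` type here are illustrative assumptions, not a specific product's API; the point is that the check runs before the action, where monitoring would only record it after.

```python
# Hypothetical action-level guardrail: anything not explicitly
# permitted is blocked before it runs.
ALLOWED_ACTIONS = {
    "ticket:comment",
    "ticket:close",
}

class GuardrailViolation(Exception):
    pass

def execute(agent_name, action, do_action):
    if action not in ALLOWED_ACTIONS:
        # Deny by default: the boundary is defined at the action level,
        # not just the access level.
        raise GuardrailViolation(
            f"{agent_name} attempted disallowed action: {action}"
        )
    return do_action()

# An allowed action runs; a destructive one never executes.
execute("support-bot", "ticket:comment", lambda: "commented")
try:
    execute("support-bot", "db:drop_table", lambda: "dropped")
except GuardrailViolation as err:
    print(err)
```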
5. Give every agent an identity, and treat it like an employee
Most agents today inherit broad permissions from the systems they connect to, with no zero-trust boundaries governing what they can actually reach. Mike offers a precise diagnostic: "Give agents an identity, scope their access, and audit what they do the same way you would any other actor in your environment. A CISO's first move should be ensuring every agent has a managed identity with scoped authentication—not a shared API key with 'god-mode' access. If you can't answer the questions 'What can this agent do?', 'On whose behalf?', and 'Who approved it?' the same way you can for a human employee, you're not ready for the autonomy these systems are about to have."
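A minimal sketch of what a managed agent identity could record, so those three audit questions are answerable from the credential itself. Every field, name, and TTL below is a hypothetical assumption, not a real identity provider's API.

```python
import secrets
import time

# Hypothetical identity issuance: each agent gets its own short-lived,
# scoped credential tied to an owner and an approver; never a shared key.
def issue_agent_credential(agent, scopes, on_behalf_of, approved_by,
                           ttl_seconds=3600):
    return {
        "agent": agent,
        "token": secrets.token_urlsafe(32),  # unique per agent, revocable
        "scopes": sorted(scopes),            # what can this agent do?
        "on_behalf_of": on_behalf_of,        # on whose behalf?
        "approved_by": approved_by,          # who approved it?
        "expires_at": time.time() + ttl_seconds,
    }

cred = issue_agent_credential(
    agent="crm-summarizer",
    scopes={"crm:read"},
    on_behalf_of="sales-ops",
    approved_by="security-review-142",
)
```

Because the scope, the principal, and the approval travel with the credential, auditing an agent's actions works the same way it does for a human employee: the answers are on the credential, not scattered across systems.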
CISOs, don’t wait
Agentic AI is not coming—it's already here, but the security infrastructure to match it is not. The CISOs who close that gap deliberately, starting now, will define what enterprise AI looks like for the rest of the decade. The ones who wait until 2027 will spend that time in incident response.