Controlling Your AI Agents Is a Matter of Security
AI Agents Could Soon Become High-Value Targets for Cyberattackers

Being smarter, faster, and more economical is often top of mind for any business decision-maker. So, it is no surprise that organisations are redesigning operational structures to integrate AI agents, as they can operate with minimal human supervision.
Gartner predicts at least 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from 0% in 2024. In addition, 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024.
Specialised agents can enhance threat hunting, surface emerging threats more quickly, generate secure-by-design code, and write or validate policies. However, as these agents grow more autonomous and embedded in our daily lives, their expanded capabilities will introduce new and unprecedented cybersecurity challenges.
With full memory and access to sensitive data, AI agents will become high-value targets for cyberattackers, requiring organisations to adopt a proactive security posture. This trend has already been flagged: the CyberArk 2025 Identity Security Landscape global study found that the rise of AI agents is inadvertently creating a new identity-centric attack surface.
From Bots to AI Agents: A Quick Transition
It has not been that long since robotic process automation (RPA) bots spread across organisations so quickly that security teams were caught off guard. Many lacked the tools to properly authenticate, monitor, or govern them. Now, we are approaching a similar inflection point with AI agents, which are evolving at an even faster pace and with significantly greater consequences.
The key difference is that AI agents now possess a form of agency. They can make independent decisions, interact dynamically with systems, access sensitive information, and initiate transactions, all with minimal or no human oversight. They also represent a distinct identity type, on a par with human and traditional machine identities, and therefore require their own identity framework.
Additionally, AI agents will interact with each other, potentially forming unpredictable decision-making networks. In such an environment, traditional security models fall short. Without a well-designed identity and access architecture that is tailored for agentic systems, organisations cannot enforce fundamental safety measures, such as limiting permissions, monitoring intent, or including kill switches for AI agents.
Multifaceted Risks
Because AI agents are such attractive targets, organisations must ensure they operate under the right controls, with full IT approval and governance. This means establishing clear policies for how AI agents are deployed, what systems they can access, and how their behaviour is monitored over time. Just as human users require identity verification and access permissions, AI agents should be subject to the same level of scrutiny, given their potential autonomy and reach within digital ecosystems.
Minimising the threat surface of agentic AI requires a deep understanding of the risks these agents introduce: unauthorised access to systems, permissions that exceed their intended function, and the potential for privilege escalation (where an agent gains higher-level access through loopholes or weak controls). There is also the risk of lateral movement across systems (allowing agents to traverse an organisation's digital infrastructure undetected), as well as the sheer speed at which AI agents operate, which can outpace existing security measures.
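To make the permission risk concrete, here is a minimal Python sketch of a scope check that denies actions outside an agent's declared function. The agent names, action labels, and scope table are hypothetical, and a real deployment would enforce this in an identity and access management layer rather than in application code.

```python
# Hypothetical scope table: each agent is limited to the actions its function requires.
AGENT_SCOPES = {
    "report-writer": {"read:crm", "read:finance"},
    "ticket-triage": {"read:helpdesk", "write:helpdesk"},
}

def check_action(agent: str, action: str) -> bool:
    """Deny (and flag for review) any action outside the agent's declared scope."""
    allowed = AGENT_SCOPES.get(agent, set())
    if action not in allowed:
        print(f"DENY {agent} -> {action} (exceeds intended function)")
        return False
    return True

check_action("report-writer", "read:finance")   # within scope, permitted
check_action("report-writer", "write:finance")  # exceeds intended function, denied
```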
Why (and How to) Control Your AI Agents
To manage agentic AI systems effectively, enterprises must prioritise securing all identities, human and machine, and establish proper authentication and authorisation protocols.
Securing identities forms the foundation for controlling how agents interact with systems and data. Emerging standards such as the Secure Production Identity Framework for Everyone (SPIFFE) offer a strong baseline: SPIFFE establishes trust between workloads by assigning each one a unique, cryptographically verifiable identity.
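As a rough illustration, the Python sketch below checks that an agent presents a well-formed SPIFFE ID (spiffe://<trust-domain>/<workload-path>) that appears on an explicit allow-list. The trust domain and agent paths are hypothetical, and in practice the identity would arrive as an X.509 or JWT SVID issued by a SPIFFE implementation such as SPIRE, not as a bare string.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of agent workload identities permitted to call this service.
ALLOWED_AGENT_IDS = {
    "spiffe://example.org/agents/invoice-processor",
    "spiffe://example.org/agents/threat-hunter",
}

def is_well_formed_spiffe_id(identity: str) -> bool:
    """Check the basic shape of a SPIFFE ID: spiffe://<trust-domain>/<workload-path>."""
    parsed = urlparse(identity)
    return parsed.scheme == "spiffe" and bool(parsed.netloc) and bool(parsed.path)

def authorise_agent(identity: str) -> bool:
    """Admit an agent only if its SPIFFE ID is well formed and explicitly allowed."""
    return is_well_formed_spiffe_id(identity) and identity in ALLOWED_AGENT_IDS

print(authorise_agent("spiffe://example.org/agents/invoice-processor"))  # True
print(authorise_agent("spiffe://example.org/agents/unknown-agent"))      # False
```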
Alongside identity security, security teams should implement a set of foundational controls to mitigate risk, such as the following (a brief sketch of how two of these controls might work appears after the list):
- Zero standing privileges, ensuring agents receive only the right level of access, when necessary and for a limited time
- Continuous monitoring, which allows real-time visibility into agent behaviour
- Step-up challenges that add verification layers before high-impact actions
- Behavioural analytics to detect anomalies or emerging threats
- Kill switch capabilities to ensure organisations can immediately disable rogue agents when needed
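As a rough sketch of how two of these controls fit together, the Python example below models zero standing privileges as short-lived, just-in-time grants and adds a simple kill switch. The agent names, permissions, and in-memory stores are hypothetical; a production deployment would rely on an identity security or privileged access management platform with full auditing rather than application code.

```python
import time

active_grants = {}       # agent_id -> (permission, expiry timestamp)
disabled_agents = set()  # agents switched off via the kill switch

def grant_just_in_time(agent_id: str, permission: str, ttl_seconds: int = 300) -> None:
    """Issue a short-lived permission instead of a standing entitlement."""
    active_grants[agent_id] = (permission, time.time() + ttl_seconds)

def kill_switch(agent_id: str) -> None:
    """Immediately revoke everything for a rogue agent."""
    disabled_agents.add(agent_id)
    active_grants.pop(agent_id, None)

def is_authorised(agent_id: str, permission: str) -> bool:
    """Allow an action only if the agent is not disabled and holds an unexpired grant."""
    if agent_id in disabled_agents:
        return False
    grant = active_grants.get(agent_id)
    return grant is not None and grant[0] == permission and time.time() < grant[1]

grant_just_in_time("invoice-agent", "write:payments", ttl_seconds=60)
print(is_authorised("invoice-agent", "write:payments"))  # True while the grant is live
kill_switch("invoice-agent")
print(is_authorised("invoice-agent", "write:payments"))  # False after the kill switch
```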
The Opportunity for a More Interconnected World
AI agents have the potential to evolve into proactive partners that enable a smarter, more efficient and interconnected world. Organisations that embed identity into the foundation of their AI strategy will better defend against threats, enable secure autonomy, and set the bar for responsible AI innovation. The question is not if AI agents will proliferate throughout the organisation, but when—and whether teams are prepared to respond when that happens.