Okta’s ‘AI at Work 2025’: Data Privacy, Cybersecurity Top-of-Mind Concerns in Business Use of AI
But Decision-Makers and Higher-Ups Still View AI as Crucial to Business Strategies

Okta has revealed the findings of “AI at Work 2025,” a new global survey of executives from companies of all sizes on their sentiments, concerns, and business priorities regarding Artificial Intelligence (AI). The insights highlight the tension between the rush to introduce transformational technology and the need to manage the risks associated with doing so.
The overriding theme of “AI at Work 2025” is that two-thirds of leaders regard AI as key to their business strategy. Broad adoption of AI can be attributed to its long list of applications and use cases.
For the second year in a row, automation and process optimisation use cases are most common, with 84% of respondents indicating that their organisation employs AI in this manner.
But looking beyond the top spot, the “AI at Work 2025” survey revealed quite a bit of year-over-year change:
- Coding and software development rose from fourth place in 2024 (56%) to second place (74%), on the strength of an 18-percentage-point increase (the largest exhibited).
- Content generation and creativity use cases jumped from fifth to third.
- Moving in the other direction, predictive analytics and forecasting fell from second to fifth, declining seven percentage points—notably, this is the only use case to experience decreased adoption.
At the heart of the AI transformation are AI agents—autonomous software systems that leverage Large Language Models (LLMs), Machine Learning (ML), and Application Programming Interfaces (APIs) to perform tasks without direct human intervention. Unlike traditional software, they can interpret and respond to natural language inputs, analyse real-time data, and take actions on behalf of users.
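To make the pattern concrete, here is a minimal sketch of the loop described above—an agent interprets a natural-language request, a planning step (stubbed here in place of a real LLM call) selects an action, and the agent invokes an API-style tool on the user’s behalf. All class, function, and tool names are invented for illustration; they do not come from the survey or from any particular product.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    # Map of tool names to API-style actions the agent may invoke.
    tools: Dict[str, Callable[[str], str]]

    def plan(self, request: str) -> str:
        """Stand-in for an LLM deciding which tool fits the request."""
        if "weather" in request.lower():
            return "get_weather"
        return "echo"

    def run(self, request: str) -> str:
        # Interpret the request, pick a tool, and act without human input.
        tool = self.tools[self.plan(request)]
        return tool(request)

agent = Agent(tools={
    "get_weather": lambda req: "Sunny, 22 C",       # would wrap a weather API
    "echo": lambda req: f"You said: {req}",         # fallback action
})

print(agent.run("What is the weather today?"))  # → Sunny, 22 C
```

In a production agent, `plan` would be replaced by a model call and each tool would carry its own credentials—which is exactly why identity and access management for agents matters, as the survey results below show.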
More than any other type of non-human identity (NHI), AI agents are not only transforming how people work at a tactical level but also causing companies to change their strategies to survive and thrive in a dynamic, global market.
Blowing past even recent projections, a staggering 91% of respondents reported to “AI at Work 2025” that their organisation is currently using AI agents—not investigating, not planning, but actually already using.
Focusing on AI agents, specifically, respondents reported a mean of nearly five (4.8) use cases within their respective organisations. Task automation was cited most frequently (81% of respondents), likely a result of its broad appeal and range of applications.
Organisations are also deploying agents in support of specific teams and functions, led by enhancing customer service or support (65%), providing IT support (55%), and assisting with coding (51%), accompanied by several more in the long tail. These more targeted use cases indicate that AI agents are proliferating widely within organisations, increasing their impact.
Tangible Benefits Will Continue to Drive AI Agent Rollouts
Importantly, leaders reported to “AI at Work 2025” that AI agents are delivering meaningful benefits. Increased productivity (cited by 84% of respondents) and cost savings (60%) lead the way, but these outcomes represent only the tip of the iceberg.
Nearly half of respondents reported using AI agents to enhance customer experiences and streamline workflows, while over one-third realised gains in decision-making, scalability, and innovation with AI agent adoption.
Data Privacy and Security Risks Top Leaders’ AI Concerns
To fulfil the diverse range of use cases listed previously, AI agents may require access to an organisation’s data, systems, and resources. However, increased access brings increased risk: poorly built, deployed, or managed AI agents can present new attack vectors, including prompt injection and account takeovers. Even absent malicious intent, unexpected behaviours can result in breaches, reputational damage, and non-compliance with regulatory requirements.
Respondents to “AI at Work 2025” ranked data privacy and security risks as their top two concerns, both by severity (ranked as their top concern) and by frequency (cited most often).
Managing AI agent identity is different from managing human user identities due to key distinctions in definition, lifecycle, and governance.
AI agents:
- Lack accountability to a specific person.
- Have short, dynamic lifespans requiring rapid provisioning and de-provisioning.
- Rely on various non-human authentication methods like API tokens and cryptographic certificates.
- Need very specific and granular permissions for limited periods and often access privileged information, making robust control crucial to preventing prolonged escalated access.
- Often lack traceable ownership and consistent logging, which complicates post-breach audits and remediation.
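Several of the qualities above—short lifespans, granular time-limited permissions, and traceable ownership—can be sketched as a simple credential-issuance pattern. This is a hypothetical illustration, not any vendor’s API: the names (`AgentCredential`, `provision`, `authorize`) and scope strings are invented to show how an agent’s access can be scoped narrowly and expire automatically.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentCredential:
    token: str                 # non-human authentication secret
    scopes: frozenset          # granular permissions, e.g. {"tickets:read"}
    expires_at: float          # enforced expiry enables rapid de-provisioning
    owner: str                 # traceable ownership for post-incident audits

def provision(owner: str, scopes: set, ttl_seconds: float) -> AgentCredential:
    """Issue a credential valid only for the given scopes and time window."""
    return AgentCredential(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.monotonic() + ttl_seconds,
        owner=owner,
    )

def authorize(cred: AgentCredential, scope: str) -> bool:
    """Permit an action only if the credential is unexpired and in scope."""
    return time.monotonic() < cred.expires_at and scope in cred.scopes

# An IT-support agent gets read-only access to tickets for five minutes.
cred = provision(owner="it-support-team", scopes={"tickets:read"}, ttl_seconds=300)
print(authorize(cred, "tickets:read"))   # in scope and unexpired -> True
print(authorize(cred, "tickets:write"))  # never granted -> False
```

Because every credential carries an owner and an expiry, revocation is the default rather than an afterthought—a stand-in for the lifecycle-management and visibility controls the survey respondents flagged.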
Leaders’ understanding and appreciation of these qualities are reflected in their responses about their organisations’ most pressing NHI-related security concerns. Controlling NHI access and permissions (selected by 78% of respondents) is No. 1, but concerns about lifecycle management (69%), poor visibility (57%), and remediating risky NHI accounts (53%) were also selected by a majority of respondents.
It is no surprise, then, that leaders regard identity and access management (IAM) as a vital part of their AI strategy. In fact, 85% of survey respondents (a seven-percentage-point increase over last year) indicated that IAM is either “very important” (52% of respondents) or “important” (33%) to the successful adoption and integration of AI within their organisation.
Going one layer deeper, respondents to “AI at Work 2025” pointed to a long list of reasons why IAM is so crucial. The duo of data security and privacy was the most frequently cited reason, followed by compliance and regulation.
“AI at Work 2025” Finds Governance Gap
Continuing in the governance line of thought, leaders’ top two AI agent-related security concerns over the next three years are:
- AI governance and oversight (selected by 58% of respondents)
- Compliance and regulatory requirements (50%)
But while these are top-of-mind issues, there are strong indications that AI rollouts are outpacing organisations’ ability to keep up with governance, oversight, compliance, and regulatory requirements. For example, “AI at Work 2025” uncovered the following:
- Only 10% of respondents reported that their organisation has a well-developed strategy or roadmap for managing NHIs.
- Only 32% of organisations always treat digital labour forces with the same degree of governance as human workforces.
- Only 36% of organisations currently have a centralised governance model for AI.
Within those organisations that make up the latter 36%, the most common key stakeholders in AI governance, according to “AI at Work 2025,” are CISOs, CIOs, and legal/compliance teams. However, we also see the welcome involvement of data scientists, AI teams, line-of-business leaders, and Chief Data Officers (CDOs)—all of whom can bring informed perspectives to this increasingly important and ever-complicated subject.