
Tenable Warns that Open-Source AI Tools Widen Cybersecurity Gaps as Adoption Outpaces Cloud Security Readiness

Tenable’s latest Cloud AI Risk Report warns that rapid adoption of open-source AI tools and managed cloud services is widening security gaps across APAC.

As businesses rush to harness artificial intelligence (AI) for competitive advantage, Tenable®, the exposure management company, warns that organisations may be overlooking the mounting risks embedded in the open-source tools and cloud services powering their AI development. New research from Tenable’s Cloud AI Risk Report 2025 finds that the pace of AI adoption is far outstripping security preparedness, with vulnerabilities, cloud misconfigurations and exposed data quietly accumulating across cloud environments.

The surge in AI usage is undeniable. A McKinsey Global Survey found that 72 percent of organisations worldwide had integrated AI into at least one business function by early 2024, up from just 50 percent two years prior. Yet, while businesses focus on building AI capabilities, Tenable’s research highlights the growing complexity and risk of securing the sprawling ecosystem of open-source packages, libraries and managed services supporting AI workloads.

Tenable Cloud Research analysed real-world cloud workloads across Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) between December 2022 and November 2024.

What emerged is a systemic pattern. AI development environments rely heavily on open-source packages, many of which are downloaded and integrated rapidly, often without adequate review or security checks. Tools such as scikit-learn and Ollama were among the most widely deployed, found in nearly 28 percent and 23 percent of AI workloads respectively. While these frameworks accelerate machine learning development, they can also introduce hidden vulnerabilities through their open-source dependency chains.
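Dependency chains like these can be checked against public advisory databases before workloads ship. The following is a minimal sketch, assuming Python with the `requests` library and the public OSV.dev query API; the pinned package versions are illustrative examples, not findings from the Tenable report:

```python
import requests

# Query the public OSV.dev database for known vulnerabilities
# affecting specific package versions. The pins below are
# illustrative, not taken from the Tenable report.
PINNED_DEPS = [
    ("scikit-learn", "1.0.2"),
    ("torch", "2.0.0"),
]

def query_osv(name: str, version: str) -> list:
    """Return OSV advisories recorded for one PyPI package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={
            "package": {"name": name, "ecosystem": "PyPI"},
            "version": version,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

if __name__ == "__main__":
    for name, version in PINNED_DEPS:
        vulns = query_osv(name, version)
        ids = ", ".join(v["id"] for v in vulns) or "no known advisories"
        print(f"{name}=={version}: {ids}")
```

A check of this kind only covers declared, pinned dependencies; transitive packages pulled in at build time would need a full lockfile audit.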

Compounding this risk, many AI workloads run on Unix-based systems known for their widespread use of open-source libraries. This increases the potential for unpatched vulnerabilities to persist in environments where attackers could exploit them to access sensitive data or manipulate models.

Tenable’s research further revealed that AI adoption is tightly linked to heavy use of managed cloud services, which come with their own security trade-offs. Among organisations using Microsoft Azure, 60 percent had configured Azure Cognitive Services, 40 percent deployed Azure Machine Learning and 28 percent used Azure AI Bot Service. On AWS, 25 percent of users configured Amazon SageMaker, while 20 percent deployed Amazon Bedrock. Vertex AI Workbench was similarly active in 20 percent of GCP environments.

These adoption rates suggest that while AI capabilities are being embraced at scale, the services delivering them are also multiplying the complexity of securing cloud environments. Improper configurations and excessive permissions, often enabled by default settings, leave critical systems and sensitive AI training data vulnerable to attack.
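As one concrete illustration of a permissive default, a short boto3 sketch that flags Amazon SageMaker notebook instances with broad network or root access enabled. It assumes AWS credentials are already configured, and the flagging baseline is an illustrative policy choice, not Tenable's methodology:

```python
import boto3

# Flag SageMaker notebook instances whose settings are more
# permissive than a least-exposure baseline. The baseline here
# (no direct internet access, no root access) is illustrative.
sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        findings = []
        if detail.get("DirectInternetAccess") == "Enabled":
            findings.append("direct internet access enabled")
        if detail.get("RootAccess") == "Enabled":
            findings.append("root access enabled")
        if findings:
            print(f"{name}: {'; '.join(findings)}")
```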

“Organisations are rapidly adopting open-source AI frameworks and cloud services to accelerate innovation, but few are pausing to assess the security impact,” said Nigel Ng, Senior Vice President at Tenable APJ. “The very openness and flexibility that make these tools powerful also create pathways for attackers. Without proper oversight, these hidden exposures could erode trust in AI-driven outcomes and compromise the competitive advantage businesses are chasing.”

To help organisations navigate the unique risks posed by AI in the cloud, Tenable recommends the following mitigation strategies:

  • Manage AI exposure holistically: Continuously monitor cloud infrastructure, workloads, identities, data and AI tools to gain contextual visibility and prioritise risk mitigation.
  • Classify AI assets as sensitive: Include AI models, datasets and tools in asset inventories and treat them as high-value targets requiring constant scanning and protection.
  • Stay updated on AI regulations and best practices: Map AI data stores, implement strict access controls and ensure secure-by-design development practices aligned with frameworks such as NIST’s AI Risk Management Framework.
  • Enforce least-privilege access: Review permissions regularly, reduce excessive privileges and tightly manage cloud identities to prevent unauthorised access to AI models and data (a minimal audit sketch follows this list).
  • Apply and verify cloud provider security recommendations: Recognise that default settings may be overly permissive and ensure configurations align with best practices.
  • Prioritise remediation of critical vulnerabilities: Focus on vulnerabilities with the highest impact potential using advanced tools that reduce alert fatigue and improve remediation efficiency.
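As a starting point for the least-privilege review above, the following boto3 sketch flags customer-managed IAM policies whose default version grants wildcard actions or resources. The wildcard test is a coarse, illustrative heuristic, not a complete entitlement analysis; a real review would also cover inline policies, roles and trust relationships:

```python
import boto3

# Flag customer-managed IAM policies that allow wildcard actions
# or resources. Wildcards are a rough, illustrative signal of
# over-privilege, not a full entitlement analysis.
iam = boto3.client("iam")

def statements(document) -> list:
    """Normalise the Statement field, which may be a dict or a list."""
    stmts = document.get("Statement", [])
    return stmts if isinstance(stmts, list) else [stmts]

paginator = iam.get_paginator("list_policies")
for page in paginator.paginate(Scope="Local"):  # customer-managed only
    for policy in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=policy["Arn"],
            VersionId=policy["DefaultVersionId"],
        )["PolicyVersion"]["Document"]
        for stmt in statements(document):
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            actions = actions if isinstance(actions, list) else [actions]
            resources = stmt.get("Resource", [])
            resources = resources if isinstance(resources, list) else [resources]
            if "*" in actions or "*" in resources:
                print(f"{policy['PolicyName']}: overly broad statement {stmt}")
```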

“AI will shape the future of business, but only if it is built on a secure foundation,” Ng added. “Open-source tools and cloud services are essential, but they must be managed with care. Without visibility into what is being deployed and how it is configured, organisations risk losing control of their AI environments and the outcomes those systems produce.”

CSA Editorial

Launched in January 2018 in partnership with CyberSecurity Malaysia (an agency under MOSTI), CSA is a news and content platform focusing on key cybersecurity issues in the region. CSA serves cybersecurity professionals, IT professionals, risk professionals and C-level executives who have an obligation to understand the impact of cyber threats.
