
Singapore Organisations Embrace Generative AI Tools Amid Security Concerns

New research from Zscaler, Inc. (NASDAQ: ZS), the leader in cloud security, suggests that organisations are feeling the pressure to rush into generative AI (GenAI) tool usage despite significant security concerns. According to its latest survey, “All eyes on securing GenAI”, which polled more than 900 IT decision makers globally, 90% of Singaporean organisations consider GenAI tools like ChatGPT a potential security risk, yet all of those surveyed are already using them in some form within their businesses.

Even more worryingly, 24% of these organisations aren’t monitoring GenAI usage at all, and more than three in ten (31%) have yet to implement any additional GenAI-related security measures, though many have them on their roadmap.

“GenAI tools, like ChatGPT, offer Singaporean businesses the opportunity to improve efficiencies, innovation and the speed in which teams can work,” said Heng Mok, Chief Information Security Officer, Asia Pacific and Japan at Zscaler. “But we can’t ignore the potential security risk of some tools and the potential implications of data loss. It is encouraging to see the Singapore government driving principle-based initiatives like fairness, ethics, accountability and transparency (FEAT) and Project MindForge to tackle the risks that come with GenAI.”

The pressure to roll out these tools isn’t coming from where people might think, however, and the results suggest that IT has the ability to regain control of the situation. Despite mainstream awareness, employees do not appear to be the driving force behind current interest and usage – only 3% of respondents said it stemmed from employees. Instead, 64% said usage was being driven directly by IT teams.

“It should be reassuring for business leaders in Singapore to see IT teams leading the charge,” Heng Mok added. “It demonstrates they are using AI tools with business objectives and security top of mind. With the fast-paced nature of GenAI, it is essential that businesses continue to prioritise educating employees and implement security measures in response to emerging technologies.”

With nearly half of all respondents (48%) anticipating a significant increase in interest in GenAI tools before the end of the year, organisations need to act quickly to close the gap between use and security.

Here are a few steps business leaders can take to ensure GenAI use in their organisation is properly secured:

  • Develop an acceptable use policy to educate employees on valid business use cases while protecting organisational data.

  • Implement a holistic zero-trust architecture to authorise only approved AI applications and users.

  • Conduct thorough security risk assessments for new AI applications to clearly understand and respond to vulnerabilities.

  • Establish visibility via a comprehensive logging system for tracking all AI prompts and responses.

  • Enable zero trust-powered Data Loss Prevention (DLP) measures for all AI activities to safeguard against data exfiltration (see the illustrative sketch below).
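
To make the last two recommendations a little more concrete, here is a minimal Python sketch of how an organisation might log every GenAI prompt and response and block prompts containing sensitive data before they reach an external model. The pattern names, the audited_completion helper and the send_to_model placeholder are illustrative assumptions rather than any specific vendor's API, and a production DLP engine would use far more sophisticated detection than these simple regexes.

    import logging
    import re

    # Hypothetical patterns for sensitive data. These names and regexes are
    # illustrative only; a real DLP engine uses far richer detection.
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
    }

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    logger = logging.getLogger("genai_audit")


    def check_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive-data patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


    def audited_completion(user: str, prompt: str, send_to_model):
        """Log every prompt and response, blocking prompts that trip a DLP rule.

        `send_to_model` is a placeholder for whatever function calls the
        organisation's approved GenAI service.
        """
        violations = check_prompt(prompt)
        if violations:
            logger.warning("user=%s prompt blocked (matched: %s)", user, ", ".join(violations))
            return None

        logger.info("user=%s prompt=%r", user, prompt)
        response = send_to_model(prompt)
        logger.info("user=%s response_length=%d", user, len(response))
        return response


    if __name__ == "__main__":
        def fake_model(prompt):  # stand-in for a real model call
            return f"Echo: {prompt}"

        print(audited_completion("alice", "Summarise our Q3 roadmap", fake_model))
        print(audited_completion("bob", "My card number is 4111 1111 1111 1111", fake_model))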

CSA Editorial

Launched in January 2018 in partnership with Cyber Security Malaysia (an agency under MOSTI), CSA is a news and content platform focusing on key cybersecurity issues in the region. CSA serves cybersecurity professionals, IT professionals, risk professionals and C-levels who have an obligation to understand the impact of cyber threats.
