
Harnessing AI for Cybersecurity: Insights From Zscaler’s Heng Mok

AI in Cybersecurity - Unleashing Potential, Navigating Risks

Imagine a world where Artificial Intelligence (AI) predicts and stops cyber threats before they even happen. Exciting, right? But what if the same technology falls into the wrong hands? As AI transforms cybersecurity, it brings both incredible opportunities and daunting challenges. To find answers, Cybersecurity Asia sat down with Heng Mok, Chief Information Security Officer (CISO) of Zscaler, to learn how organisations can effectively harness AI’s potential while staying ahead of its risks.

Upcoming AI Threats and Predictions

Heng Mok, Chief Information Security Officer (CISO) of Zscaler

Zscaler’s ThreatLabZ 2024 AI Security Report outlines several potential AI threats that organisations must prepare for. Heng Mok predicts an increase in attacks facilitated by generative AI tools, which let hackers rapidly identify vulnerabilities and orchestrate sophisticated attacks. By lowering the barrier to entry for adversaries and streamlining their operations, these tools are likely to drive a surge in cyber breaches.

AI-assisted threats, such as sophisticated phishing campaigns, evasive malware, and amplified attacks, are also on the horizon. Heng Mok advises organisations to invest in AI-powered security solutions to detect and respond to these evolving threats in real time. Leveraging AI for threat intelligence and predictive analytics can help organisations stay one step ahead of cyber adversaries.

The emergence of deepfake technology poses significant threats, including election interference and the spread of misinformation. Heng Mok notes that AI-generated robocalls have already impacted voter turnout in the US, and state-sponsored entities may exploit AI to undermine trust in electoral processes. Recent viral deepfake images highlight the real-world impact and rapid spread of manipulated content.

Addressing AI-Driven Cybersecurity Risks

Given the ability of generative AI to swiftly produce malicious code, even novice hackers can exploit software vulnerabilities with ease. Heng Mok underscores the necessity for CISOs and security teams to adapt and stay ahead in the AI arms race.

Heng Mok advocates for a zero-trust architecture, which assumes that no user or system within a network is automatically trusted. This approach reduces the attack surface, prevents lateral movement of threats, and lowers the risk of data breaches.

Zero-trust architecture operates on the principle of “never trust, always verify.” It employs least-privileged access controls, granular micro-segmentation, and multi-factor authentication continuously to verify identities and devices. This proactive approach is crucial for building a secure foundation against the evolving threat landscape.
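As a rough illustration of how “never trust, always verify” might look in code, consider the minimal sketch below. The checks and field names are hypothetical, for illustration only, and are not Zscaler’s implementation.

    # Minimal sketch of a zero-trust access decision: every request is
    # evaluated on identity, device posture, and least-privilege scope.
    # All rules here are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_id: str
        mfa_verified: bool        # multi-factor authentication completed
        device_compliant: bool    # device posture check passed
        requested_resource: str
        user_entitlements: set    # least-privilege grants for this user

    def evaluate_access(req: AccessRequest) -> bool:
        """Return True only if every check passes; nothing is trusted by default."""
        if not req.mfa_verified:        # verify identity on every request
            return False
        if not req.device_compliant:    # verify device health on every request
            return False
        # Least privilege: only explicitly granted resources are reachable,
        # which also limits lateral movement if an account is compromised.
        return req.requested_resource in req.user_entitlements

    # Example: an MFA-verified user on a compliant device requesting a granted app.
    print(evaluate_access(AccessRequest("alice", True, True, "crm", {"crm", "mail"})))

Note that network location appears nowhere in the decision: being “inside” the network confers no trust, which is precisely what blocks lateral movement.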

To address the sophisticated nature of AI-powered phishing attacks, organisations must provide comprehensive training and awareness programs for employees, particularly those involved in AI implementation and data handling. Heng Mok highlights the importance of leveraging AI/ML to support employees in detecting threats before they reach them.


Zscaler employs several AI-powered solutions to enhance cybersecurity:

  • AI-powered phishing and Command & Control (C2) detection.
  • AI-powered sandboxing for malware and zero-day threat prevention.
  • Dynamic risk-based policies for continuous analysis of user, device, application, and content risk (see the sketch after this list).
  • AI-powered segmentation to minimise the attack surface and prevent lateral movement.
  • AI-powered browser isolation to create a safe gap between users and malicious content.
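To make the dynamic risk-based policies concrete, here is a minimal sketch of how such a policy might combine user, device, application, and content signals into one enforcement decision. The signals, weights, and thresholds are hypothetical, not Zscaler’s actual model.

    # Illustrative sketch of a dynamic risk-based policy that combines user,
    # device, application, and content signals into a single score.
    # The weights and thresholds are hypothetical.
    RISK_WEIGHTS = {"user": 0.3, "device": 0.25, "app": 0.2, "content": 0.25}

    def risk_score(signals: dict) -> float:
        """Weighted sum of per-dimension risk scores, each in [0, 1]."""
        return sum(RISK_WEIGHTS[k] * signals[k] for k in RISK_WEIGHTS)

    def policy_action(signals: dict) -> str:
        """Map the continuously re-computed score to an enforcement action."""
        score = risk_score(signals)
        if score >= 0.8:
            return "block"
        if score >= 0.5:
            return "isolate"  # e.g. route the session through browser isolation
        return "allow"

    # Example: elevated user and content risk pushes the session into isolation.
    print(policy_action({"user": 0.6, "device": 0.5, "app": 0.4, "content": 0.9}))

The key idea is that the score is recomputed continuously, so the same user can be allowed one minute and isolated the next as context changes.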

Blocking AI Transactions: A Growing Trend

Enterprise AI adoption has surged globally, and blocks on AI and Machine Learning (ML) traffic have surged with it. According to Heng Mok, the report reveals that global enterprise AI adoption has grown by 600%, while enterprises, driven by significant data and security concerns, now block approximately 18.5% of all AI transactions. That is a staggering 577% increase from April 2023 to January 2024, amounting to over 2.6 billion blocked transactions.

Heng Mok also noted that ChatGPT is both the most-used and the most-blocked AI application, suggesting that the very popularity of these tools is prompting enterprises to take proactive measures against potential data loss and privacy issues. Another significant observation is that Bing.com, which features AI-enabled Copilot functionality, was the most-blocked AI domain over the same period, accounting for 25.02% of all blocked AI and ML domain transactions.

For a more detailed list of the top blocked AI tools and domains, see the table below, provided by Zscaler:

Top blocked AI tools          Top blocked AI domains
1. ChatGPT                    1. Bing.com
2. OpenAI                     2. Divo.ai
3. Fraud.net                  3. Drift.com
4. Forethought                4. Quillbot.com
5. Hugging Face               5. Compose.ai
6. ChatBot                    6. Openai.com
7. Aivo                       7. Qortex.ai
8. Neeva                      8. Sider.ai
9. infeedo.ai                 9. Tabnine.com
10. Jasper                    10. securiti.ai

Heng Mok capped it all off by advising organisations to ensure deep visibility into employee AI app usage, enable granular access controls, and implement data security measures specific to each AI app. Enabling Data Loss Prevention (DLP) and maintaining appropriate logging of AI prompts and queries are also crucial steps for securing AI adoption.

Balance Productivity and Security

Organisations face a crossroads in deciding whether to enable AI applications to boost productivity or block them to protect sensitive data. Heng Mok suggests that enterprises must answer critical questions to make informed decisions:

  • Do we have visibility into employee AI app usage?
  • Can we create granular access controls for AI apps?
  • What data security measures do specific AI apps enable?
  • Is DLP enabled to protect key data from being leaked?
  • Do we have appropriate logging of AI prompts and queries?
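To make the last two questions concrete, here is a minimal sketch of how AI prompts might pass through a crude DLP redaction step and be logged for audit before leaving the organisation. The patterns and log format are hypothetical and not tied to any specific product.

    # Minimal sketch: redact sensitive patterns from an AI prompt (a crude
    # DLP pass) and keep an audit log of prompts and queries.
    # The regexes and log format are hypothetical, for illustration only.
    import json
    import re
    import time

    DLP_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact(prompt: str) -> str:
        """Replace matches of each DLP pattern with a labelled placeholder."""
        for label, pattern in DLP_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
        return prompt

    def log_prompt(user: str, app: str, prompt: str) -> str:
        """Redact, then append an audit record; return the safe prompt."""
        safe = redact(prompt)
        record = {"ts": time.time(), "user": user, "app": app, "prompt": safe}
        with open("ai_prompt_audit.log", "a") as f:
            f.write(json.dumps(record) + "\n")
        return safe

    print(log_prompt("alice", "chatgpt", "Summarise: contact bob@corp.com"))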


It is therefore essential to conduct regular assessments to ensure that AI systems and security measures remain up to date against the latest cyber threats, such as phishing. These assessments should include patch management, vulnerability scanning, tabletop exercises, automated quality-assurance checks, and red teaming to identify and address weaknesses proactively.

Other factors to consider include:

  • Ensuring that the use of AI tools complies with relevant laws and ethical standards, including data protection regulations and privacy laws.
  • Establishing clear accountability for AI tool development and deployment, including defined roles and responsibilities for overseeing AI projects.
  • Maintaining transparency when using AI tools by justifying their use and communicating their purpose clearly to stakeholders.

With these factors taken into consideration, enterprises and security experts can embrace AI to drive innovation and stay competitive, while ensuring their data powers the business, not a breach.

Navigating the Complexities of AI in Cybersecurity

So where do we go from here? The implementation of AI in cybersecurity is something of a paradox, offering remarkable opportunities and daunting challenges in equal measure. Heng Mok’s insights provide a practical roadmap for balancing these dynamics, and his perspective could not be more relevant as organisations continue to integrate AI tools into their operations.

First, let’s talk about the benefits. AI can process massive amounts of data faster than any human, helping organisations identify and respond to threats in real time. This kind of speed is crucial when dealing with cyber threats that evolve rapidly. Think of it as having a super-intelligent guard dog that not only watches over your digital assets but also learns and anticipates potential breaches before they happen. Heng Mok emphasises this advantage, pointing out that AI-driven analytics can turn the tide in favour of defenders.

However, there’s no denying the risks. An AI system is only as good as its data: poor data quality can lead to incorrect decisions, and the system itself opens a new attack surface. If cybercriminals gain unauthorised access to an AI system, they can manipulate it to perform malicious activities from inside the network, such as stealing sensitive data, spreading malware, or even disabling security measures. Heng Mok highlights these risks while stressing that this is where security measures, such as the zero-trust framework, become essential.

Moreover, the misuse of AI is not just a future threat; it’s already happening. AI-powered phishing campaigns are increasingly common and harder to detect. A recent report by IBM found that AI can be used to create highly personalised phishing emails that are nearly indistinguishable from legitimate communications. Heng Mok underscores the importance of educating employees to recognise these sophisticated attacks and of having AI systems in place that can detect and mitigate such threats in real time.
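To see why such emails slip past traditional filters, consider the crude keyword-based scorer below; the phrases and weights are hypothetical. A generative model can simply avoid every phrase on such a list while still sounding perfectly legitimate, which is exactly why static rules lose out to AI-driven detection.

    # Toy illustration of a legacy keyword-based phishing filter.
    # The phrases and weights are hypothetical; real AI-personalised
    # phishing avoids such giveaways entirely.
    SIGNALS = {
        "urgent action required": 0.4,
        "verify your account": 0.4,
        "wire transfer": 0.3,
        "password": 0.2,
    }

    def phishing_score(email_text: str) -> float:
        """Sum the weights of any signal phrases present, capped at 1.0."""
        text = email_text.lower()
        return min(sum(w for phrase, w in SIGNALS.items() if phrase in text), 1.0)

    print(phishing_score("Urgent action required: verify your account now"))    # 0.8
    print(phishing_score("Hi Sam, the Q3 figures you asked for are attached"))  # 0.0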

“With Great Power, Comes Great Responsibility”

Overall, implementing AI in cybersecurity is all about balancing innovation and vigilance. Heng Mok’s vision of a proactive and comprehensive approach aligns perfectly with this. By staying informed, continuously improving, and fostering a culture of security, companies can harness AI’s full potential while keeping its risks in check. It’s a challenging path, but with the right strategies, it’s entirely possible to turn AI into a powerful ally in the fight against cybercrime. As we move forward, let us embrace AI’s opportunities to build a faster, more resilient digital landscape for businesses.

Nik Faiz Nik Ruzman

Nik Faiz Nik Ruzman is a passionate and driven journalist currently serving as a Junior Tech Journalist at Asia Online Publishing Group. With a strong foundation in journalism, online journalism, and copy editing, he excels in writing, reviewing, and updating content for various digital platforms. His experience spans conducting in-depth research and interviews, participating in webinars, and covering significant events and conferences.
