Charting the Course of AI & Security Priorities
Artificial Intelligence and Machine Learning (AI and ML) are recognised as important parts of the future of cyber security and cloud security. But how integrated are these technologies in cyber security functions currently? A recent survey by Check Point Software and Cybersecurity Insiders asked hundreds of professionals from across different industries how they’ve been using AI so far, how much of a priority it is for their companies, and how it has impacted their workforces.
Where Does AI In Cyber Security Stand Right Now?
Several survey questions asked respondents about the current state of AI in their organisations’ cyber security plans, including how fully it has been implemented and how that implementation is going. Their responses paint a picture of an industry that is moving slowly and cautiously, and that perhaps hasn’t gone as “all-in” on AI as some may expect. Organisations still seem to be evaluating the benefits and risks of AI and ML tools, and businesses are moving carefully to establish firm best practices that comply with relevant regulations.
When asked to describe their organisation’s adoption of AI and ML in cyber security, 61% of respondents described it as being in either the “planning” or “development” stage – significantly more than the 24% who categorised it as “maturing” or “advanced.” A further 15% said that their organisations haven’t implemented AI and ML into their cyber security efforts at all. Clearly, while the selling points of AI are persuading many businesses to start exploring its potential, few have fully embraced it at this point.
Another survey question got more specific, asking respondents “Which cyber security (cloud) functions in your organisation are currently enhanced by AI and ML?” The answers are illuminating: malware detection leads the way at 35%, with user behaviour analysis and supply chain security following close behind. Towards the bottom of the list, fewer organisations appear to be using AI for security posture management or adversarial AI research. Taken together with the adoption figures above, the data shows that individual applications of AI and ML in cyber security are still far from universal.
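To make one of the leading categories concrete, here is a minimal sketch of what “user behaviour analysis” can look like in practice: an unsupervised anomaly detector scoring per-user activity. This is purely illustrative – the features and numbers are invented, it assumes scikit-learn’s IsolationForest as the model, and it does not describe any surveyed organisation’s (or Check Point’s) actual implementation.

```python
# Minimal sketch: flagging anomalous user behaviour with an unsupervised model.
# Hypothetical features (logins per hour, MB downloaded, distinct hosts touched);
# a real deployment would derive these from identity and network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: most users cluster around typical values.
normal = rng.normal(loc=[5, 200, 3], scale=[2, 50, 1], size=(500, 3))

# A few suspicious sessions: heavy downloads and unusually many hosts.
suspicious = np.array([[40.0, 5000.0, 25.0], [30.0, 8000.0, 40.0]])

X = np.vstack([normal, suspicious])

# `contamination` encodes the analyst's prior on how much activity is anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

labels = model.predict(X)  # -1 = anomaly, 1 = normal
print(f"Flagged {np.sum(labels == -1)} of {len(X)} sessions for review")
```

In a cautious deployment of the kind the survey describes, flagged sessions would feed an analyst’s review queue rather than trigger automated action.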
One reason that AI adoption hasn’t raced along at a faster pace is the challenge of navigating a rapidly shifting regulatory landscape. In these early days, laws and government guidance around AI and cyber security are still evolving. Businesses can’t afford to take risks when it comes to compliance, and keeping up with these rapid changes can be complex and resource-intensive.
How Are Organisations Approaching AI for Cyber Security Going Forward?
Despite the slow and cautious adoption of AI in cyber security so far, it’s almost universally regarded as an important priority going forward: 91% of respondents ranked it as a priority for their organisation, and only 9% said it’s a low priority or not a priority at all. Respondents clearly see the promise of AI to automate repetitive tasks and improve the detection of anomalies and malware, with 48% identifying that as the area with the most potential. Additionally, 41% see promise in reinforcement learning for dynamic security posture management – especially interesting when compared with the mere 18% who currently use AI for this function. The excitement is obvious – but there are challenges in the way of realising this potential.
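To clarify what “reinforcement learning for dynamic security posture management” means in principle – the survey names the technique, not any particular implementation – the toy sketch below shows the core idea: an agent learns from feedback how strict a posture to maintain, trading missed attacks against user friction. The posture levels, attack rates, and rewards are all invented for illustration, and it uses plain tabular Q-learning rather than anything production-grade.

```python
# Toy sketch of RL-driven posture management: a Q-learning agent chooses how
# strict to make controls, balancing blocked attacks against user friction.
# Every number here (attack rate, block rates, rewards) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

POSTURES = ["relaxed", "standard", "hardened"]   # states
ACTIONS = [-1, 0, +1]                            # loosen, hold, tighten

def step(posture, action):
    """Apply an action, then score the resulting posture for one period."""
    new_posture = int(np.clip(posture + action, 0, len(POSTURES) - 1))
    attack = rng.random() < 0.3                  # hypothetical attack rate
    blocked = attack and rng.random() < [0.2, 0.6, 0.95][new_posture]
    reward = 0.0
    if attack:
        reward += 5.0 if blocked else -10.0      # missing an attack hurts most
    reward -= [0.0, 1.0, 3.0][new_posture]       # stricter posture = more friction
    return new_posture, reward

# Tabular Q-learning over the tiny state/action space.
Q = np.zeros((len(POSTURES), len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = 1                                        # start at "standard"

for _ in range(20_000):
    a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, ACTIONS[a])
    Q[state, a] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, a])
    state = next_state

for s, name in enumerate(POSTURES):
    print(f"{name:>8}: best action = {ACTIONS[int(np.argmax(Q[s]))]:+d}")
```

The gap between the 41% who see promise here and the 18% already doing it is understandable even from this toy: a real agent needs a trustworthy model of rewards and consequences before anyone lets it adjust live controls.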
Beyond specific applications, respondents were asked to identify the biggest benefits of incorporating AI into cyber security operations. The most popular answers included vulnerability assessment and threat detection, while cost efficiency was the least popular, chosen by just 21%. Likely because of the expense of regulatory compliance and implementation, AI isn’t currently viewed as a significant money-saving tool by most respondents.
Concerns and Conflicting Attitudes Around AI in Cyber Security
Additional survey questions provided insight into professional concerns and a lack of clarity about some of the fundamentals of AI and cyber security. On the subject of AI’s impact on the cyber security workforce, it’s apparent that this remains an open question: 49% of respondents said AI is requiring new skills, and 35% noted redefined job roles. And while 33% said that their workforce has shrunk as a result of AI, 29% said theirs has actually grown. Implementing AI into cyber security is clearly a work in progress, and while greater efficiency is a promise that may be realised in the future, for now many businesses are actually having to hire more people to integrate the new technology.
Notably, there was a significant split in the answers to the question: “Do you agree with the following statement: Our organisation would be comfortable using Generative AI without implementing any internal controls for data quality and governance policies?” While 44% disagreed or strongly disagreed, 37% agreed or strongly agreed. It’s very rare to see such a substantial split on a question like this in a professional survey, and it seems to indicate a lack of consensus – or perhaps simply a lack of awareness of how important internal controls and governance policies are when AI is involved.
The Check Point Perspective
It is clear that AI plays a crucial role in enhancing cyber security measures and asset protection. Integrated with our product portfolio, it allows us to automate repetitive tasks, improve threat detection and response, and provide significant value to customers. This technology is going to define the future of cyber security.
It’s important to note, however, that successful implementation of AI requires thoughtful integration and governance. To realise the combined gains in efficiency and accuracy that AI can offer, organisations must carefully consider how they integrate AI into their existing systems and processes, and appropriate governance mechanisms are crucial to ensuring that AI is used responsibly and effectively.