
AI Cyber Risk Predictions for 2025

By Brad Jones, CISO, VP of Information Security at Snowflake

Generative AI takes centre stage as businesses’ personal security experts.

While there is a lot of talk about the potential security risks introduced by generative AI, and for good reason, it also has real, beneficial applications today that often go unmentioned. As AI tools become more versatile and more accurate, security assistants will become a significant part of the SOC, easing the perennial manpower shortage. The benefit of AI will be to summarise incidents at a higher level: rather than an alert that requires analysts to go through all the logs to connect the dots, they’ll get a high-level summary that makes sense to a human and is actionable.
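To make that concrete, here is a minimal sketch of how such a summariser might be wired up. It is an illustration rather than any particular product: llm_complete() is a hypothetical stand-in for whatever LLM API an organisation has approved, and the alert fields are invented for the example, not a real SIEM schema.

```python
# Minimal sketch of an LLM-backed incident summariser for a SOC queue.
# llm_complete() is a hypothetical stand-in for an approved LLM API, and
# the alert fields are illustrative, not a real SIEM schema.
from typing import Dict, List

def llm_complete(prompt: str) -> str:
    """Hypothetical stub: replace with a call to an approved LLM endpoint."""
    raise NotImplementedError

def build_summary_prompt(alerts: List[Dict]) -> str:
    """Condense correlated alerts into one request for an actionable summary."""
    lines = [
        f"- {a['timestamp']} {a['source']}: {a['message']} (severity={a['severity']})"
        for a in alerts
    ]
    return (
        "You are a SOC assistant. Summarise the following correlated alerts "
        "as a single incident: what happened, the likely root cause, and the "
        "recommended next action for the analyst.\n" + "\n".join(lines)
    )

def summarise_incident(alerts: List[Dict]) -> str:
    return llm_complete(build_summary_prompt(alerts))
```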

Of course, we must keep in mind that these opportunities exist within a very tight context and scope. We must ensure that these AI tools are trained on an organisation’s policies, standards, and certifications. Done appropriately, they can be highly effective in helping security teams with routine tasks. If organisations haven’t taken note of this already, they’ll be hearing it from their security teams soon enough as those teams look to alleviate workloads for understaffed departments.
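One way to keep such an assistant inside that tight scope is to ground its answers in the organisation’s own policy documents. The sketch below is one possible shape for that, under stated assumptions: plain-text policy files, a naive keyword match standing in for a real retriever, and the same hypothetical llm_complete() stub as before.

```python
# Minimal retrieval-grounding sketch: the assistant answers only from the
# organisation's own policy text. Naive keyword overlap stands in for a
# real retriever; llm_complete() is the same hypothetical stub as above.
from pathlib import Path

def llm_complete(prompt: str) -> str:
    """Hypothetical stub: replace with a call to an approved LLM endpoint."""
    raise NotImplementedError

def load_policies(folder: str) -> list[str]:
    """Read each plain-text policy document into memory."""
    return [p.read_text() for p in Path(folder).glob("*.txt")]

def top_snippets(question: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

def answer_from_policy(question: str, docs: list[str]) -> str:
    context = "\n---\n".join(top_snippets(question, docs))
    return llm_complete(
        "Answer strictly from the policy excerpts below. If they do not "
        f"cover the question, say so.\n{context}\n\nQuestion: {question}"
    )
```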

AI models themselves are the next focus of AI-centred attacks.

Last year, there was a lot of talk about cybersecurity attacks at the container layer, the less-secured developer playgrounds. Now, attackers are moving up a layer to the machine learning infrastructure. I predict that we’ll start seeing patterns like attackers injecting themselves into different parts of the pipeline so that AI models provide incorrect answers or, even worse, reveal the information and data on which they were trained. There are real concerns in cybersecurity around threat actors poisoning large language models with vulnerabilities that can later be exploited.
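A basic defence against this kind of pipeline tampering is to treat training data and model artefacts like any other supply-chain dependency and verify their integrity before use. Here is a minimal sketch, assuming a manifest of known-good SHA-256 digests; the manifest format and file names are illustrative.

```python
# Minimal integrity check for ML pipeline artefacts: verify training data
# and model files against known-good SHA-256 digests before they enter the
# pipeline. The manifest format and file paths are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artefacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: str) -> list[str]:
    """Return the artefacts whose current digest no longer matches the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"file": "digest"}
    return [
        name for name, expected in manifest.items()
        if sha256_of(Path(name)) != expected
    ]

if __name__ == "__main__":
    tampered = verify_artifacts("artifact_manifest.json")
    if tampered:
        raise SystemExit(f"Possible tampering, artefacts changed: {tampered}")
```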

Although AI will bring new attack vectors and defensive techniques, the cybersecurity field will rise to the occasion, as it always does. Organisations must establish a rigorous, formal approach to how advanced AI is operationalised. The tech may be new, but the basic concerns — data loss, reputational risk and legal liability — are well understood and the risks will be addressed.

Concerns about data exposure through AI are overblown.

People putting proprietary data into large language models to answer questions or help compose an email pose no greater risk than someone using Google or filling out a support form. From a data loss perspective, harnessing AI isn’t necessarily a new and differentiated threat. At the end of the day, it’s a risk created by human users who take data not meant for public consumption and put it into public tools.

This doesn’t mean that organisations shouldn’t be concerned. It’s increasingly a shadow IT issue, and organisations will need to ratchet up monitoring for unapproved use of generative AI technology to protect against leakage.
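In practice, that monitoring can start simply, for example by flagging outbound traffic to known generative AI services in proxy logs. The sketch below assumes a CSV export of proxy logs; the domain watchlist and column names are illustrative, not a real product configuration.

```python
# Minimal shadow-AI monitoring sketch: scan a proxy log export for traffic
# to known generative-AI hosts that are not on the approved list. The
# domain lists and the log's column names are illustrative assumptions.
import csv

GENAI_HOSTS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED_HOSTS = {"chat.openai.com"}  # e.g. a sanctioned enterprise tenant

def flag_unapproved_genai(log_path: str) -> list[dict]:
    """Return rows where a user reached an unapproved generative-AI host."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'host' columns
            host = row["host"].strip().lower()
            if host in GENAI_HOSTS - APPROVED_HOSTS:
                hits.append({"user": row["user"], "host": host})
    return hits

if __name__ == "__main__":
    for hit in flag_unapproved_genai("proxy_log.csv"):
        print(f"Unapproved generative AI use: {hit['user']} -> {hit['host']}")
```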
