Addressing AI Security in an Age of Infinite Information: Strategies for Success
by Grant Bourzikas, Chief Security Officer, Cloudflare
The number of businesses in Asia Pacific leveraging AI to improve their operations and efficiency is expected to grow 45% year-on-year from 2023, according to IDC. AI’s transformative potential, paired with its accelerating adoption, is stirring a technology revolution larger than anything the industry has witnessed before.
Recent developments in AI have opened doors to a broad range of practical use cases, leading to its adoption across a variety of business functions. AI was one of the hottest areas of investor interest in the region in 2023, with AI fintech funding in Singapore totalling US$481.2 million across 24 deals in the sector (according to KPMG’s Pulse of Fintech Report).
But, as with any new technology, AI introduces new risks. Recent conversations around these risks have centred on ethics, privacy, cybersecurity and the future of work. We are at a critical inflection point: if we fail to separate the conceptual impacts of AI from the tangible ones for businesses, we increase our chances of falling victim to the risks the technology poses.
Across the region, governments are increasingly interested in both leveraging and regulating AI effectively. In Singapore, the AI Verify Foundation and the Infocomm Media Development Authority (IMDA) have developed a draft Model AI Governance Framework for Generative AI to address emerging issues holistically and, in turn, create a trusted environment in which end-users can use AI confidently and safely. Similarly, Malaysia and Indonesia are looking to introduce regulations to drive economic growth and to develop a code of ethics and governance for AI, while Thai authorities have unveiled a national AI strategy and action plan to upskill and reskill the local workforce in AI literacy. All of these efforts aim to build a greater understanding of AI and its impact.
‘Real’ AI is rarely understood and often elicits reactions ranging from the fantastical to doom and gloom. Despite the attention-grabbing headlines, AI on its own will not solve the world’s most critical problems – especially if we don’t make an effort to understand its limitations. As the hype has grown, organisations have raced to build AI into their businesses to maintain a competitive edge at any cost. Yet in this rush to innovate, businesses often fail to bake in protective security measures and analyse potential risks from the start.
While there are many ‘signal flares’ to watch for when it comes to misunderstanding and assessing the risks of AI, the following are what chief security officers of every organisation – regardless of size or industry – should keep top of mind:
- In the world of AI, data is the only currency, and the organisations that have the most will win. The successful implementation and use of AI depend on the quantity and quality of data. However, collecting vast amounts of quality data is not the end of the AI lifecycle: organisations must also be able to extract that data and transform it into insights. The race is no longer simply to build AI. To outperform competitors, organisations must now continually train AI models on the most up-to-date, relevant data to avoid hallucination and model drift (a minimal drift check is sketched after this list).
- The knowledge gap between security professionals who understand AI and those who do not will be the main driver of any shift in the balance of power towards threat actors. Whether or not AI is giving attackers a leg up is the wrong question to ask. AI is here to stay, so the right question is whether security leaders and professionals possess the skills required – or will invest the time to upskill – to handle what is becoming the largest revolution technology has ever seen. Both harnessing the power of this technology and defending against it hinge on the ability to understand it and its limitations. If the security industry fails to demystify AI and its potential malicious use cases, the coming years will be a field day for threat actors.
- The only way to fight AI is with AI – but you must master the basics first. Defending against AI ultimately means defending against the sum of indexed human knowledge, shared faster and more efficiently than ever before. Security professionals must protect their organisations in this era of infinite information and face challenges never seen before. But if the industry has historically struggled to do the basic things well, over-pivoting to AI to solve its problems will achieve little. We often chase shiny objects, yet the best defence against AI-enabled attacks is to ensure the foundational security controls are already in place.
- The secure-by-design conversation will evolve to not only encapsulate but heavily focus on AI tools and models. The conversation around productionising AI often only crosses into the security realm after a model has been developed and exists – for example, maintaining the integrity of the model (see the integrity-check sketch after this list). But if AI is the inevitable future of how organisations and critical infrastructure do business, operate and develop their services, then security must be built in from the start. AI must be engineered and implemented in a way that addresses the concerns cybersecurity has historically focused on.
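To make ‘model drift’ concrete, here is a minimal sketch of one common way teams flag it: comparing a live feature’s distribution against the training baseline with a population stability index (PSI). PSI is a widely used drift heuristic, not a technique named in this article, and every name, dataset and threshold below is illustrative.

```python
# Minimal drift-detection sketch (illustrative assumptions throughout):
# compare the distribution a model sees in production against the
# distribution it was trained on, using the population stability index.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two 1-D samples of one feature."""
    # Bin edges come from the training baseline so both samples are
    # compared on the same scale.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Synthetic example: live traffic has shifted relative to the training data.
rng = np.random.default_rng(0)
training_sample = rng.normal(loc=0.0, scale=1.0, size=10_000)
live_sample = rng.normal(loc=0.4, scale=1.2, size=10_000)

score = psi(training_sample, live_sample)
# A common rule of thumb: PSI above roughly 0.2 suggests drift worth retraining on.
print(f"PSI = {score:.3f} -> {'retrain' if score > 0.2 else 'ok'}")
```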
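And as one small illustration of what ‘maintaining the integrity of the model’ can mean in practice, the sketch below refuses to serve a model artifact whose bytes no longer match a digest pinned at build time. The file path and digest are hypothetical placeholders, and this is only one control among many a secure-by-design pipeline would need.

```python
# Minimal integrity-check sketch (hypothetical path and digest): verify a
# model artifact against a digest recorded when the model was built,
# before the serving layer is allowed to deserialise it.
import hashlib
from pathlib import Path

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to load a model whose on-disk bytes do not match the pinned digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise RuntimeError(f"model artifact {path} failed integrity check; refusing to load")

# Usage (placeholders, recorded at build/signing time in a real pipeline):
# verify_model_artifact(Path("models/fraud-detector-v3.bin"), "<pinned-sha256-digest>")
```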
Securing AI models will be key for future development
Many business leaders ignore the above signal flares and assume that AI will solve problems instantaneously and automatically – that it simply works like magic. The truth is that a model doing the ‘right thing’ depends on accuracy, and accuracy depends on how the model is trained and on reducing the chance of hallucination. If we continue to prioritise creativity and speed over accuracy in AI, large language models will remain shackled to hallucination.
Proactively securing models across the lifecycle – building, testing and production – should be a massive focus for security teams and regulators in the years to come, especially since only 38% of businesses in Asia Pacific consider themselves highly prepared to handle a cybersecurity incident, and 63% reported a financial impact of at least US$1 million from an incident in the past year.
However, the good news is that we are still in the ‘smoke and mirrors’ phase of AI, testing its power and exploring use cases. We can either take the time now to educate ourselves on the technology to recognise and address the potential signal flares or become low-hanging fruit for hackers to exploit. The bottom line is that we can’t let innovation and excitement outweigh security and resilience.