
Everything Old is New Again: AI-Driven Development and Open Source

By: Fred Bals, Senior Security Researcher, Black Duck

Remember how quickly open source software went from niche to normal? The new “Global State of DevSecOps” report from Black Duck argues that there are clear parallels between the current surge in AI-assisted development and the historic embrace of open source software by developers.

As the report notes, both movements have helped to revolutionise software development, but both have introduced unique security challenges. The report, based on a survey of over 1,000 software security stakeholders, highlights that while AI adoption by development teams is nearly universal, securing AI-generated code lags, mirroring the early days of unmanaged—and unsecured—open source use.

AI Coding Adoption and Security Concerns

Just as open source challenged traditional software development models, AI-assisted coding is transforming how code is written and used, with both movements disrupting established practices and promising greater efficiency and development speed. The open source revolution democratised software development by providing freely available code and collaborative platforms. Similarly, AI coding assistants are democratising programming knowledge, making it easier for developers of all skill levels to tackle complex coding tasks.

However, the report underscores that AI coding assistants introduce risk when they are not properly managed, much as open source did in its early days of adoption. Like open source, AI-assisted coding tools bring their own intellectual property (IP), licensing, and security challenges into software development, and without careful management by development teams those challenges can trip up unprepared organisations.

For example, both unmanaged open source and AI-generated code can create ambiguity about IP ownership and licensing, especially when the AI model draws on datasets that may include open source or other third-party code without attribution. If an AI coding assistant suggests a code snippet without noting its license obligations, that snippet can become a legal minefield for anyone using the code. Even though it may be only a snippet, users of the software must still comply with any license associated with it.

AI-assisted coding tools can also introduce security vulnerabilities into codebases. A study by researchers at Stanford University found that developers who used AI coding assistants were more likely to introduce security vulnerabilities into their code. This mirrors a long-standing concern about open source software, where the “many eyes” approach to security doesn’t always prevent vulnerabilities from slipping through. One researcher cited in the report flatly concludes that “autogenerated code cannot be blindly trusted, and still requires a security review to avoid introducing software vulnerabilities.”
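
To make the point concrete, here is a deliberately simple, hypothetical illustration (not drawn from the report or the Stanford study) of the kind of flaw a review should catch: a database lookup that concatenates user input into a SQL string, next to the parameterised version a reviewer would insist on.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # The sort of lookup an assistant might plausibly suggest: user input is
        # concatenated straight into the SQL string, which permits SQL injection.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchone()

    def find_user_reviewed(conn: sqlite3.Connection, username: str):
        # The reviewed version binds the input as a parameter, so it is treated
        # as data rather than as part of the SQL statement.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchone()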

According to the report, over 90% of organisations are now using AI tools in some capacity for software development. Yet 21% of respondents admit that their teams bypass corporate policies to use unsanctioned AI tools, making oversight difficult (if not impossible). This echoes the early days of open source use, when few executives were aware that their development teams were incorporating open source libraries into proprietary code, let alone the extent of that use.

Tool Proliferation: Amplifying the Noise

The Black Duck report also highlights a significant challenge in application security testing: tool proliferation. 82% of respondents stated that their organisations use between 6 and 20 different security testing tools. Each tool is meant to broaden security coverage, but every tool added to the development workflow also makes that workflow more complex. One major consequence of tool proliferation is an increase in “noise”: irrelevant or duplicative results that bog down development teams. The report reveals that 60% of respondents consider more than 20% of their security test results to be noise, a significant drain on efficiency as teams struggle to sift out irrelevant findings and identify genuine threats.
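
As a rough sketch of how that noise can be tamed, the snippet below assumes each tool exports its findings with file, line, and rule fields (an invented schema, not any particular product’s output format) and collapses duplicates before they ever reach a developer’s queue.

    def deduplicate_findings(tool_outputs):
        """Collapse findings that more than one tool reports for the same issue.

        Each finding is assumed to be a dict with 'file', 'line', and 'rule'
        keys; the first tool to report a given triple wins.
        """
        seen = set()
        merged = []
        for findings in tool_outputs:
            for finding in findings:
                key = (finding["file"], finding["line"], finding["rule"])
                if key not in seen:
                    seen.add(key)
                    merged.append(finding)
        return merged

    # Two tools flag the same hard-coded credential on the same line; only one
    # copy of that finding survives, alongside the genuinely distinct result.
    tool_a = [{"file": "app.py", "line": 42, "rule": "hardcoded-credential"}]
    tool_b = [{"file": "app.py", "line": 42, "rule": "hardcoded-credential"},
              {"file": "db.py", "line": 7, "rule": "sql-injection"}]
    print(len(deduplicate_findings([tool_a, tool_b])))  # prints 2, not 3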

Security Testing and Development Speed: A Balancing Act

The report acknowledges the persistent tension between robust security testing and maintaining development speed. 86% of respondents reported that security testing slows down their development process to some degree. This finding underscores the challenge organisations face in integrating security practices into increasingly fast-paced development cycles, especially with the added complexities of AI-generated code.

The report highlights that even as automation of security testing increases, manual management of security testing queues correlates directly with the perception that testing slows development. Organisations relying entirely on manual processes for their testing queues were significantly more likely to report a severe impact on development speed than those using automated solutions. This suggests that while security testing is often seen as a bottleneck, automating those processes can go a long way towards easing the friction between security and development speed.
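
What “automating the queue” looks like varies by organisation, but as an illustrative sketch (the scoring weights below are invented, not taken from the report), pending scan requests can be ordered by severity and age so that no engineer has to triage them by hand.

    from datetime import datetime, timezone

    SEVERITY_WEIGHT = {"critical": 100, "high": 50, "medium": 20, "low": 5}

    def prioritise_queue(requests):
        """Order pending scan requests by severity and age, highest priority first.

        Each request is assumed to be a dict with 'app', 'severity', and
        'submitted' (a timezone-aware datetime) keys.
        """
        now = datetime.now(timezone.utc)

        def score(request):
            age_hours = (now - request["submitted"]).total_seconds() / 3600
            return SEVERITY_WEIGHT.get(request["severity"], 1) + age_hours

        return sorted(requests, key=score, reverse=True)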

Navigating the Future of DevSecOps in the Age of AI

The 2024 “Global State of DevSecOps” report urges its readers to view the challenges outlined above not as insurmountable obstacles, but as opportunities for positive change. To navigate the evolving DevSecOps landscape effectively, the report recommends several key strategies:

  • Tool Consolidation and Integration: Reducing reliance on a multitude of disparate security tools can significantly mitigate the issue of noise and improve efficiency. Organisations should prioritise integrating their security tools to streamline processes and centralise results for better analysis.
  • Embracing Automation: Automating security testing processes, particularly the management of testing queues and the parsing and cleansing of results, can significantly reduce the burden on security teams and minimise the impact on development speed.
  • Establishing AI Governance: With the widespread adoption of AI tools, organisations must establish clear policies and procedures for their use in development. This includes investing in tools specifically designed to vet and secure AI-generated code, addressing concerns about vulnerabilities and potential licensing conflicts.
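
On the governance point, one small, hypothetical example of turning policy into an automated check (the file names and JSON fields below are invented for illustration) is a pre-merge gate that compares the AI assistants declared for a change against the organisation’s approved list, catching exactly the unsanctioned-tool problem the survey highlights.

    import json
    import sys

    def check_ai_tool_policy(declaration_path, policy_path):
        """Return 1 if a change declares an AI assistant the policy does not allow.

        Both files are plain JSON: the declaration lists the assistants used for
        the change, the policy lists the assistants the organisation sanctions.
        """
        with open(declaration_path) as f:
            declared = set(json.load(f).get("ai_tools_used", []))
        with open(policy_path) as f:
            approved = set(json.load(f).get("approved_ai_tools", []))

        unsanctioned = declared - approved
        if unsanctioned:
            print("Unsanctioned AI tools declared: " + ", ".join(sorted(unsanctioned)))
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(check_ai_tool_policy(sys.argv[1], sys.argv[2]))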

As AI becomes increasingly intertwined with software development, the need for robust and adaptable security practices becomes paramount. The report’s findings serve as a timely reminder that while AI holds immense potential for innovation, it also presents unique security challenges. By embracing automation, streamlining toolsets, and establishing clear AI governance policies, organisations can pave the way for a future where security and development speed coexist, rather than collide.

