In drawing parallels between the two software development movements, this author offers tips on strengthening DevSecOps in the age of AI
According to our research, there are clear parallels between the current surge in AI-assisted software development and the historic embrace of Open Source software by developers.
In our opinion, both movements have helped to revolutionize software development, but both have also introduced unique security challenges.
While AI adoption by development teams is nearly universal, securing AI-generated code lags, mirroring the early days of unmanaged (and unsecured) Open Source use.
AI coding adoption and security concerns
Just as Open Source challenged traditional software development models, AI-assisted coding is transforming how code is written and used.
Both movements have disrupted established software development practices, promising increased efficiency and development speed. The Open Source revolution democratized software development by providing freely available code and collaborative platforms. Similarly, AI coding assistants are democratizing programming knowledge, making it easier for developers of all skill levels to tackle complex coding tasks.
However, when not properly managed, the use of AI coding assistants can introduce unique intellectual property, licensing, and security risks, much like what happened in the early days of Open Source adoption. For example:
- Both unmanaged Open Source and AI-generated code can create ambiguity about intellectual property ownership and licensing, especially when the AI model uses data sets that may include Open Source or other third-party code without attribution.
- If an AI coding assistant suggests a code snippet without noting its license obligations, that snippet can become a legal minefield for anyone using the code. Even though it may only be a snippet, users of the software must still comply with any license associated with it.
- AI-assisted coding tools also have the potential to introduce security vulnerabilities into code bases. This mirrors a concern long associated with Open Source software, where the “many eyes” approach to security does not always prevent vulnerabilities from slipping through. AI-generated code still requires a security review to avoid introducing software vulnerabilities.
- Teams developing with AI code generation may bypass corporate policies and use unsanctioned tools, making oversight difficult if not impossible. This echoes the early days of Open Source, when few executives were aware that their development teams were incorporating such source libraries into proprietary code, let alone the extent of that use.
- In our experience, client organizations use between six and 20 different security testing tools. While this is intended to ensure comprehensive security coverage, the more tools introduced into the development workflow, the more complex that workflow becomes. One major issue caused by today’s tool proliferation trend is an increase in “noise”: irrelevant or duplicative results that bog down development teams. The result can be a significant drain on efficiency, as security teams must sift through irrelevant findings to identify genuine threats.
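To illustrate the "noise" problem, the sketch below collapses duplicate findings reported by multiple scanners into a single list of unique issues. The fingerprint used here (rule ID, file, line) and the tool names are illustrative assumptions; real deduplication logic would depend on the formats your specific tools emit.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str      # which scanner reported the issue (hypothetical names below)
    rule_id: str   # e.g. a CWE identifier or scanner-specific rule
    file: str
    line: int

def deduplicate(findings):
    """Collapse duplicate findings reported by multiple tools.

    Two findings are treated as the same underlying issue when they
    share a rule, file, and line, regardless of which tool flagged them.
    """
    unique = {}
    for f in findings:
        key = (f.rule_id, f.file, f.line)
        # Keep one entry per issue; remember every tool that reported it.
        unique.setdefault(key, []).append(f.tool)
    return unique

reports = [
    Finding("scanner-a", "CWE-89", "app/db.py", 42),
    Finding("scanner-b", "CWE-89", "app/db.py", 42),  # duplicate of the above
    Finding("scanner-a", "CWE-79", "app/views.py", 7),
]
unique = deduplicate(reports)
print(len(reports), "raw findings ->", len(unique), "unique issues")
```

With six to 20 tools in play, even this simple normalization step can substantially shrink the queue that security teams must review.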
Balancing security testing and development speed
We believe there is persistent tension between robust security testing and maintaining development speed. Organizations facing this challenge struggle to integrate security practices into increasingly fast-paced development cycles, especially with the added complexities of AI-generated code.
Even though automation in security testing is increasing, manual processes in managing security testing queues directly correlate with perceptions of security testing slowing down development.
Organizations relying entirely on manual processes for their testing queues are more likely to perceive a severe impact on development speed than those using automated solutions.
This suggests that while security testing is often seen as a bottleneck, optimizing processes through automation can alleviate the friction between security and development speed.
Managing tomorrow’s DevSecOps
Development teams should view the challenges above not as insurmountable obstacles, but as opportunities for positive change. To effectively navigate the evolving landscape of DevSecOps, we recommend several key strategies:
- Tool consolidation and integration: Reducing tool sprawl can significantly mitigate the issue of noise, streamline processes and centralize results for better analysis and efficiency.
- Embracing automation: Automating security testing processes, particularly the management of testing queues and the parsing and cleansing of results, can significantly reduce the burden on security teams and minimize the impact on development speed.
- Establishing AI governance: With the widespread adoption of AI tools, organizations must establish clear policies and procedures for their use in development. This includes investing in tools specifically designed to vet and secure AI-generated code, to address concerns about vulnerabilities and potential licensing conflicts.
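The automation strategy above can be sketched as a simple triage filter: only findings that are new (not already reviewed in a prior run) and at or above a severity threshold are routed to developers, keeping the testing queue from becoming a bottleneck. The severity levels, finding IDs, and baseline set here are assumptions for illustration; real security tools expose richer result data.

```python
# Minimal sketch of automated triage for a security-testing queue.
# Severity scale and baseline handling are illustrative assumptions.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def triage(findings, baseline, min_severity="high"):
    """Route only new findings at or above the severity threshold.

    findings -- list of dicts with 'id' and 'severity' keys
    baseline -- set of finding IDs already reviewed in past runs
    """
    threshold = SEVERITY_ORDER[min_severity]
    return [
        f for f in findings
        if f["id"] not in baseline
        and SEVERITY_ORDER[f["severity"]] >= threshold
    ]

queue = [
    {"id": "F-1", "severity": "critical"},
    {"id": "F-2", "severity": "low"},
    {"id": "F-3", "severity": "high"},
]
reviewed = {"F-3"}  # already triaged in an earlier scan
print(triage(queue, reviewed))  # only F-1 reaches developers
```

Automating this kind of parsing and filtering is what turns raw scanner output into a manageable queue, reducing the manual effort that correlates with perceived slowdowns.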
While AI holds immense potential for innovation, it also presents unique security challenges. With the above strategies, organizations can pave the way for a future where security and development speed coexist rather than collide.