In drawing parallels between the two software development trends, this author offers tips on strengthening DevSecOps in the age of AI

However, when not properly managed, the use of AI coding assistants can introduce risks (unique intellectual property, licensing, and security challenges), much like what happened in the early days of Open Source adoption. For example:

  • Both unmanaged Open Source and AI-generated code can create ambiguity about intellectual property ownership and licensing, especially when the AI model was trained on data sets that may include Open Source or other third-party code without attribution.
  • If an AI coding assistant suggests a code snippet without noting its license obligations, that snippet can become a legal minefield for anyone using it. Although it may only be a snippet, users of the software must still comply with any license associated with it.
  • AI-assisted coding tools can also introduce security vulnerabilities into code bases. This mirrors the concerns long associated with Open Source software, where the “many eyes” approach to security does not always prevent vulnerabilities from slipping through. AI-generated code still requires a security review before it enters the code base.
  • Teams developing with AI code generation may bypass corporate policies and use unsanctioned tools, making oversight difficult if not impossible. This echoes the early days of Open Source, when few executives were aware that their development teams were incorporating such source libraries into proprietary code, let alone the extent of that use.
  • In our experience, client organizations use between six and 20 different security testing tools. While intended to ensure comprehensive security coverage, the more tools introduced into the development workflow, the more complex that workflow becomes. One major issue caused by today’s tool proliferation trend is an increase in “noise”: irrelevant or duplicative results that bog down development teams. The result can be a significant drain on efficiency, as security teams must sift through irrelevant findings to distinguish genuine threats.

Development teams should view the challenges above not as insurmountable obstacles, but as opportunities for positive change. To effectively navigate the evolving landscape of DevSecOps, we recommend several key strategies:

  • Tool consolidation and integration: Reducing tool sprawl can significantly mitigate the issue of noise, streamline processes, and centralize results for better analysis and efficiency.
  • Embracing automation: Automating security testing processes, particularly the management of testing queues and the parsing and cleansing of results, can significantly reduce the burden on security teams and minimize the impact on development speed.
  • Establishing AI governance: With the widespread adoption of AI tools, organizations must establish clear policies and procedures for their use in development. This includes investing in tools specifically designed to vet and secure AI-generated code, to address concerns about vulnerabilities and potential licensing conflicts.
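To make the result-cleansing idea concrete, the sketch below shows one way duplicative findings from multiple scanners might be collapsed once they are normalized into a common shape. The `Finding` schema and tool names here are hypothetical illustrations, not any particular product's format; in practice each scanner's output (e.g. SARIF) would first need its own parser.

```python
from dataclasses import dataclass

# Hypothetical common schema for normalized scanner output.
@dataclass(frozen=True)
class Finding:
    tool: str       # which scanner reported the issue
    rule_id: str    # e.g. a CWE identifier or tool-specific rule
    file: str
    line: int
    severity: str

def dedupe(findings):
    """Collapse findings that point at the same underlying issue,
    regardless of which tool reported them."""
    seen = {}
    for f in findings:
        # Fingerprint deliberately ignores the reporting tool, so the
        # same flaw flagged by two scanners counts once.
        key = (f.rule_id, f.file, f.line)
        seen.setdefault(key, f)  # keep the first report, drop the noise
    return list(seen.values())

reports = [
    Finding("scanner-a", "CWE-89", "app/db.py", 42, "high"),
    Finding("scanner-b", "CWE-89", "app/db.py", 42, "high"),   # duplicate
    Finding("scanner-a", "CWE-79", "app/views.py", 10, "medium"),
]
unique = dedupe(reports)
print(len(unique))  # 2 distinct issues from 3 raw findings
```

Even this naive fingerprinting illustrates the payoff: the more scanners feed a shared, deduplicated queue, the fewer redundant triage decisions the security team makes by hand.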