While containers ensure separation in the application development lifecycle for security, how do we secure the platform itself?

The endless quest for automating everything started with Virtual Machines (VMs).

The ability to run several logical servers on the same physical device unlocked the consolidation of computing resources and reduced waste. Leading IT companies quickly jumped on the bandwagon to leverage the massive economies of scale enabled by this automation and consolidation.

VMs are still the base layer of IT infrastructure today. However, the need for automation and consolidation has moved a notch higher, from the operating system to the application itself. Leading IT practitioners began leveraging Continuous Integration and Continuous Deployment (CI/CD) to reduce the effort of deploying applications, enabling fast iterations and increased reliability.

Containers are the cornerstone of this capability: they allow developers to ship lightweight, self-sufficient application packages that provide enough consistency and replicability for scalability, while ensuring separation for security.

But how different is securing containers from securing other DevOps or application development environments? CybersecAsia sought the expert views and insights of Jerome Walter, Field CISO, APJ, Pivotal.

What are the security risks organizations should look out for when deploying containers in their application development process?

Walter: There are generally two approaches to containers: 

  • Container platforms (e.g. Kubernetes, Docker) provide the underlying infrastructure (operating systems, orchestration) and leave the choice of container content and networking to developers and operators. Developers either build their own image or use a prepackaged “Docker image”.
  • Application platforms (e.g. Cloud Foundry / Pivotal Application Service) build the content of the container through prepackaged “buildpacks”: developers only provide the code of their application.

From a risk perspective, we can identify five areas that security teams should keep an eye on:

  • The content of container images
  • Privileges granted to the container
  • Network segregation
  • Protection of the registry
  • The security of pipelines and code repositories

First and foremost, it is now well documented that images found on public repositories often contain vulnerabilities or, worse, malicious programs (e.g. https://thenewstack.io/new-cryptojacking-worm-found-in-docker-containers/). Security teams at organizations using container platforms should be aware of this critical risk and put controls in place to monitor all containers for vulnerabilities and validate the content of images.
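As an illustration of the kind of control Walter describes, the sketch below lints a Dockerfile for two provenance issues: base images that are not pinned to a digest, and images pulled from registries outside an allow-list. The registry names and the policy itself are assumptions for this example; a real pipeline would pair a check like this with a layer-level vulnerability scanner.

```python
# Hypothetical allow-list; a real organization would maintain its own.
TRUSTED_REGISTRIES = {"docker.io", "registry.example.com"}

def lint_dockerfile(dockerfile_text):
    """Flag base images that are unpinned or pulled from untrusted registries.

    This is a provenance check only; it does not inspect image contents.
    """
    findings = []
    for raw_line in dockerfile_text.splitlines():
        line = raw_line.strip()
        if not line.upper().startswith("FROM "):
            continue
        image = line.split()[1]
        if "@sha256:" not in image:
            findings.append(f"unpinned base image (no digest): {image}")
        # Docker treats the first path component as a registry only when
        # it looks like a hostname; otherwise the image is on Docker Hub.
        name = image.split("@")[0]
        first = name.split("/")[0]
        registry = first if ("." in first or first == "localhost") else "docker.io"
        if registry not in TRUSTED_REGISTRIES:
            findings.append(f"image from untrusted registry: {image}")
    return findings
```

Run on every Dockerfile in CI, a gate like this rejects a build before the image ever reaches the registry.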

Please share some best practices to keep containers secure and to get the best out of them.

Walter: Containers do not, by themselves, enable the speed and scalability required for DevOps. Building a mature platform that developers can use to deliver business value requires combining a multitude of components. Organizations need to balance the need to secure against the imperative to go faster; thus, they must focus on providing their developers with components that are secure and ready to use.

An enterprise-grade container platform provides out-of-the-box mitigations for most of the challenges identified above:

  • Hardened operating systems and containers
  • Reduction of privileges in the container and the control plane
  • A segmented registry, only accessible through APIs
  • A mechanism to scan the images and report vulnerabilities
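To make “reduction of privileges” concrete, here is a minimal sketch that checks a simplified container spec for risky settings. The field names are borrowed from Kubernetes’ securityContext (privileged, runAsNonRoot, allowPrivilegeEscalation), but the plain dict used here is a made-up simplification for illustration, not a real API object.

```python
def check_privileges(spec):
    """Return findings for privilege-related risks in a simplified container spec.

    `spec` is a plain dict loosely modeled on a Kubernetes container's
    securityContext; this is an illustrative admission-style check, not
    a real admission controller.
    """
    sc = spec.get("securityContext", {})
    findings = []
    if sc.get("privileged", False):
        findings.append("container requests privileged mode")
    if not sc.get("runAsNonRoot", False):
        findings.append("container may run as root")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append("privilege escalation is not disabled")
    return findings
```

An admission control step applying rules like these rejects over-privileged workloads before they are ever scheduled onto the platform.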

However, to really leverage the value of containers and cloud-native applications for security, organizations need to rethink some of the traditional practices and cultural habits that are still considered common.

Traditional servers were usually built for the long term, with security patches applied during specific time windows, manual changes requiring review, and obstructive network defenses to compensate for the risk and the difficulty of detecting compromise. Regular breach reports remind us of the inefficacy of this practice: despite extensive security programs, the overwhelming majority of breaches are still due to unpatched software, leaked credentials, or misconfiguration.

On the other hand, the ephemeral nature of the workloads typically running in containers, together with extensive automation, enables a fresh approach to security. Applications designed for continuous change allow new security practices to emerge that leverage IT production tools rather than a separate toolset built around them.

We can see the following practices being deployed successfully among the cloud-native community:

  • Repaving: With automation, servers and containers can be rebuilt from source regularly. This increases the resilience of the platform and wipes clean any snowflake configuration or persistent malicious implant. Some of the biggest banks in the world systematically repave their platforms on a weekly basis.
  • Removing and rotating credentials: As mentioned earlier, service credentials left in code remain an important source of breaches. Scanning code for credentials and keys, and automating the generation of credentials, significantly reduces the risk of leakage or unauthorized access. It also reduces the risk should malware steal the code from a developer’s desktop.
  • Continuous assurance as code: As applications change continuously, traditional paper-based reviews are not efficient. The security teams of leading companies are developing their own programs to test and scan configurations and find potential weaknesses, then working with the platform operators to remediate the risk.
  • Continuous adversarial testing (Bug Bounties, Red teaming, Chaos engineering): While they can significantly improve resilience and security, microservices also increase the complexity of the overall architecture. Leading companies embrace this continuous state of change and focus on improving their velocity to detect vulnerabilities, weaknesses and attacks. 
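The credential-scanning practice above can be sketched with a few regular expressions. The patterns below are illustrative only (real scanners add entropy analysis and many more signatures); the `AKIA` prefix is a real AWS access-key convention, while the rest is a minimal assumption for the example.

```python
import re

# Illustrative patterns only; production secret scanners use many more
# signatures plus entropy-based detection.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "possible AWS access key ID"),
    (re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), "private key material"),
    (re.compile(r"(?i)(password|passwd|secret)\s*=\s*['\"][^'\"]+['\"]"), "hardcoded password"),
]

def scan_source(text):
    """Return (line_number, description) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, description in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, description))
    return hits
```

Wired into a pre-commit hook or CI stage, a check like this blocks a commit before a credential ever reaches the repository, which is exactly where automated, rotation-friendly secret injection should take over.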

How should we bridge the gap between application developers and security practitioners within an organization?

Walter: The core of the change is cultural. Organizations embracing containers want to go faster, often because of a business imperative to become more competitive. Thus, while security professionals must understand the benefits and risks of container architecture, they should also be careful not to impede speed and focus instead on measuring and improving security outcomes with the tools used to deliver value.

As the key DevOps metrics highlighted by the “Accelerate State of DevOps” report from DevOps Research and Assessment (DORA) have demonstrated, focusing on improving shared metrics prompts different teams to work on the underlying bottlenecks in their own processes.

This is a fundamental lesson of Lean. Focusing the organization’s efforts on improving its capacity to find and repair vulnerabilities, repave servers more frequently, rotate credentials, and detect attacks could also create the shared understanding needed to close the divide between developers and security.