Newly digitized organizations may balk at the sheer volume of incoming data, and this can complicate cloud security management.

As we transition to the cloud and distributed workforces, our approach to security must change. While the conventional security model based on perimeter security still has some validity, the tools and techniques that worked for on-premises IT are no longer fit for purpose.

Although the security operations center model remains necessary, the days of simply increasing headcount to handle potential threats are gone. With a persistent shortage of skilled security professionals, adding more people is not a scalable model.

The deluge of data and the scale of potential threats also mean that security teams must modernize their security operations, security information and event management (SIEM), and security orchestration, automation and response (SOAR) approaches.

Automating security workflows is the obvious next step, but this relies on getting the right process and the right data in place.

Understanding cloud security responsibilities

First, understand your attack surface. This covers all the assets, technology, applications and data that make up your IT estate, and should include an inventory of every point of entry that could be used to access your information. Information security should also account for post-pandemic work styles such as remote and hybrid working.

For cloud-based applications, the attack surface will include new points to consider alongside your existing inventory. To evaluate this, the following questions need to be considered:

  • Access to the infrastructure: who has access and what can they do?
  • Network ingress from the internet and intranet: how does data move around the cloud, and is this access available solely on private networks or public ones?
  • APIs with public and private access: who can access your APIs and where from?
  • Problems within your application: are there any bugs in the software code that can be exploited?
  • Bugs and vulnerabilities in software libraries: are those libraries monitored and updated regularly?
  • Traffic that is not appropriately validated and controlled: can unauthorized individuals send packets to your cloud-based infrastructure, and what happens if they do?

This inventory will give a complete picture of your organization’s infrastructure and data movement and your responsibility levels for these areas.
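
One way to keep such an inventory queryable is to model each entry point as structured data. The sketch below is purely illustrative — the field names (`kind`, `public`, `owner`) and classes are hypothetical, not from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class EntryPoint:
    """One externally reachable point in the attack surface."""
    name: str
    kind: str    # e.g. "api", "database", "vpn"
    public: bool # reachable from the public internet?
    owner: str   # team responsible for this asset

@dataclass
class AttackSurfaceInventory:
    entry_points: list[EntryPoint] = field(default_factory=list)

    def public_entry_points(self) -> list[EntryPoint]:
        # Public-facing assets usually deserve the closest scrutiny.
        return [e for e in self.entry_points if e.public]

inventory = AttackSurfaceInventory([
    EntryPoint("orders-api", "api", public=True, owner="platform"),
    EntryPoint("billing-db", "database", public=False, owner="finance"),
])
print([e.name for e in inventory.public_entry_points()])  # ['orders-api']
```

Even a simple structure like this lets you answer the questions above programmatically — for example, listing every public entry point and who is responsible for it.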

Metrics to track

The first essential metric to track is failed access attempts, especially those that occur rapidly or methodically over time and cover any entry point that has an authentication requirement.

By reviewing normal access patterns for your APIs and other resources, such as file stores and databases, you can spot deviations and investigate them. Similarly, you can rate-limit access requests and apply multi-factor authentication for sensitive assets, making it harder for attackers to automate their attacks. Then:

  • Monitor the volume of traffic moving through your systems. Sudden spikes in volume or anomalies in traffic patterns can alert you to potential malicious activity. Attackers who do gain access may wait before exfiltrating data, so this pattern-based approach can surface issues that warrant investigation.
  • Understanding spikes and bursts in network activity helps as well. A sudden burst with no event expected to trigger an increase should prompt further investigation. It may be an unexpected wave of demand, or it may be something more sinister, but the data points out where to begin.
  • Monitoring the source and destination of traffic can also indicate problems, especially when compared against lists of networks known to host malicious content. Users may need to download files or media for their jobs, but it is equally important to stop unauthorized activity that could play into attackers' hands.
  • Session length is another important metric to track. With more people working remotely, VPN access is more frequent than before, and tracking session lengths can indicate compromise: a network session that remains open far longer than expected could be an unauthorized VPN tunnel set up to transfer data, and so a potential security breach.
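
The spike detection described above can be approximated with a simple statistical baseline: flag any sample that sits several standard deviations above its recent history. This is a sketch of the general technique, with illustrative thresholds, not a production detector:

```python
import statistics

def flag_spikes(samples: list[float], history_size: int = 20,
                z_threshold: float = 3.0) -> list[int]:
    """Return indices of samples far above their rolling baseline."""
    flagged = []
    for i, value in enumerate(samples):
        history = samples[max(0, i - history_size):i]
        if len(history) < 5:
            continue  # not enough baseline yet
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid divide-by-zero
        if (value - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

baseline = [100, 102, 98, 101, 99, 100, 97, 103]  # requests per minute
print(flag_spikes(baseline + [500]))  # [8]
```

The same approach works for session lengths: feed in durations instead of traffic volumes and unusually long sessions stand out against the baseline.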

Cloud system configuration and access control

In addition to consistently monitoring your applications’ data and cloud infrastructure, your focus should also extend to system configuration and access control.

  • Policy violations can expose rule breakers, whether a disgruntled employee or an external attacker. For example, you should have a policy on access to data stores that covers encryption and access control. Regular audits help, but automating the process exposes violations immediately, whether they are deliberate or not, and makes compliance easier.
  • Security certificates should be managed tightly. They must be correctly implemented themselves and renewed promptly: an invalid certificate can break websites and applications, and cause potential security failures—so track and audit all certificates for compliance regularly.
  • Audit user policies and access levels. Some users may have root or superuser access to IT systems; this level of access should be strictly managed using time-based access control and used only when necessary. Auditing this access shows where it is genuinely needed and lets you downgrade accounts that no longer require full control. Any accounts with cloud providers should also have multi-factor authentication applied by default.
  • Monitor access by any third parties or consultants. These accounts will often need as much access to your systems as your employees, so ensure that appropriate access levels are granted and that they are not able to access systems beyond the agreed scope.

Finally, implement policies that regulate how the system manages changes to an employee's or partner's status. Each change in a user's status should trigger an audit of their access rights and the removal of any access that is no longer required.
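
A status-change trigger can be sketched as a handler that compares a user's current grants against what their new role justifies. The role names, permission strings and function below are all hypothetical, for illustration only:

```python
def on_status_change(user: str, new_role: str, grants: dict,
                     role_permissions: dict) -> list[str]:
    """When a user's role changes, revoke any grant the new role
    doesn't justify, and return what was removed for the audit log."""
    allowed = role_permissions.get(new_role, set())
    removed = grants[user] - allowed
    grants[user] &= allowed  # keep only permissions the new role allows
    return sorted(removed)

role_permissions = {
    "engineer": {"repo:read", "repo:write"},
    "contractor": {"repo:read"},
}
grants = {"dana": {"repo:read", "repo:write", "prod:ssh"}}
removed = on_status_change("dana", "contractor", grants, role_permissions)
print(removed)         # ['prod:ssh', 'repo:write']
print(grants["dana"])  # {'repo:read'}
```

Returning the revoked permissions, rather than silently dropping them, keeps an audit trail of exactly what each status change removed.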

If the worst takes place …

For quick response to these attempted breaches, review your Mean Time To Detect (MTTD) and Mean Time To Respond (MTTR).

Ideally, both these metrics should be as low as possible. Automating detection steps and analysis processes can help decrease the MTTD figure by flagging incidents that should be investigated and followed up.
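
Both metrics are simple averages over incident timestamps: MTTD averages the gap between occurrence and detection, MTTR the gap between detection and resolution. A minimal sketch with invented timestamps:

```python
from datetime import datetime

def mean_minutes(pairs: list[tuple[datetime, datetime]]) -> float:
    """Average gap in minutes between two timestamps per incident."""
    gaps = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(gaps) / len(gaps)

incidents = [
    # (occurred, detected, resolved) — illustrative data
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 30),
     datetime(2024, 5, 1, 11, 0)),
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 14, 10),
     datetime(2024, 5, 3, 15, 0)),
]
mttd = mean_minutes([(o, d) for o, d, _ in incidents])
mttr = mean_minutes([(d, r) for _, d, r in incidents])
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")
# MTTD: 20 min, MTTR: 70 min
```

Tracking these figures over time shows whether automation is actually shortening the window between compromise and response.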

Similarly, your security analytics and observability data should help your team to investigate potential risks or outliers in the data faster. Reducing manual processes can help teams be more efficient and effective.

For organizations transitioning to the cloud, managing the sheer volume of data coming in can be the biggest initial hurdle to implementing successful cloud security processes. Once the right processes are in place to deal with that data, they can then be automated.

Setting the right metrics and understanding where that data comes from can make that process itself more successful.