Not if you read on and find out why even low-sensitivity data needs monitoring and protection …
As global privacy regulations have become more stringent over the last few years, businesses have had to take the problem of “shadow data” more seriously in order to ensure compliance.
Within the past year, several key jurisdictions, including China, Thailand, Indonesia and Sri Lanka, have adopted, or are in the process of introducing, their first comprehensive privacy laws. In Singapore, the authorities are set to issue advisory guidelines on the use of personal data in AI systems by the end of 2023, with ongoing efforts to update data protection measures in response to generative AI tools such as ChatGPT.
The increasingly stringent regulatory environment is prompting organizations to beef up their data discovery and classification protocols in ways that can also provide an easier path for remediating specific data types. Remediation encompasses actions such as deletion, masking, tokenization and data encryption.
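To make those remediation actions concrete, here is a minimal Python sketch of what masking, tokenization and encryption can look like for a single field. The field name, the in-memory token vault and the use of the third-party cryptography package are illustrative assumptions, not a prescription for any particular tool.

```python
# Minimal sketch of three common remediation actions applied to one field.
# The field and vault here are illustrative; the encryption step uses the
# third-party "cryptography" package (pip install cryptography).
import secrets
from cryptography.fernet import Fernet

def mask(card_number: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters with asterisks."""
    return "*" * (len(card_number) - visible) + card_number[-visible:]

_token_vault: dict[str, str] = {}  # token -> original value (a secure store in practice)

def tokenize(value: str) -> str:
    """Swap the real value for a random token; the mapping stays in the vault."""
    token = secrets.token_urlsafe(16)
    _token_vault[token] = value
    return token

def encrypt(value: str, key: bytes) -> bytes:
    """Symmetric encryption with Fernet (AES-128-CBC plus HMAC under the hood)."""
    return Fernet(key).encrypt(value.encode())

key = Fernet.generate_key()
card = "4111111111111111"
print(mask(card))          # ************1111
print(tokenize(card))      # opaque token, reversible only via the vault
print(encrypt(card, key))  # ciphertext, reversible only with the key
```

Deletion is simply removing the record; the other three differ in whether and how the original value can be recovered, which is why classification usually drives the choice between them.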
Most organizations will focus on discovering and classifying data assets perceived to be sensitive in nature or part of critical services, ignoring data in non-production environments, as well as data perceived to be low value or inconsequential. This is a mistake!
What needs protecting? Everything!
In simple terms, data can be classified into three sensitivity levels (illustrated in the sketch after this list):
- High sensitivity: Personally identifiable information (PII) that, if compromised and misused, could lead to identity theft and financial fraud; also data such as financial records or intellectual property whose compromise or destruction in an unauthorized transaction would have a catastrophic impact
- Medium sensitivity: Data intended for internal use only, such as emails or documents that contain no confidential information; its compromise or destruction would be damaging but not catastrophic
- Low sensitivity: Data intended for public use, such as marketing materials or website content
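As a rough illustration of how discovered assets might be tagged with these levels, the sketch below applies simple pattern rules to a piece of text. The patterns and internal-use markers are illustrative assumptions; real discovery and classification tools use far richer detection logic.

```python
# Minimal rule-based tagging sketch: assign one of the three sensitivity
# levels to a text sample. The regex patterns and markers are illustrative
# only, not how a real classification engine works.
import re
from enum import Enum

class Sensitivity(Enum):
    HIGH = "high"      # PII, financial records, intellectual property
    MEDIUM = "medium"  # internal-only emails and documents
    LOW = "low"        # public marketing materials, website content

HIGH_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style identifier
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # payment-card-like number
]
INTERNAL_MARKERS = ("internal use only", "for internal distribution")

def classify(text: str) -> Sensitivity:
    if any(p.search(text) for p in HIGH_PATTERNS):
        return Sensitivity.HIGH
    if any(marker in text.lower() for marker in INTERNAL_MARKERS):
        return Sensitivity.MEDIUM
    return Sensitivity.LOW

print(classify("Customer SSN: 123-45-6789"))               # Sensitivity.HIGH
print(classify("Internal use only: Q3 planning memo"))     # Sensitivity.MEDIUM
print(classify("Visit our website for the new brochure"))  # Sensitivity.LOW
```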
Many businesses assume that once they have classified their data, the next logical step is to put protective measures in place to safeguard the high and medium sensitivity data, while low sensitivity data can be ignored: it is intended for public use, and the company will not be fined if it gets leaked. Right?
This assumption is dangerous. Think of it like a house. If you classify everything in your home according to the value you place on it (high, medium and low), you might be tempted to relax security around the low value items only. But if someone can come into your house whenever they like, it does not matter that they take only the unimportant stuff to begin with: you have given them a huge amount of insight into where and how the high value items are stored.
The same holds true for data. A low-sensitivity data environment may not pose an immediate risk, but the fact that hackers can move in undetected and make a little home for themselves is a significant threat.
Once inside, cybercriminals can perform reconnaissance to identify where all high sensitivity data resides, what the security controls look like, and who the database administrators are. The hackers also have all the time in the world to work out how to exfiltrate data from the environment later. Once they have that information, they can choose their moment to spear-phish the right employee and exfiltrate valuable data.
The bottom line: if security controls are in place only for high sensitivity data and data environments, enterprises have to be in exactly the right place at the right time to stop a breach in its tracks. When hackers are ready to exfiltrate sensitive data, they typically do it in one of two ways: either at speed (over hours, days or weeks) with a quick exit, or over a long period (sometimes years), during which vast amounts of data are drip-fed out of environments with poor security and monitoring controls.
Monitoring must therefore cover all data and data access, including low sensitivity data and data environments, because those environments are precisely where hackers undertake their reconnaissance.
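What that monitoring can look like in practice is sketched below: a simple check that flags unusual access to a notionally low value database, such as an unknown principal, a bulk read, or a query that probes permissions rather than fetching data. The log format, account names and threshold are hypothetical; a real deployment would rely on database activity monitoring or a SIEM rather than a script like this.

```python
# Minimal sketch: flag unusual access to a "low value" database against a
# simple baseline of who normally touches it. The log format, account names
# and threshold are illustrative assumptions, not a production detection rule.

# Hypothetical access-log records for a low sensitivity marketing database.
access_log = [
    {"user": "web_app",     "action": "SELECT",      "rows": 120},
    {"user": "web_app",     "action": "SELECT",      "rows": 95},
    {"user": "jsmith",      "action": "SELECT",      "rows": 40},
    {"user": "contractor7", "action": "SELECT",      "rows": 25_000},  # bulk read
    {"user": "contractor7", "action": "SHOW GRANTS", "rows": 0},       # permissions probe
]

KNOWN_USERS = {"web_app", "jsmith"}  # established baseline of normal access
BULK_READ_THRESHOLD = 5_000          # rows per query considered unusual here
RECON_ACTIONS = {"SHOW GRANTS", "SHOW TABLES", "DESCRIBE"}

def find_suspicious(log):
    """Return events worth reviewing: unknown principals, bulk reads, or
    queries that probe schema and permissions rather than fetch data."""
    return [
        event for event in log
        if event["user"] not in KNOWN_USERS
        or event["rows"] > BULK_READ_THRESHOLD
        or event["action"] in RECON_ACTIONS
    ]

for event in find_suspicious(access_log):
    print("review:", event)
# Nothing here is "high sensitivity", yet the unknown account and the
# permissions probe are exactly the reconnaissance signals worth catching early.
```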
Stopping hackers before they strike
Discovery and classification of data has become the cornerstone of understanding the business risk data poses to any organization. It is also central to maintaining compliance worldwide and to overcoming some of the challenges around data security.
Yet it would be a mistake to equate regulatory compliance with high quality data security. Good data security means that no data is ignored, even if it has been classified as low sensitivity and low risk. Those low-sensitivity environments are where hackers live, watch, learn, and wait for the perfect moment to pounce and steal valuable assets.
For businesses committed to building robust data security within their environments, the objective should not be merely reactive intervention but proactive identification of threats before they strike. Achieving this demands equal vigilance in monitoring data of every sensitivity level.