How can advanced technologies minimize or even eliminate human error entirely? A chat with Oracle’s Senior Director of Security for Cloud Infrastructure Development sheds some light …
According to a recent report from Oracle, Security in the Age of AI, “human error” is the top cybersecurity risk for organizations, and client mistakes are predicted to account for 99% of “cloud security failures.” However, companies are still prioritizing investments in people via training and hiring to improve security.
With more and more sensitive data shifting to the cloud, how can organizations hasten their adoption of advanced technologies, such as new software, infrastructure, artificial intelligence (AI) and machine learning (ML), to minimize or even eliminate human error entirely? Or is that even practicable? CybersecAsia met up with the Senior Director of Security for Oracle Cloud Infrastructure Development, Johnnie Konstantas, for some insights.
CybersecAsia: Companies prioritise investing in people via training and hiring to improve security. Will these initiatives help reduce “human error” in all-round corporate security?
Konstantas: Investing in specialist staff and training certainly can have a positive impact on a company’s security posture. For example, our joint Cloud Threat Report with KPMG earlier in the year found that 25% of respondents cited training their staff on new threats and best practices as having the greatest impact on their ability to stay current on adversary tools, tactics, and procedures.
However, security “errors” or misconfigurations were at the heart of some of the biggest breaches of 2019. Failure to patch a known vulnerability, use of weak passwords, or misconfigured servers left unintentionally open to the Internet all point to some failure in the chain of people and processes that manage security. Certainly, some of this can reasonably be attributed to overtaxed security teams with too much to do and too few people to do it, given the global cyberskills shortage. However, staff shortage is not the only factor. Companies need to staff security teams for 24/7 vigilance, and they also need people skilled enough to deal with modern security challenges born of increased complexity, both in the types of threats and in our digital footprint designs.
Many enterprises and organizations have networks that span data centers and multiple public clouds, which means the footprint to be protected has grown significantly. Hiring people skilled in threat analytics, data science, incident response and multi-cloud security operations, among other areas, will help make enterprises more resilient to attack. But more needs to be done than just skilling up: the tools and platforms that network defenders use to protect critical assets must get smarter at closing security holes in an automated way, to help defenders stay ahead of attackers who are themselves using the latest collaboration and automation techniques.
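One way such automated hole-closing can start is with a configuration audit that flags risky rules before attackers find them. The sketch below is a minimal illustration only; the rule format, field names and port list are invented for this example and are not any vendor’s actual API:

```python
# Minimal sketch: flag firewall rules that expose sensitive ports
# to the whole Internet. Rule schema here is hypothetical.

SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def audit_rules(rules):
    """Return the rules that open a sensitive port to 0.0.0.0/0."""
    return [
        rule for rule in rules
        if rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS
    ]

rules = [
    {"name": "web", "source": "0.0.0.0/0", "port": 443},   # intended: public HTTPS
    {"name": "db",  "source": "0.0.0.0/0", "port": 3306},  # misconfigured: open DB
]
print([r["name"] for r in audit_rules(rules)])  # ['db']
```

In a real self-securing platform this kind of check would run continuously against live configuration state and could remediate automatically, rather than just report.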
CybersecAsia: Will that mean AI, being programmed by humans, will also ultimately be susceptible to human error, hype, biases or agendas?
Konstantas: Artificial intelligence (AI) holds a lot of promise not only to automate security response but to automate the decision-making processes that govern security changes, making threat detection and response more predictive. We are at the beginning stages of what may eventually be broad use of AI in cybersecurity, and many cloud service providers have begun to harness its potential in specific security applications and use cases. Attackers, too, are using AI: polymorphic malware, for instance, can refactor itself quickly to evade detection.
One of the keys to successfully outpacing attackers in the use of AI is data. High-volume, accurate data and threat “signal” are necessary to train AI models for accuracy. This is where cloud service providers (CSPs) have a distinct advantage over attackers: as providers of hyperscale cloud services on a massive global footprint, CSPs have access to a lot of high-fidelity operational data with which to create AI-driven security solutions.
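As a toy illustration of why data volume and fidelity matter, even the simplest statistical detector only works if it has enough clean baseline signal to compare against. The counts and threshold below are invented for illustration, assuming hourly failed-login tallies as the signal:

```python
# Toy anomaly detector: flag hours whose failed-login count sits far
# above the baseline. Real AI-driven detection uses far richer models,
# but the dependence on good baseline data is the same.
import statistics

def flag_anomalies(counts, z_threshold=2.0):
    """Return indices of values more than z_threshold std devs above the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # no variation in the baseline, nothing stands out
        return []
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > z_threshold]

# Hourly failed-login counts; the spike at index 5 mimics a brute-force attempt.
hourly_failures = [3, 4, 2, 5, 3, 250, 4, 3]
print(flag_anomalies(hourly_failures))  # [5]
```

With too few samples, or a baseline polluted by noise, the same spike would be indistinguishable from normal variation, which is exactly the advantage a hyperscale provider’s telemetry confers.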
The use of AI for attack and defense is an ongoing tug of war, but I believe that when it comes to AI, the “good guys” may have an inherent advantage.
CybersecAsia: How can organizations minimize or even eliminate human error as a cybersecurity risk, while managing the negative perceptions of workers who feel most vulnerable to AI’s growing allure as a way to cut jobs and costs?
Konstantas: We have to put the automation of security for critical workloads at the forefront of cloud infrastructure design. We are moving to a cloud where many security controls are always on and the platform is self-securing. Oracle is delivering security innovations across our entire cloud stack, with three core attributes that leverage the power of AI and machine learning:
- Self-driving to automatically provision, secure, monitor, back up, recover, tune and upgrade
- Self-securing to automatically apply security patches with no downtime
- Self-repairing to increase uptime and productivity, with 99.995% availability, which amounts to less than 2.5 minutes of planned and unplanned downtime a month
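The availability figure above can be sanity-checked with a bit of arithmetic: 0.005% of a 30-day month is about 2.16 minutes, comfortably under the stated 2.5-minute bound.

```python
# Sanity check: downtime budget implied by 99.995% availability
# over a 30-day month.
availability = 0.99995
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
downtime = (1 - availability) * minutes_per_month
print(round(downtime, 2))                 # 2.16 minutes
```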
The goal is to prevent human error and free up resources so they can focus less on operational problem-solving and more on innovation: on the things that are unique to an organization and give it a strategic advantage.
What is important to keep in mind is that even with greater automation in security decisions, human beings are still needed to preside over the decision process and reverse it if necessary. Security jobs will evolve as the tools and technologies do; new specializations and roles will emerge in the development, implementation and use of AI security solutions, at the very least. These new roles will need to address yet more complex security challenges, so if anything, work in the security sector will become more interesting.
CybersecAsia: Could the rush to hasten adoption of advanced technologies itself be a contributor to, and catalyst for, “human error”? (That is, high-level errors in budgeting decisions, selection of the wrong vendors or solutions, and so on, rather than just operational or procedural oversights.)
Konstantas: First, from my perspective, the use of AI in security has been measured, not rushed. While many security technologies use machine learning, a subset of AI, the use of AI for security itself is still in its infancy. In other words, we have not relegated security decision-making entirely to AI engines and bots.
That said, there is reason to believe that AI can greatly shrink the time window for threat response and even prevent common attacks from ever getting past the first step. In the same way that automated means of flying planes and running trains have sped up travel, AI promises to make cybersecurity more efficient. Time will tell whether AI-driven security will ultimately make security incidents and misconfigurations less common, but the potential is certainly there.
CybersecAsia: Finally, can any unwanted thing, be it human error or spam or crime, ever be completely eliminated? If the answer is yes, are we headed for a dystopian Terminator movie scenario where humans (the root cause of all these acts) are judged to be obsolete and disposable?
Konstantas: I do not believe in absolutes, so to me such an outcome seems more made for Hollywood. However, if we are to theorize, just for fun, about a world where cybercrime is eliminated because of super-evolved AI, then an outcome of peaceful coexistence could also be in the script.
I think whatever the future holds, humans will certainly have a big hand in how AI is built and what trajectory its evolution takes. This to me opens a world of new possibilities for improving quality of life and longevity.
Epilogue: Humankind has pursued the idea of artificial intelligence since the time of the ancient Greeks, though the field was only formally christened in 1956 at Dartmouth College in Hanover, New Hampshire. The next big jump came with the development of computers, then supercomputers, and after that the connected cloud. At every stage of its evolution, humans have glamorized AI, fantasized about it, faked it, feared it, and secretly lusted for its material benefits.
As the technological age reaches a point where depleting natural resources and mounting sociopolitical and commercial pressures demand drastic automation, AI’s relevance will be cemented into dominance. Just always bear in mind the nature of a double-edged sword, and wield it sagaciously!