Charting the course of cybersecurity with AI and ethics as key considerations in a post-pandemic future.
To survive and remain competitive during the COVID-19 outbreak, organizations are being forced to adapt and transform rapidly, while also mitigating the increased security risk associated with a large portion of the workforce working remotely.
The disruption in the way we work, learn and communicate, along with our increased reliance on the internet, exposes individuals and businesses to more vulnerabilities that bad actors are preying on.
It is unfortunate that the COVID-19 pandemic has already triggered a wave of malicious attacks, many of them spoofing emails from health and government institutions. The deployment of new technologies will bring organizations into similarly uncharted waters; hence, it is imperative for us to know how we can prepare ourselves to ensure stability, business continuity and – quite simply – survival.
With AI and machine learning set to revolutionize business technology in general – and cybersecurity in particular – CybersecAsia explored some future AI-related cybersecurity issues with Asheesh Mehra, Co-Founder and Group CEO of AntWorks.
How will the advancement of AI and our growing reliance on the technology impact cybersecurity?
Asheesh: Artificial Intelligence is becoming more deeply embedded in our lives and society as a whole. From accelerating drug development in healthcare and enhancing product quality and efficiency in manufacturing, to improving cybersecurity threat detection, the use cases and potential benefits emerging are virtually limitless. AI will transform industries and the way we look at work as we grow more reliant on the technology.
AI will also play a key role in helping businesses and economies recover from COVID-19 in the coming years.
Yet reports warn that cybercriminals are already using the technology to their advantage to execute elaborate attacks, making cyberattacks more vicious and threatening than ever before.
Acknowledging these concerns, cybersecurity firms and experts have begun incorporating the technology into AI-based cybersecurity solutions and tools that offer smarter detection and prediction to help combat these attacks.
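To make "smarter detection" concrete, here is a minimal sketch of anomaly-based detection on login events using scikit-learn's IsolationForest. The features, synthetic data and thresholds are illustrative assumptions for this example, not a description of any particular vendor's product.

```python
# Minimal sketch of AI-assisted threat detection: flag anomalous login
# events with an unsupervised model. The features and data below are
# illustrative assumptions, not any specific vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per login event: [hour of day, failed attempts, MB transferred]
normal = np.column_stack([
    rng.normal(13, 3, 500),      # business-hours logins
    rng.poisson(0.2, 500),       # occasional failed attempts
    rng.normal(50, 15, 500),     # typical data volume
])
suspicious = np.array([
    [3, 12, 900],                # 3 a.m., many failures, large transfer
    [2, 8, 700],
])

# Train only on "normal" behaviour, then score new events
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# -1 marks an outlier (potential threat), 1 marks an inlier
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # expected: [ 1  1  1]
```

In practice such a model would be one component in a larger pipeline, with human analysts reviewing the flagged events.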
How can ethics guide both AI and cybersecurity to make our societies safer?
Asheesh: With great power comes great responsibility – this saying has never been more relevant, as the use of the technology, for good or bad, lies in the hands of the person who wields it. It is important to understand that AI is not created or programmed to have a mind of its own, as depicted in most sci-fi movies. Instead, it is powered by the data its users feed it.
I firmly believe that ethical and responsible use of AI and automation can prove to be a positive force for humanity. This can only happen when users monitor and control it intelligently. Hence, it is not the technology that needs to be regulated, but rather its use.
Countries across the world will now have a responsibility to construct a framework upon which the responsible application of AI can be built – a practice often referred to as ethical AI. Users, companies and governments across the world need to be held accountable for their use of the technology. Legislators and regulators will have an important role to play here, specifically in determining the applications and parameters for the appropriate use of AI.
While society is still reeling from the pandemic, greater collaboration between technology vendors, institutions and governments is needed to consolidate their expertise and resources and create a safe environment in which we can truly understand and embrace the benefits of AI.
What can businesses do to prepare themselves and protect their valuable assets in this AI-powered decade?
Asheesh: With IDC predicting that 80% of the world’s data will be unstructured by 2025, unstructured data is becoming a major challenge for cybersecurity teams. Most systems are not set up to process this type of data, leaving it vulnerable to unauthorized access and insider threats. Businesses can benefit from automation solutions with a cognitive machine reading (CMR) engine, which can recognize and process large volumes of highly unstructured data quickly and accurately, helping to secure it.
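To make the idea of machine-reading unstructured content concrete, the deliberately simple Python sketch below pulls structured fields out of a free-text email. The sample document, field names and regular expressions are assumptions for illustration; a real CMR engine relies on far more sophisticated models than pattern matching.

```python
# Illustrative sketch of turning unstructured text into a structured record,
# the kind of task a cognitive machine reading engine automates at scale.
# The document format and patterns are assumptions for this example only.
import re

raw_email = """
From: accounts@example-supplier.com
Subject: Invoice INV-2041
Please remit USD 12,450.00 by 2025-07-31 for purchase order PO-88231.
"""

patterns = {
    "invoice_id": r"Invoice\s+(INV-\d+)",
    "amount":     r"USD\s+([\d,]+\.\d{2})",
    "due_date":   r"by\s+(\d{4}-\d{2}-\d{2})",
    "po_number":  r"purchase order\s+(PO-\d+)",
}

# Extract each field, falling back to None when a pattern is not found
record = {}
for field, pattern in patterns.items():
    match = re.search(pattern, raw_email)
    record[field] = match.group(1) if match else None

print(record)
# {'invoice_id': 'INV-2041', 'amount': '12,450.00',
#  'due_date': '2025-07-31', 'po_number': 'PO-88231'}
```

Once the content is structured like this, it can be access-controlled, audited and monitored like any other business record.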
Automation should be seen as an end-to-end journey, not a single business process. The single most important factor in the success of an enterprise automation journey is clean, available data, which also solves the unstructured data challenge. Getting that right achieves the ultimate goal of straight-through processing: automating end-to-end business processes quickly, easily and at scale.
Another key aspect of security is data management. Businesses should invest in automated tools and solutions that can make sense of their data and organize it accurately, so that important files and documents do not get lost on their servers. Automation powered by fractal science, such as AntWorks’ Auto Indexer, ensures that users know where their important data is stored by identifying, classifying and organizing vast volumes of files and documents.
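The sketch below shows the basic identify-classify-organize pattern behind automated indexing in plain Python. The categories and extension-based rules are assumptions chosen for illustration and say nothing about how AntWorks’ Auto Indexer works internally.

```python
# Toy sketch of the identify-classify-organize pattern behind automated
# indexing: scan a folder, tag each file with simple rules, and build a
# searchable index grouped by category.
from pathlib import Path
from collections import defaultdict

# Illustrative categories and extension rules (assumptions for the example)
RULES = {
    "contracts": (".pdf", ".docx"),
    "spreadsheets": (".xlsx", ".csv"),
    "logs": (".log", ".txt"),
}

def classify(path: Path) -> str:
    """Return a category based on the file extension, else 'uncategorized'."""
    for category, extensions in RULES.items():
        if path.suffix.lower() in extensions:
            return category
    return "uncategorized"

def build_index(root: str) -> dict:
    """Walk the tree under `root` and group file paths by category."""
    index = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            index[classify(path)].append(str(path))
    return index

if __name__ == "__main__":
    for category, files in build_index(".").items():
        print(f"{category}: {len(files)} file(s)")
```

A production system would classify on content rather than file extension, but the goal is the same: knowing where every important document lives.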
To help lower the risk of AI-based attacks, it is essential for businesses to begin educating their teams early on the appropriate and inappropriate uses of AI. With help from governments and technology providers, employees can gain a better understanding of the power of AI and its potential for misuse, reducing the risk of exposing themselves to these threats.
It will be a constant struggle for balance. However, through collaborative efforts between individuals and organizations, and by leveraging the advances in AI, machine learning and automation, we can stay ahead in the cybersecurity game.