With generative AI being abused internationally by cybercriminals, the ‘village’ needed to tap AI’s positive potential has to be global.
Thanks to the emergence of generative AI (GenAI), cybercriminals and fraudsters can create realistic deepfakes and cloned voices for vishing (voice phishing) to trick people into revealing sensitive data or releasing company funds to their “CEO” or other business contacts.
Apart from its potential use by cybercriminals, GenAI also poses serious threats to intellectual property rights, data privacy, and ethics. Left unchecked, the deployment of AI tools and use of data to train AI models can result in biases, anomalies, and negative brand impact.
On the other side of the coin, the benefits of tapping AI can be boundless, ranging from more accurate threat detection to actionable analysis of threat intelligence.
Using AI against AI-powered threats
AI can automate routine processes and derive meaningful insights from large datasets. Patch management, for instance, can be scheduled to run automatically, ensuring that organizations plug critical vulnerabilities in a timely manner and stay on top of potential threats.
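As a minimal sketch of that automation idea, the snippet below polls a host daily for packages with pending upgrades and raises an alert. The Debian/Ubuntu host type, daily cadence, and alert hook are assumptions for illustration; a production setup would feed a central patch management or vulnerability management tool instead.

```python
# Minimal sketch: a scheduled check for pending OS package upgrades on a
# Debian/Ubuntu host (assumed for illustration). A real deployment would
# report into a central patch management tool rather than print to stdout.
import subprocess
import time

CHECK_INTERVAL_SECONDS = 24 * 60 * 60  # hypothetical daily cadence


def pending_upgrades() -> list[str]:
    """Return the list of packages apt reports as upgradable."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line of apt's output is a "Listing..." header; skip it.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]


def notify_security_team(packages: list[str]) -> None:
    """Placeholder alert hook; in practice this would open a ticket or page on-call."""
    print(f"{len(packages)} packages have pending upgrades:")
    for pkg in packages:
        print("  ", pkg)


if __name__ == "__main__":
    while True:
        outstanding = pending_upgrades()
        if outstanding:
            notify_security_team(outstanding)
        time.sleep(CHECK_INTERVAL_SECONDS)
```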
Furthermore, threat detection models built on GenAI and deep-learning techniques will enable businesses to stay ahead of evolving online attacks. Computer vision and natural language processing, for instance, provide a deeper understanding of the markers that give phishing baits away.
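To make the natural-language angle concrete, here is a minimal sketch of a lexical phishing classifier in Python using scikit-learn. The tiny training set is invented purely for illustration; a real detector would be trained on large labelled mail corpora with far richer features, and possibly a transformer model rather than TF-IDF.

```python
# Minimal sketch: a lexical phishing classifier built with scikit-learn.
# The tiny training set below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account is locked, verify your password immediately",  # phishing
    "Urgent wire transfer needed, reply with bank details",       # phishing
    "Minutes from yesterday's project meeting attached",          # legitimate
    "Lunch menu for the team offsite next week",                  # legitimate
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF captures the word-level markers a phishing lure tends to rely on
# (urgency, credentials, payment), which the linear model then weights.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

suspect = ["Immediate action required: confirm your banking password"]
print(model.predict_proba(suspect))  # probability the message looks like a lure
```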
AI can also be integrated with infrastructure security solutions to better protect application programming interfaces (APIs) and enforce access controls. Cybersecurity training can likewise be enhanced with AI-powered attack simulation.
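As one hedged illustration of AI-assisted API protection, the sketch below uses an unsupervised IsolationForest to flag anomalous API traffic windows. The feature choice (request rate, payload size, error ratio) and the baseline figures are assumptions made for illustration, not a prescribed schema.

```python
# Minimal sketch: flagging anomalous API traffic windows with an
# unsupervised model. Features and baseline values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests per minute, avg payload bytes, fraction of 4xx/5xx responses]
baseline_traffic = np.array([
    [12, 850, 0.02],
    [15, 900, 0.01],
    [10, 780, 0.03],
    [14, 910, 0.02],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_traffic)

# A burst of small, error-heavy requests, typical of credential stuffing or scraping.
new_window = np.array([[400, 120, 0.45]])
print(detector.predict(new_window))  # -1 means the window is flagged as anomalous
```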
Harnessing the power of AI for security goes a long way towards helping security teams manage an increasingly overwhelming workload spread across a multitude of applications, security monitoring tools, and incident reporting and response channels.
This is even more pertinent given that security teams today are likely understaffed amid the talent crunch, and struggling to keep up with an increasingly sophisticated and fast-changing threat landscape.
It will still take some time before the gap in security talent and resources is filled. Until then, organizations should continue to harness the power of AI to better combat threat actors that are already using AI to launch attacks.
How one country does it
While the above approach is easier said than done, one country has proactively taken steps to steer itself towards ethical and responsible use of AI via various regulatory frameworks and best practices.
For example, the country recently updated its National AI Strategy to “uplift and empower” its people and businesses, recognizing that AI may lead to more profound challenges, including moral and ethical issues, with wide-ranging implications for regulation and governance. Among other measures:
- Rather than impose strict rules that may stifle innovation, the country underscores the need to strike a pragmatic balance. It will create regulatory sandboxes as a catalyst for AI innovation to flourish, while implementing the necessary guardrails to ensure this does not create systemic risks.
- As AI development progresses, these rules must continue to evolve as the government adapts to technological changes.
- Businesses in the country can also play their part by putting in place an AI and data governance framework, tailored to their unique needs, to guide their adoption of the technology, including GenAI.
- While there is currently no regulatory framework specifically for AI in cybersecurity, organizations can refer to the National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0 and Cybersecurity Framework v1.1 to build their own AI governance model. The former aims to incorporate trustworthiness considerations into the design, development, and use of AI systems, while the latter provides best practices on cyber protection.
- IT/cybersecurity vendors involved in AI projects need to work closely with clients to help mitigate potential security risks during development and deployment. To do this, they should test clients’ AI systems extensively for vulnerabilities through penetration testing and adversarial attacks, both before and after deployment, to ensure the AI models are resilient against malicious inputs (a minimal example of such an adversarial check follows this list).
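The following is a minimal sketch of such an adversarial check using the fast gradient sign method (FGSM) in PyTorch. The toy model, input shape, and perturbation budget are placeholders; a real assessment would target the client's actual model, data, and threat model.

```python
# Minimal sketch: an FGSM robustness check of a model against adversarially
# perturbed inputs. Model, input, label, and epsilon are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # illustrative input sample
y = torch.tensor([1])                        # its (assumed) true label

# Compute the gradient of the loss with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# Perturb the input in the direction that most increases the loss.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

clean_pred = model(x).argmax(dim=1).item()
adv_pred = model(x_adv).argmax(dim=1).item()
print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
```

If the prediction flips under such a small perturbation, that is a signal the model may need adversarial training or input hardening before it goes into production.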
It is critical, then, for organizations not only to be aware of the risks of AI, including its rapidly evolving implications for cybersecurity, but also to take proactive measures to mitigate those risks as they adopt and deploy AI.
Such a vigilant approach, guided by a comprehensive, well-defined AI governance framework, will be critical for organizations to achieve the right balance between the risks and rewards when it comes to AI and cybersecurity.