Cybersecurity today is described variously as an AI arms race, spy-versus-spy warfare, the AI-versus-AI frontier… Here is what businesses need to watch out for.
Cyber risk has reared its head higher than ever. The stakes keep rising, and cyber-defenders appear to be falling behind the attackers.
In a world of AI-enhanced cyber-attacks, what should organizations in Asia Pacific look out for in their efforts to increase cyber-vigilance and cyber-resilience against cyberthreats?
CybersecAsia sought out some insights from Chris Thomas, Senior Security Advisor, APJ, ExtraHop.
What key factors do you see driving the intensification of the AI arms race in cybersecurity?
Chris Thomas (CT): While AI’s potential is exciting, it needs some form of control, since it can be used with both good and ill intent. Unfortunately, hackers are using AI to execute more sophisticated and costly cyberattacks, increasing both the frequency and complexity of the cyber threats targeting organizations.
There are three emerging AI tactics that threat actors are adopting to infiltrate organizations:
- Increasing use of AI by attackers to write malware and phishing messages
- Deployment of rogue chat programs to distribute malicious code and exfiltrate data
- Targeting of APIs by attackers to steal the data transmitted between applications
This intensification of the arms race underscores the necessity for an intelligent AI-based cybersecurity strategy and tools that can protect against these types of threats.
What are some key AI-enhanced cyber-attacks and what strategies can governments and organizations employ to mitigate the risks?
CT: Since ChatGPT’s release in late 2022, attackers have discovered ways to exploit it, including writing new types of malware such as mutating code specially designed to evade endpoint detection and response (EDR) systems. In the months that followed, ChatGPT and similar AI services implemented safety filters to prevent malicious activity. However, these filters can be bypassed: ChatGPT has, for instance, been tricked into writing attacker tools.
The dark web also gives threat actors access to unregulated AI applications, which can write solid malware code and hacking tools without any guardrails in place. Gone are the days when frequent spelling and grammatical errors exposed phishing and smishing attempts. Now, attackers use AI to automate tasks such as generating highly personalized, error-free fraudulent messages, which makes it even more challenging to avoid falling prey to cyber-attacks.
Governments and organizations must acknowledge this evolving threat landscape and adopt proactive strategies to defend against these emerging risks. As these types of threats advance, organizations must invest in tools that will build business resilience.
Why are APIs considered hidden weak points in cybersecurity?
CT: To steal data, threat actors target vulnerabilities in APIs. APIs often handle user data, such as usernames and passwords, which are accessed when users log into web-based applications. If a website exposes a user’s ID, attackers can easily deduce other users’ ID numbers and compromise those accounts.
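To illustrate the kind of weakness Thomas describes, here is a minimal, hypothetical sketch of an API with predictable user IDs; the framework (Flask), routes, and data are illustrative assumptions, not a reference to any real product.

```python
# Hypothetical sketch: predictable IDs let an attacker who sees /users/1001 try 1002, 1003, ...
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

USERS = {
    1001: {"name": "alice", "email": "alice@example.com"},
    1002: {"name": "bob", "email": "bob@example.com"},
}

# Vulnerable pattern: any caller who guesses an ID gets the record back.
@app.route("/api/v1/users/<int:user_id>")
def get_user_vulnerable(user_id):
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    return jsonify(user)

# Safer pattern: object-level authorization -- the record must belong to the caller.
@app.route("/api/v2/users/<int:user_id>")
def get_user_checked(user_id):
    if user_id != g.get("authenticated_user_id"):  # assumed to be set by auth middleware
        abort(403)
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    return jsonify(user)
```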
By design, APIs are exposed to various attacks, and API development done without the skills to incorporate web and cloud API security standards will likely introduce vulnerabilities. “Fuzzing” is a common attack, in which attackers send large garbage strings to a website or API to trigger overflows in the code and expose sensitive data.
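As a rough illustration of that fuzzing technique, used here as a defender might when testing their own API, the sketch below sends oversized garbage strings and flags responses that suggest unhandled input. The endpoint URL and parameter name are placeholders.

```python
# Rough fuzzing sketch: send long garbage strings and watch for server errors.
import random
import string

import requests

TARGET = "https://api.example.com/v1/search"  # hypothetical endpoint you own

def random_payload(length: int) -> str:
    """Build a long garbage string of printable characters."""
    return "".join(random.choice(string.printable) for _ in range(length))

for size in (1_000, 10_000, 100_000):
    payload = random_payload(size)
    resp = requests.get(TARGET, params={"q": payload}, timeout=10)
    # 5xx responses or stack traces in the body suggest the input was not handled safely.
    if resp.status_code >= 500 or "Traceback" in resp.text:
        print(f"Possible unhandled input at payload size {size}: HTTP {resp.status_code}")
```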
Furthermore, weak authentication methods or session ID cookies can also lead to API-related breaches. Other frequent attacks include cross-site scripting (injecting malicious scripts into trusted sites), server-side request forgery, and SQL injection.
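For the SQL injection class of attack in particular, the standard defence is parameterized queries. A minimal sketch follows; the table and column names are illustrative only.

```python
# Minimal sketch: parameterized queries keep user input out of the SQL statement itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_supplied = "1 OR 1=1"  # a typical injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
# rows = conn.execute("SELECT * FROM users WHERE id = " + user_supplied).fetchall()

# Safer: the driver binds the value as data, not as SQL.
rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_supplied,)).fetchall()
print(rows)  # [] -- the injection string is treated as a literal value and matches nothing
```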
How should organizations adopt an innovative approach to threat detection using AI/ML capabilities to effectively manage their cyber risks?
CT: Organizations should leverage the advanced capabilities of artificial intelligence (AI) and machine learning (ML) in security platforms to hunt for possible data breaches, identifying unusual and malicious behavior across their networks and responding to threats in real time.
Cloud-scale machine learning gives organizations scalable insights and global coverage across their networks. As AI and ML evolve, they enable continuous improvement and adaptation to new threats by refining detection algorithms against historical data. Organizations should put in place a security platform that can pick up on whatever tactics attackers may employ.
For instance, if an attacker infiltrates an organization’s network using AI-generated malware, compromised chat services, or API vulnerabilities, the security platform must be able to track their lateral movement and reconnaissance, and alert on data theft.
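As a generic illustration of this kind of ML-based behavioral detection (not ExtraHop’s actual implementation), the sketch below trains an unsupervised model on historical network-flow features and flags activity that deviates from the baseline; the feature set and thresholds are assumptions.

```python
# Generic sketch of ML-based network anomaly detection (not any vendor's product).
# Train on historical flow features, then flag behavior that deviates from the baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per flow: bytes sent, bytes received, duration (s), distinct destination ports.
baseline = rng.normal(loc=[5e4, 2e5, 30, 3], scale=[1e4, 5e4, 10, 1], size=(5000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one flow resembling normal traffic, one resembling bulk exfiltration / scanning.
new_flows = np.array([
    [5.2e4, 1.9e5, 28, 3],     # looks like the baseline
    [9.5e6, 1.0e4, 600, 250],  # huge upload, long-lived, many ports -- suspicious
])
print(model.predict(new_flows))  # 1 = normal, -1 = anomaly
```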
Additionally, organizations have to proactively address vulnerabilities and surface risks before they can be exploited. This can be done by fostering a culture of innovation and investing in AI- and ML-driven cybersecurity solutions, making organizations more resilient against emerging threats.
What obstacles do organizations encounter when implementing such security measures, and how can we ensure these measures are comprehensive?
CT: One of the most common obstacles organizations face when implementing security measures is the belief that their current measures are adequate. In ExtraHop’s Cyber Confidence Index 2024, an overwhelming majority (98%) of IT and cybersecurity decision-makers said they are confident in their organizations’ ability to manage cyber risk, yet most acknowledged that they frequently fall victim to ongoing threats and are falling behind in identifying and remediating them.
In Singapore, for instance, organizations face a multitude of barriers holding them back from managing cyber risk effectively, citing insufficient budgets (22%), immature risk management processes (18%), outdated technology (16%), and more.
In response to these challenges, almost half (49%) of the respondents agreed that using AI and machine learning to help manage and mitigate cyber risk is a top priority for their organizations in 2024.
The issue we often see in the cybersecurity industry today is the huge asymmetry between the effort required to attack successfully and the effort required to defend successfully. Millions of attacks can now be launched every second, and the attacker only has to succeed once. However, as AI and ML come into wider play in cybersecurity tools, they will shift this imbalance of power back toward the defenders.