Besides bad actors leveraging AI for social engineering and other cyberthreats, organizations’ use of AI can pose problems for their cybersecurity postures.
Zscaler’s 2024 AI Security Report found that AI transactions skyrocketed by nearly 600% worldwide as adoption of AI tools became widespread.
Yet many organizations see GenAI tools as more of a threat than an opportunity: enterprises block 18.5% of these transactions over data loss and privacy concerns.
Is blocking the only way for organizations to protect themselves? Can organizations enjoy the productivity benefits of AI tools without sacrificing security?
To address these questions, CybersecAsia called upon Heng Mok, CISO, APJ, Zscaler, for his expert perspectives.
Many organizations in Asia Pacific are leveraging AI to stay ahead of the competition. In what ways can the use of AI tools affect an organization’s cybersecurity posture?
Heng Mok (HM): The integration of AI tools into organizational processes undoubtedly brings numerous benefits in terms of efficiency, productivity, and competitiveness. However, it’s crucial to acknowledge the impact of AI tools on cybersecurity posture, as they introduce both opportunities and challenges.
The utilization of GenAI tools within enterprises introduces significant risks. The first concern lies in the protection of intellectual property and non-public information. There’s a substantial risk of data leakage, posing threats to sensitive data and proprietary information.
These risks also highlight an expanded attack surface, creating new avenues for potential breaches, along with the emergence of new threat delivery vectors. Additionally, there’s an increased supply chain risk associated with the integration of AI tools into organizational ecosystems.
Moreover, the reliance on AI introduces concerns regarding data quality and integrity as more data is generated from the use of GenAI tools. Addressing these data concerns is crucial to maintaining the effectiveness of cybersecurity measures amidst the integration of AI tools.
Another notable concern is the potential misuse of AI by cybercriminals to orchestrate more sophisticated and targeted attacks. As highlighted in Zscaler’s recent AI security report, the combination of AI and social engineering exploits can lead to a surge in cyber breaches, characterized by enhanced quality, diversity, and quantity. Compounding this is the emergence of AI models without built-in ethical guardrails, which lets adversaries automate their operations and lowers the barrier of entry to identifying vulnerabilities and building exploits and malware packages.
However, embracing AI to fight against AI-driven threats represents a proactive approach to cybersecurity, as leveraging AI technologies enables enterprises to enhance threat detection and response capabilities, mitigating risks effectively.
AI tools can significantly enhance an organization’s cybersecurity posture by bolstering threat detection and response capabilities. With AI-driven analytics, organizations are empowered with the ability to sift through vast amounts of data to identify potential threats in real-time, enabling proactive mitigation before they escalate into full-blown breaches. Additionally, AI-powered security solutions can adapt and learn from evolving threats, staying ahead of adversaries in a dynamic threat landscape.
To reap the benefits of AI usage, it is important to adopt a comprehensive and proactive approach to address associated risks. A Zero Trust framework, accompanied by continuous verification mechanisms, can help reduce their attack surface, prevent lateral movement of threats, and lower the risk of data breaches. By implementing robust security measures, promoting cybersecurity awareness, and fostering a culture of collaboration and innovation, organizations can harness the power of AI while safeguarding their digital assets against evolving threats.
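To make the “never trust, always verify” idea concrete, the sketch below shows a minimal Zero Trust-style policy check in Python. Every request is re-evaluated against identity, device posture, and context; nothing is allowed based on network location alone. The field names and threshold are illustrative assumptions, not Zscaler’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # identity: has the user completed MFA this session?
    device_compliant: bool   # device posture: managed, patched endpoint?
    geo_risk_score: float    # context: 0.0 (normal) to 1.0 (highly anomalous)

def evaluate(request: AccessRequest, risk_threshold: float = 0.7) -> str:
    """Return 'allow', 'step_up', or 'deny' for every request, every time."""
    if not request.device_compliant:
        return "deny"       # non-compliant devices never reach the application
    if not request.mfa_verified:
        return "step_up"    # force re-authentication before granting access
    if request.geo_risk_score >= risk_threshold:
        return "step_up"    # anomalous context triggers continuous re-verification
    return "allow"
```

Because the decision is made per request rather than per network session, a compromised device or anomalous sign-in context is caught mid-session instead of riding an existing trust grant.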
How should organizations in Asia Pacific safely and securely adopt AI tools? What must be put in place to address AI’s impact on their security postures?
HM: As technology advances and evolves, the secure implementation of AI tools within organizations is crucial to effectively manage cybersecurity risks. Most importantly, enterprises must adopt a “never trust, always verify” mindset by embracing Zero Trust Architecture. This approach challenges traditional notions of trust within networks, advocating for continuous verification of identities and devices to reduce attack surfaces and prevent lateral movement of threats.
Beyond Zero Trust, there are several considerations for enterprises:
- Comprehensive risk assessment: Before integrating AI tools, organizations need to conduct a thorough risk assessment, identifying potential vulnerabilities and security gaps. This assessment should encompass data privacy, regulatory compliance, and the specific security implications of AI implementation.
- Data protection measures: Given the sensitivity of data processed by AI algorithms, robust data protection measures must be in place. This includes encryption of data at rest and in transit, access controls, and regular audits to ensure compliance with data protection regulations like GDPR and local data protection laws.
- Employee training and awareness: Human error remains one of the biggest cybersecurity risks. Providing comprehensive training and awareness programs for employees, especially those involved in AI implementation and data handling, is essential to mitigate risks associated with phishing, social engineering, and inadvertent data exposure.
- Ongoing threat and risk assessments: Conduct regular assessments to ensure that AI systems and security measures remain up to date against the latest threats and vulnerabilities. This includes patch management, vulnerability scanning, tabletop exercises, automated assurance, and red teaming to identify and address weaknesses proactively.
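As a small illustration of the data protection point above, the sketch below masks sensitive tokens in a prompt before it leaves the enterprise boundary for a GenAI tool. The patterns are deliberately simplistic, illustrative assumptions only; production DLP engines use far richer detection than a handful of regexes.

```python
import re

# Illustrative patterns only -- real DLP policies cover many more data classes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Mask sensitive tokens in a prompt; return the cleaned text and findings."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt, findings
```

The list of findings can feed the same audit trail used for the 18.5% of GenAI transactions that enterprises currently block outright, allowing a redact-and-allow policy instead of a blanket ban.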
By taking these factors into consideration, organizations in Asia Pacific can safely harness the power of AI tools, while effectively managing associated security risks.
What do you see as likely upcoming AI threats, and how should organizations address them?
HM: Given the rapid advancement and use of AI technology, we foresee several potential AI threats on the horizon that organizations should prepare to address proactively.
Firstly, the utilization of GenAI by cybercriminals is likely to lead to an increase in attacks. GenAI tools enable hackers to identify vulnerabilities and orchestrate attacks with unprecedented speed and sophistication. This could result in a surge of cyber breaches characterized by enhanced quality, diversity, and quantity, while lowering the barrier of entry for adversaries to automate and streamline their operations.
To address this, organizations should prioritize enhancing their cybersecurity posture by implementing robust defense mechanisms, such as Zero Trust Architecture and continuous monitoring, to mitigate the risks posed by AI-driven threats.
Secondly, as AI becomes more prevalent in cyber-attacks, we anticipate the emergence of AI-assisted threats, including sophisticated phishing campaigns, evasive malware, and amplified attacks in terms of speed and scale.
To address these threats, organizations should invest in AI-powered security solutions that can effectively detect and respond to evolving threats in real-time. Additionally, leveraging AI for threat intelligence and predictive analytics can help organizations stay one step ahead of cyber adversaries.
Moreover, emerging deepfake technology poses serious threats, including election interference and misinformation spread. AI has been implicated in deceptive tactics during US elections, such as generating robocalls to discourage voter turnout. These instances likely represent just a fraction of AI-driven disinformation.
State-sponsored entities may also exploit AI to undermine trust in electoral processes. Recent incidents, including viral deepfake images of celebrities like Taylor Swift, highlight the real-world impact and rapid spread of manipulated content before moderation measures catch up.
Organizations must prioritize data governance, implementing encryption, access controls, and data loss prevention solutions to protect sensitive information. Taking a proactive stance against upcoming AI threats involves deploying advanced security measures, utilizing AI-powered technologies for threat detection and response, and emphasizing data privacy and governance.
Collaboration with industry partners and cybersecurity experts is crucial for staying informed about emerging threats and best practices in mitigating AI-related risks effectively.