When not managed well, open source generative AI tools amplify security risk through shadow IT, exposing sensitive data and systems
Across the Asia-Pacific region, organizations are grappling with a complex challenge: how to harness AI’s transformative powers without opening themselves up to unprecedented security risks.
Despite continued investment in traditional data loss prevention tools, organizations have faced a surge in data loss incidents in recent years, a trend expected to keep rising. Information shared with AI tools can expose sensitive data to competitors, putting organizations at risk. More alarming still, emerging technologies can amplify these threats through shadow IT.
Unapproved installation of generative AI (GenAI) tools on work devices, without robust IT security, compliance and legal oversight, dramatically expands the corporate attack surface. A lack of IT visibility into such GenAI usage leaves security risks unmonitored, exposes sensitive corporate data and creates critical security blind spots.
Shadow AI: the trap of unregulated tools
Open source AI models offer several advantages, including transparency, community-driven innovation and customization, which can be tailored to the user’s specific needs.
However, this technological freedom comes with a dangerous flipside. When employees use unapproved GenAI software on work devices without their IT department’s knowledge or consent, they unknowingly open digital backdoors into their organization’s most sensitive information.
The consequences: confidential information may be unknowingly exposed and intellectual property compromised, creating risks of data leakage, misuse and regulatory penalties if left unsecured. According to one survey, 35% of organizations polled had suffered data breaches costing between US$1m and US$20m over the last three years.
Securing data in the GenAI age
Fret not: there is no need to resort to outright bans on AI tools. Instead, robust governance is the key to harnessing AI's power safely.
Rather than restricting innovation, I believe organizations need to take the following strategic steps to secure enterprise data:
- Maintain clear data governance policies: Develop and maintain clear, region-specific AI usage policies that address the unique regulatory landscape of each market your organization operates in. These policies should specifically cover data privacy, security protocols and approved tools that meet compliance standards; they should also be communicated effectively to all employees and rigorously enforced. Organizations must prioritize robust data governance practices that keep pace with rapidly evolving regional regulations, such as Japan's stringent APPI compliance standards and Singapore's new governance framework for GenAI.
- Gain visibility and control: Ensure full observability of the applications employees are using, including open source GenAI tools. Use appropriate tooling to detect and root out shadow IT risks, regaining the control needed to enforce security policies and protect sensitive data.
- Invest in effective employee training: Invest in security awareness training that educates employees about the specific risks associated with shadow IT GenAI tools. This empowers staff to be part of the solution rather than a source of risk.
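As a minimal illustration of the visibility step above, an IT team might match proxy or DNS logs against a watchlist of known GenAI service domains to surface unsanctioned usage. The domain list, log format and function name below are illustrative assumptions for a sketch, not a definitive inventory of GenAI services or a real product API:

```python
# Sketch: flag potential shadow-AI usage by matching proxy-log entries
# against a watchlist of GenAI service domains.
# Assumptions: the watchlist and the 'user domain' log format are
# hypothetical examples, not a complete or authoritative list.

from collections import Counter

# Hypothetical watchlist of GenAI-related domains to monitor.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
    "huggingface.co",
}

def flag_shadow_ai(log_lines):
    """Return a per-user count of requests to watchlisted GenAI domains.

    Each log line is assumed to be 'user domain' separated by whitespace,
    e.g. 'alice chat.openai.com'.
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed entries
        user, domain = parts
        if domain.lower() in GENAI_DOMAINS:
            hits[user] += 1
    return dict(hits)

# Example run: one user hits a GenAI endpoint twice, another once.
sample_log = [
    "alice chat.openai.com",
    "alice intranet.example.com",
    "bob claude.ai",
    "alice chat.openai.com",
]
print(flag_shadow_ai(sample_log))  # {'alice': 2, 'bob': 1}
```

In practice this kind of matching would feed an existing SIEM or CASB workflow rather than a standalone script; the point is that visibility starts with knowing which endpoints your traffic actually reaches.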
The rise of GenAI chatbots and related tools presents extraordinary potential for businesses in the APAC region to boost productivity, but that potential does not come risk-free.
As the AI landscape continues to evolve in the region at breakneck speed, it remains crucial for enterprises to adopt a proactive approach to AI security.