According to one survey from fall 2023, respondents were still feeling their way around the technology and its attendant hazards.
The survey polled 1,200 IT and security decision makers worldwide (director level or above, from organizations with more than 1,000 employees) to gauge the understanding and use of Generative AI (GenAI) and large language model (LLM) tools. One key finding was that respondents were struggling to understand and address the security concerns that accompany employee GenAI use.
According to the findings, 73% of respondents indicated that GenAI tools or LLMs were used “sometimes” or “frequently” at work, yet respondents were unsure how to appropriately address the associated security risks.
Respondents were more concerned about receiving inaccurate or nonsensical responses (40%) than about security-centric issues such as exposure of customer and employee data (36%), exposure of trade secrets (33%), and financial loss (25%).
Other findings
Since ChatGPT became widely accessible in November 2022, enterprises have had less than a year to fully weigh the risks and rewards of generative AI tools. Amid this rapid adoption, business leaders need to understand GenAI usage so they can identify gaps in their security protections and ensure that data and intellectual property are not improperly shared. Findings on these trends include:
- 32% of respondents indicated that their organization had banned the use of GenAI tools, a similar proportion to those who were very confident in their ability to protect against AI threats (36%). Another 5% indicated that their employees never used these tools at work.
- 74% of respondents had invested or were planning to invest in GenAI protections or security measures this year.
- 90% indicated wanting the government involved in some way, with 60% favoring mandatory regulations and 30% supporting government standards that businesses can adopt at their own discretion.
- 82% were “very” or “somewhat” confident that their current security stack could protect against threats from GenAI tools. However, fewer than half had invested in technology that helps their organization monitor employee use of these tools.
- 46% had policies in place governing acceptable use, and 42% trained users on safe use of these tools.
Raja Mukerji, co-founder and chief scientist of ExtraHop, the firm that commissioned the survey, said: “As with all emerging technologies we’ve seen become a staple of modern businesses, leaders need more guidance and education to understand how generative AI can be applied across their organization — and the potential risks associated with it.” He added that GenAI can continue to be a force that up-levels entire industries, provided the innovation is blended with “strong safeguards.”