We must always remember that threat actors are using AI, too. The tech offers attackers the same exact benefits:

  • It is a force multiplier that allows them to increase the scale and effectiveness of their campaigns.
  • They can even poison the AI model itself to reveal sensitive information or deliver malicious results.
  • Employees that are not adequately trained can inadvertently expose sensitive information through the information they input into AI tools, which subsequently incorporate it into their machine learning activities. We have already seen instances of this invalidating intellectual property claims.

After developing a comprehensive understanding of the privacy mandate of the clientele information, the source of the model’s data, and the native security mechanisms built in, both investors and their fund/wealth managers alike need to know (and underwrite formally) the other risks:

  • Many AI-tools have built-in defenses to protect against unethical use: a good example is ChatGPT’s rules that seek to prevent people from using it for nefarious purposes, such as building malware. However, it is clear also that these rules can be bypassed through cleverly-worded prompts that obscure the intent of the user. This is one type of prompt-injection attack, which is a category of threats unique to AI-based systems.
  • Strong controls must be in place to prevent these attacks before they happen. Broadly, these controls fall under the scope of zero trust cybersecurity strategies.
  • AI tools, especially those using large language models, should not be treated as a typical software tool. They are more like a hybrid between a tool and a user. Zero Trust programs limit access to resources based on a person’s individual job function, scope, and needs. This limits the damage an attacker can do by compromising a single employee, because it limits the range of lateral movement.
  • Remember that adding any software tool also increases the attack surface by offering more entry points to an attacker. Compromising a tool — such as an AI tool that has unlimited access to personally identifiable information, company secrets, proprietary tools, strategic forecasting, competitive analysis, and more — could be catastrophic.
  • Preventing this kind of a breach must be a priority from the very beginning, at the forefront of the strategy-level discussions to implement AI tools. After a cyber security incident, it is often too late. While most AI tools come with built-in security, organizations need to take care to tailor these to their specific needs. They must also go beyond them. Despite similarities, each organization will have unique use cases, and calibrating defense to match these dynamics is table stakes for cybersecurity in 2024.