The risks and potential damages from cyberattacks, “unforeseen” tech outages, and bugs in the algorithms need to be clearly understood
AI is increasingly being used in stock market punting and financial planning activities alike. As the automated intelligence earns the confidence of speculators, the stakes could run higher: the life savings of entire families and individuals are directly at risk.
Even bespoke wealth management services, which typically cater to high-net-worth individuals, are getting in on the game: AI is making it possible to offer their services to a much broader group of people.
Advisors can develop customer profiles and deliver personalized plans based on age, assets, risk appetite, goals, and needs in a fraction of the time, which means firms can offer these services to more people, including those not savvy enough to understand exactly what risks they are opting in to. This represents a new market for wealth managers, but also a larger risk pool.
Where is the AI threat?
We must always remember that threat actors are using AI, too. The technology offers attackers the exact same benefits:
- It is a force multiplier that allows them to increase the scale and effectiveness of their campaigns.
- They can even poison the AI model itself to reveal sensitive information or deliver malicious results.
- Employees who are not adequately trained can inadvertently expose sensitive information by entering it into AI tools, which may then incorporate it into their training data. We have already seen instances of this invalidating intellectual property claims.
Therefore, security controls have to be integrated into the entire AI lifecycle, including employee training. Before using any AI tool, organizations must understand the privacy classification of all the data that could be fed into a system; the source of the data used to train the AI tools; and the specifics of the security protocols in place to protect sensitive information. This must be part of the AI rollout from day one.
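To make this concrete, here is a minimal Python sketch, not a definitive implementation, of a data-classification gate sitting in front of an AI tool. The classification labels, the classify_record logic, and the call_model stub are hypothetical placeholders for an organization's real classification service and vendor API.

```python
# Hypothetical sketch: check a record's privacy classification before it is
# ever sent to an external AI tool. All names and rules are assumptions.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # e.g. client PII, trade secrets

# The most sensitive label this particular AI tool is approved to receive.
MAX_ALLOWED = Classification.INTERNAL

def classify_record(record: dict) -> Classification:
    """Stand-in for the firm's real data-classification service."""
    if "account_number" in record or "tax_id" in record:
        return Classification.RESTRICTED
    if record.get("internal_only"):
        return Classification.INTERNAL
    return Classification.PUBLIC

def call_model(prompt: str, context: dict) -> str:
    """Stub for the vendor AI API; replaced in a real deployment."""
    return f"(model response to: {prompt!r})"

def send_to_ai_tool(record: dict, prompt: str) -> str:
    label = classify_record(record)
    if label.value > MAX_ALLOWED.value:
        # Block and log rather than silently feeding sensitive data to the model.
        raise PermissionError(f"Record classified {label.name}; not approved for this tool")
    return call_model(prompt, context=record)

print(send_to_ai_tool({"internal_only": True}, "Summarise this holding."))
```

The design choice worth noting is that the gate fails closed: anything above the approved classification is blocked before it leaves the firm's systems, rather than relying on the AI vendor to handle it safely.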
Open AI systems carry even more risk: because they are designed to be accessible to the public, they can learn from a much larger dataset, but that same openness also leaves them exposed to manipulation by bad actors.
Closed systems are more secure, but require more hands-on management and model training.
Fund and wealth managers using AI should be given in-depth training on the tools: how they work, how to use them safely, and which data can be used and which should never be exposed to the large language models (LLMs) that power generative AI (GenAI) applications. When implementing an AI-based solution, it is important to identify the scope of the tool and restrict its data access to what is absolutely necessary to train it.
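As an illustration of that scoping principle, the following Python sketch shows one way data might be minimised and scrubbed before it is used to train or prompt a model. The field names, allowlist, and redaction patterns are assumptions for illustration, not a real schema.

```python
# Illustrative data minimisation before model training: only an explicit
# allowlist of fields leaves the firm's systems, and free text is scrubbed
# of obvious identifiers first. Names and patterns are assumptions.
import re

ALLOWED_FIELDS = {"age_band", "risk_profile", "goal_horizon", "asset_mix"}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
ACCOUNT = re.compile(r"\b\d{8,16}\b")

def scrub(text: str) -> str:
    """Remove obvious identifiers from free text before it reaches an LLM."""
    return ACCOUNT.sub("[ACCOUNT]", EMAIL.sub("[EMAIL]", text))

def minimise(client_record: dict) -> dict:
    """Keep only the fields the model genuinely needs."""
    return {k: scrub(str(v)) for k, v in client_record.items() if k in ALLOWED_FIELDS}

record = {
    "name": "A. Client",          # dropped: not on the allowlist
    "age_band": "45-54",
    "risk_profile": "balanced",
    "goal_horizon": "10y",
    "asset_mix": "60/40",
    "notes": "contact a.client@example.com re: account 12345678",  # dropped
}
print(minimise(record))
```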
Responsible AI in fund management
After developing a comprehensive understanding of the privacy requirements around client information, the source of the model’s data, and the security mechanisms built into the tool, investors and their fund/wealth managers alike need to know (and formally underwrite) the other risks:
- Many AI tools have built-in defenses to protect against unethical use: a good example is ChatGPT’s rules that seek to prevent people from using it for nefarious purposes, such as building malware. However, it is also clear that these rules can be bypassed through cleverly worded prompts that obscure the intent of the user. This is one type of prompt-injection attack, a category of threats unique to AI-based systems.
- Strong controls must be in place to prevent these attacks before they happen. Broadly, these controls fall under the scope of zero trust cybersecurity strategies.
- AI tools, especially those using large language models, should not be treated as typical software tools. They are more like a hybrid between a tool and a user. Zero Trust programs limit access to resources based on a person’s individual job function, scope, and needs, and the same least-privilege logic can be applied to the AI tool itself (see the sketch after this list). This limits the damage an attacker can do by compromising a single employee, because it limits the range of lateral movement.
- Remember that adding any software tool also increases the attack surface by offering more entry points to an attacker. Compromising a tool — such as an AI tool that has unlimited access to personally identifiable information, company secrets, proprietary tools, strategic forecasting, competitive analysis, and more — could be catastrophic.
- Preventing this kind of breach must be a priority from the very beginning, at the forefront of the strategy-level discussions to implement AI tools. After a cybersecurity incident, it is often too late. While most AI tools come with built-in security, organizations need to tailor these to their specific needs, and go beyond them. Despite surface similarities, each organization will have unique use cases, and calibrating defenses to match these dynamics is table stakes for cybersecurity in 2024.
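To make the least-privilege and prompt-screening ideas concrete, here is a minimal Python sketch, not a definitive implementation: it treats the AI assistant as its own narrowly scoped “user” and adds a deliberately crude prompt screen. The role names, data scopes, and regular expressions are all assumptions for illustration; real prompt-injection defenses need layered controls, not keyword matching alone.

```python
# Minimal sketch of treating an AI tool as its own least-privilege "user":
# it gets a narrow role, each data request is checked against that role,
# and prompts pass through a (deliberately simple) screening step.
import re

# The AI assistant's role can read only what it needs for plan drafting.
ROLE_SCOPES = {
    "ai_planning_assistant": {"risk_profiles", "model_portfolios"},
    "human_advisor": {"risk_profiles", "model_portfolios", "client_pii", "trade_history"},
}

SUSPICIOUS = re.compile(r"ignore (all|previous) instructions|reveal your system prompt", re.I)

def can_access(role: str, data_scope: str) -> bool:
    """Zero-trust style check: deny by default, allow only explicit scopes."""
    return data_scope in ROLE_SCOPES.get(role, set())

def screen_prompt(prompt: str) -> str:
    """Crude prompt-injection screen; a real defense needs layered controls."""
    if SUSPICIOUS.search(prompt):
        raise ValueError("Prompt rejected: possible injection attempt")
    return prompt

# Usage: the tool may read model portfolios, but a request for client PII is denied.
assert can_access("ai_planning_assistant", "model_portfolios")
assert not can_access("ai_planning_assistant", "client_pii")
screen_prompt("Draft a balanced plan for a 10-year horizon.")
```

The deny-by-default check is the essence of the Zero Trust point above: compromising the AI assistant (or the employee driving it) exposes only the narrow data scope that role was ever granted.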
AI-driven investors, beware
AI will not replace financial advisors, but it will take the industry to its next stage of evolution, and that means new threats.
The scale of the models and the data they ingest (which grow larger with each passing day) expands the attack surface dramatically, and just one major breach can negate any and all gains a speculator or their fund manager makes by leveraging AI. (Editor’s note: imagine a CrowdStrike-level mistake that causes weeks of investment market mayhem!)
Cybersecurity analysis and control, under a Zero Trust model, is indispensable for unlocking the full potential of any AI-based tool.