Here are three broad strategies that can help industry leaders make a sustainable silk purse out of a sow’s ear…
Financial services providers wary of the risks of generative AI (GenAI) may be tempted to ban its use in the corporate environment.
That approach may have had its place in the pre-COVID-19 past, when internet usage was strictly controlled; but with workforces now more distributed and dependent on internet-based tools, implementing a blanket ban could be disastrous.
Sure, financial services providers could block specific apps, but with many more such apps emerging, the effort required to keep blocking them will become unmanageable very quickly.
A silk purse from a sow’s ear
Instead of trying (and failing) to manage something this big and out of their control, leaders can harness its pros and contain its cons. How?
- Augment AI processes with regulatory oversight
As a critical first step, governments need to continually update and refine data protection and related policies to govern GenAI. Existing regimes such as the GDPR, HIPAA, and PCI DSS offer strong foundations in other industries, and financial services providers should work to embed their principles within their own policy frameworks. Remember: corporations still hold ultimate responsibility for securing corporate data.
- Build workforce awareness
Security training is integral to effective AI policies. Collaboration between non-technical leadership and security teams ensures alignment with organizational objectives. Equally crucial is soliciting input from the employees directly engaged with AI tools: understanding their specific use cases and addressing potential workflow disruptions and security risks.
Training initiatives must also emphasize clarity and simplicity. For example, users should not share anything with these tools that is classified as sensitive or restricted; a minimal pre-submission check along these lines is sketched below. A proactive approach to risk assessment and a culture of vigilance will ensure responsible GenAI use.
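To make that rule concrete, here is a minimal sketch of such a pre-submission check. The labels, the card-number pattern, and the function name are hypothetical illustrations, not a prescribed implementation; a real deployment would use the organization's own classification scheme and DLP tooling:

```python
import re

# Illustrative labels and pattern only; a real deployment would use the
# organization's own data-classification scheme and DLP tooling.
BLOCKED_LABELS = {"sensitive", "restricted"}
# Example pattern: 16-digit runs that resemble payment card numbers.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){16}\b")

def may_share_with_genai(text: str, classification: str | None = None) -> bool:
    """Return False if the text carries a blocked label or resembles card data."""
    if classification and classification.lower() in BLOCKED_LABELS:
        return False
    if CARD_PATTERN.search(text):
        return False
    return True

print(may_share_with_genai("Summarize our public press release."))       # True
print(may_share_with_genai("Customer card: 4111 1111 1111 1111"))        # False
print(may_share_with_genai("Q3 forecast", classification="Restricted"))  # False
```

A check this simple is no substitute for enterprise DLP, but it captures the training message in a form both users and reviewers can reason about.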
- Use the right tools
Like any new technology, GenAI can expose organizations to new or unpredictable security risks. The most effective way to protect against data leakage through GenAI is to audit usage.
Most organizations already have firewalls and proxies that monitor who is accessing what and how often, logging activity every time a user connects to a site on the internet. This should be supported by security appliances that offer even more insight by measuring bytes-in and bytes-out. That kind of broad network visibility lets IT leaders see whether users are sending out more bytes than they should, in the form of data uploaded to large language models, and equips management to assess risk quickly and address the problem swiftly.
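As one illustration of such an audit, the sketch below sums bytes-out per user across known GenAI domains from a proxy log. It assumes the log is exported as a CSV with user, domain, and bytes_out columns; the domain list, file name, and threshold are hypothetical examples rather than vendor-specific values:

```python
import csv
from collections import defaultdict

# Hypothetical values: a real deployment would pull domains from the proxy
# vendor's category feeds and tune the threshold to its own traffic baseline.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
BYTES_OUT_THRESHOLD = 5 * 1024 * 1024  # flag users uploading more than ~5 MB

def flag_heavy_uploaders(log_path: str) -> dict[str, int]:
    """Sum bytes-out per user to known GenAI domains; return users over threshold."""
    totals: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, domain, bytes_out
            if row["domain"] in GENAI_DOMAINS:
                totals[row["user"]] += int(row["bytes_out"])
    return {user: sent for user, sent in totals.items() if sent > BYTES_OUT_THRESHOLD}

if __name__ == "__main__":
    for user, sent in flag_heavy_uploaders("proxy_log.csv").items():
        print(f"Review: {user} sent {sent / 1_048_576:.1f} MB to GenAI services")
```

Flagged users are candidates for a conversation, not automatic blocking; unusual bytes-out may just as easily reveal a legitimate workflow worth formalizing.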
Enabling secure innovation
The risks and unpredictability of GenAI are nothing to scoff at. Just as it is futile to expect governments to shoulder the whole responsibility of mitigating the downsides, it is equally futile to rely completely on AI firms to improve the safety of their offerings.
A regressive approach like an outright ban can alienate staff or impel them to find workarounds that exacerbate shadow AI. An approach grounded in proactive adaptation, by contrast, positions financial services providers to leverage this transformative technology for progress. That ultimately hinges on prioritizing people, refining processes, and deploying the right tools.
Then, and only then, can financial services providers harness GenAI well.