Here are three broad strategies that can help industry leaders make a sustainable silk purse out of a sow’s ear…

Instead of trying (and failing) to manage something big that is out of their control, leaders can harness its pros and control its cons. How?

  1. Augment AI processes with regulatory oversight

    As a critical first step, governments need to continually update and refine data protection and related policies to govern GenAI. Existing frameworks such as GDPR, HIPAA, and PCI DSS already provide strong foundations in other sectors, and financial services providers should work to embed their principles within their own policy frameworks. Remember: corporations still hold ultimate responsibility for securing corporate data.

  2. Build workforce awareness

    Security training is integral to effective AI policies. Collaboration between non-technical leadership and security teams ensures that policies align with organizational objectives. Equally crucial is soliciting input from employees who work directly with AI tools: understanding their specific use cases and addressing potential workflow disruptions and security risks.

    Training initiatives must also emphasize clarity and simplicity. For example, one simple, memorable rule: users should not share anything with these tools that is classified as sensitive or restricted. Taking a proactive approach to risk assessment and fostering a culture of vigilance will help ensure responsible GenAI use.
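    To make that rule concrete, here is a minimal sketch of a pre-submission check that blocks prompts carrying a sensitive or restricted classification label. The BLOCKED_LABELS set, the get_classification lookup, and the bracketed-tag convention are illustrative assumptions; a real deployment would hook into the organization's own data classification or DLP service.

    ```python
    # Hypothetical sketch: block prompts labeled "sensitive" or "restricted"
    # before they reach a GenAI tool. Labels and the lookup are assumptions --
    # adapt them to your organization's data classification scheme.

    BLOCKED_LABELS = {"sensitive", "restricted"}

    def get_classification(text: str) -> str:
        """Placeholder for an org-specific classifier or DLP lookup.
        Here we only check for an explicit label tag in the text."""
        for label in BLOCKED_LABELS:
            if f"[{label}]" in text.lower():
                return label
        return "public"

    def safe_to_submit(prompt: str) -> bool:
        """Return True only if the prompt carries no blocked label."""
        return get_classification(prompt) not in BLOCKED_LABELS

    if __name__ == "__main__":
        print(safe_to_submit("Summarize our public press release."))        # True
        print(safe_to_submit("[restricted] Q3 board deck financials ..."))  # False
    ```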

  3. Use the right tools

    As with any emerging technology, GenAI can expose organizations to new or unpredictable security risks. The most effective way to protect against data leakage through generative AI is to audit usage.

    Most organizations already have firewalls and proxies that monitor who is accessing what and how often, logging user activity every time someone connects to a site on the internet. This should be supported by security appliances that offer even deeper insight by measuring bytes-in and bytes-out. That kind of broad network visibility lets IT leaders see whether users are sending out more bytes than they should, in the form of data uploaded to large language models, equipping management to assess risk quickly and address the problem swiftly.
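    As an illustration, the following sketch sums bytes-out per user from a proxy log and flags anyone uploading unusually large volumes to known GenAI endpoints. The CSV column names, the domain list, and the 10 MB threshold are assumptions; map them to whatever your firewall or proxy actually exports.

    ```python
    # Hypothetical sketch: flag users whose uploads (bytes-out) to known
    # GenAI endpoints exceed a daily threshold, based on proxy log records.
    # Log format, field names, domains, and threshold are all assumptions.

    import csv
    from collections import defaultdict

    GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
    BYTES_OUT_THRESHOLD = 10 * 1024 * 1024  # flag anyone sending > 10 MB

    def flag_heavy_uploaders(log_path: str) -> dict[str, int]:
        """Sum bytes-out per user across GenAI domains; return offenders."""
        totals: defaultdict[str, int] = defaultdict(int)
        with open(log_path, newline="") as f:
            # assumed columns: user, dest_host, bytes_in, bytes_out
            for row in csv.DictReader(f):
                if row["dest_host"] in GENAI_DOMAINS:
                    totals[row["user"]] += int(row["bytes_out"])
        return {u: b for u, b in totals.items() if b > BYTES_OUT_THRESHOLD}

    if __name__ == "__main__":
        for user, sent in flag_heavy_uploaders("proxy_log.csv").items():
            print(f"Review {user}: {sent} bytes sent to GenAI endpoints")
    ```

    A report like this does not prove data leakage on its own, but it tells management exactly where to look first.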