Deepfakes

Ng advises users to scrutinize any audio or video content, as well as engagements such as video or voice calls. He points out several signals users should watch for and best practices they should adopt:

    • Watch for anomalies in the visuals and audio, such as atypical facial movements or blinking patterns, audio that does not match lip movements, and glitches or noticeable edits around the face.
    • Check and verify the source of the audio or video content. For example, a video from a news channel will likely carry consistent, standard banners and branding, along with a running ticker of breaking news at the bottom. These are all signs of authenticity.
    • For live voice or video calls among colleagues in an organization, it is important to authenticate the participants with three basic factors: something the person has, something the person knows, and something the person is. Ensure that each of these “something” items is chosen wisely (see the sketch after this list).
    • Prioritize biometric patterns where authentication is required, and use multifactor authentication (MFA) whenever possible.
    • Addressing the problem at scale will require significant policy changes that consider both current and future circumstances, and that account for the use of current and previously exposed biometric data.

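As a rough illustration of the three-factor check Ng describes, the Python sketch below combines a knowledge factor (a salted password hash), a possession factor (a one-time code from a registered device), and an inherence factor (a match score from a separate biometric system). It is a minimal sketch under those assumptions; the helper names, threshold, and flow are hypothetical and not drawn from any specific product or vendor quoted here.

```python
import hashlib
import hmac
import secrets

# Hypothetical helpers for illustration only; names and thresholds are assumptions.

def check_knowledge_factor(supplied_password: str, stored_salt: bytes, stored_hash: bytes) -> bool:
    """Something the person knows: compare a salted hash of the supplied password."""
    candidate = hashlib.pbkdf2_hmac("sha256", supplied_password.encode(), stored_salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def check_possession_factor(supplied_code: str, expected_code: str) -> bool:
    """Something the person has: a one-time code from a registered device or token."""
    return hmac.compare_digest(supplied_code, expected_code)

def check_inherence_factor(biometric_match_score: float, threshold: float = 0.95) -> bool:
    """Something the person is: a score produced by a separate biometric matcher (assumed to exist)."""
    return biometric_match_score >= threshold

def authenticate_caller(password, salt, pw_hash, otp, expected_otp, biometric_score) -> bool:
    """Require all three factors before trusting a participant on a sensitive call."""
    return (
        check_knowledge_factor(password, salt, pw_hash)
        and check_possession_factor(otp, expected_otp)
        and check_inherence_factor(biometric_score)
    )

if __name__ == "__main__":
    salt = secrets.token_bytes(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 100_000)
    ok = authenticate_caller("correct horse battery", salt, pw_hash,
                             otp="492817", expected_otp="492817",
                             biometric_score=0.97)
    print("caller verified:", ok)
```

The point of requiring all three checks to pass is that a deepfaked voice or face defeats only the inherence factor; an attacker would still need the victim's password and registered device.
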
“To defend against AI-driven scams, organizations must deploy various measures,” added F5’s Shahnawaz. These measures include:

    • Understanding and documenting sensitive transactions, and implementing rigorous verification processes before executing them (as sketched after this list).
    • Educating staff to discern the legitimacy of communications and identify potential scams.
    • Utilizing AI-driven detection tools to pinpoint and highlight suspicious activities or messages.
    • Securing in-house AI systems with a multi-layered security approach, integrating robust authentication, stringent access controls, and proactive vulnerability management protocols.
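
One way to operationalize the first of these measures is to gate documented sensitive transactions behind an explicit, out-of-band verification step before execution. The Python sketch below is a minimal illustration of that idea; the action names, amount threshold, and transaction fields are assumptions made for the example, not anything F5 prescribes.

```python
from dataclasses import dataclass

# Illustrative policy check: hold documented sensitive transactions until
# they have been independently verified (e.g. via a known phone number).

SENSITIVE_ACTIONS = {"wire_transfer", "payroll_change", "vendor_bank_update"}

@dataclass
class Transaction:
    action: str
    amount: float
    requested_by: str
    verified_out_of_band: bool = False  # confirmed through a separate, trusted channel

def requires_verification(tx: Transaction, amount_threshold: float = 10_000.0) -> bool:
    """Flag documented sensitive actions, or anything above a set amount."""
    return tx.action in SENSITIVE_ACTIONS or tx.amount >= amount_threshold

def execute(tx: Transaction) -> str:
    """Refuse to execute a flagged transaction until it has been verified out of band."""
    if requires_verification(tx) and not tx.verified_out_of_band:
        return f"HELD: {tx.action} from {tx.requested_by} needs independent verification"
    return f"EXECUTED: {tx.action} for {tx.amount:.2f}"

if __name__ == "__main__":
    print(execute(Transaction("wire_transfer", 250_000.0, "cfo@example.com")))
    print(execute(Transaction("wire_transfer", 250_000.0, "cfo@example.com",
                              verified_out_of_band=True)))
```

A check like this is deliberately independent of the channel the request arrived on, so a convincing deepfaked call or video message alone cannot trigger the transaction.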