AI has made deepfakes realistic and believable: time for a Zero Trust approach and changes in corporate culture and awareness training.
Since 2017, IT security professionals have been debating the threat of emergent deepfake audio and video. What started as amateurish experimentation soon grew into a sizeable cybersecurity concern.
The risks involved are high: ranging from defamation (early deepfakes mocked or abused celebrities) and political manipulation (e.g., by publishing fake material immediately before elections to discredit political opponents), to a whirlpool of fake news that could even result in business fraud or stock market manipulation.
Infamously, deepfake technology fooled the CEO of a UK-based energy firm into transferring approximately US$243,000 to the bank account of a Hungarian supplier. The CEO only found out later that fraudsters had used AI to imitate the voice of his boss. This was the first noted instance of an AI-generated voice deepfake used in a scam, and it paints a very scary picture of the security threats such attacks could pose.
Worrisome malicious trend
The rise of deepfake technology is worrisome, as the risk spectrum is extensive and has potential to encourage widespread misrepresentation, false news and distrust.
Such malicious technology can even undermine national security. If comedian Jordan Peele was able to flawlessly ventriloquize Barack Obama in a parody of a public service announcement back in 2018, imagine the destructive societal and economic impact today’s advancements in deepfake technology could have.
Parties with bad intentions, especially those that are state-sponsored, may now find it easy to leverage deepfakes to spread negative propaganda to an unsuspecting public, leading to irreversible social trauma.
AI technology has been a key factor in the proliferation of deepfakes due to its exponential year-on-year development. This year, the use of AI by cybercriminals will likely cause deepfakes to progress from isolated, obscure attacks to a rapid and ubiquitous wave of attacks.
Such a worrying trend may inevitably cause deepfake technology to reach an industry inflection point, much like how we saw the rise of ransomware in 2016.
Zero trust vs deepfakes
Enter the Zero Trust security framework, an approach IT can use to enable secure access to all applications from any device, not only by establishing trust between the device and an application at the time of login, but also by continuously evaluating trust at every touchpoint.
By embracing a “never trust, always verify” mindset and rethinking how confidential and sensitive company data is accessed, a Zero Trust framework lets each organization design its own principles of network access and detect hacking attempts before real damage is done.
While a “never trust, always verify” mindset can sound like IT lockdown, done right it is transparent to the user and can even contextually apply appropriate, familiar security measures when potential deepfake behavior is detected, much like those seen in consumer apps, such as multi-factor authentication and a limited number of password attempts.
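The continuous-evaluation idea above can be sketched in a few lines. This is a minimal, illustrative toy only: the names (`TrustSignals`, `evaluate_request`, `RISK_THRESHOLD`) and the signal weights are hypothetical assumptions, not part of any specific Zero Trust product.

```python
# Toy sketch of risk-based, continuous trust evaluation in the spirit of
# Zero Trust. Every request is scored, not just the login; a sensitive
# request with weak signals triggers step-up authentication (e.g., MFA).
from dataclasses import dataclass

RISK_THRESHOLD = 2  # hypothetical cutoff for requiring step-up auth


@dataclass
class TrustSignals:
    known_device: bool        # is this a managed/registered device?
    usual_location: bool      # does the location match the user's pattern?
    request_is_sensitive: bool  # e.g., a large wire transfer


def evaluate_request(signals: TrustSignals) -> str:
    """Re-evaluate trust at every touchpoint, not only at login."""
    risk = 0
    if not signals.known_device:
        risk += 1
    if not signals.usual_location:
        risk += 1
    if signals.request_is_sensitive:
        risk += 1
    # High-risk requests get a contextual, familiar challenge (MFA)
    # instead of a blanket lockdown.
    return "step_up" if risk >= RISK_THRESHOLD else "allow"


# A sensitive request from an unknown device triggers step-up auth:
print(evaluate_request(TrustSignals(False, True, True)))   # step_up
# A routine request from a known device on a usual network is allowed:
print(evaluate_request(TrustSignals(True, True, False)))   # allow
```

The point of the sketch is the shape of the decision, not the scoring: real deployments combine many more signals, but the principle of re-scoring every request stays the same.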
However, beyond implementing a Zero Trust approach, there are other practical steps organizations can take to future-proof themselves against the rise of deepfake technology. Here are five of them:
- Be vigilant: Security teams need to be well-acquainted with new deepfake-related threat intelligence and always keep an eye out for suspicious activities on their radar.
- Always have a plan: In case of an attack, incident response workflows and escalation procedures need to be well established and understood by internal teams, especially top management, IT security, finance, legal, and PR.
- Increase awareness: Make deepfakes and their repercussions a regular part of security awareness campaigns. The workforce should always stay alert and be comfortable enough to raise any concerns regarding information they receive, even credible audio/video content. To start, these trainings should be conducted for high-risk employees such as C-level management, middle management, and the finance department.
- Create internal channels: Establish workflows that allow workers to verify information involving a potential deepfake threat. Much like the two-step authentication that has become the de facto standard for access security, double-checking critical information is now imperative, especially in this age of AI-generated misinformation. As a backup, there should also be a second channel for verifying critical or urgent information, especially since attackers usually put time pressure on the task. Employees need an easy-to-use multi-channel communication tool they can rely on even during a crisis.
- Rethink corporate culture: Most of the time, employees do not question orders from their seniors, and this deference is the first and easiest lever for attackers to exploit. That is why it is important to re-evaluate the corporate culture and aim for a flatter hierarchy in which employees do not hesitate to ask for confirmation before executing a task.
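The dual-channel verification workflow described above can be made concrete with a small sketch. Again, this is an illustrative assumption, not a real tool: the names (`PendingRequest`, `confirm`, `is_executable`) and the two-channel rule are hypothetical.

```python
# Toy sketch of out-of-band verification: a critical request (such as a
# wire transfer ordered by voice or video) is not executed until it has
# been confirmed over at least two independent channels.
from dataclasses import dataclass, field


@dataclass
class PendingRequest:
    description: str
    confirmed_channels: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation received on a named channel."""
        self.confirmed_channels.add(channel)

    def is_executable(self) -> bool:
        # Require two independent confirmations, e.g. the original email
        # plus a callback to a known phone number. A single channel can
        # be deepfaked; compromising two at once is much harder.
        return len(self.confirmed_channels) >= 2


req = PendingRequest("urgent wire transfer requested by 'the CEO'")
req.confirm("email")
print(req.is_executable())                 # False: one channel is not enough
req.confirm("callback_to_known_number")
print(req.is_executable())                 # True: verified out of band
```

The design choice worth noting is that the gate sits on the action, not on the message: even a perfectly convincing voice clone fails because the callback goes to a number the attacker does not control.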
Since there are no robust solutions that can combat deepfakes in real time, organizations need to not only be aware of the risks but also prepare for them in advance. This will help contain the threat as much as possible.