Celebrated on June 30th, World Social Media Day underscores the profound impact of social networks on communication, business, and daily life. However, this connectivity has also introduced significant risks.
As organizations across the ASEAN region increasingly embrace digital transformation, the pervasiveness of social media has reached unprecedented levels.
Notably, the surge in AI-generated content and deepfake technology has exacerbated the risks associated with social media attacks in today's digital economy. In recent months, deepfakes have featured in social media scams in which fraudsters fabricate videos of high-profile individuals to trick users into parting with money or personal information.
Malicious actors are exploiting deepfakes for various purposes, including disinformation campaigns, financial fraud, and sophisticated social engineering attacks. They then leverage multiple social media platforms to disseminate deepfakes, manipulate perceptions, and extract sensitive information, posing severe risks to both individuals and organizations.
The advent of large language models (LLMs) further exacerbates these risks by enabling the creation of highly convincing and contextually relevant text-based content, which can be used to deceive and manipulate users on a large scale.
For more insights into deepfakes and social media threats, CybersecAsia posed some questions to Steven Scheurmann, Regional Vice President, ASEAN, Palo Alto Networks.
How exactly are deepfakes created and how has the process evolved?
Steven Scheurmann (SS): In essence, deepfakes are created by feeding millions of images of people into a machine-learning system, which then learns to synthesize realistic images of people who don't exist.
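To make that idea concrete, below is a minimal, heavily simplified sketch in PyTorch of the adversarial training loop behind this kind of image synthesis. The tiny network sizes and the random stand-in "dataset" are assumptions for illustration only; a real deepfake pipeline trains far larger models on millions of face images.

```python
# Minimal GAN training loop (illustrative sketch only).
# A random tensor stands in for the real image dataset.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64, 16  # toy sizes, not production values

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),       # outputs a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),          # real-vs-fake probability
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, IMG_DIM) * 2 - 1    # placeholder for real face images
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator learns to tell real images from synthesized ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key dynamic is the contest between the two networks: the discriminator learns to spot fakes while the generator learns to evade it, and at scale that contest yields highly realistic synthetic faces.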
Deepfakes have been around for a long time; however, the creation process has evolved significantly in recent years. Previously, creating a convincing deepfake required substantial computational resources and expertise in AI and machine learning.
Today, advancements in AI and machine-learning technologies, coupled with the availability of powerful computing resources, have democratized the process tremendously.
AI tools and applications like synthetic voice generators, now far more accessible to the general public, have made it easier than ever to create realistic deepfake content. In fact, online tools can also stitch snippets of your voice together, essentially creating new audio assembled from your real voice!
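As a rough illustration of the "stitching" idea, the sketch below concatenates recorded voice snippets into a single clip using Python's standard wave module. The file names are hypothetical, and real voice-cloning tools go much further, synthesizing entirely new speech in a target voice rather than splicing recordings.

```python
# Illustrative sketch of naive audio "stitching": concatenating recorded
# voice snippets into one clip. The .wav file names are hypothetical, and
# the snippets must share the same sample rate and channel layout.
import wave

snippet_files = ["hello.wav", "please_transfer.wav", "the_funds.wav"]

with wave.open("stitched_message.wav", "wb") as out:
    params_set = False
    for path in snippet_files:
        with wave.open(path, "rb") as snippet:
            if not params_set:
                out.setparams(snippet.getparams())  # copy rate/channels/width
                params_set = True
            out.writeframes(snippet.readframes(snippet.getnframes()))
```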
What this means is that individuals with limited technical skills can now create and share convincing content. The problem arises when threat actors use these capabilities to create misleading content as part of their tactics.
How do social media platforms facilitate the rapid development and widespread dissemination of deepfake content?
SS: Social media platforms offer deepfake creators an easy and effective way to reach large audiences. Platform algorithms favor engaging and sensational content, boosting a deepfake's overall visibility and allowing it to go viral in a short period of time.
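A toy ranking function, assumed purely for illustration since real platform algorithms are proprietary and far more complex, shows how engagement-weighted scoring can surface a fast-spreading deepfake above routine posts:

```python
# Toy feed-ranking score (assumed for illustration only). Posts with high
# engagement velocity, such as a sensational deepfake, rise quickly in feeds.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    comments: int
    age_hours: float

def engagement_score(post: Post) -> float:
    # Engagement divided by a time-decay term rewards fast-spreading content.
    return (2 * post.shares + post.comments) / (post.age_hours + 2) ** 1.5

feed = [
    Post("Quarterly earnings recap", shares=40, comments=25, age_hours=10),
    Post("'CEO' announces giveaway (deepfake)", shares=900, comments=450, age_hours=2),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.title}")
```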
Moreover, the vast amount of personal biometric data available on social media, such as photos, videos, and voice recordings, provides ample raw material for creating convincing deepfakes. This easy access to personal data further fuels the development of ever more accurate deepfakes.
What can online users do to protect themselves from deepfake content, and how can organizations fortify their AI models against potential vulnerabilities?
SS: Adopt a "never trust, always verify" mindset. Individuals need to approach online content with caution: always double-check the authenticity of what you see and hear by consulting trusted sources and using tools that can spot potential deepfakes.
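As one simple example of "always verify", the sketch below compares a downloaded clip's SHA-256 digest against a value published by a trusted source. The file name and digest are hypothetical placeholders, and dedicated deepfake-detection tools go well beyond this kind of basic integrity check.

```python
# Minimal provenance check (illustrative): compare a downloaded video's
# SHA-256 digest against a value published by the trusted source.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

published_digest = "..."  # hypothetical value from the publisher's site
if sha256_of("statement_video.mp4") == published_digest:
    print("Matches the published digest")
else:
    print("Digest mismatch: treat the clip as unverified")
```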
It is not just individuals who need to exercise caution, as deepfakes have now entered the boardroom. Earlier this year, fraudsters used deepfake content to dupe a Hong Kong employee into transferring more than $25 million in the belief that his CFO had ordered the transfer. This underscores the need for organizations to take strong, proactive steps to safeguard themselves against deepfake vulnerabilities.
Additionally, with the growing use of AI in the current cyber threat landscape, organizations need to stay one step ahead of threat actors. We believe today's cybersecurity calls for Precision AI, which helps security teams automate detection, prevention, and response. This approach also makes it possible to identify when AI/ML or Generative AI is being used to develop cyber threats like deepfakes.
Educating employees on how to recognize and respond to deepfake threats is also important in strengthening the organization’s overall security posture. By combining technical defenses with informed user practices, enterprises can reduce the likelihood of falling victim to malicious use of deepfake technology and safeguard their operations and reputation effectively.
What does the future landscape of Generative AI look like, and what are its potential effects on cyber-attacks?
SS: There is little doubt that Generative AI has already reshaped the cybersecurity landscape significantly, introducing opportunities and challenges in equal measure. While it lowers the entry barrier for attackers by democratizing access to sophisticated technology and tools, it also helps experts in our field devise innovative solutions to counter these evolving threats.
At Palo Alto Networks, we believe that Precision AI, which combines ML, Generative AI, and deep learning, is the way forward for defense capabilities. It leverages extensive cybersecurity datasets to deliver actionable insights in clear, natural language, facilitating quicker threat remediation and empowering teams to improve readiness against emerging threats.