You may have been fascinated by YouTube videos demonstrating the advances in ‘liveness’ spoofing: a big step beyond what we used to do with software such as Photoshop to manipulate photos.

Manipulating audio and video to make people appear to say or do things they never actually said or did may seem like a great prank. In fact, a number of such ‘fun’ apps have been developed. But deepfake technology can have serious consequences, especially in the political and criminal arenas.

With the US elections impending in 2020, the ongoing trade war, the explosive rise of e-commerce and digital banking, and events such as the Hong Kong anti-government protests, the use of deepfake technology in cybercrime and cyberwarfare is limited only by our imagination.

Cybersecurity experts share their views on this emerging threat with CybersecAsia:

Deepfake will see increased usage

Jeff Hurmuses, Area Vice President and Managing Director, Asia Pacific, Malwarebytes: We will see an increased use of deepfake technology for malicious purposes. For example, scammers and malware authors will attempt to sabotage electoral candidates or politicians by spreading falsehoods.

There may be more incidents like the controversial video of a Malaysian Minister, or the use of such technology to make women the victims of digital sexual crimes. Deepfakes will be either so subtle or so convincing that considerable digging will be required to determine whether a video is fake.

Regardless of the scamming tactics used, the real threat will be the attacks on our hearts and minds through social media and media manipulation.

Broader deepfake capabilities for less-skilled threat actors

Steve Grobman, SVP and Chief Technology Officer, McAfee: The ability to create manipulated content is not new. Manipulated images were used as far back as World War II in campaigns designed to make people believe things that weren’t true.

What’s changed with the advances in artificial intelligence is that you can now build a very convincing deepfake without being an expert in technology. There are websites where you can upload a video and receive a deepfake video in return. Very compelling capabilities in the public domain can deliver both deepfake audio and video abilities to hundreds of thousands of potential threat actors with the skills to create persuasive phony content.

Deepfake video or text can be weaponized to enhance information warfare. Freely available video of public comments can be used to train a machine-learning model that can develop a deepfake video depicting one person’s words coming out of another’s mouth. Attackers can now create automated, targeted content to increase the probability that an individual or group falls for a campaign. In this way, AI and machine learning can be combined to create massive chaos.

In general, adversaries are going to use the best technology to accomplish their goals, so if we think about nation-state actors attempting to manipulate an election, using deepfake video to manipulate an audience makes a lot of sense. Adversaries will try to create wedges and divides in society. Or a cybercriminal could have a CEO appear to make a compelling statement that the company missed earnings or that a fatal flaw in a product will require a massive recall. Such a video could be distributed to manipulate a stock price or enable other financial crimes.

We predict that the ability of untrained actors to create deepfakes will increase the quantity of misinformation.

Biometrics creating a false sense of security in the enterprise

Lavi Lazarovitz, Security Research Team Lead, CyberArk: With biometric authentication becoming increasingly popular, we’ll begin to see a level of unfounded complacency when it comes to security.

While it’s true that biometric authentication is more secure than traditional, key-based authentication methods, attackers typically aren’t after fingerprints, facial data or retinal scans. Today, they want the access that lies behind secure authentication methods.

So, while biometric authentication is a very good way to authenticate a user to a device, organizations must ensure that every time this happens, the biometric data itself is encrypted and the assets behind the authentication are secure.

Even more importantly, the network authentication token that’s generated must be protected. That token, if compromised by attackers, can allow them to blaze a trail across the network, potentially gaining administrative access and privileged credentials to accomplish their goals, all while masquerading as a legitimate, authenticated employee.
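To make the point concrete, the sketch below shows one common way such post-authentication tokens are made tamper-evident: signing them with a server-held key (HMAC-SHA256) and giving them a short expiry, so a stolen or forged token fails verification. This is a minimal illustration using Python's standard library, not the author's or any specific product's scheme; the key, token format, and TTL are all assumptions for the example.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical server-side signing key; in practice this would live in a
# secrets manager or HSM, never alongside the tokens it protects.
SERVER_KEY = secrets.token_bytes(32)

def issue_token(user_id: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived session token: payload plus HMAC-SHA256 signature."""
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{user_id}|{expiry}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Accept only tokens that are unmodified and not yet expired."""
    try:
        user_id, expiry, sig = token.rsplit("|", 2)
    except ValueError:
        return False  # malformed token
    payload = f"{user_id}|{expiry}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when checking the signature
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered
    return int(expiry) > time.time()  # reject expired tokens
```

A token that is altered in transit, or replayed after its TTL, fails `verify_token`, which limits the window in which a compromised token lets an attacker masquerade as an authenticated employee.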