With even tech corporations falling prey to email-based and credential-stuffing attacks, let us brace ourselves for machine-learning warfare.
Email attacks have rapidly increased in volume and sophistication, with well-researched and convincing impersonation attacks accompanying rising cases of account takeovers.
In one instance, a malicious email arriving at the servers of an Australian logistics company was deemed benign by the traditional tools meant to spot such threats. As a result, one employee’s email account was compromised, and the attacker successfully accessed several sensitive files to gather details of employees and credit card transactions.
With this information, the attacker began communicating with others in the organization, sending out over two hundred further emails in a bid to take hold of more employee accounts.
To understand how this attack could have been thwarted, CybersecAsia had a chat with Dan Fein, Director of Email Security Products, Darktrace, on email security and how AI makes a difference.
CybersecAsia: What are some examples of ‘traditional’ email tools (or ‘gateways to stopping advanced threats’) for spotting bad emails?
Dan Fein (DF): Secure email gateways and spam filtering are traditional tools that evaluate each email in isolation and decide whether elements of the email have been observed in previous attacks.
AI-powered defenses, on the other hand, understand emails in the context of users’ wider digital activity. Rather than relying on historical data of known ‘bad’ attacks or known domains and file hashes, AI is able to understand the unique ‘patterns of life’ of email users, as well as the complex web of relationships between them.
Akin to the human immune system, this evolving understanding of ‘self’ for every sender and recipient allows AI-enhanced email security solutions to identify the subtle indicators of novel, sophisticated malicious emails, and then neutralize the threat.
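To make the contrast concrete, here is a minimal sketch of the signature-style logic such gateways rely on; the domains and hash below are hypothetical placeholders, not any vendor’s actual blocklist or implementation:

```python
import hashlib

# Hypothetical blocklists of previously observed 'bad' indicators.
KNOWN_BAD_DOMAINS = {"payr0ll-update.example", "invoice-alerts.example"}
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}  # placeholder attachment hash

def gateway_verdict(sender_domain: str, attachment: bytes) -> str:
    """Signature-style check: only flags what has been seen before."""
    if sender_domain in KNOWN_BAD_DOMAINS:
        return "block"
    if hashlib.md5(attachment).hexdigest() in KNOWN_BAD_HASHES:
        return "block"
    return "deliver"  # a never-before-seen attack sails straight through

print(gateway_verdict("never-seen-before.example", b"novel payload"))  # deliver
```

By contrast, the behavioral approach Fein describes needs no prior sighting of the domain or payload, only a sense of what is normal for each user (sketched later in this piece).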
CybersecAsia: On the topic of credential harvesting: Is there any danger of an email account being taken over when someone creates a different password for that account? For example, using the same email, we have different passwords for Skype and Gmail. If we go to a suspicious site using the same email account but with a newly-created password, do we run the risk of having that email account taken over?
DF: Phishing tactics to source an employee’s email credentials are on the rise, with a surge of fake login pages for Office 365, G Suite, and other flavor-of-the-day SaaS applications being created and sent to unsuspecting users, who unwittingly hand over their credentials. In the victims’ eyes, they are simply signing in for the day as they normally would.
Knowing the email address is half the battle; attackers then simply ask the user for their password via these hoax login pages. Often, knowing one password leads quite easily to the next, because an attacker can learn a user’s style and use it to guide brute-force attempts. Most people rely on mnemonic techniques or common structures when creating passwords, and these are easily copied (e.g. Summer2020! – Winter2020! – !2020Autumn). Attackers can brute-force a range of social media and SaaS application accounts for a single user until they find one that accepts a variation of the known password.
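As a toy illustration of why such structures are fragile (the word lists here are hypothetical, and this is a sketch of the attacker’s logic rather than any real tool), a few lines of code can enumerate every obvious variation of the ‘seasonal’ pattern:

```python
from itertools import product

# Toy sketch: given one leaked password such as 'Summer2020!',
# enumerate the obvious variations an attacker would try first.
seasons = ["Spring", "Summer", "Autumn", "Winter"]
years = ["2019", "2020", "2021"]
symbols = ["!", "!!", "1", ""]

candidates = [f"{s}{y}{x}" for s, y, x in product(seasons, years, symbols)]
candidates += [f"{x}{y}{s}" for s, y, x in product(seasons, years, symbols)]  # e.g. !2020Autumn

print(f"{len(candidates)} guesses cover the entire 'seasonal' pattern")
```

Spread across many different services, a hundred-odd guesses rarely trips any single account’s lockout threshold.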
CybersecAsia: Can geofencing rules for vetting emails be overcome?
DF: Geofencing essentially limits access to digital systems based on a device’s geographical location. However, there are many ways attackers can make it look as if their connection is coming from an approved location, using cheap and easy proxy servers, VPNs, or Tor.
An attacker can create their own proxy on a rented DigitalOcean or AWS micro instance (free of charge or low-cost), paid for using fake details and stolen credit cards. Another common method is to route traffic through compromised machines from the attacker’s own pool of bots.
For example, an attacker could buy access to a legitimate company on the Darknet (such as buying admin credentials), and would then be able to slip past any reputation checks and circumvent geofences.
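A crude geofence check makes the weakness obvious. In this hypothetical sketch (the ip_to_country function stands in for a real GeoIP database lookup), the check only ever sees the last hop of a connection:

```python
ALLOWED_COUNTRIES = {"AU", "SG"}  # hypothetical geofencing policy

def ip_to_country(ip: str) -> str:
    """Stand-in for a real GeoIP database lookup."""
    return {"1.2.3.4": "AU", "5.6.7.8": "RU"}.get(ip, "??")

def geofence_allows(connecting_ip: str) -> bool:
    # Only the last hop is visible: an attacker tunnelling through a
    # cheap proxy or VPN exit in an approved country presents that
    # country's address, wherever they actually sit.
    return ip_to_country(connecting_ip) in ALLOWED_COUNTRIES

print(geofence_allows("5.6.7.8"))  # False: direct connection is blocked
print(geofence_allows("1.2.3.4"))  # True: the same attacker via an AU proxy walks in
```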
To sum up, there is no such thing as 100% security. Basic cyber hygiene, such as training employees to spot simple spoofs, is important, but it has little effect against the sophisticated spear-phishing attacks we see today.
Thank you, Dan, for your insights.
We can take it that an AI cyber defense system continually monitors emails and other forms of communication across the entire digital business, from cloud environments to industrial factory floors, to establish a ‘baseline’ of routine communications reaching the organization.
Any deviation from this baseline can be detected, frozen, and flagged to the relevant people much faster than human vigilance would allow.
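As a heavily simplified stand-in for what such systems do (real products model far richer ‘patterns of life’ than a single volume statistic, and the threshold here is an assumption), a baseline check might look like this:

```python
import statistics

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Crude baseline check: flag activity far outside a user's norm."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a zero spread
    return abs(today - mean) / stdev > z_threshold

# A user who normally sends ~10 emails a day suddenly sends 200.
outbound_per_day = [9, 11, 10, 12, 8, 10, 11]
print(is_anomalous(outbound_per_day, 200))  # True: freeze and alert
```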
With that in mind, attackers are no doubt scheming new approaches to create artificial ‘baseline’ activities and content to feed into AI cyber defense systems and poison the learning. Such ‘artificial intelligence attacks’ or adversarial machine learning may lead to a war of the machines sooner than we think.
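To see why poisoning is plausible, consider a toy baseline that naively retrains on every day it does not flag (again, purely illustrative; real systems are far harder to drift than this). An attacker who ramps up slowly can drag the ‘normal’ along with them:

```python
from collections import deque

# Toy poisoning sketch: flag anything over double the 7-day rolling
# average, but fold every unflagged day back into the baseline.
window = deque([9, 11, 10, 12, 8, 10, 11], maxlen=7)  # ~10 emails/day
ramp = [18, 22, 25, 30, 35, 42, 52, 63, 75, 90, 110, 130, 155, 190]

for volume in ramp:  # the attacker raises the volume a little each day
    threshold = 2 * sum(window) / len(window)
    if volume > threshold:
        print(f"{volume} flagged (threshold {threshold:.0f})")
        break
    window.append(volume)  # unflagged poison is learned as the new normal
else:
    print(f"{ramp[-1]} emails/day now passes as normal")
```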
That is why experts have started lobbying for AI security compliance regimes to ensure cyber defense providers keep a tight rein on the digital war—a feature story for another day in CybersecAsia.net.