Shut the front door! Why tolerate today’s levels of phishing attacks?
Organised groups of cybercriminals operate very much like any business. They won’t hesitate to deploy the latest technological advances in their pursuit of profit. In that respect, we’re seeing a huge rise in instances where artificial intelligence is used to gain users’ trust. For example, by leveraging tools like ChatGPT, even inexperienced cybercriminals can now write more sophisticated phishing attacks that better emulate actual conversational styles.
Once bad actors’ emails reach end-users’ inboxes, it seems there’s little that IT departments and cybersecurity teams can do. Clever, well-written messages are more likely to deceive victims into acting on threat actors’ instructions and handing credentials to malicious third parties. The last line of defence is staff training to teach employees the signs of a phishing attack, but this is not always effective, particularly when the recipient is distracted or under stress – or in any number of other edge cases.
At the end of the day, phishing attacks will get through to end-user inboxes, and there’s little we can do about it, right?
Not so, says Tim Bentley, Regional Director APAC at Abnormal Security. In an exclusive interview with Tech Wire Asia, he said of the presence of malicious content in any user’s inbox, “It’s been widely accepted that bad email like phishing emails get through to users. In turn, the last three or four years have seen a pretty much new industry – security awareness training – go from strength to strength. Technology had effectively waved the white flag because it can’t deal with the influx of malicious email.”
But what if users never had to decipher whether their emails were legitimate or an attack? What if those phishing emails were stopped before they reached inboxes? The engine at the heart of the Abnormal Security approach to email security is behavioural artificial intelligence, which uses an organisation’s email as a learning corpus to baseline known ‘normal’ behaviour – including user-specific communication patterns, styles, and relationships – and to detect deviations that may indicate malicious activity.
“For example, if I receive an email from a vendor that’s been compromised – I’ve got no idea that vendor has been compromised – but the source IP address is actually from Bulgaria, which doesn’t tally with how that vendor normally deals with me. There’s language in [the email] that indicates an abnormality. There’s banking information that doesn’t line up with their normal bank information[…] and so forth. All those signals can be pulled into making a more informed decision about the legitimacy of the email. […] At our fingertips now, within milliseconds, we have a mountain of evidence to be able to say, ‘well, this is abnormal!’ before it ever reaches my inbox.”
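The signal-weighing Tim describes can be sketched in a few lines. The following is a purely illustrative toy in Python – field names, weights, and the vendor profile are all hypothetical, and nothing here reflects Abnormal Security’s actual model:

```python
# Toy anomaly scorer: weighs deviations from a vendor's known baseline.
# All field names, weights, and thresholds are invented for illustration.

KNOWN_VENDOR = {
    "country": "US",                      # where their mail normally originates
    "bank_account": "12345678",           # payment details on file
    "typical_phrases": {"invoice attached", "per our agreement"},
}

def anomaly_score(email: dict, profile: dict) -> int:
    """Return a 0-10 score; higher means more abnormal."""
    score = 0
    if email["source_country"] != profile["country"]:
        score += 4                        # geo mismatch (e.g. Bulgaria)
    if email.get("bank_account") and email["bank_account"] != profile["bank_account"]:
        score += 5                        # banking information has changed
    if not any(p in email["body"].lower() for p in profile["typical_phrases"]):
        score += 1                        # language doesn't match past emails
    return score

suspect = {
    "source_country": "BG",
    "bank_account": "99999999",
    "body": "Urgent: please wire payment to our new account.",
}
print(anomaly_score(suspect, KNOWN_VENDOR))  # 10 -> block before delivery
```

The point of the sketch is that no single signal is conclusive; it is the combination of small deviations, each cheap to check, that lets a verdict be reached in milliseconds.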
Companies with extended supply chains are particularly vulnerable. Many malicious actors will target smaller companies and use those, once compromised, to attack bigger companies. Tim says, “It’s not a spoofed email, it’s a compromised email, which is much more difficult to detect, because it’s going to pass all the normal authentication methods.” So spotting it, and subsequently blocking it, needs a smart system to look under the surface.
“We’ll take as much intelligence as we can about how people work, and we use that to determine behavior,” said Tim. “For internal employees, it’s not just from their email, but from Microsoft 365 as a whole, as well as any other tools that the customer has integrated, like CrowdStrike and Okta.
“More recently, we started protecting Slack, Zoom and Teams, which give us different insights as well. So, for example, let’s say there’s a CFO based in Singapore, and she travels to Hong Kong and Sydney fairly often. She uses an iPhone 13 and her device for work is a ThinkPad. Now, for the first time, the data is telling us that she has popped up in Nairobi, on an Android device over a protocol that bypasses MFA – Abnormal can detect this anomalous behavior, determine it to be suspicious activity, and then remediate her account. Without that background knowledge, any protective shield can’t be effective.”
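The CFO scenario boils down to comparing each login event against a learned profile. A minimal sketch in Python, with invented field names and a hand-written baseline standing in for what such a platform would learn automatically:

```python
# Hypothetical login-anomaly check. The baseline is hard-coded here for
# illustration; a real system would build it from observed activity.

CFO_BASELINE = {
    "countries": {"SG", "HK", "AU"},       # Singapore, Hong Kong, Sydney
    "devices": {"iPhone 13", "ThinkPad"},
    "protocols": {"modern_auth"},          # protocols that enforce MFA
}

def is_suspicious(login: dict, baseline: dict) -> bool:
    """Flag any login that deviates from the learned profile."""
    return (
        login["country"] not in baseline["countries"]
        or login["device"] not in baseline["devices"]
        or login["protocol"] not in baseline["protocols"]
    )

# Nairobi, unknown Android device, legacy protocol that bypasses MFA:
nairobi = {"country": "KE", "device": "Android", "protocol": "legacy_imap"}
print(is_suspicious(nairobi, CFO_BASELINE))  # True -> remediate the account
```

Without the baseline – the “background knowledge” Tim refers to – every one of those checks would have nothing to compare against, which is why the learning step comes first.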
Note that Abnormal Security doesn’t claim a 100% hit rate.
As Tim says, “You can’t rest on your laurels. But we can raise the bar so high that it becomes an event if something does get through, rather than an acceptance that hundreds [of phishing emails] will get through a day. If any other cyber security defence layer let you down dozens or even hundreds of times a day, you know you’d change it, but somehow, it’s accepted with email.”
As part of the proof of concept, the Abnormal platform spends a week learning from an organisation’s last 90 days of email activity, then shows which emails it would have flagged against those the existing defences let through. It’s a “non-invasive proof-of-concept that connects via API and doesn’t interfere with current processes,” Tim said. “It’s the front door, right? It’s being left ajar at the moment. We’re talking about closing it.”
To learn more, or to start with a proof of concept, contact the local team. You can also download the CISO guide to generative AI attacks and discover how cybercriminals use generative AI tools like ChatGPT to create more effective email attacks.