How AI can cause machine learning-powered cyber-attacks

Why AI-powered cyber-attacks are just a matter of time

  • AI and machine learning tools could go a long way in helping to fight cybercrime. But these technologies aren’t a silver bullet, and could also be exploited by malicious hackers
  • While AI-based attacks are still relatively rare, they have huge potential to grow

In 2017, the WannaCry ransomware attack hit more than 200,000 computers in over 150 countries, marking the beginning of a new era in cyberattack sophistication. Its success lay in its ability to move laterally through an organization in a matter of seconds, encrypting data and paralyzing machines as it went, and the incident went on to inspire multiple copycat attacks.

This cycle of “innovation” will continue, and according to Forrester’s Using AI for Evil report, “mainstream artificial intelligence (AI)-powered hacking is just a matter of time”. After all, the tools of AI, from text analytics to facial recognition to machine learning (ML) platforms, are transforming almost every aspect of business, from personalized customer engagement to cybersecurity.

Yet, just as businesses can benefit from using AI for a variety of endeavors, malicious actors can use these same technologies for nefarious purposes. Yes, the AI revolution has begun, but don’t be fooled into assuming these are tools only the good guys have.

Offensive AI: a paradigm shift in cyberattacks

Cyberattacks are becoming more ubiquitous, and it is inevitable that AI will change their nature. Almost no sector is immune; in fact, the sophistication of the threats organizations face is continually increasing.

Frankly, computer systems that can learn, reason, and act are still in their infancy. Machine learning requires huge data sets, and many real-world systems, such as driverless cars, demand a complex blend of computer vision sensors, real-time decision-making software, and robotics.

Still, as AI becomes simpler for businesses to deploy, giving it access to information and allowing it any measure of autonomy brings serious risks that must be considered.

The risks AI poses

If you received an email supposedly from your boss that emulated their writing style and even used some pertinent information, wouldn’t you be more likely to open it? That is the threat AI-assisted phishing poses. AI has the potential to automate intrusion techniques, launching attacks at unprecedented speed: after automatically profiling multiple targets’ communication patterns, it could generate phishing messages that mimic them.
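
To make this concrete, here is a minimal, hypothetical sketch of the style-profiling step. Even a simple word-level Markov chain trained on a few of a target’s past messages can produce text that loosely echoes their phrasing; the sample messages and function names below are invented for illustration, and a real attacker would use a far more capable language model.

```python
import random
from collections import defaultdict

def build_style_model(messages, order=2):
    """Build a word-level Markov chain from a target's past messages."""
    chain = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def mimic(chain, length=12):
    """Generate text that statistically resembles the training messages."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        candidates = chain.get(tuple(out[-len(key):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# Hypothetical sample of a target's writing, for illustration only.
samples = [
    "Hi team, please review the attached report before Friday.",
    "Hi team, please send the updated figures before the call.",
    "Please review the contract and send your comments before Friday.",
]
model = build_style_model(samples)
print(mimic(model))
```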

Additionally, AI-powered malware could move more easily through an organization, using machine learning to probe internal systems without giving itself away. By analyzing network traffic, it could blend its own communications into the other communications happening on the network, hiding in plain sight.
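
As a toy illustration of this “blending in”, consider command-and-control beaconing. A fixed beacon interval is easy for anomaly detection to flag, whereas malware that samples its delays from the inter-arrival times it observes in legitimate traffic is far harder to spot. The timestamps and function names here are hypothetical, assumed purely for the sketch.

```python
import random

def observe_intervals(packet_timestamps):
    """Compute inter-arrival times (seconds) from observed legitimate traffic."""
    return [b - a for a, b in zip(packet_timestamps, packet_timestamps[1:])]

def next_beacon_delay(observed_intervals):
    """Sample a delay that statistically resembles normal traffic,
    rather than a fixed interval that anomaly detectors flag easily."""
    return random.choice(observed_intervals) * random.uniform(0.9, 1.1)

# Hypothetical timestamps of benign packets seen on the network.
timestamps = [0.0, 1.4, 2.1, 5.8, 6.0, 9.7, 12.3]
intervals = observe_intervals(timestamps)
print(f"next beacon in {next_beacon_delay(intervals):.2f}s")
```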

The time is now for intelligence and espionage services to embrace AI in order to protect national security as cybercriminals and hostile nation-states increasingly look to use the technology for nefarious purposes.

According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. Despite these warnings, Microsoft says its internal studies find that most industry practitioners have yet to come to terms with adversarial machine learning.
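
To illustrate the adversarial-sample attack Gartner names, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights, input, and perturbation budget are all invented for illustration; against a real model the perturbation would be computed from that model’s own gradients, but the core one-line idea is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" logistic-regression classifier (weights are invented).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)  # probability of class 1

# A legitimate input the model classifies correctly.
x = np.array([0.8, -0.5, 0.3])
y = 1  # true label

# FGSM: nudge the input in the direction that increases the loss.
# For logistic loss, the gradient with respect to the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w
epsilon = 0.8  # perturbation budget (large, since this toy model is tiny)
x_adv = x + epsilon * np.sign(grad_x)

# The small, structured perturbation pushes the input across the
# decision boundary, flipping the model's prediction.
print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```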

To put this in context: in the US in 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than US$654 billion. In 2019, this had increased to an exposure of 4.1 billion records.