Generative AI: friend or foe?

Article written by Jeremy Pizzala, EY Asia-Pacific Cybersecurity Consulting Leader

As generative AI tools pass legal exams, write poems, tell jokes, code full websites, and create new recipes, people are beginning to grasp the possibilities and the pitfalls ahead. In the cybersecurity space, such transformative technology demands a thoughtful response.

Does generative AI pose an existential cybersecurity threat?

Ask ChatGPT, and the answer is clear: “As an AI language model, it is against my programming and ethical guidelines to provide instructions on how to carry out malicious activities such as cyberattacks. My purpose is to assist and provide useful information while adhering to ethical and legal standards.” Whether that answers the question truthfully depends on who asks it and why.

The high-wire act that humans tread with technology is not new to the 21st century. Alan Turing proposed the imitation game in 1950, now known as the Turing Test, which assessed a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Wind the clock back to the 19th century, and Charles Babbage’s designs for the Difference Engine and the Analytical Engine, the latter famously programmed on paper by Ada Lovelace, amazed and astonished. Or wind back roughly another century to the advent of mechanized textile machinery, such as the multi-spindle spinning frame known as the “spinning jenny,” and the Luddites who saw mechanization as a threat to their livelihoods.

While the Analytical Engine and the spinning jenny showed the world how machines could replace human hands and enhance efficiency, generative AI is something else altogether.

Most AI tools built on machine learning classify and catalog data, recognize patterns, and predict outcomes based on existing data. But generative AI tools like ChatGPT use deep neural networks, loosely modeled on the human brain, to learn from existing data and then use that learning to generate entirely new content. It’s that new content, produced without human intervention, that is the source of shock and awe.
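To make the distinction concrete, here is a minimal sketch of the two modes, assuming the Hugging Face transformers library (the models shown are illustrative defaults, not tools the author describes): a classifier assigns a label to existing text, while a generative model produces new text from a prompt.

```python
# A minimal sketch contrasting classification with generation.
# Assumes: pip install transformers torch (models download on first run).
from transformers import pipeline

# Traditional, discriminative AI: assign a label to existing text.
classifier = pipeline("sentiment-analysis")
print(classifier("This security patch fixed the outage."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Generative AI: produce entirely new text continuing a prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Generative AI will change cybersecurity by",
                max_new_tokens=25)[0]["generated_text"])
```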

Strengthening the cybersecurity stronghold with generative AI

EY teams are in an early experimental phase with generative AI. We are taking a thoughtful approach, carefully considering how these tools could help clients and people be better at what they do best.

In the cybersecurity space, we can see several advantages of AI that can be summed up as follows:

  1. Threat detection: With the power to analyze vast lakes of data in real time, generative AI can identify patterns, detect anomalies, and provide early warning alerts to security teams (a minimal code sketch of this kind of anomaly detection follows this list). With AI sounding the alarm, tech teams can patch up any holes in a system’s defenses before the enterprise is compromised.
  2. Malware discovery: After evaluating patterns of “normal” behavior, generative AI can build a baseline model and flag deviations that are tell-tale signs of malware.
  3. Vulnerability assessment: AI algorithms can be used to identify potential vulnerabilities in software and IT systems, analyzing code and network traffic to spot potential weaknesses and allowing cybersecurity professionals to prevent attacks proactively.
  4. Predictive analytics: Generative AI’s ability to analyze historical data can be used to predict future cyberattacks and suggest proactive measures.
  5. Automated responses: Machine learning algorithms may identify potential threats, such as distributed denial-of-service attacks, and then automatically block that traffic in real time.
  6. User behavior analytics: Insider threats can be spotted and blocked by identifying patterns that deviate from normal behavior and flagging them for further investigation.
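As a concrete illustration of points 1, 5, and 6, here is a minimal sketch of anomaly detection over network-flow features, assuming scikit-learn and synthetic data (the features, thresholds, and “block” action are all illustrative; a real deployment would consume live telemetry and call a firewall API):

```python
# Minimal anomaly-detection sketch: learn "normal" traffic, flag deviations.
# Assumes scikit-learn; all data here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One row per source: [requests per minute, mean bytes per request].
normal_flows = rng.normal(loc=[50, 1500], scale=[10, 300], size=(500, 2))
ddos_like = rng.normal(loc=[5000, 60], scale=[500, 10], size=(5, 2))
flows = np.vstack([normal_flows, ddos_like])

# Model what "normal" looks like; -1 marks a deviation worth investigating.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(flows)

for i in np.where(labels == -1)[0]:
    rpm, size = flows[i]
    # A production system would call a firewall or WAF API here, not print.
    print(f"ALERT: flow {i} is anomalous ({rpm:.0f} req/min, {size:.0f} B)"
          " - candidate for automated blocking")
```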

Just as generative AI can lighten the load for cybersecurity teams, it can do the same for cybercriminals.

We’ve all seen the series of high-profile stories on inaccurate information and answers that display in-built bias – stories that show us that the technology is far from perfect. But as the European Union’s Agency for Law Enforcement Cooperation, better known as Europol, notes, ChatGPT’s biggest limitation is currently self-imposed. “As part of the model’s content moderation policy, ChatGPT does not answer questions that have been classified as harmful or biased. These safety mechanisms are constantly updated, but can still be circumvented… [but] the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing.”

Fortifying the cyber citadel

Generative AI can already create convincing but entirely fabricated news articles, videos, and images that can spread misinformation and influence public opinion. It can produce fake login pages, phishing emails, and even malware to steal sensitive information such as passwords and credit card details. Online experiments have already shown that ChatGPT can be a talented phisher.

Then there’s the opportunity for cybercriminals to write new malware or attack code. A cleverly presented request to generative AI can elicit malicious code that cybercriminals can then use to automate attacks. A generative AI model trained on a dataset of known vulnerabilities could, hypothetically, write even more powerful malware to exploit those vulnerabilities.

Generative AI currently lacks a fundamental understanding of the meaning behind human language, relying instead on patterns and structures from its extensive training on vast amounts of text. That limitation is a handbrake on hostile intent. I’ve put ChatGPT and its peers to the test myself, asking them to describe a reference security architecture. The results, while useful, were somewhat generic and left room for the “human touch.” I then drew up my own security architecture from memory, and the result was more convincing. For now, human brains can still trump artificial intelligence.
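For readers who want to rerun a version of that experiment programmatically rather than through the chat interface, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable (the model name is illustrative, and this is not the exact setup behind my informal test):

```python
# Minimal sketch of posing the same question through an API.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a cybersecurity architect."},
        {"role": "user",
         "content": "Describe a reference security architecture "
                    "for a mid-sized enterprise."},
    ],
)
print(response.choices[0].message.content)
```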

But given time, will generative AI tear down the cyber walls? It could. But as quickly as cybercriminals can corral AI to create malware and other malicious programs, cyber defenders will step in – using generative AI to fortify their lines of resistance and construct new cyber ramparts.

So, friend or foe? The answer is both – it depends on who is asking the question and why.

The views in this article are the views of the author and do not necessarily reflect the views of the global EY organization, its member firms or Tech Wire Asia.