The growing influence of ChatGPT in the cybersecurity landscape
- As attackers automate their methods and use AI tools like ChatGPT to improve their tactics, it’s important for defenders to use AI to fight back effectively.
- AI can analyze vast amounts of data in mere seconds, leading to a significant improvement in the mean time to detection and response.
Given the recent attention on ChatGPT, the powerful language model developed by OpenAI, many cybersecurity experts are closely monitoring its potential impact on the field. From students to technology professionals, ChatGPT has become part of everyday vocabulary, highlighting the need for continued examination of its implications for cybersecurity.
With the news of Google’s Bard and the integration of AI ‘Large Language Models’ (LLMs) into search engines, the risks and rewards of ‘conversational AI bots’ have hit the mainstream, from the dining room to the boardroom. So far, the conversation has centered mainly on the technology’s adoption (or prohibition) in the education industry, and on whether the software could displace people from their jobs.
While numerous threat research papers have identified potential vulnerabilities, BlackBerry set out to investigate what IT security professionals themselves think of ChatGPT. Tech Wire Asia recently spoke with Jonathan Jackson, Director of Engineering, APAC at BlackBerry, on this matter.
“Ultimately, we found that IT pros see both opportunities and threats of the technology across different markets. We intend to raise awareness of the risks and help individuals and organizations be better prepared for the AI future,” said Jackson.
He believes that AI bots are here to stay. As threat actors continue to automate attacks and leverage tools like ChatGPT to advance their skills and tactics, the ability to employ AI in defense, fighting fire with fire, is crucial.
IT professionals’ perceptions of ChatGPT
BlackBerry’s survey of 1,500 IT and cybersecurity decision-makers revealed that 51% of respondents believe a successful cyberattack attributed to ChatGPT will occur within a year. 71% of the participants also believe that nation-states may already be using ChatGPT for malicious purposes.
Jackson highlighted a few potential impact areas for cybersecurity professionals to be aware of. According to him, bad grammar and other language issues can be a dead giveaway that a piece of communication is potentially malicious. He also noted that ChatGPT can be used to aid hackers in crafting more legitimate-sounding phishing emails.
“Beyond this, we see evidence in underground forums of ChatGPT being used to create new malware and enhance the coding skills of less experienced hackers,” said Jackson. “With such an evolving cyber landscape, it is essential that we remain vigilant and equipped to mitigate these emerging threats.”
ChatGPT’s role in the cybersecurity industry in the APAC region
ChatGPT has a significant role in the cyber industry, and its influence will only increase over time. The AI platform has raised the level of discourse on both the benefits and ramifications AI brings to different target audiences.
BlackBerry released its first Global Threat Intelligence Report on January 31, 2023, highlighting that its AI-driven prevention-first technology had stopped 1,757,248 malware-based cyberattacks in the 90 days between September 1, 2022, and November 30, 2022, including 10,300 attacks in Singapore. That works out to roughly 3,433 attacks stopped in Singapore per month, approximately 114 per day, and nearly five attacks per hour over those three months.
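The per-month, per-day, and per-hour figures above are simple averages derived from the two numbers the report gives (10,300 attacks over 90 days); a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the Singapore attack-rate averages.
# The 10,300-attacks and 90-day figures come from the report; the
# per-period breakdowns are plain averages, not BlackBerry's numbers.
attacks = 10_300          # attacks stopped in Singapore
days = 90                 # September 1 - November 30, 2022

per_month = attacks / 3   # three months in the reporting window
per_day = attacks / days
per_hour = per_day / 24

print(round(per_month))      # ~3433 attacks stopped per month
print(round(per_day))        # ~114 per day
print(round(per_hour, 1))    # ~4.8 per hour, i.e. nearly five
```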
Cybercriminals never sleep, and the cybersecurity industry throughout APAC must respond in kind, preparing for the evasive, targeted, and automated tactics attackers will use to find their next victims.
The importance of AI investment for businesses to combat cyberthreats
“As LLM AI evolves, it will only get more difficult to defend organizations without using AI to level the playing field. Our research showed that most IT professionals are aware that new AI-powered cyberthreats will demand cyber defenses built on AI-powered tools,” said Jackson.
Organizations such as Indonesia’s Bluebird Group turn to managed security service providers. BlackBerry’s Managed Extended Detection & Response (XDR), for example, offers round-the-clock support from seasoned cybersecurity professionals who use state-of-the-art tools for intrusion detection, incident response, and threat elimination.
Jackson shared a customer story where GDEX in Malaysia, another BlackBerry customer, uses AI-enabled CylanceOPTICS and CylancePROTECT to stop threats before they happen – meaning fewer people are needed to monitor fewer alerts. CIO Melvin Foong at GDEX says, “We don’t need our people to monitor it closely. I only have to assign one person to check it periodically. Compared to other areas or solutions, I may need five or six people attending to it just to keep things running.”
Jackson explains that AI can analyze vast amounts of data in mere seconds, leading to a significant improvement in the mean time to detection and response. This ability reduces the dependence on a limited number of human resources and reduces IT costs. With such advanced technologies, cyber analysts are alerted about an attack, and the attack type is classified, equipping them to respond most effectively.
“Cyber analysts will be better equipped to manage even the most complex threats with less manual effort, making better use of already scarce resources,” he added.
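Mean time to detection, the metric Jackson refers to, is simply the average gap between when an attack begins and when it is detected. A minimal sketch of how it is computed, using made-up incident timestamps for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (time the attack began, time it was
# detected). The incidents and timings are invented for the example.
incidents = [
    (datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 9, 45)),
    (datetime(2023, 3, 2, 14, 0), datetime(2023, 3, 2, 14, 10)),
    (datetime(2023, 3, 3, 22, 30), datetime(2023, 3, 3, 23, 35)),
]

def mean_time_to_detect(log):
    """Average delay between attack start and detection (MTTD)."""
    gaps = [detected - started for started, detected in log]
    return sum(gaps, timedelta()) / len(gaps)

print(mean_time_to_detect(incidents))  # average detection delay
```

Shrinking this average, whether through AI-driven triage or faster classification of the attack type, is what frees analysts to respond rather than sift alerts.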
Recommendations for APAC’s IT industry to counter cyberthreats
Organizations could adjust how they evaluate and strengthen their cybersecurity posture through certification programs like CSA’s Cyber Essentials and Cyber Trust marks and toolkits. Alternatively, they could also reference established standards that consider Operational Technology (OT) vulnerabilities, such as CSA’s Operational Technology Cybersecurity Competency Framework and MITRE’s Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) for ICS.
According to Jackson, organizations should maintain good cybersecurity hygiene by periodically running health checks of their infrastructure to ensure patches are up to date and defenses are tuned correctly. Organizations must also move to a zero-trust security environment to create connectivity that a remote-working world can truly trust. The zero-trust model holds that organizations should not automatically trust anything inside or outside their perimeters, and must instead verify anything and everything – apps, devices, networks, and people.
“Zero Trust approaches verification of user identities as a constant process of authentication, not just at login, but in all paths of a data plane. It not only considers traditional login credentials, but also biometric authentication, contextual factors such as location and device, and even behavioral profiling such as hand-eye coordination, individual scrolling patterns and other user norms,” he continued.
With Zero Trust, the default action is always verification.
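The model Jackson describes can be sketched in a few lines: every request is evaluated against identity, device, context, and behavioral signals, and the default decision is to deny. The class names, signals, and threshold below are illustrative assumptions, not drawn from any BlackBerry product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    device_compliant: bool   # e.g. a patched, managed endpoint
    location_trusted: bool   # e.g. expected geography or network
    behavior_score: float    # 0.0-1.0 from behavioral profiling

def authorize(req: Request, threshold: float = 0.7) -> bool:
    """Verify every signal on every request; deny by default."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    if not req.location_trusted:
        return False
    # Even a valid, compliant session is denied if behavioral
    # profiling (scrolling patterns, user norms) looks anomalous.
    return req.behavior_score >= threshold

print(authorize(Request(True, True, True, 0.9)))  # True
print(authorize(Request(True, True, True, 0.3)))  # False
```

The key point is that authentication is not a one-time gate at login: the same checks run on every request along the data path, so a stolen credential alone is not enough to gain access.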
ChatGPT and tools like it are new technologies, and like any new technology embraced by the masses, they raise questions about their impact, both good and bad.
Singapore’s Ministry of Education (MOE), for example, takes an interesting approach: rather than banning its use outright, it provides teachers with the guidance and resources to effectively utilize digital technology, such as ChatGPT, to enhance learning.
“It is still early days – and cyber professionals and hackers will continue to look into how they can best utilize it, as will every government department and industry. Putting aside all the hype and scaremongering, only time will tell who is more effective,” Jackson explained.
Key takeaways from BlackBerry’s research on ChatGPT and cybersecurity
“Our research and insights have provided data that supports a very topical discussion,” said Jackson. “We know threat actors are currently testing the waters with ChatGPT and how they can maliciously leverage the software to launch cyberattacks.”
As ChatGPT has matured, hackers have been using it and similar platforms to launch cyberattacks that are increasingly difficult to defend against, driving a growing need for organizations to deploy AI defenses to level the playing field. However, concerns about using publicly available AI software have prompted a debate over whether such tools should be regulated: in the survey, 95% of respondents said they believe governments have a responsibility to regulate these technologies.
Despite this, the research also showed that IT professionals are not waiting for government action, with 82% already planning their defensive measures against AI-augmented cyberattacks.
“There are many benefits to be gained from this advanced technology, and we’re only beginning to scratch the surface, which is exciting. But we must also consider that threat actors see the benefits, and they will waste no time adding these new technologies to their malicious arsenals,” Jackson concluded.
As Singapore continues to embrace the benefits of conversational AI tools like ChatGPT, the public and private sectors must stay one step ahead to mitigate cybersecurity risks – fighting fire with fire, with defensive AI.