As cybercriminals leverage ChatGPT, Europol suggests law enforcement agencies do the same. (Source – Shutterstock)

Europol: Law enforcement agencies need to be prepared to deal with ChatGPT

Whenever a new technology gains traction, there will always be those who want to misuse it. This is exactly what is happening with ChatGPT. The large language model (LLM) developed by OpenAI has been a game changer for everyone, and now it is also being put to malicious use.

There have been multiple reports of cybercriminals leveraging ChatGPT to target victims and launch cyberattacks. As attackers continue to automate their methods, AI tools like ChatGPT are reportedly helping them refine their tactics.

In fact, a recent survey by BlackBerry showed that 51% of its respondents believe that a successful cyberattack attributed to ChatGPT will occur within a year. What’s more concerning is that 71% of them also believe that nation-states may already be using ChatGPT for malicious purposes.

Law enforcement agencies are aware of this and are already looking at ways to leverage ChatGPT themselves to help deal with cybersecurity issues. Cybersecurity vendors like Sophos, for example, have released research on how the cybersecurity industry can use ChatGPT as a co-pilot to help defeat cybercriminals.
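Sophos has not published its exact tooling here, but the co-pilot idea is easy to sketch. The following minimal example, assuming the openai Python package (version 1.x), an OPENAI_API_KEY environment variable, and a model name and prompt of our own choosing, asks a model for a first-pass assessment of a suspicious email:

```python
# A minimal sketch of a defensive "co-pilot": ask an LLM for a first-pass
# phishing assessment of an email. Assumes the openai package (>= 1.0) and an
# OPENAI_API_KEY environment variable; the model name and prompt wording are
# illustrative, not taken from the Sophos research itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(email_text: str) -> str:
    """Return the model's assessment of whether an email looks like phishing."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Assess whether the "
                        "following email is likely phishing and explain why."},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content

print(triage_email("Your account is locked. Click http://example.com/verify "
                   "within 24 hours to avoid suspension."))
```

The value in such a setup is triage speed, not final judgment: a human analyst still has to verify the model's reasoning.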

As the technology is adopted widely, both by the public and enterprises, the Europol Innovation Lab has released a report that highlights the positive and negative potential of ChatGPT. Based on workshops with subject matter experts, Europol collected a wide range of practical use cases that provide a glimpse of what is possible and raise awareness of the impact LLMs can have on the work of the law enforcement community.

While OpenAI has included safety features to protect users, Europol believes these safeguards can be circumvented fairly easily through prompt engineering. A relatively new concept in natural language processing, prompt engineering is the practice of refining the precise way a question is asked in order to influence the output generated by an AI system.
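To make the concept concrete, here is a deliberately benign sketch (same assumptions about the openai package as above) in which the same underlying question is asked two ways; the phrasing alone steers the tone, depth and format of the answer:

```python
# A minimal sketch of prompt engineering: the same underlying question is
# phrased two ways, and the phrasing shapes the answer the model returns.
# Assumes the openai package (>= 1.0); model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A terse phrasing tends to yield a terse, generic answer...
print(ask("What is phishing?"))

# ...while an engineered prompt steers the persona, format and level of detail.
print(ask("As a security trainer, explain phishing to new employees in "
          "three short bullet points, each with a concrete example."))
```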

Cybercriminals could abuse this to bypass content-moderation limits and produce potentially harmful content. While the capacity for prompt engineering adds versatility and value to an LLM, Europol feels it needs to be balanced against ethical and legal obligations to prevent its use for harm.


A guide to cybercrime for dummies 

Apart from that, Europol believes that some users may turn to ChatGPT to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse. Although the information ChatGPT provides is freely available on the internet, the model's potential to lay out specific steps in response to contextual questions makes it significantly easier for malicious actors to understand and carry out various types of crime. This is deeply concerning.

One of ChatGPT's biggest innovations is its ability to produce high-quality content from a prompt. Industries like marketing and advertising are already using it to improve their content, and there is no reason cybercriminals would not do the same to polish their phishing and scam material.

Europol’s research showed that cybercriminals could easily impersonate an organization or individual in a highly realistic manner even with only a basic grasp of the English language. ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance, to promote a fraudulent investment offer.

“To date, these types of deceptive communications have been something criminals would have to produce on their own. In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale,” the report stated.

At the same time, the technology can also be abused for propaganda and disinformation. A user could, for example, use ChatGPT to gather information that may facilitate terrorist activities, such as terrorism financing or anonymous file sharing. And because ChatGPT excels at producing authentic-sounding text at speed and scale, it is ideal for propaganda and disinformation purposes, allowing users to generate and spread messages reflecting a specific narrative with relatively little effort.

Another concern is ChatGPT's capability to produce code in various programming languages. While the code it generates may be basic, it can be enough for a cybercriminal to exploit an attack vector on a victim's systems. This kind of automated code generation is particularly useful for criminal actors with little to no knowledge of coding and development.
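The mechanics are trivial to demonstrate, which is exactly the point. In this deliberately benign sketch (same openai package assumptions as above), a one-line natural-language request is turned into source code; Europol's concern is that a malicious request works the same way:

```python
# A minimal sketch of automated code generation: a one-line natural-language
# request is turned into source code by the model. The example task (a log
# parser) is deliberately benign; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

request = ("Write a Python function that parses a line of an Apache access "
           "log and returns the client IP, timestamp and requested path.")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": request}],
)

# The generated code arrives as plain text and still needs human review
# before it is run, which is precisely the skill gap the report highlights.
print(response.choices[0].message.content)
```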

Europol is concerned 

With concerns about ChatGPT mounting, Europol recommends that law enforcement agencies prepare to deal with the growing threat. Agencies need to understand its impact on all crime areas in order to better predict, prevent and investigate the different types of criminal abuse.

This can only be achieved if law enforcement officers start developing the skills necessary to make the most of models such as ChatGPT. Whether it is understanding how these systems can be leveraged to build up knowledge, expanding existing expertise, or learning how to extract the required results, officers also need to be able to assess the content produced by generative AI models for accuracy and potential bias.

Europol also suggested that law enforcement agencies explore the possibilities of customized LLMs trained on their own specialized data, to leverage this type of technology for more tailored and specific use, provided Fundamental Rights are taken into consideration. This kind of usage will require appropriate processes and safeguards to ensure that sensitive information remains confidential, and that any potential biases are thoroughly investigated and addressed before a model is put into use.
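The report does not prescribe a particular stack, but one way to keep sensitive data on-premises, consistent with the confidentiality requirement above, is to fine-tune a locally hosted open model rather than call an external API. Below is a minimal sketch using the Hugging Face transformers and datasets libraries, where the base model, corpus file and hyperparameters are all illustrative placeholders:

```python
# A minimal sketch of fine-tuning a locally hosted causal language model on an
# agency's own text data, so sensitive material never leaves the premises.
# The model name, corpus file and hyperparameters are hypothetical placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # placeholder; a real deployment would use a larger model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "case_notes.txt" is a hypothetical plain-text corpus of specialized data.
dataset = load_dataset("text", data_files={"train": "case_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False means standard next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A real deployment would add evaluation steps for the accuracy and bias checks Europol calls for before such a model goes into operational use.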

“The next iterations of LLMs will have access to more data, be able to understand and solve more sophisticated problems, and potentially integrate with a vast range of other applications. It will be crucial to monitor potential other branches of this development, as dark LLMs trained to facilitate harmful output may become a key criminal business model of the future. This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge,” concluded Europol.