Weaponized AI: How organizations can be better prepared to deal with the threat

Article by Ramprakash Ramamoorthy, Director of Research at ManageEngine

As organizations gear up to construct robust, secure ecosystems, AI has emerged as a competent tool for enhancing their security postures. But despite AI's transformative capabilities in cybersecurity, the technology can also be weaponized by hackers to disrupt companies' security networks. McKinsey & Company reports that hackers are carrying out increasingly sophisticated attacks using AI, machine learning (ML), and other technologies.

A weaponized AI system exploits vulnerabilities in a network to degrade performance and disrupt normal operations. AI's distinctive capabilities, such as information retention, learned intelligence, automation, and speed, are used to penetrate networks and systems. By weaponizing AI to model adaptive attacks and develop intelligent malware, cybercriminals can build attacks that learn from experience, retaining knowledge of which tactics proved successful.

A data breach in the Association of Southeast Asian Nations (ASEAN) region takes roughly 184 days to identify and 65 days to contain, according to data from the Ponemon Institute. Threat actors may exploit this knowledge when initiating cyberattacks. Although the region's digital economy is predicted to grow by US$1 trillion over the next ten years, many ASEAN organizations lack the cybersecurity awareness and security infrastructure needed to effectively mitigate the risk of AI-enabled attacks. This can impede the digital transformation initiatives underway in the region.

A double-edged sword: AI used to perpetrate cyberattacks

The features that make AI and ML systems vital to businesses, such as automated predictions drawn from analyzing large volumes of data for patterns, are the very features that cybercriminals abuse.

Data poisoning

By making minute adjustments to parameters or constructing carefully designed scenarios, hackers can subtly corrupt the data sets used to train AI, gradually steering it in the wrong direction. Although it may be impossible to verify the veracity of every data point and input, every effort should be made to collect data from reliable sources. Incorporate anomaly detection wherever possible, expose the AI to adversarial examples so it learns to recognize malicious inputs, and isolate AI systems behind safety features that make them simple to shut down if something starts to go wrong.
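As a rough illustration of that anomaly-detection step, the Python sketch below uses scikit-learn's IsolationForest to screen a synthetic training set for injected outliers before the data reaches a model. The feature values, contamination rate, and thresholds are illustrative assumptions, not a production recipe.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy training set: mostly legitimate samples plus a handful of
# injected (poisoned) points far from the normal distribution.
rng = np.random.default_rng(42)
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
poisoned = rng.normal(loc=6.0, scale=0.5, size=(10, 4))
X_train = np.vstack([clean, poisoned])

# Fit an isolation forest and drop the points it flags as outliers
# before the data ever reaches the model-training step.
detector = IsolationForest(contamination=0.05, random_state=42)
labels = detector.fit_predict(X_train)  # -1 = anomaly, 1 = inlier

X_filtered = X_train[labels == 1]
print(f"Kept {len(X_filtered)} of {len(X_train)} samples; "
      f"dropped {(labels == -1).sum()} suspected poisoned points.")
```

In practice, screening of this kind is only one layer; provenance checks on data sources and adversarial training remain necessary complements.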

Deepfakes

Deepfakes, a portmanteau of "deep learning" and "fake," entail the use of AI methods to manipulate audio and visual content so that it appears legitimate. Deepfakes are ideal for disinformation campaigns because they are difficult to distinguish instantly from authentic content, even with the aid of technology. Thanks to the widespread use of the internet and social media, deepfakes can reach millions of people worldwide at unprecedented speed. It is crucial that people understand how convincing AI-powered deepfakes can be and how they can be used maliciously.

Mimicking trusted systems

Using AI technologies, cybercriminals create malware that can impersonate a trusted, managed system. By blending into a company's security network, they can carry out stealthy ransomware attacks. Such malware has the potential to slip past security layers and access sensitive, organization-specific information. Legacy systems may be unable to detect these threats, creating a major security risk. Sophisticated attacks therefore call for an advanced, AI-enabled cyber-defense framework that supports a robust security structure.

How to defend against AI-powered cybercrime

An integrated approach in which all the concerned stakeholders (employees, organizations, institutions, and agencies) share responsibility for combating AI-powered cybercrime is an absolute necessity. To match the scope and sophistication of future threats, organizations will need to use AI tools in addition to standard cybersecurity best practices. But preventing AI from being used by hackers will also require managing how the technology is created and commercialized. In a report on the abuses of AI, Europol urged governments to create specific data protection frameworks for AI and to ensure that these systems follow security-by-design principles.

By creating a baseline of typical behavior and promptly identifying anomalies in things like server access and data traffic, AI can be quite successful at network monitoring and analytics. Early intrusion detection provides the best opportunity to reduce the harm that weaponized AI can cause. Although it may initially be best to have AI systems flag problems and alert IT departments to investigate, as the AI learns and matures it might eventually be given the authority to neutralize threats on its own and prevent breaches in real time.
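A minimal sketch of such baseline monitoring, assuming hypothetical hourly login counts and a simple standard-deviation threshold (real deployments would learn far richer baselines across many signals):

```python
import statistics

# Hypothetical hourly login counts for one server over the past week;
# a real baseline would be learned from historical monitoring data.
baseline_counts = [42, 38, 45, 40, 37, 44, 41, 39, 43, 36, 40, 42]

mean = statistics.mean(baseline_counts)
stdev = statistics.stdev(baseline_counts)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag any observation more than `threshold` standard
    deviations away from the learned baseline."""
    return abs(observed - mean) > threshold * stdev

# New observations arriving from the monitoring pipeline.
for count in (41, 44, 120):
    if is_anomalous(count):
        print(f"ALERT: {count} logins/hour deviates from the baseline "
              f"(mean {mean:.1f}, stdev {stdev:.1f}); flag for review.")
```

Here the spike to 120 logins per hour is flagged for human review rather than acted on automatically, matching the flag-and-alert posture described above.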

ManageEngine's IT at work: 2022 and beyond survey found that 82% of respondents in Singapore believe AI and ML technologies will play a significant role in strengthening their IT security frameworks in the near future. Just as it can model typical behavior, learn how users interact with systems, identify vulnerabilities and viruses, and recognize what constitutes an emergent threat, AI can learn when its alerts are effective. As it accrues experience and feedback on its decisions, AI can steadily improve at defending the organization's network.
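One way to picture that feedback loop is the sketch below, which uses an online learner (scikit-learn's SGDClassifier) that is updated each time an analyst confirms or dismisses an alert. The features, labels, and alert values are hypothetical, chosen only to show the incremental-update pattern.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical alert features: [MB transferred, failed logins, off-hours flag].
# Labels reflect analyst feedback: 1 = genuine threat, 0 = false positive.
X_seed = np.array([[5.0, 12, 1], [0.01, 0, 0], [8.0, 30, 1], [0.02, 1, 0]])
y_seed = np.array([1, 0, 1, 0])

# An online learner that can be updated incrementally as analysts triage alerts.
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_seed, y_seed, classes=[0, 1])

# Score a new alert, then fold the analyst's verdict back into the model
# so future scoring reflects what proved to be a real threat.
new_alert = np.array([[6.0, 20, 1]])
print("Predicted threat:", bool(clf.predict(new_alert)[0]))
clf.partial_fit(new_alert, np.array([1]))  # analyst confirms: genuine threat
```

Each call to partial_fit nudges the model with fresh analyst feedback, which is the essence of an alerting system that improves with experience.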

From a security perspective, a multi-stakeholder effort is imperative to establishing an appropriate risk-response framework. Relevant agencies and industry bodies are working together to identify emerging security risks associated with such technology adoption and to build capabilities to address those risks. Engaging with the broader AI community, including researchers and experts, encourages responsible approaches to AI in cybersecurity, helps ensure the technology is used for the greater good, and paves the way for ethical AI. Organizations will become more adept at reducing AI's risks as they better grasp the technology.

The views in this article are those of the author and may not reflect the views of Tech Wire Asia.