AI weaponization is a complex and controversial topic that raises many ethical, legal, and social questions. (Image generated by AI).

The misuse of AI: North Korea’s AI development worries the world

  • North Korea’s AI development is raising concerns about the misuse of the technology.
  • AI weaponization sees the use of AI to deliberately inflict harm on society.
  • The regulation of AI weapons is a crucial issue that requires international cooperation.

The misuse of AI is becoming an increasingly serious problem, even as the technology proves to be a game-changer in almost every field. Ever since generative AI went mainstream, people have been using the technology to cause problems in society.

The misuse of AI has raised concerns among both governments and businesses. World leaders have come together to address these concerns and called for more regulations to govern the use and development of the technology.

Cybercriminals misuse AI the most. Today, they are using generative AI not only to craft phishing emails but also to create content that can cause serious problems for businesses. AI-generated threats are also becoming harder to detect, and more businesses are falling victim to them.

The rise in deepfake content generated by AI is also a big concern. Recently, a deepfake audio clip of US President Joe Biden was circulated, raising concerns about how such content could affect the US elections this year. But it’s not just politicians and celebrities being targeted: because deepfake generators are easily accessible, almost anyone can become a victim.

AI weaponization sees the use of AI to deliberately inflict harm on society by integrating it into the systems and tools of national militaries. (Image generated by AI).

Misuse of AI: weaponization

The biggest concern about the misuse of AI, though, is the ways in which the technology can be weaponized. AI weaponization sees the use of AI to deliberately inflict harm on society by integrating it into the systems and tools of national militaries. It involves the use of AI algorithms to enhance the capabilities of existing weapons, such as drones, missiles, tanks, and submarines, or to create new types of weapons, such as swarms of micro-robots, cyberattacks, and bio-weapons.

According to an article by the World Economic Forum, the weaponization of AI poses a frontier risk for humanity, as it could lead to unpredictable and uncontrollable consequences, such as accidental escalation, ethical dilemmas, and loss of human oversight. The militarization of AI also has implications for global security and warfare, as it could create new sources of conflict, destabilize the balance of power, and challenge existing norms and laws.

That means the regulation of AI weapons is a crucial issue that requires international cooperation. Some initiatives have been proposed to ban or limit the development and use of lethal autonomous weapons, such as the Campaign to Stop Killer Robots and the UN Convention on Certain Conventional Weapons. However, these efforts face many challenges, such as the lack of a clear definition of autonomy, the diversity of stakeholders and interests, and the difficulty of verification and enforcement.

Should the world be concerned about the misuse of AI in North Korea?

North Korea’s AI development

According to a report by Reuters, North Korea is currently developing AI and machine learning for various use cases. Citing a study by Hyuk Kim of the James Martin Center for Nonproliferation Studies (CNS) in California, the report states that these range from responding to Covid-19 and safeguarding nuclear reactors to wargaming simulations.

“North Korea’s recent endeavors in AI/ML development signify a strategic investment to bolster its digital economy,” Kim writes in the study, which cites open-source information including state media and journals and was published on Tuesday by the 38 North project.

Despite sanctions, North Korea’s AI researchers have collaborated with foreign scholars, including some in China, to develop the technology and its use cases. In fact, the country established the Artificial Intelligence Research Institute in 2013, and in recent years several companies there have promoted commercial products featuring AI.

Apart from that, there are concerns about technology transfers happening via the cloud. This includes transfer learning, which Kim describes as a technique for fine-tuning a pre-trained model to enhance its performance under specific conditions.

“Unlike traditional machine learning methods, transfer learning does not require the entire training dataset used for the pre-trained model. Instead, it only requires data that a developer is interested in to further train the pre-trained model for their specific needs or circumstances,” explained Kim.

As such, Kim pointed out that transfer learning could work to North Korea’s advantage, as it reduces training time and resource requirements such as data storage and computational power.

“Moreover, it is theoretically feasible to fine-tune a model initially developed for civilian applications for military purposes. For instance, a model trained by foreign scholars for object detection purposes in aerial environments could be adapted for further fine-tuning that uses data pertaining to military objects that North Korea is interested in. The scope of military simulation can also be expanded through transfer learning to cover more complex combat situations. For example, an agent trained in 2-versus-1 air combat scenarios could be transferred to 2-versus-2 scenarios for further training,” he said.
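To make the mechanics Kim describes more concrete, below is a minimal, hypothetical sketch of transfer learning in Python with PyTorch: a backbone pre-trained elsewhere is frozen and only a small new classification head is trained on task-specific data. The model choice, class count and dataloader are illustrative assumptions, not details from the study.

    # Minimal, hypothetical transfer-learning sketch (PyTorch).
    # A backbone pre-trained on a large, generic dataset is frozen and only a
    # small new classification head is trained on task-specific data -- which
    # is why transfer learning needs far less data, time and compute than
    # training a model from scratch. Model, class count and dataloader are
    # placeholders, not details from the study.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_TASK_CLASSES = 5  # placeholder: classes in the new, task-specific dataset

    # 1. Start from a model pre-trained elsewhere (the "pre-trained model").
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # 2. Freeze the pre-trained weights so they are not updated further.
    for param in model.parameters():
        param.requires_grad = False

    # 3. Replace the final layer with a head sized for the new task.
    model.fc = nn.Linear(model.fc.in_features, NUM_TASK_CLASSES)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def fine_tune(dataloader, epochs=3):
        """Fine-tune only the new head on the developer's own (small) dataset."""
        model.train()
        for _ in range(epochs):
            for images, labels in dataloader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()

Because only the small new head is optimized, modest hardware or a free hosted notebook is often enough, which is the saving in time and computational resources that Kim refers to.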

The regulation of AI weapons is a crucial issue that requires international cooperation and dialogue. (Image generated by AI).

Kim also explained that the benefits of transfer learning highlight potential risks associated with technology transfers via intangible means, such as sharing electronic files (in this context, a pre-trained model) through email and cloud computing services.

“Many cloud computing services, such as Google Colab, Microsoft Azure and other enterprises, offer AI/ML development environments. These environments are supported by computing power, including a graphics processing unit (GPU), tensor processing unit (TPU) and Nvidia’s A100 and H100 units.

“Therefore, the potential proliferation risks linked with ITT [intangible technology transfer] and cloud computing services could negate the effectiveness of the sanctions regime and export controls that mainly focus on the transfer of physical goods.”
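As a purely illustrative footnote to that point, the snippet below sketches how little friction such an intangible transfer involves: a trained model is just an ordinary file that can be attached to an email or uploaded to cloud storage. The model choice and filename are placeholders, not details from the study.

    # Illustrative only: a trained model is simply a file on disk, so it can be
    # shared through email or any cloud storage or hosted notebook service.
    import torch
    from torchvision import models

    # Sender: save the weights of a trained model to a single file.
    trained_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    torch.save(trained_model.state_dict(), "shared_model.pt")

    # Recipient: rebuild the same architecture and load the shared weights,
    # ready for further fine-tuning on their own data.
    received_model = models.resnet18(weights=None)
    received_model.load_state_dict(torch.load("shared_model.pt"))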

The UN nuclear watchdog and independent experts said last month that a new reactor at North Korea’s Yongbyon nuclear complex appears to be operating for the first time, which would mean another potential source of plutonium for nuclear weapons.

“To effectively address the potential sanctions and proliferation risks posed by North Korea’s AI/ML endeavors, national authorities should proactively engage with cloud computing service providers and academic/professional associations that host international conferences on emerging technology. Discussions with cloud computing service providers should center on raising awareness of potential threats posed by North Korea and considerations for enhancing customer screening during onboarding,” concluded Kim.

While it remains to be seen whether North Korea will misuse AI, governments will certainly be concerned about how the country is sourcing the technology.