Without proper planning, AI can wreak havoc across the globe – experts
EXPERTS from three of the world’s most prestigious universities have warned that rapid advances in artificial intelligence (AI) could destabilise the world through automated hacking attacks, hijacked driverless cars and commercial drones turned into weapons, among other threats.
Dozens of technical, public policy, privacy, security and military researchers in AI from Cambridge, Oxford and Yale said cybercrime and terrorism would rapidly increase unless preparations are made against the malicious use of the technology.
According to the Evening Standard, the report, entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, said “highly believable fake videos” impersonating prominent figures or faking events could be used to manipulate public opinion around political events.
The report also warns that artificially intelligent bots could be used to manipulate the news agenda, social media and elections, as well as to hijack drones and other automated machinery and vehicles.
The study, published on Wednesday, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers.
The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.
“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute. “There was a gap in the literature around the issue of malicious use.”
Artificial intelligence, or AI, involves using computers to perform tasks that normally require human intelligence, such as making decisions or recognising text, speech or visual images.
It is considered a powerful force for unlocking all manner of technical possibilities but has become a focus of strident debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.
The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labour and expertise. New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves.
It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.
The report makes a series of recommendations including regulating AI as a dual-use military/commercial technology.
It also asks whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and react to the potential dangers they might pose.
“We ultimately ended up with a lot more questions than answers,” Brundage said.
Additional reporting by Reuters