Who is liable when robotics, AI cause accidents?
Road traffic accidents are among the leading causes of accidental death, a fact frequently cited in arguments for the faster adoption of self-driving vehicles. But what if an autonomous vehicle were to hit a pedestrian – who should be held legally responsible?
Should it be the car owner, the vehicle manufacturer, or the developers of the artificial intelligence (AI) software that pilots the vehicle? This is one of many debates surrounding the governance of next-generation technologies such as AI, robotics, and the internet of things (IoT).
It is the question of legal liability that drove a panel discussion at a recent TechLaw.Fest forum in Singapore, where panelists with both legal and technology backgrounds debated who should have to bear losses or pay for damages in the event of an accident involving AI or robotics technology.
“The unique ability of autonomous robots and AI systems to operate independently without any human involvement muddies the waters of liability,” said Charles Lim, co-chair of the Singapore Academy of Law’s Subcommittee on Robotics and Artificial Intelligence.
The 11-member Robotics and Artificial Intelligence Subcommittee published a report last month on how civil liability might be established in such cases. Despite ranking first in the 2019 International Development Research Centre's Government Artificial Intelligence Readiness Index, Singapore still has no laws in place governing liability arising from robots or AI, including civil liability for autonomous vehicles (AVs).
“There are multiple factors [in play] such as the AI system’s underlying software code, the data it was trained on, and the external environment the system is deployed in,” argued Lim. Such factors only add to the lack of clarity: what if the AV’s onboard AI system malfunctioned because of a bug introduced by a software update?
Or what if a self-driving car encounters an obstacle its AI was never trained to avoid? Would the resulting collision be the fault of the AI provider, or of someone else who failed to train the AI on the unexpected obstacle?
Such scenarios are detailed in the 68-page Report on the Attribution of Civil Liability for Accidents Involving Autonomous Cars by Lim’s subcommittee. The report discusses the issues, plausible legal scenarios, and potential liabilities, and is meant to encourage discussion within Singapore’s legal community and with other stakeholders in the autonomous vehicle value chain.
Since 2017, when Singapore launched a SG$150 million (approximately US$109 million) national program to deepen its AI capabilities in preparation for digitalization, the country has been making sizable investments in AI as it establishes itself as the preeminent AI research and development hub in Southeast Asia.
Since then, AI-driven applications have appeared across the city-state in areas including education and healthcare, and most recently in building safety inspections, where AI-enabled drones are being put to work.