Are autonomous vehicles actually safe to be left on the road?


An autonomous vehicle collides with a firetruck – Are these vehicles ready to be on the road?

  • Autonomous driving tech faces scrutiny after urban incidents challenge safety claims.
  • As autonomous vehicles near reality, accidents spark debates on AI’s road readiness.

Transportation is on the brink of a significant transformation with the rise of autonomous vehicles and their underlying autonomous driving technology. As we stand on the threshold of this change, understanding how this pioneering technology has evolved, and the hurdles it still faces, becomes vital.

Long before Tesla’s Autopilot debuted or Waymo’s minivans took to Phoenix’s streets, the vision of self-driving cars existed. Once deemed a far-off dream, technological leaps have nudged this vision closer to everyday existence. Over time, technology stalwarts and automotive giants have competed fiercely to refine the art of autonomous driving.

Along the way, considerable progress has been made. Sensors that were once cumbersome and pricey have evolved into refined, economical tools. Lidar’s laser mapping, cameras discerning traffic signs, and radars gauging distances have all advanced significantly.

Yet it’s the software that truly astounds. Machine learning and AI now sift through massive data streams from these sensors in real time, making immediate choices that can prevent accidents, or cause them.
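To make that idea concrete, here is a deliberately simplified sketch of how readings from lidar, radar, and a camera might be fused into a single driving decision. This is not any vendor’s actual system; every name, field, and threshold below is illustrative, and production stacks replace such hand-written rules with learned models over far richer inputs.

```python
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One synchronized snapshot from the vehicle's sensors (illustrative)."""
    lidar_obstacle_m: float      # distance to nearest obstacle from lidar, in metres
    radar_closing_mps: float     # closing speed toward that obstacle from radar, in m/s
    camera_sees_stop_sign: bool  # output of a camera-based sign classifier


def decide_action(frame: SensorFrame, safety_margin_s: float = 2.0) -> str:
    """Return 'brake' or 'cruise' for one sensor frame.

    A stop sign forces braking; otherwise we brake when the estimated
    time-to-collision (distance / closing speed) drops below the margin.
    """
    if frame.camera_sees_stop_sign:
        return "brake"
    if frame.radar_closing_mps > 0:
        time_to_collision = frame.lidar_obstacle_m / frame.radar_closing_mps
        if time_to_collision < safety_margin_s:
            return "brake"
    return "cruise"
```

The fragility the article describes lives exactly in logic like this: a firetruck running a red light, or any situation absent from the rules (or, in real systems, from the training data), can fall outside what the decision function anticipates.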

Safety first? Evaluating the claims of autonomous safety

The autonomous vehicle sector has consistently touted enhanced safety as a primary benefit. Given the global annual toll of over a million lives in car accidents, the AV industry’s safety claims push for legislative acceptance of fully autonomous cars. Although the premise that AVs, unlike humans, aren’t susceptible to distractions or rule-breaking sounds solid, evidence confirming that they outperform human drivers in safety is sparse.

However, recent events, such as the mishap with GM’s Cruise robotaxi in San Francisco, underline the enduring challenges, particularly in bustling cityscapes. Incidents like a collision with a firetruck underscore the unpredictable nature of urban streets, where pedestrians, bikers, sudden obstructions, and emergency vehicles converge — a mix AI still grapples with.

The autonomous vehicle firm, a branch of General Motors, publicly disclosed the incident across several social media channels.

“One of our cars entered the intersection on a green light and was struck by an emergency vehicle that appeared to be en route to an emergency scene,” the firm shared on X. According to Cruise, this took place after 10 pm in San Francisco’s Tenderloin district.

The statement added, “Our car contained one passenger who was treated on scene and transported via ambulance for what we believe are non-severe injuries.”

Such occurrences aren’t mere one-offs. They symbolize the current limits and complexities of AI. Despite its growth, machine learning primarily deciphers patterns from massive data pools. Confronted with situations outside that data, these systems can stumble, as the Cruise event showed when the car failed to anticipate an oncoming firetruck.

Human vs. machine: The blame game in autonomous accidents

In the aftermath of this event, repercussions are inevitable. The California Public Utilities Commission had recently voted 3 to 1 to let Cruise and Alphabet Inc.’s Waymo expand their operational scope in the city, including charging riders for trips.

However, San Francisco’s administration, fronted by City Attorney David Chiu, urged state officials to reconsider this expansion.

San Francisco’s 84-page appeal highlighted potential harms if Cruise’s unchecked expansion proceeds. The city cited concerns over Cruise’s vehicles disrupting emergency responses, public transportation, roadwork, and overall traffic flow.

A Cruise autonomous taxi raises questions about the safety of autonomous vehicles. (Source – Shutterstock)

Recent online clips have also spotlighted some peculiar maneuvers by Cruise’s autonomous cars. One clip showed a car driving through a crosswalk while children were crossing; another showed a Cruise car halting in the middle of an intersection.

Meanwhile, Waymo’s cars have been involved in 18 reported “minor contact incidents,” according to The Verge. These range from a vehicle reversing into a stationary Waymo car to a flimsy signboard blown into one by the wind.

Waymo’s data indicates that 55% of these minor episodes involved another vehicle bumping into a stationary Waymo car, with 10% happening after dark. Crucially, no incident involved intersections, pedestrians, cyclists, or other vulnerable road users.

Furthermore, Waymo was swift to point the finger at human error. Its analysis showed that every such event involved some rule infringement or reckless behavior by human drivers. Waymo stresses that this information release is aimed at “greater transparency.”

Addressing the broader issue, Waymo’s chief safety officer Mauricio Peña stated, “Far too many people still die or are injured on our roads every year in communities across the country. This data suggests our fully autonomous driving system, the Waymo Driver, is reducing the likelihood of serious crashes, helping us make progress towards our mission for safer, more accessible mobility for all.”

Consumer demand and the road ahead for autonomous vehicles

Despite these challenges, there’s a strong demand for innovative features in the autonomous vehicle sector. Customers are increasingly purchasing vehicles equipped with advanced capabilities.

Numerous car manufacturers are pouring resources into creating fully autonomous vehicles that prioritize passenger safety through state-of-the-art measures while enhancing comfort and security. However, the inclusion of in-car services introduces extra expenses, such as telecommunication service fees, connectivity solutions, and hardware system costs, which could hinder market growth.

It raises an intriguing question: Can AI ever be trained enough to anticipate the myriad of unpredictable situations on the road? Or will there always be a need for human oversight?