Human trapped in an AI loop

A brave tech worker trusted a robotaxi to drive them to the airport. The robotaxi got stuck in a loop in a parking lot, driving in circles, as you can see in the video below. The situation is both funny and scary, and you can sense the stress and disbelief in the passenger’s voice.

I do have many questions:

  • Why isn’t there some kind of emergency button you can smash in case of danger? All public transportation has had one since before computers were invented.
  • Why didn’t the passenger try to jump into the front seat and take control of the wheel? That must trigger some emergency mechanism too, I believe. Instead, they reached for their phone to call tech support, and tech support asked them to “open the app again and click on the bottom left icon”. I do recognize that trying this takes a level of fitness and guts that most people may not have.
  • Why do I have the feeling that this is going to become more and more common as we automate most of our basic services with AI? We already get stuck in AI loops with robocalls and robosupport menus when we call a service because our credit card is blocked or the dishwasher no longer connects to the Wi-Fi. It’s very obvious that it’s almost impossible to prevent these automated systems from driving humans in circles, or off a cliff, whenever “Sorry, the situation you’re experiencing right now does not exist”. The toy menu sketch right after this list shows how that loop gets built in.
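To see how these loops get built in, here is a toy support-menu state machine. It is entirely made up (no real IVR product works from this exact table), but it shows the structural problem: every situation the designers didn’t anticipate falls through to the same default branch, and that default branch points back to the main menu.

```python
# Toy phone-support menu: a hypothetical illustration, not any vendor's system.
# The designers enumerated two situations. Everything else hits the default
# branch, which routes the caller back to "main". That default branch IS the loop.

MENU = {
    "main": {"1": "card_blocked", "2": "wifi_issue"},
    "card_blocked": {"1": "main"},  # "issue resolved", back to the start
    "wifi_issue": {"1": "main"},
}

def handle(state: str, key_press: str) -> str:
    # Unknown state or unknown input? "Sorry, the situation you're
    # experiencing right now does not exist." Back to the main menu.
    return MENU.get(state, {}).get(key_press, "main")

state = "main"
for press in ["3", "9", "0"]:  # options nobody designed for
    state = handle(state, press)
    print(f"pressed {press} -> back to the '{state}' menu")
```

No transition anywhere in the table reaches a human, so no sequence of inputs can escape. The robotaxi circling the parking lot is the same structure with wheels on it.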

https://www.linkedin.com/posts/mikejohns_lyft-uber-omg-activity-7271962168286191617-E7j4

Crashing at scale

Waymo is voluntarily recalling the software that powers its robotaxi fleet after two vehicles crashed into the same towed pickup truck

Waymo recalls and updates robotaxi software after two cars crashed into the same towed truck

Let’s first look at this curious choice of the word “recall” to describe what the industry would more generally call a software reversion. It sounds like Waymo had to take its whole fleet off the streets because a software update went wrong, the way Tesla had to recall all the cars it sold in the US because of a non-compliant emergency light design. Waymo didn’t do that. They simply reverted the software update and uploaded a patched version. Calling it a recall is a bit of a misnomer, there to make them look compliant with safety practices that exist for regular consumer cars. But that framework is clearly not adapted to this new software-defined vehicle ownership model.

The second interesting bit here, which seems overlooked by the journalist reporting this incident, is that a “minor” (according to Waymo) software failure created two consecutive, identical accidents between Waymo cars and the same pickup truck. Read that again. One unfortunate pickup truck was hit by two different Waymo cars within a couple of minutes because it looked weird. Imagine if that pickup truck had crossed the path of more vehicles running that particular faulty software update. How many crashes would that have generated?

The robotaxis’ vision model had not taken into account a certain pattern of pickup truck, so none of them were able to behave correctly around it, resulting in multiple crashes. Which raises the question: should a fleet of hundreds or even thousands of robotaxis run on the same software version (with potentially the same bugs)? If you happen to drive a vehicle, or wear a piece of clothing, that makes a robotaxi behave dangerously, every robotaxi out there is suddenly out to get you.
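The monoculture risk is easy to make concrete. Below is a minimal sketch (hypothetical names and numbers, nothing like Waymo’s actual stack): when every vehicle in a fleet points at the same perception model version, a single blind spot in that version is replicated across the whole fleet, so one odd-looking truck produces correlated failures instead of one isolated incident.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceptionModel:
    version: str
    known_patterns: frozenset  # object patterns this version handles correctly

    def recognizes(self, obj: str) -> bool:
        return obj in self.known_patterns

@dataclass
class Robotaxi:
    vehicle_id: int
    model: PerceptionModel  # every taxi shares the exact same model version

    def encounters(self, obj: str) -> str:
        if self.model.recognizes(obj):
            return f"taxi {self.vehicle_id}: yields correctly"
        return f"taxi {self.vehicle_id}: crash (object not in model)"

# One build deployed fleet-wide: a software monoculture.
model_v42 = PerceptionModel("v42", frozenset({"sedan", "bus", "pickup"}))
fleet = [Robotaxi(i, model_v42) for i in range(1000)]

# A pickup being towed at an odd angle matches no known pattern...
weird_truck = "pickup_towed_backwards"

# ...so every single vehicle that meets it fails in exactly the same way.
for taxi in fleet[:3]:
    print(taxi.encounters(weird_truck))
```

A thousand human drivers are, in effect, a thousand slightly different software versions; a fleet running one build fails in lockstep.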

Driving isn’t an autonomous activity

Driverless cars are often called autonomous vehicles – but driving isn’t an autonomous activity. It’s a co-operative social activity, in which part of the job of whoever’s behind the wheel is to communicate with others on the road. Whether on foot, on my bike or in a car, I engage in a lot of hand gestures – mostly meaning ‘wait!’ or ‘go ahead!’ – when I’m out and about, and look for others’ signals. San Francisco Airport has signs telling people to make eye contact before they cross the street outside the terminals. There’s no one in a driverless car to make eye contact with, to see you wave or hear you shout or signal back. The cars do use their turn signals – but they don’t always turn when they signal.

“In the Shadow of Silicon Valley” by Rebecca Solnit

/via @clive@saturation.social

Robotaxis are on fire

San Franciscans celebrate Chinese New Year by setting a Waymo robotaxi on fire.

More than meets the vision sensor

Waymo, the robotaxi company from Alphabet/Google, broke Asimov’s First Law: a robot may not injure a human being.

Way more interesting is how the robocompany describes the incident:

“The cyclist was occluded by the truck and quickly followed behind it, crossing into the Waymo vehicle’s path. When they became unoccluded, our vehicle applied heavy braking but was not able to avoid the collision,” Waymo said.

https://boingboing.net/2024/02/07/waymo-autonomous-car-hit-bicyclist.html

Let me emphasize that: “the cyclist crossed into the Waymo vehicle’s path”. That’s such an engineering thing to say. It’s your two-ton metal box on wheels that has no concept of a small moving vehicle hidden behind a larger one in its computer vision model. Your software calculated a trajectory to pass behind that truck. Oops, there was a cyclist there. But it’s the cyclist who crossed your path? How convenient.
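For what it’s worth, accounting for occlusion has a textbook form: never drive faster than what lets you stop within the space you can actually see to be free. Here is a minimal sketch of that rule, with made-up numbers and my own naming, not Waymo’s planner:

```python
# Illustrative "stop within your sight distance" rule; constants are assumptions.
BRAKE_DECEL = 6.0  # m/s^2, hard but achievable braking on dry asphalt

def max_safe_speed(visible_free_distance: float) -> float:
    """Highest ego speed that still allows a full stop before the point where
    something hidden (say, a cyclist behind a truck) could enter our path.
    Solves the stopping-distance equation v^2 = 2 * a * d for v."""
    return (2.0 * BRAKE_DECEL * max(visible_free_distance, 0.0)) ** 0.5

# Open road, 30 m of visibly free space: cruise.
print(f"{max_safe_speed(30.0):.1f} m/s")  # ~19.0 m/s (~68 km/h)

# Cutting behind a truck that hides all but 2 m of the lane: crawl.
print(f"{max_safe_speed(2.0):.1f} m/s")   # ~4.9 m/s (~18 km/h)
```

A planner that prices in the hidden cyclist slows down before passing the truck; one that treats occluded space as free space gets to apply “heavy braking” only once it’s too late.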

No driver, no fines

Driverless cars have been documented running red lights, blocking emergency responders and swerving into construction zones.

[…] When driverless cars break the rules of the road, there’s not much law enforcement can do. In California, traffic tickets can be written only if there is an actual driver in the car.

Driverless cars immune from traffic tickets in California under current laws

Don’t kick my robotaxi

A pretty good summary of the issues with robotaxis right now. The gap between young Silicon Valley entrepreneurs and tenured city officials is abysmal. The best part of the video is right at the beginning, when we see the journalist trapped in an expensive, fully automated metal box that is getting kicked by an angry citizen who isn’t allowed to park their car because of the “intelligence” of said metal box.

We have no standards on which to base whether these vehicles are actually as safe as humans, safer than humans, or not as safe as humans, except to trust that these companies are telling us the truth about their safety statistics.

Sam Abuelsamid, Principal Research Analyst