Human trapped in an AI loop

A brave tech worker trusted a robotaxi to drive them to the airport. The robotaxi got stuck in a loop in a parking lot, driving in circles, as you can see in the video below. The situation is both funny and scary, and we can sense the stress and disbelief in the passenger’s voice.

I do have many questions:

  • Why isn’t there some kind of emergency button you can smash in case of danger? All public transportation has had that since before computers were invented.
  • Why didn’t the passenger try to jump into the front seat and take control of the wheel? I believe that would trigger some emergency mechanism too. Instead, they reached for their phone to call tech support, and tech support asked them to “open the app again and click on the bottom-left icon”. I do recognize that this approach takes a level of fitness and guts that most people may not have.
  • Why do I have the feeling that this is going to become more and more common as we automate most of our basic services with AI? We already get stuck in AI loops with robocalls and robosupport menus when we call a service because our credit card is blocked or the dishwasher won’t connect to the Wi-Fi anymore. It’s very obvious that it’s almost impossible to prevent these automated systems from driving humans in circles, or off a cliff, with a “Sorry, the situation you’re experiencing right now does not exist”.

https://www.linkedin.com/posts/mikejohns_lyft-uber-omg-activity-7271962168286191617-E7j4

Don’t what now?

A humanoid robot, the Unitree G1, is apparently being mass-produced. And the manufacturer thought it was appropriate to add this warning to its video presentation:

“We kindly request that all users refrain from making any dangerous modifications or using the robot in a hazardous manner.”

It’s probably the closest thing we have to a universal robotic device for interacting with our anthropocentric world, and you expect us not to do dangerous things with it?

I bet that’s the first thing we’ll do with it.

10 years is retirement age for androids

…or so it seems. Boston Dynamics retired Atlas, the android robot we all know for its Terminator-style backflips and other “parkour” abilities in controlled environments.

The famous robocompany, once Google’s property until it became too toxic for the “don’t be evil” brand to keep, just released a video celebrating ten years of product demos, with, you guessed it, lots of previously unseen bloopers.

Some are very gross, with hydraulic body fluids pouring out of broken limbs, or our Johnny Atlas here hitting itself in the bearing balls (like in the looping GIF above, originally extracted by TechCrunch). So, viewer discretion advised.

And bye, Atlas: you amazed us as much as you scared us. I won’t say I’ll miss you.

Fragility Robotics

An ExTwitter post from Agility Robotics includes a video of a bipedal robot moving boxes in a simulated factory setup at what looks like a trade show. At the end, the robot collapses on itself in what appears to be a failure of its legs.

So many things are happening in this ExTwitter post; let’s unpack.

It’s quite interesting for a robotics company to post about its own product failures on social media, especially when the failure happens during a product demo at a trade fair. Knowing the price per square foot of these trade shows, a setup with a conveyor belt and a box shelf is no small marketing budget. Failures in the lab are OK, and there’s a history of robodog video bloopers, but failures when you’re trying to convince a large crowd to buy your tech? Maybe much less so.

So, marketing probably thought: Well, that’s a million-dollar fuckup, so let’s change strategy and use this to our advantage. We’ll make it a viral social media event. And while we’re at it, let’s make up our own metrics: “99% success rate over 20 hours”.

That robot was not going to get back to work without serious repairs, so forget having it move boxes for the rest of the day. Since we’re talking about human-looking robots replacing humans at machine jobs, we might as well make 24/7 the metric of success in that particular context, not 20/6 or 20/4. So let’s rewrite that as an “82.5% success rate over 24 hours” if the robothing gets repaired within a few hours, or a “20.6% success rate over 4 days” if you forgot to bring a set of “quick-change limbs” to the show.
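For the curious, here’s the back-of-the-envelope math as a minimal sketch. The 99%-over-20-hours figure comes from Agility’s post; the downtime windows (a 24-hour day, a 4-day show) are my assumptions.

```python
# Availability-adjusted success rate: the advertised number only counts
# the hours the robot was actually running. Fold the downtime back in,
# scoring every hour of downtime as 0% success.

def adjusted_rate(advertised: float, hours_up: float, hours_total: float) -> float:
    """Success rate over the whole window, with downtime scored as 0%."""
    return advertised * hours_up / hours_total

advertised = 0.99   # Agility's claim: 99% success rate...
hours_up = 20       # ...over 20 hours of operation

# Repaired within a few hours: 20 good hours out of a 24-hour day.
print(f"{adjusted_rate(advertised, hours_up, 24):.1%}")      # -> 82.5%

# No quick-change limbs on site: 20 good hours out of a 4-day show.
print(f"{adjusted_rate(advertised, hours_up, 4 * 24):.1%}")  # -> 20.6%
```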

Lastly, I can’t stop looking at the crowd standing on the other side of the conveyor belt, witnessing the scene. The lack of response or interest in what just unfolded is palpable. No one seems surprised, amused, or alarmed. Even what look like members of the [Fr-]Agility Robotics sales team barely turned around to see what was happening behind their backs, and then just ignored their flagship robotic product having a meltdown.

Staging robots in manufacturing settings is boring. Breaking a leg is no way to impress.

Crashing at scale

Waymo is voluntarily recalling the software that powers its robotaxi fleet after two vehicles crashed into the same towed pickup truck

Waymo recalls and updates robotaxi software after two cars crashed into the same towed truck

Let’s first look at the curious choice of the word “recall” for what the industry would more generally call a software reversion. It sounds like Waymo had to pull its whole fleet off the street because a software update went wrong, the way Tesla had to recall all of its cars sold in the US over a non-compliant emergency-light design. Waymo didn’t do that. They just reverted the software update and uploaded a patched version. Calling it a recall is a bit of a misnomer, there to make the company look compliant with safety practices that exist for regular consumer cars. But that framework is clearly not adapted to this new software-defined vehicle ownership model.

The second, and to me most interesting, bit here, which seems overlooked by the journalist reporting the incident, is that a “minor” (according to Waymo) software failure created two consecutive, identical accidents between Waymo cars and the same pickup truck. Read that again. One unfortunate pickup truck was hit by two different Waymo cars within a couple of minutes because it looked weird. Imagine if that pickup truck had crossed the path of more vehicles running that particular faulty software update. How many crashes would that have generated?

The robotaxis’ vision model had not taken a certain pattern of pickup truck into account, so none of them were able to behave correctly around it, resulting in multiple crashes. Which raises the question: should a fleet of hundreds or even thousands of robotaxis run on the same software version (with potentially the same bugs)? If you happen to drive a vehicle, or wear a piece of clothing, that makes one robotaxi behave dangerously, then suddenly every robotaxi is out there to get you.
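To make the monoculture point concrete, here’s a toy sketch (every number and version name below is invented; this is not how Waymo manages its fleet): when the whole fleet shares one buggy build, a trigger object fails on every encounter; with a staged rollout, only the fraction running the new build is at risk.

```python
import random

# Toy model of software monoculture in a robotaxi fleet.
# A "trigger" object (say, an oddly towed pickup truck) fools the
# buggy build on every single encounter.

random.seed(0)

ENCOUNTERS = 10        # taxis that happen to pass the trigger object
BUGGY = "build-42"     # hypothetical version with the perception bug

def crashes(fleet: list[str]) -> int:
    """Count crashes among the taxis that encounter the trigger object."""
    return sum(v == BUGGY for v in random.sample(fleet, ENCOUNTERS))

# Monoculture: all 500 taxis run the same buggy build.
print("monoculture:", crashes([BUGGY] * 500), "crashes")  # 10 out of 10

# Staged rollout: only 10% of the fleet has the new build so far.
print("staged:", crashes([BUGGY] * 50 + ["build-41"] * 450), "crashes")  # ~1 out of 10
```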

Robotaxis are on fire

San Franciscans celebrate Chinese New Year by setting a Waymo robotaxi on fire.

More than meets the vision sensor

Waymo, the robotaxi company from Alphabet/Google, broke Asimov’s First Law.

Way more interesting is how the robocompany describes the incident:

“The cyclist was occluded by the truck and quickly followed behind it, crossing into the Waymo vehicle’s path. When they became unoccluded, our vehicle applied heavy braking but was not able to avoid the collision,” Waymo said.

https://boingboing.net/2024/02/07/waymo-autonomous-car-hit-bicyclist.html

Let me emphasize that: “the cyclist crossed into the Waymo vehicle’s path”. That’s such an engineering thing to say. It’s your two-ton metal box on wheels whose computer vision model has no notion of a small moving vehicle hidden behind a larger one. Your software calculated a trajectory to pass behind that truck. Oops, there was a cyclist there. But it’s the cyclist who crossed your path? How convenient.
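For what it’s worth, the defensive behavior I’m gesturing at has a name in the robotics literature: occlusion-aware planning, where the planner assumes the worst about every blind spot. Here’s a minimal sketch of the idea, with invented numbers, and certainly not Waymo’s actual stack:

```python
# Occlusion-aware speed planning, minimal sketch (invented numbers,
# not Waymo's actual stack). Treat an occluded region as if a cyclist
# could pop out of it at any moment, and cap our speed so that we can
# always stop before reaching the blind spot.

BRAKE_DECEL = 6.0      # m/s^2, hard braking on dry asphalt
REACTION_TIME = 0.3    # s, perception + actuation latency

def stopping_distance(speed: float) -> float:
    """Distance covered while reacting, plus braking distance."""
    return speed * REACTION_TIME + speed ** 2 / (2 * BRAKE_DECEL)

def max_safe_speed(dist_to_occlusion: float) -> float:
    """Largest speed whose stopping distance still fits before the blind spot."""
    lo, hi = 0.0, 60.0
    for _ in range(50):  # bisection; 50 rounds is far more precision than needed
        mid = (lo + hi) / 2
        if stopping_distance(mid) <= dist_to_occlusion:
            lo = mid
        else:
            hi = mid
    return lo

# 25 m before the truck that might be hiding a cyclist:
print(f"{max_safe_speed(25.0):.1f} m/s")  # -> ~15.6 m/s (about 56 km/h)
```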