I had heard about this incident in Las Vegas a few weeks ago where an autonomous vehicle ran over a robot, and I was planning a serious missive to discuss some of its ramifications for the autonomous vehicle space. But first, I need to get the LOL out of the way. You have to admit, it is funny.
What this does is bring out one of the issues that exists in the self-driving space. The details are not all that important, but briefly: the car was a Tesla, and the robot was one of those host models being developed to act as a service unit in places such as museums, hotels, banks, and shopping and business centers. It is the next generation of a robot that can maneuver around obstacles and move its head and arms (Danger, Will Robinson!). It also has a display to interact with people and give them information.
The accident details were, simply, a robot gone rogue. One of several, it somehow lost its bearings and headed for the street, where the Tesla, which was in self-driving mode, mowed it down. Here is what is funny: the police were called. Seriously?
Shades of the Westworld and Futureworld movies. Of course, the robot (affectionately called Promobot) will be given a post-mortem to see why it went rogue.
Now – the real-world implications. Unless you live under a rock, you are aware that this is not the first mishap involving self-driving vehicles. While this one offers a bit of comic relief, the others were very serious. One happened last year, when an autonomous Uber vehicle killed a pedestrian. In another incident, in 2018, a Tesla was involved in a fatal accident while its Autopilot system was engaged. And there have been other incidents prior to those.
One of the arguments is that accidents involving autonomous vehicles are simply bound to happen. Why? Because, first of all, there are just too many circumstances that cannot be preemptively foreseen. The same can be said for human drivers. However, humans have the element of intuition (the non-scientific term), which enables cognitive reactions that recognize ever-so-slight deviations from the norm. Such capabilities will not exist in an autonomous vehicle, at least not for the foreseeable future.
We can come close, with tons of pre-programmed scenarios, but will that be good enough? Perhaps, when quantum computing does a Vulcan mind meld with AI and big-data algorithms are refined, the gap will narrow. But for now, the reality is that there are just too many variables for current autonomous vehicle technology to handle.
However, there are arguments that an autonomous vehicle ecosystem will be much safer than the present driver-controlled one. Amen to that, but it will not occur until we reach the tipping point where both autonomous and driver-controlled vehicles are operating in a controlled environment. As long as human judgement and free-will driving are involved, errors will continue to occur at about the same rate they do at present. Autonomous vehicles will remove the judgement errors but will introduce other errors (although those should be significantly fewer).
The interesting thing here is that the Tesla hit the robot just as it would have hit a pedestrian under the same conditions. Non-human devices cannot be expected to differentiate on an emotional scale. Certain parameters can be programmed into the mechanics to give the device more data and hedge the bet – heat sensors, for example (unless you live in Alaska or some other frozen land where everything is cold), or facial-recognition algorithms (if the data is coming from the front of the human). But none of this is foolproof, either.
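To make that hedging concrete, here is a minimal sketch, in Python, of how two such weak cues might be fused to tell a warm pedestrian from a room-temperature robot. Everything in it – the Detection record, the thresholds, the labels – is hypothetical, invented for illustration, not taken from any real vehicle stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical fused reading for one object ahead of the vehicle."""
    surface_temp_c: float   # thermal camera estimate, degrees Celsius
    ambient_temp_c: float   # ambient temperature (the Alaska caveat)
    face_confidence: float  # 0.0-1.0 score from a face detector

HUMAN_WARMTH_DELTA_C = 8.0  # how much warmer than ambient a person should read
FACE_THRESHOLD = 0.6        # illustrative face-detector cutoff

def classify(d: Detection) -> str:
    """Rough pedestrian-vs-object call from two weak cues."""
    warm = (d.surface_temp_c - d.ambient_temp_c) >= HUMAN_WARMTH_DELTA_C
    face = d.face_confidence >= FACE_THRESHOLD
    if warm and face:
        return "pedestrian"           # both cues agree
    if warm or face:
        return "possible_pedestrian"  # only one cue fires; stay cautious
    return "object"

# The robot in this story: room temperature, no recognizable face.
print(classify(Detection(21.0, 20.0, 0.1)))  # -> object
# A pedestrian approaching head-on in mild weather.
print(classify(Detection(31.0, 20.0, 0.9)))  # -> pedestrian
```

Note the built-in failure modes: a bundled-up pedestrian on a freezing night defeats the thermal cue, and anyone facing away defeats the facial one. That is the sense in which these parameters hedge the bet rather than settle it.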
One can also go in the opposite direction and simply stop the autonomous vehicle whenever there is any uncertainty in the scenario. But then it will get rear-ended by a driven vehicle because the driver happens to be texting. The industry does not have that figured out quite yet.
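For what it is worth, that conservative fallback is trivial to write down – which is exactly why it is unsatisfying. A sketch, again in Python, with invented labels and an invented confidence threshold:

```python
# Hypothetical fallback policy: any doubt about what is ahead means stop.
UNCERTAIN_LABELS = {"possible_pedestrian", "unknown"}

def plan(label: str, confidence: float, min_confidence: float = 0.8) -> str:
    """Brake on uncertainty, proceed otherwise."""
    if label in UNCERTAIN_LABELS or confidence < min_confidence:
        return "brake"    # safest choice for whatever is ahead...
    return "proceed"      # ...but every hard stop invites the texting driver behind

print(plan("object", 0.95))               # -> proceed
print(plan("possible_pedestrian", 0.55))  # -> brake
```

The hard part is not writing the policy; it is that both of its outcomes can cause an accident.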
What all this points to is that we are a long way from anything other than driver assist, no matter how advanced it gets. This will be the scenario for years to come. The nice thing is that driver assist will become much more intelligent and offer more options. But letting the vehicle drive itself is not one of them in the near future.
Whether it is a robot or a human that gets nailed by an autonomous vehicle, the end result is the same in the absolute sense: it was an incident involving a driverless vehicle. That means we have quite a ways to go before we reach Level 5 autonomy.
My position is that we will not have a fully autonomous vehicle infrastructure until everything and everyone can be precisely identified, and communication is two-way. That is years out.
RIP Little Promobot!