
Published on June 9th, 2020


IIHS Is Wrong In Claim That Only One Third Of Crashes Can Be Prevented By Self-Driving



The Insurance Institute for Highway Safety released a report claiming that, by its new calculations, only one third of car crashes could be prevented through self-driving technology. The figure more commonly cited is 90%, the share of crashes attributable to human error. The report breaks crashes down into error classes:

  1. Sensing and Perception — about 24%.
  2. Incapacitation (drunk driving, falling asleep, etc.) — about 10%.
  3. Predicting — misjudging what other vehicles will do.
  4. Planning — going too fast for the road, leaving too little following distance, and aggressive driving — about 40%.
  5. Execution — botching an evasive move, overcompensating or otherwise driving badly.
  6. Vehicle failure and impossible road conditions — blowouts, whiteout, etc. These are the errors typically classed as “not human error” — about 10%.

The report’s “one third” figure comes from adding #1 and #2.
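The arithmetic behind the two headline figures is easy to check. The short sketch below reproduces both numbers; the shares for #3 and #5 are not given in the article, so they are omitted here:

```python
# Approximate crash-cause shares cited above (fractions of all crashes).
# "Predicting" (#3) and "Execution" (#5) have no stated percentage in the
# article, so they are left out of this dictionary.
shares = {
    "sensing_and_perception": 0.24,  # class 1
    "incapacitation": 0.10,          # class 2
    "planning": 0.40,                # class 4
    "vehicle_failure": 0.10,         # class 6 -- the "not human error" class
}

# The IIHS "one third" figure is classes 1 and 2 combined.
iihs_preventable = shares["sensing_and_perception"] + shares["incapacitation"]
print(f"IIHS 'preventable' share: {iihs_preventable:.0%}")  # -> 34%

# The commonly cited ~90% "human error" figure is everything except class 6.
human_error = 1.0 - shares["vehicle_failure"]
print(f"Human-error share: {human_error:.0%}")              # -> 90%
```
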

Their analysis is strange and quite flawed. I think if you asked most self-driving car developers where the hard problems are, they would say that #3 is the hard one, and #4 and #5 are the easiest to get right. Robots don’t get drunk or fall asleep — though there can be massive computer failures, which we’ll get to later.

Sensing and perception are not yet perfect in robocars, but one big difference from humans is that the vehicles are always looking in all directions, all the time. They will never miss something because they “were not looking,” though they can still miss things because their systems are not yet robust, as happened with Tesla’s Autopilot and Uber’s test vehicle in 2018. Teams are working hard to find all perception problems and reduce them until they are rare enough to meet the desired safety goals. They won’t release until they do.

Planning and execution are expected to do well because you can test and debug them in simulation. Cars in simulation do nothing but drive dangerous situations, every kind you can think of and many you can’t, millions of times a day. This is an area where the cars will do much better than humans. They’ll know the physics of their tires and vehicle much better than a human can.
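As a rough illustration of how simulation exposes planning errors, here is a toy sketch. The scenario, physics, and reaction times are all simplified assumptions, not anyone’s actual test suite: a hard-braking lead car is replayed thousands of times with randomized speeds and gaps, comparing a fast machine reaction to a slow, distracted human one.

```python
import random

def survives_hard_brake(follow_distance_m: float, speed_mps: float,
                        reaction_s: float) -> bool:
    """Toy physics: if both cars brake equally hard, the follower avoids a
    crash exactly when the distance covered during its reaction delay is
    less than the following gap."""
    reaction_gap_m = speed_mps * reaction_s
    return reaction_gap_m < follow_distance_m

def crash_rate(n_scenarios: int, reaction_s: float, seed: int = 0) -> float:
    """Replay randomized hard-brake scenarios; return the crash fraction."""
    rng = random.Random(seed)
    crashes = sum(
        not survives_hard_brake(rng.uniform(5.0, 40.0),   # gap, meters
                                rng.uniform(10.0, 35.0),  # speed, m/s
                                reaction_s)
        for _ in range(n_scenarios)
    )
    return crashes / n_scenarios

# A planner reacting in 0.1 s never crashes under these toy assumptions;
# a distracted 2.0 s reaction crashes often.
print(crash_rate(100_000, reaction_s=0.1))
print(crash_rate(100_000, reaction_s=2.0))
```

The point is not the toy physics but the workflow: any scenario the planner loses becomes a regression test, replayed until it passes.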

Humans, on the other hand, are better at predicting what other humans will do. This is an area of hard research, and all teams want to get better here.

Multiple Mistakes

The report also notes that many human-caused crashes are the result of two or more things going wrong at once. You follow too closely, you glance down at the radio, and the car in front brakes suddenly. You drift out of your lane just as somebody happens to be coming up beside you.

This is the sort of mistake a robot is unlikely to make. Any time robocar developers spot their system doing something wrong, even if it doesn’t cause an incident, they work immediately to fix it. Just drifting out of the lane (which people do all the time) will be a high-priority event, to be fixed quickly. It should not stick around long enough to coincide with another problem. Not never, but a lot less often. The errors of computers will be different from those of humans: one very bad thing, rather than three minor things that add up to something very bad.

On top of that, any time one robot makes a mistake, every other robot will learn from it, often including the cars of other companies if they hear about the problem. Humans don’t learn much from the errors of other people, not that way.

Mechanical & Computer Failure

It’s worth noting that the “10% of accidents not due to human error” is actually another ripe area for robocars to become superhuman. Modern simulators have very good physics, so these cars are constantly tested on, and constantly learn from, every type of vehicle failure you can think of. If there’s no way to prevent the crash under the laws of physics, they’ll crash, but in the least dangerous way. If the crash can be prevented, they will probably know how to do it, something humans will never match.

For computers, there is a form of “incapacitation”: a major hardware or software failure. We’ve all had a computer “crash” (in the metaphoric sense) on us many times, and we don’t want that to cause a literal crash. The developers have seen this too, more than you have, so any good design expects it and accounts for it. Typical designs have two or more independent computer systems able to control the car. The second might not be as fancy as the first, but it will be good enough to get the car off the road safely, or even home safely, with the main computer dead.
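One common pattern behind such designs is a heartbeat watchdog that hands control to the simpler backup when the main computer stops responding. The sketch below is a hypothetical illustration of that idea; the class names, command strings, and timeout are all invented, not any vendor’s actual architecture:

```python
import time

class MainPlanner:
    """Hypothetical primary driving computer."""
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called periodically while the main computer is healthy.
        self.last_heartbeat = time.monotonic()

    def command(self):
        return "full_autonomy_trajectory"

class BackupPlanner:
    """Simpler, independent system: just get the car stopped safely."""
    def command(self):
        return "pull_over_and_stop"

def select_command(main, backup, timeout_s=0.2):
    """Watchdog: if the main computer misses its heartbeat deadline,
    use the backup's minimal-risk command instead."""
    if time.monotonic() - main.last_heartbeat > timeout_s:
        return backup.command()
    return main.command()

main, backup = MainPlanner(), BackupPlanner()
print(select_command(main, backup))   # main alive: full autonomy

main.last_heartbeat -= 1.0            # simulate a frozen main computer
print(select_command(main, backup))   # backup takes over
```

The key design choice is that the backup never depends on the main computer being alive; it only needs the watchdog to notice silence.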

Computer failures come in many types. Oddly, with this approach, the full-on “blue screen” crash is easy — the backups just take over. Harder are failures where it’s not clear there is a problem. That’s where the actual errors will lie.

What’s the real number?

The IIHS could not have gotten the analysis much more wrong. The reality is that the actual number is going to be a safety officer’s decision. Perfection is rarely possible, and instead, developers will be questing for a level of safety we can call “safe enough.” They’ll be constantly improving the system to make each type of error more and more rare until they reach that level of “safe enough.” Then they will deploy on the roads, and continue to improve it from there, to even higher levels of safety.

Many argue over what “safe enough” means. Is it as safe as the average human driver? As safe as a good, sober, awake driver? Twice as good? People don’t yet agree. I believe there is a solid case for the least safe of these thresholds rather than the most safe. That’s because early deployment at the “minimally safe enough” level starts small, and learning accelerates once deployment begins. That learning happens faster the more operation there is on the road, and by the time you reach a serious-sized deployment, the vehicles will have reached the loftier goals. As long as the baseline is “average driver,” the cars have been saving lives and preventing injuries the whole time they were improving, and they get to seriously preventing accidents much sooner than on the more conservative path.

There’s even an argument for matching the level of the “worst acceptable human driver,” which is probably a freshly minted teen driver. After all, we allow inexperienced teens on the road because that’s our main way of turning them into safer middle-aged drivers. People are not very utilitarian in their private thinking, though, so they are unlikely to endorse this strategy even though it reduces road risk the most: there will be those who fall victim to the inevitable incidents of the early stages, and they will naturally not be utilitarian about it. That’s why I have advocated that the right way to study safety is to think about overall risk, not specific incidents.
