As Google and other companies speed ahead in the driverless car race, they continue to encounter myriad surprises, including regulatory roadblocks. But according to Mark Goldfeder, it’s humans, not autonomous car companies, that may get caught like a deer in the headlights.

As we previously reported, California proposed draft rules that would require, among other things, a steering wheel and a licensed driver – trained by the manufacturer and holding a specially issued license – in every autonomous car, creating a major impediment to autonomous vehicle deployment. These safety precautions rest on the theory that a reasonable person is a safer driver than a computer. But what if that theory proves false?

In just the last few weeks, according to Goldfeder, “U.S. safety regulators announced that for the purposes of federal law they would consider the ‘driver’ in Google’s new self-driving car to be … the car itself.” While a giant step forward for the industry, the decision raises the question of how the negligence standard for driving should now be applied.

According to Goldfeder, “the law uses the ‘reasonable driver’ standard in evaluating negligence liability. Simply put, if a driver can show he took as much care as a ‘reasonable driver’ should have taken, he is generally not held liable in case of an accident.” So what happens when computers can respond better on the road than a person? Will human drivers still be able to meet the ‘reasonable driver’ test, or will artificial intelligence and driverless cars mark the end of roadway driving for us humans? Read more from Mark Goldfeder here.

Stay tuned for more twists and turns on this bumpy, but exciting road!