33 – Julian C. Gomez – Autonomous Vehicles: People v. Machines





Summary: In this Trial Lawyer Nation podcast, Michael Cowen sits down with automotive products liability attorney Julian C. Gomez to discuss product cases, specifically those involving autonomous vehicles (a.k.a. robot cars). Many attorneys can relate: the gist of every other talk Michael has heard on this topic, before Julian's, was that robot cars are coming, they will never crash, and they will put everyone out of business in five years. That is certainly what the automotive industry is promising, but the data we have so far suggest otherwise.

Julian got his start in automotive product cases when he clerked for the first judge in the country to try a Ford Explorer/Firestone case. Sitting through that trial and learning from some of the best lawyers in the country sparked his interest and set him on this path. Once he began handling automotive product cases himself, he noticed that engineers were starting to address the legal issues rather than the engineering issues behind them. He points out that the engineering is really not that difficult: the vehicle uses data-gathering devices, feeds the information into a data processor that runs it through an algorithm, and the resulting answer makes the vehicle do something (a minimal sketch of that loop appears later in these notes). Getting too far into the details can overcomplicate things, which Julian compares to the broader area of autonomous vehicles: "I don't have to be a computer engineer to know that my computer is broken or to know that it's working."

Julian then walks through the different levels of crash avoidance technologies (1-6), covering all sides of the vehicle and the range of responses involved, from signaling warnings to taking full control of the vehicle. He explains how the levels begin to blur depending on how much input the human driver provides, and why there really are no "driverless" vehicles on the road today, despite what you hear on the news. He also discusses a recent AAA report on the confusion surrounding the different types of autonomous systems, which stems from the industry and manufacturers lacking a standardized naming structure for them.

Interestingly, Julian explains that the current way level 3-5 autonomous vehicles are measured is through disengagements, instances where the human driver has had to take over instead of letting the car drive itself. By that measure, Apple reported roughly one disengagement every 1.2 miles, while at the opposite end of the spectrum Waymo reported roughly one every 10,000 miles. And while there is a huge disparity between the top and bottom performers, and there have been tragedies across the industry, Julian points out that the real problem is that not enough autonomous miles have been driven to know how safe these vehicles will be. He also weighs the vehicle miles driven each year against the thousands of deaths that occur on the road, and then extrapolates from the point when Uber had its recent fatality, based on the number of miles autonomous cars had driven by then, to conclude that at that rate we would see around 1.6 million deaths each year. He brings the point home by noting that even if you cut that number in half several times, it is still far more than what happens on our roads today.
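To make the arithmetic behind that extrapolation concrete, here is a minimal sketch of the calculation. The mileage figures are illustrative assumptions (rough public estimates), not numbers taken from the episode, so treat the result as an order-of-magnitude illustration rather than Julian's exact math.

```python
# Back-of-the-envelope extrapolation of the kind described in the episode.
# All figures below are illustrative assumptions, not numbers cited by Julian.

autonomous_miles_driven = 2_000_000   # assumed autonomous miles logged when the Uber fatality occurred
fatalities_observed = 1               # the single Uber pedestrian fatality
annual_us_vehicle_miles = 3.2e12      # roughly what all U.S. drivers cover in a year (assumed)

fatality_rate = fatalities_observed / autonomous_miles_driven
projected_deaths_per_year = fatality_rate * annual_us_vehicle_miles

print(f"{projected_deaths_per_year:,.0f} projected deaths per year")
# -> about 1,600,000, versus the tens of thousands of road deaths the U.S. actually records each year
```

The exact inputs matter less than the shape of the argument: even if the assumed fatality rate is cut in half several times, the projected total still dwarfs today's numbers, which is Julian's point.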
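Earlier in the conversation, Julian reduces an autonomous vehicle to data-gathering devices feeding a processor that runs an algorithm and then makes the car do something. A minimal sketch of that sense-process-act loop might look like the following; every name, value, and threshold here is hypothetical and purely for illustration, not how any real system is built.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Hypothetical snapshot from the car's data-gathering devices."""
    distance_to_obstacle_m: float
    own_speed_mps: float

def decide(reading: SensorReading) -> str:
    """The 'algorithm' step: a fixed, objective rule mapping data to an action."""
    # Time to collision if nothing changes (guard against divide-by-zero at a standstill).
    time_to_collision_s = reading.distance_to_obstacle_m / max(reading.own_speed_mps, 0.1)
    if time_to_collision_s < 2.0:   # illustrative threshold, not a real system's value
        return "brake"
    return "maintain_speed"

def actuate(action: str) -> None:
    """The step that 'makes the vehicle do something'."""
    print(f"vehicle command: {action}")

# One pass through the loop: gather data, process it, act on the result.
actuate(decide(SensorReading(distance_to_obstacle_m=12.0, own_speed_mps=10.0)))
# -> vehicle command: brake   (12 m at 10 m/s is 1.2 s to collision)
```

The point of the sketch is that the car's decision comes from a fixed rule applied to measured data, which is exactly the objective system Julian contrasts with a human driver's judgment next.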
Another problem Julian points out is the conflict between the objective, algorithmic system running in the car's computer and the subjective system a human driver uses. He gives a great example: we have all seen, even before we started driving, how cars behave when a driver is planning to turn right (e.g., rolling slowly through the light, even if that is technically not the correct way). As humans, we are able to gauge how much space and time we have between our vehicle and the vehicle turning in front of us,