
Cropped from original photo by Steve Jurvetson (CC-BY-2.0)
Tesla’s newest innovation is raising excitement… and concern.
Updates on the development of autonomous cars—cars that can drive themselves without human intervention—have been popping up in the news regularly for the last few years. One of the more recent headlines came this summer, when the California DMV released reports on the nine accidents involving driverless cars that have occurred in the state since September 2014. The reports showed that the driverless cars weren’t at fault in any of the accidents (in seven of them, the cars were completely stopped or traveling at less than 1 mph), though some have pointed out that the cars may contribute to accidents by driving too conservatively.
Perhaps emboldened by this bit of positive PR, Elon Musk decided to make far larger waves this October. Musk’s company, Tesla, released an update for the software in its Model S cars and Model X SUVs (the cars’ software can be updated wirelessly, much like smartphone OS updates) that introduced a new feature called Autopilot. Autopilot is a program that uses the car’s sensors, which include cameras, radar, sonar, and GPS receivers, to autonomously handle some driving duties. When Autopilot is engaged, the car can steer to stay within a single lane, safely change lanes automatically when the driver hits the turn signal, resist unsafe steering, and adjust speed in response to road traffic. It can also auto-steer to avoid crashes, scan for parking spaces, and parallel park without driver input.
Tesla owners have been excitedly posting demonstration videos to YouTube, such as this one. The videos have revealed some shortcomings, such as the system becoming confused when the car is in a right lane that branches off into an exit. But they have also shown off the system’s strengths, including features not mentioned in Tesla’s announcement. In the video above, you can see that Autopilot uses the car’s cameras to read speed-limit signs and adjust the car’s speed accordingly. A future update will apparently allow the car to also read stop signs and brake automatically as necessary.
However, critics have raised a number of concerns and questions about driverless cars.
These concerns boil down to two key questions: (1) What if the car’s navigation malfunctions and causes an accident? Who will be responsible? And (2) What if there is a situation in which a crash is unavoidable, but the car has options as to what or who to hit, such as a school bus versus a car versus a pedestrian in a crosswalk? How do you program a car to make that difficult ethical decision, and who bears responsibility for the consequences?
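To make the second question concrete, here is a purely hypothetical sketch, in Python, of how a crash-avoidance routine might be forced to rank its options in a no-win scenario. The maneuver names, the numbers, and the cost function are all invented for illustration; nothing here is drawn from Tesla’s or any other manufacturer’s actual software.

```python
# Purely illustrative: a toy "least-harm" chooser for the hypothetical above.
# The maneuvers, counts, and severity figures are invented for this example;
# they are not based on any real autonomous-driving code.

from dataclasses import dataclass


@dataclass
class Option:
    maneuver: str              # e.g. "swerve toward the school bus"
    people_at_risk: int        # how many people would likely be struck
    estimated_severity: float  # 0.0 (minor injury) to 1.0 (fatal), a made-up scale


def expected_harm(option: Option) -> float:
    """Toy cost function: more people at risk and higher severity mean more harm."""
    return option.people_at_risk * option.estimated_severity


def choose_maneuver(options: list[Option]) -> Option:
    """Return whichever programmed option minimizes the toy harm score."""
    return min(options, key=expected_harm)


# The scenario from the question: a school bus, another car, a pedestrian in a crosswalk.
options = [
    Option("swerve into the school bus", people_at_risk=30, estimated_severity=0.4),
    Option("hit the other car", people_at_risk=2, estimated_severity=0.5),
    Option("continue toward the pedestrian", people_at_risk=1, estimated_severity=0.9),
]

print(choose_maneuver(options).maneuver)
```

The point is not that any manufacturer uses this particular formula. It is that someone has to choose the cost function and the numbers, and changing either one changes which option the car "chooses"—which is exactly where the liability questions discussed below begin.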
Tesla’s supporters and detractors are both loudly voicing their opinions through the media. It’s been shown that the software’s ability to operate safely can break down when pushed to its limits by reckless (non-)drivers. But then there are journalists like the one who test drove a Tesla and concluded, “All in all, it was better at driving than I was. It still is. I’m scared.”
But what is the law’s take on Tesla’s new app, and the totally driverless cars that will someday follow?
All Things Legal’s Take on Tesla’s Autopilot Program, and Driverless Cars in General
Craig Ashton launched into the discussion with a brief summary of the latest developments: “Tesla just—last week we talked about it, they can download remotely software to their vehicles which can change the dynamics of the performance—one of the things they just downloaded, which is interacting with the technology already on their vehicles, is the ability to brake and to change lanes autonomously.”
Craig then raised a question put forward by a technology consultant named Tim Bajarin in an article titled “Autonomous Cars and Their Ethical Conundrum”: “So, there was a very interesting article I read… this guy says… ‘I’m a member of AARP and at some point the DMV’s gonna yank my license for age or health reasons and I’m hoping that, basically, autonomous cars come around so I can still get around and not have to worry about driving.’
“He says there’s an issue with the software that they have… this is a question that was posed to a speaker at a Re/code Mobile Conference, [featuring a panel of] people who [write] the code for autonomous vehicles. It says, quote:
“‘Let’s say that I am in a self-driving car. It has full control and the brakes go out. We are about to enter an intersection where a school bus had almost finished turning left, a kid on his bike is in the crosswalk just in front of the car, and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, ‘If I swerve to the left I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady?’ Is it thinking, ‘The bus has many lives on it and the kid on the bike is young and has a long life ahead but the elderly woman has lived a long life so I will take her out,’ as the least onerous solution?’
“So, they’re actually programming this information, and they said it’s interesting in that the outcomes change, and there’s very different behaviors of the vehicles based upon the algorithms and the information they put into these computers… In real terms, if the brakes go out right now, brakes are a ‘non-delegable duty,’ which means that there’s no excuse. If your brakes go out, you’re going to be legally responsible for whatever occurs. But in this case, can you imagine the lawsuit [filed against a] manufacturer, because this is probably going to be a products liability issue, it still hasn’t been fleshed out, because if you’re not driving the car, are you negligent?”
Christopher Price, the other named partner at Ashton & Price, was making a rare appearance in the recording studio and offered his thoughts: “Well, I think the first and [most] fundamental issue is… is there an [option] on the car where you can allow it to drive itself, or can you drive the car? If there’s that [choice], then I think you absolutely are still personally responsible for whatever the car does, much like the brakes. You get behind the wheel, the brakes go out, you’re responsible. If you have elected to allow this car to drive itself, I think you are responsible. And then the next issue is… there certainly should be no question that the software, [like] the brake manufacturer, would all be sued in this. And then, the even bigger issue is… How are [the manufacturers] imposing their morality on me and the car that I’m driving in? Those are split-second decisions that are going to be made by the driver of the car, and they’re gonna vary every single time. So, to have somebody sitting up in a glass booth regulating this type of morality and reaction time seems [very] dangerous to me.”
Craig pointed out that giving the driving software no guidance at all about the least costly choice is itself a serious problem, which only complicates the issue: “Yeah, but I mean, if it’s not in there, it just randomly hits a school bus? I don’t know. It’s an interesting ethical question, and it’s really interesting in that there are legal issues associated to it, and then you’re asking a computer to make an ethical decision.”
Christopher countered this point by saying: “The fact is, it’s not random, and that somebody has programmed it, I think, definitely imposes liability on them. [The manufacturer] has made the choice for you, so therefore they’re going to be held accountable for the choice that they make.”
Craig then posed the question, “And is there criminal liability, because essentially, the car intentionally took out the little old lady, because it made that decision, right?”
Christopher’s answer acknowledged the ambiguity of the hypothetical situation: “It definitely made that decision, but I would say that was the least-negligent of the decisions that [could be] made. If a problem was going to occur, it probably wasn’t intentional that she suffered the brunt of it. The intentional act, if there was one, was the programming of the vehicle.”
Craig then summarized how existing law could cut through the Gordian knot and simplify the situation, or go the other way entirely: “So, in the context of the legal issues, because we haven’t figured all of this out yet, and you made an interesting point. We could say, ‘Look, bottom line is, you elected to have the car drive you…’ We could say going forward that the manufacturer is absolved from responsibility because you’re still operating that vehicle and you’re making decisions to go into the driverless mode, and then our system doesn’t change at all. It basically would have the same liability system that we have, because it’s gonna be the [responsibility of the] owner, not the manufacturer.
“But the other direction could be, ‘Look, I’m not driving it. The car malfunctioned.’ It’s more dangerous than a reasonable consumer would expect, and Tesla’s on the hook, or General Motors is on the hook, or somebody’s on the hook.”
Unsurprisingly, the situation may be decided by where the money is, as Chris pointed out at the close of the discussion: “Well practically speaking, if that’s the case, unless you’re Donald Trump or a billionaire, you’re not going to have the type of insurance or the type of assets that these large companies do, so you’ll be a bit player in what plays out against the manufacturer of the car, or the brakes, or the system, and they’re the ones who are gonna be the deep pockets in the event that something catastrophic happens.”
The conundrum posed by driverless cars is a difficult one to address. But it is certainly going to make for an interesting legal situation, and one that will inevitably come before the court system, sooner or later.