Automotive and technology companies involved in the research and development of autonomous vehicle programs must also weigh the legal implications of manufacturing and designing such vehicles. Analyzing potential liability, however, remains largely speculative, as legal doctrine specific to autonomous vehicles does not yet exist.
A potential foundational case could be filed on behalf of a man killed in a crash last year while using the semi-autonomous driving system on his Tesla, although the family's attorney has indicated that no decision has been made about whether to file suit. The driver was killed when the Tesla, allegedly traveling at 74 miles per hour in a 65 mile per hour zone, collided with a truck. According to a report published by the National Transportation Safety Board, the collision occurred while the vehicle was in "Autopilot" mode, and throughout the trip the system repeatedly gave the driver warnings that said "Hands Required Not Detected," indicating that the driver's hands were not on the steering wheel despite the system's instructions. Additionally, the report noted that during a 37-minute period of the trip when the driver was required to have his hands on the wheel, he did so for only 25 seconds.
The use of autonomous and semi-autonomous vehicles will likely increase as new technologies are developed, as will the frequency of accidents involving those vehicles. Litigating those claims will raise novel questions about the admissibility and reliability of evidence pulled from the vehicles' computer systems, as well as how drivers' actions can be monitored and verified. As the case involving Volvo demonstrates, automobile companies have the means to manipulate reports generated by their vehicles, and the accuracy of the data those systems produce should be heavily scrutinized.