Bryant Walker Smith – Robohub
https://robohub.org

California’s AV testing rules apply to Tesla’s “FSD” (10 January 2022)

Tesla Motors Autopilot (photo: Tesla)

Five years to the day after I criticized Uber for testing its self-proclaimed “self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law, I find myself criticizing Tesla for testing its self-proclaimed “full self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law.

As I emphasized in 2016, California’s rules for “autonomous technology” necessarily apply to inchoate automated driving systems that, in the interest of safety, still use human drivers during on-road testing. “Autonomous vehicles testing with a driver” may be an oxymoron, but as a matter of legislative intent it cannot be a null set.

There is even a way to mortar the longstanding linguistic loophole in California’s legislation: Automated driving systems undergoing development arguably have the “capability to drive a vehicle without the active physical control or monitoring by a human operator” even though they do not yet have the demonstrated capability to do so safely. Hence the human driver.

(An imperfect analogy: Some kids can drive vehicles, but it’s less clear they can do so safely.)

When supervised by that (adult) human driver, these nascent systems function like the advanced driver assistance features available in many vehicles today: They merely work unless and until they don’t. This is why I distinguish between the aspirational level (what the developer hopes its system can eventually achieve) and the functional level (what the developer assumes its system can currently achieve).

(SAE J3016, the source for the (in)famous levels of driving automation, similarly notes that “it is incorrect to classify” an automated driving feature as a driver assistance feature “simply because on-road testing requires” driver supervision. The version of J3016 referenced in regulations issued by the California Department of Motor Vehicles does not contain this language, but subsequent versions do.)

The second part of my analysis has developed as Tesla’s engineering and marketing have become more aggressive.

Back in 2016, I distinguished Uber’s AVs from Tesla’s Autopilot. While Uber’s AVs were clearly on the automated-driving side of a blurry line, the same was not necessarily true of Tesla’s Autopilot:

In some ways, the two are similar: In both cases, a human driver is (supposed to be) closely supervising the performance of the driving automation system and intervening when appropriate, and in both cases the developer is collecting data to further develop its system with a view toward a higher level of automation.

In other ways, however, Uber and Tesla diverge. Uber calls its vehicles self-driving; Tesla does not. Uber’s test vehicles are on roads for the express purpose of developing and demonstrating its technologies; Tesla’s production vehicles are on roads principally because their occupants want to go somewhere.

Like Uber then, Tesla now uses the term “self-driving.” And not just self-driving: full self-driving. (This may have pushed Waymo to call its vehicles “fully driverless”—a term that is questionable and yet still far more defensible. Perhaps “fully” is the English language’s new “very.”)

Tesla’s use of “FSD” is, shall we say, very misleading. After all, its “full self-driving” cars still need human drivers. In a letter to the California DMV, the company characterized “FSD” as a level two driver assistance feature. And I agree, to a point: “FSD” is functionally a driver assistance system. For safety reasons, it clearly requires supervision by an attentive human driver.

At the same time, “FSD” is aspirationally an automated driving system. The name unequivocally communicates Tesla’s goal for development, and the company’s “beta” qualifier communicates the stage of that development. Tesla intends for its “full self-driving” to become, well, full self-driving, and its limited beta release is a key step in that process.

And so while Tesla’s vehicles are still on roads principally because their occupants want to go somewhere, “FSD” is on a select few of those vehicles because Tesla wants to further develop—we might say “test”—it. In the words of Tesla’s CEO: “It is impossible to test all hardware configs in all conditions with internal QA, hence public beta.”

Tesla’s instructions to its select beta testers show that Tesla is enlisting them in this testing. Since the beta software “may do the wrong thing at the worst time,” drivers should “always keep your hands on the wheel and pay extra attention to the road. Do not become complacent…. Use Full Self-Driving in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately….”

California’s legislature envisions a similar role for the test drivers of “autonomous vehicles”: They “shall be seated in the driver’s seat, monitoring the safe operation of the autonomous vehicle, and capable of taking over immediate manual control of the autonomous vehicle in the event of an autonomous technology failure or other emergency.” These drivers, by the way, can be “employees, contractors, or other persons designated by the manufacturer of the autonomous technology.”

Putting this all together:

  1. Tesla is developing an automated driving system that it calls “full self-driving.”
  2. Tesla’s development process involves testing “beta” versions of “FSD” on public roads.
  3. Tesla carries out this testing at least in part through a select group of designated customers.
  4. Tesla instructs these customers to carefully supervise the operation of “FSD.”

Tesla’s “FSD” has the “capability to drive a vehicle without the active physical control or monitoring by a human operator,” but it does not yet have the capability to do so safely. Hence the human drivers. And the testing. On public roads. In California. For which the state has a specific law. That Tesla is not following.

As I’ve repeatedly noted, the line between testing and deployment is not clear—and is only getting fuzzier in light of over-the-air updates, beta releases, pilot projects, and commercial demonstrations. Over the last decade, California’s DMV has performed admirably in fashioning rules, and even refashioning itself, to do what the state’s legislature told it to do. The issues that it now faces with Tesla’s “FSD” are especially challenging and unavoidably contentious.

But what is increasingly clear is that Tesla is testing its inchoate automated driving system on California roads. And so it is reasonable—and indeed prudent—for California’s DMV to require Tesla to follow the same rules that apply to every other company testing an automated driving system in the state.

Tesla’s fatal crash (26 April 2018)

Source: Tesla

Tesla can do better than its current public response to the recent fatal crash involving one of its vehicles. I would like to see more introspection, credibility, and nuance.

Introspection

Over the last few weeks, Tesla has blamed the deceased driver and a damaged highway crash attenuator while lauding the performance of Autopilot, its SAE level 2 driver assistance system that appears to have directed a Model X into the attenuator. The company has also disavowed its own responsibility: “The fundamental premise of both moral and legal liability is a broken promise, and there was none here.”

In Tesla’s telling, the driver knew he should pay attention, he did not pay attention, and he died. End of story. The same logic would seem to apply if the driver had hit a pedestrian instead of a crash barrier. Or if an automaker had marketed an outrageously dangerous car accompanied by a warning that the car was, in fact, outrageously dangerous. In the 1980 comedy Airplane!, a television commentator dismisses the passengers on a distressed airliner: “They bought their tickets. They knew what they were getting into. I say let ‘em crash.” As a rule, it’s probably best not to evoke a character in a Leslie Nielsen movie.

It may well turn out that the driver in this crash was inattentive, just as the US National Transportation Safety Board (NTSB) concluded that the Tesla driver in an earlier fatal Florida crash was inattentive. But driver inattention is foreseeable (and foreseen), and “[j]ust because a driver does something stupid doesn’t mean they – or others who are truly blameless – should be condemned to an otherwise preventable death.” Indeed, Ralph Nader’s argument that vehicle crashes are foreseeable and could be survivable led Congress to establish the National Highway Traffic Safety Administration (NHTSA).

Airbags are a particularly relevant example. Airbags are unquestionably a beneficial safety technology. But early airbags were designed for average-size male drivers—a design choice that endangered children and lighter adults. When this risk was discovered, responsible companies did not insist that because an airbag is safer than no airbag, nothing more should be expected of them. Instead, they designed second-generation airbags that are safer for everyone.

Similarly, an introspective company—and, for that matter, an inquisitive jury—would ask whether and how Tesla’s crash could have been reasonably prevented. Tesla has appropriately noted that Autopilot is neither “perfect” nor “reliable,” and the company is correct that the promise of a level 2 system is merely that the system will work unless and until it does not. Furthermore, individual autonomy is an important societal interest, and driver responsibility is a critical element of road traffic safety. But it is because driver responsibility remains so important that Tesla should consider more effective ways of engaging and otherwise managing the imperfect human drivers on which the safe operation of its vehicles still depends.

Such an approach might include other ways of detecting driver engagement. NTSB has previously expressed its concern over using only steering wheel torque as a proxy for driver attention. And GM’s own level 2 system, Super Cruise, tracks driver head position.

Such an approach may also include more aggressive measures to deter distraction. Tesla could alert law enforcement when drivers are behaving dangerously. It could also distinguish safety features from convenience features—and then more stringently condition convenience on the concurrent attention of the driver. For example, active lane keeping (which might ping pong the vehicle between lane boundaries) could enhance safety even if active lane centering is not operative. Similarly, automatic deceleration could enhance safety even if automatic acceleration is inoperative.

NTSB’s ongoing investigation is an opportunity to credibly address these issues. Unfortunately, after publicly stating its own conclusions about the crash, Tesla is no longer formally participating in NTSB’s investigation. Tesla faults NTSB for this outcome: “It’s been clear in our conversations with the NTSB that they’re more concerned with press headlines than actually promoting safety.” That is not my impression of the people at NTSB. Regardless, Tesla’s argument might be more credible if it did not continue what seems to be the company’s pattern of blaming others.

Credibility

Tesla could also improve its credibility by appropriately qualifying and substantiating what it says. Unfortunately, Tesla’s claims about the relative safety of its vehicles still range from “lacking” to “ludicrous on their face.” (Here are some recent views.)

Tesla repeatedly emphasizes that “our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%.” NHTSA reached its conclusion after (somehow) analyzing Tesla’s data—data that both Tesla and NHTSA have kept from public view. Accordingly, I don’t know whether the underlying math actually took only five minutes, but I can attempt some crude reverse engineering to complement the thoughtful analyses already done by others.

Let’s start with NHTSA’s summary: The Office of Defects Investigation (ODI) “analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. [An accompanying chart] shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation”—from 1.3 to 0.8 crashes per million miles.

This raises at least two questions. First, how do these rates compare to those for other vehicles? Second, what explains the asserted decline?

Comparing Tesla’s rates is especially difficult because of a qualification that NHTSA’s report mentions only once and that Tesla’s statements do not acknowledge at all. The rates calculated by NHTSA are for “airbag deployment crashes” only—a category that NHTSA does not generally track for nonfatal crashes.

NHTSA does estimate rates at which vehicles are involved in crashes. (For a fair comparison, I look at crashed vehicles rather than crashes.) With respect to crashes resulting in injury, 2015 rates were 0.88 crashes per million miles for light trucks and 1.26 for passenger cars. And with respect to property-damage only crashes, they were 2.35 for light trucks and 3.12 for passenger cars. This means that, depending on the correlation between airbag deployment and crash injury (and accounting for the increasing number and sophistication of airbags), Tesla’s rates could be better than, worse than, or comparable to these national estimates.

Airbag deployment is a complex topic, but the upshot is that, by design, airbags do not always inflate. An analysis by the Pennsylvania Department of Transportation suggests that airbags deploy in less than half of the airbag-equipped vehicles that are involved in reported crashes, which are generally crashes that cause physical injury or significant property damage. (The report’s shift from reportable crashes to reported crashes creates some uncertainty, but let’s assume that any crash that results in the deployment of an airbag is serious enough to be counted.)

Data from the same analysis show about two reported crashed vehicles per million miles traveled. Assuming a deployment rate of 50 percent suggests that a vehicle deploys an airbag in a crash about once every million miles that it travels, which is roughly comparable to Tesla’s post-Autopilot rate.
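To make that back-of-the-envelope comparison explicit, here is a minimal sketch of the arithmetic. The crashed-vehicle rate and the 50 percent deployment figure are the Pennsylvania-based assumptions described above, not Tesla data.

```python
# Rough reverse engineering of an airbag-deployment crash rate from the
# Pennsylvania figures discussed above (illustrative assumptions, not Tesla data).
reported_crashed_vehicles_per_million_miles = 2.0  # ~2 reported crashed vehicles per million miles
assumed_deployment_rate = 0.5                      # assume airbags deploy in about half of those vehicles

airbag_crashes_per_million_miles = (
    reported_crashed_vehicles_per_million_miles * assumed_deployment_rate
)
print(airbag_crashes_per_million_miles)  # ~1.0 airbag deployment crash per million miles

# NHTSA's reported post-Autosteer rate for the subject Tesla vehicles, for comparison.
tesla_post_autosteer_rate = 0.8
print(tesla_post_autosteer_rate / airbag_crashes_per_million_miles)  # ~0.8, i.e., roughly comparable
```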

Indeed, at least two groups with access to empirical data—the Highway Loss Data Institute and AAA – The Auto Club Group—have concluded that Tesla vehicles do not have a low claim rate (in addition to having a high average cost per claim), which suggests that these vehicles do not have a low crash rate either.

Tesla offers fatality rates as another point of comparison: “In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”

In 2016, there was one fatality for every 85 million vehicle miles traveled—close to the number cited by Tesla. For that same year, NHTSA’s FARS database shows 14 fatalities across 13 crashes involving Tesla vehicles. (Ten of these vehicles were model year 2015 or later; I don’t know whether Autopilot was equipped at the time of the crash.) By the end of 2016, Tesla vehicles had logged about 3.5 billion miles worldwide. If we accordingly assume that Tesla vehicles traveled 2 billion miles in the United States in 2016 (less than one tenth of one percent of US VMT), we can estimate one fatality for every 150 million miles traveled.
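The estimate in the preceding paragraph can be reproduced in a few lines; the 2-billion-mile figure is an assumed value, as noted above.

```python
# Crude reverse engineering of a Tesla-involved fatality rate for 2016.
# The 2-billion-mile figure is an assumption, as explained in the text above.
fars_fatalities_2016 = 14            # fatalities in FARS crashes involving Tesla vehicles (2016)
assumed_tesla_us_miles_2016 = 2e9    # assumed US miles traveled by Tesla vehicles in 2016

miles_per_fatality = assumed_tesla_us_miles_2016 / fars_fatalities_2016
print(miles_per_fatality / 1e6)      # ~143 million miles per fatality, i.e., roughly 150 million

# National benchmark for the same year: about one fatality per 85 million vehicle miles traveled.
us_miles_per_fatality = 85e6
print(miles_per_fatality / us_miles_per_fatality)  # ~1.7, well short of the 3.7 figure Tesla cites
```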

It is not surprising if Tesla’s vehicles are less likely to be involved in a fatal crash than the US vehicle fleet in its entirety. That fleet, after all, has an average age of more than a decade. It includes vehicles without electronic stability control, vehicles with bald tires, vehicles without airbags, and motorcycles. Differences between crashes involving a Tesla vehicle and crashes involving no Tesla vehicles could therefore have nothing to do with Autopilot.

More surprising is the statement that Tesla vehicles equipped with Autopilot are much safer than Tesla vehicles without Autopilot. At the outset, we don’t know how often Autopilot was actually engaged (rather than merely equipped), we don’t know the period of comparison (even though crash and injury rates fluctuate over the calendar year), and we don’t even know whether this conclusion is statistically significant. Nonetheless, on the assumption that the unreleased data support this conclusion, let’s consider three potential explanations:

First, perhaps Autopilot is incredibly safe. If we assume (again, because we just don’t know otherwise) that Autopilot is actually engaged for half of the miles traveled by vehicles on which it is installed, then a 40 percent reduction in airbag deployments per million miles really means an 80 percent reduction in airbag deployments while Autopilot is engaged. Pennsylvania data show that about 20 percent of vehicles in reported crashes are struck in the rear, and if we further assume that Autopilot would rarely prevent another vehicle from rear-ending a Tesla, then Autopilot would essentially need to prevent every other kind of crash while engaged in order to achieve such a result.
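A minimal sketch of that arithmetic, under the stated (and admittedly arbitrary) assumptions about engagement share and rear-end crashes:

```python
# What NHTSA's before/after rates would imply if Autopilot were engaged for
# half of all post-installation miles (an assumption, since Tesla has not said).
rate_before = 1.3                 # airbag deployment crashes per million miles before Autosteer (ODI)
rate_after = 0.8                  # after Autosteer
assumed_engaged_share = 0.5       # assumed share of miles with Autopilot actually engaged

# Solve rate_after = (1 - share) * rate_before + share * rate_engaged for rate_engaged.
rate_engaged = (rate_after - (1 - assumed_engaged_share) * rate_before) / assumed_engaged_share
print(rate_engaged)                          # ~0.3 per million miles while engaged
print(1 - rate_engaged / rate_before)        # ~0.77, i.e., roughly an 80 percent reduction while engaged

# Pennsylvania data: ~20 percent of crashed vehicles are struck in the rear, a crash
# type assumed here to be largely unavoidable for the struck vehicle.
rear_rate = 0.2 * rate_before                # ~0.26 per million miles
other_rate_engaged = rate_engaged - rear_rate
print(other_rate_engaged / (rate_before - rear_rate))  # ~0.04: nearly every other kind of crash prevented
```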

Second, perhaps Tesla’s vehicles had a significant performance issue that the company corrected in an over-the-air update at or around the same time that it introduced Autopilot. I doubt this—but the data released are as consistent with this conclusion as with a more favorable one.

Third, perhaps Tesla introduced or upgraded other safety features in one of these OTA updates. Indeed, Tesla added automatic emergency braking and blind spot warning about half a year before releasing Autopilot, and Autopilot itself includes side collision avoidance. Because these features may function even when Autopilot is not engaged and might not induce inattention to the same extent as Autopilot, they should be distinguished from rather than conflated with Autopilot. I can see an argument that more people will be willing to pay for convenience plus safety than for just safety alone, but I have not seen Tesla make this more nuanced argument.

Nuance

In general, Tesla should embrace more nuance. Currently, the company’s explicit and implicit messages regarding this fatal crash have tended toward the absolute. The driver was at fault—and therefore Tesla was not. Autopilot improves safety—and therefore criticism is unwarranted. The company needs to be able to communicate with the public about Autopilot—and therefore it should share specific and, in Tesla’s view, exculpatory information about the crash that NTSB is investigating.

Tesla understands nuance. Indeed, in its statement regarding its relationship with NTSB, the company noted that “we will continue to provide technical assistance to the NTSB.” Tesla should embrace a systems approach to road traffic safety and acknowledge the role that the company can play in addressing distraction. It should emphasize the limitations of Autopilot as vigorously as it highlights the potential of automation. And it should cooperate with NTSB while showing that it “believe[s] in transparency” by releasing data that do not pertain specifically to this crash but that do support the company’s broader safety claims.

For good measure, Tesla should also release a voluntary safety self-assessment. (Waymo and General Motors have.) Autopilot is not an automated driving system, but that is where Tesla hopes to go. And by communicating with introspection, credibility, and nuance, the company can help make sure the public is on board.

The Senate’s automated driving bill could squash state authority (31 October 2017)

My previous post on the House and Senate automated driving bills (HB 3388 and SB 1885) concluded by noting that, in addition to the federal government, states and the municipalities within them also play an important role in regulating road safety. These numerous functions involve, among others, designing and maintaining roads, setting and enforcing traffic laws, licensing and punishing drivers, registering and inspecting vehicles, requiring and regulating automotive insurance, and enabling victims to recover from the drivers or manufacturers responsible for their injuries.

Unfortunately, the Senate bill could preempt many of these functions. The House bill contains modest preemption language and a savings clause that admirably tries to clarify the line between federal and state roles. The Senate bill, in contrast, currently contains a breathtakingly broad preemption provision that was proposed in committee markup by, curiously, a Democratic senator.

(I say “currently” for two reasons. First, a single text of the bill is not available online; only the original text plus the marked-up texts for the Senate Commerce Committee’s amendments to that original have been posted. Second, whereas HB 3388 has passed the full House, SB 1885 is still making its way through the Senate.)

Under one of these amendments to the Senate bill, “[n]o State or political subdivision of a State may adopt, maintain, or enforce any law, rule, or standard regulating the design, construction, or performance of a highly automated vehicle or automated driving system with respect to any of the safety evaluation report subject areas.” These areas are system safety, data recording, cybersecurity, human-machine interface, crashworthiness, capabilities, post-crash behavior, accounting for applicable laws, and automation function.

A savings provision like the one in the House bill was in the original Senate bill but apparently dropped in committee.

A plain reading of this language suggests that all kinds of state and local laws would be void in the context of automated driving. Restrictions on what kind of data can be collected by motor vehicles? Fine for conventional driving, but preempted for automated driving. Penalties for speeding? Fine for conventional driving, but preempted for automated driving. Deregistration of an unsafe vehicle? Same.

The Senate language could have an even more subtly dramatic effect on state personal injury law. Under existing federal law, FMVSS compliance “does not exempt a person from liability at common law.” (The U.S. Supreme Court has fabulously muddied what this provision actually means by, in two cases, reaching essentially opposite conclusions about whether a jury could find a manufacturer liable under state law for injuries caused by a vehicle design that was consistent with applicable FMVSS.)

The Senate bill preserves this statutory language (whatever it means) and even adds a second sentence providing that “nothing” in the automated driving preemption section “shall exempt a person from liability at common law or under a State statute authorizing a civil remedy for damages or other monetary relief.”

Although this would seem to reinforce the power of a jury to determine what is reasonable in a civil suit, the Senate bill makes this second sentence “subject to” the breathtakingly broad preemption language described above. On its plain meaning, this language accordingly restricts rather than respects state tort and product liability law.

This is confusing (whether intentionally or unintentionally), so consider a stylized illustration:

1) You may not use the television.

2) Subject to (1), you may watch The Simpsons.

This language probably bars you from watching The Simpsons (at least on the television). If the intent were instead to permit you to do so, the language would be:

1) You may not use the television.

2) Notwithstanding (1), you may watch The Simpsons.

The amendment as proposed could have said “notwithstanding” instead of “subject to.” It did not.

I do not know the intent of the senators who voted for this automated driving bill and for this amendment to it. They may have intended a result other than the one suggested by their language. Indeed, they may have even addressed these issues without recording the result in the documents subsequently released. If so, they should make these changes, or they should make their changes public.

And if not, everyone from Congress to City Hall should consider what this massive preemption would mean.

Congress’ automated driving bills are both more and less than they seem (27 October 2017)

Bills being considered by Congress deserve our attention—but not our full attention. To wit: When it comes to safety-related regulation of automated driving, existing law is at least as important as the bills currently in Congress (HB 3388 and SB 1885). Understanding why involves examining all the ways that the developer of an automated driving system might deploy its system in accordance with federal law as well as all the ways that governments might regulate that system. And this examination reveals some critical surprises.

As automated driving systems get closer to public deployment, their developers are closely evaluating how the full set of Federal Motor Vehicle Safety Standards (FMVSS) will apply to these systems and to the vehicles on which they are installed. Rather than specifying a comprehensive regulatory framework, these standards impose requirements on only some automotive features and functions. Furthermore, manufacturers of vehicles and of components thereof self-certify that their products comply with these standards. In other words, unlike its European counterparts (and a small number of federal agencies overseeing products deemed more dangerous than motor vehicles), the National Highway Traffic Safety Administration (NHTSA) does not prospectively approve most of the products it regulates.

There are at least seven (!) ways that the developer of an automated driving system could conceivably navigate this regulatory regime.

First, the developer might design its automated driving system to comply with a restrictive interpretation of the FMVSS. The attendant vehicle would likely have conventional braking and steering mechanisms as well as other accoutrements for an ordinary human driver. (These conventional mechanisms could be usable, as on a vehicle with only part-time automation, or they might be provided solely for compliance.) NHTSA implied this approach in its 2016 correspondence with Google, while another part of the US Department of Transportation even highlighted those specific FMVSS provisions that a developer would need to design around. Once the developer self-certifies that its system in fact complies with the FMVSS, it can market it.

Second, the developer might ask NHTSA to clarify the agency’s understanding of these provisions with a view toward obtaining a more accommodating interpretation. Previously—and, more to the point, under the previous administration—NHTSA was somewhat restrictive in its interpretation, but a new chief counsel might reach a different conclusion about whether and how the existing standards apply to automated driving. In that case, the developer could again simply self-certify that its system indeed complies with the FMVSS.

Third, the developer might petition NHTSA to amend the FMVSS to more clearly address (or expressly abstain from addressing) automated driving systems. This rulemaking process would be lengthy (measured in years rather than months), but a favorable result would give the developer even more confidence in self-certifying its system.

Fourth, the developer could lobby Congress to shorten this process—or preordain the result—by expressly accommodating automated driving systems in a statute rather than in an agency rule. This is not, by the way, what the bills currently in Congress would do.

Fifth, the developer could request that NHTSA exempt some of its vehicles from portions of the FMVSS. This exemption process, which is prospective approval by another name, requires the applicant to demonstrate that the safety level of its feature or vehicle “at least equals the safety level of the standard.” Under existing law, the developer could exempt no more than 2,500 new vehicles per year. Notably, however, this could include heavy trucks as well as passenger cars.

Sixth, the developer could initially deploy its vehicles “solely for purposes of testing or evaluation” without self-certifying that those vehicles comply with the FMVSS. Although this exception is available only to established automotive manufacturers, a new or recent entrant could partner with or outright buy one of the companies in that category. Many kinds of large-scale pilot and demonstration projects could be plausibly described as “testing or evaluation,” particularly by companies that are comfortable losing money (or comfortable describing their services as “beta”) for years on end.

Seventh, the developer could ignore the FMVSS altogether. Under federal law, “a person may not manufacture for sale, sell, offer for sale, introduce or deliver for introduction in interstate commerce, or import into the United States, any [noncomplying] motor vehicle or motor vehicle equipment.” But under the plain meaning of this provision (and a related definition of “interstate commerce”), a developer could operate a fleet of vehicles equipped with its own automated driving system within a state without certifying that those vehicles comply with the FMVSS.

This is the background law against which Congress might legislate—and against which its bills should be evaluated.

Both bills would dramatically expand the number of exemptions that NHTSA could grant to each manufacturer, eventually reaching 100,000 per year in the House version. Some critics of the bills have suggested that this would give free rein to manufacturers to deploy tens of thousands of automated vehicles without any prior approval.

But considering this provision in context provides two key insights. First, automated driving developers may already be able to lawfully deploy tens of thousands of their vehicles without any prior approval—by designing them to comply with the FMVSS, by claiming testing or evaluation, or by deploying an in-state service. Second, the exemption process gives NHTSA far more power than it otherwise has: The applicant must convince the agency to affirmatively permit it to market its system.

Both bills would also require the manufacturer of an automated driving system to submit a “safety evaluation report” to NHTSA that “describes how the manufacturer is addressing the safety of such vehicle or system.” This requirement would formalize the safety assessment letters that NHTSA encouraged in its 2016 and 2017 automated vehicle policies. These three frameworks all evoke my earlier proposal for what I call the “public safety case,” wherein an automated driving developer tells the rest of us what they are doing, why they think it is reasonably safe, and why we should believe them.

Unsurprisingly, I think this is a fine idea. It encourages innovation in safety assurance and regulation, informs regulators, and—if disclosure is meaningful—helps educate the public at large. Congress could strengthen these provisions as currently drafted, and it could give NHTSA the resources needed to effectively engage with these reports. Regardless, in evaluating the bills, it is important to understand that these provisions increase rather than decrease what an automated driving system developer must do under federal law. They are an addition rather than an alternative to each of the seven pathways described above.

Both bills would also exclude heavy trucks and buses from their definitions of automated vehicle. This exclusion, added at the behest of labor groups concerned about the eventual implications of commercial truck automation, means that NHTSA cannot exempt tens of thousands of heavy vehicles per manufacturer from a safety standard. But each truck manufacturer can still seek to exempt up to 2,500 vehicles per year—if such an exemption is even required. And, depending on how language relating to the safety evaluation reports is interpreted, this exemption might even relieve automated truck manufacturers of the obligation to submit these reports.

Finally, these bills largely preserve NHTSA’s existing regulatory authority—and that authority involves much more than making rules and granting exemptions to those rules. Crucially, the agency can conduct investigations and pursue recalls—even if a vehicle fully complies with the applicable FMVSS. This is because ensuring motor vehicle safety requires more than satisfying specific safety standards. And this broader definition of safety—“the performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident, and includes nonoperational safety of a motor vehicle”—gives NHTSA great power.

States and the municipalities within them also play an important role in regulating road safety—and my next post considers the effect of the Senate bill in particular on this state and local authority.

Georgia and Virginia legislation for automated driving and delivery robots (9 February 2017)

How Governments Can Promote Automated Driving recommended that governments conduct “legal audits” to “identify and analyze every statute and regulation that could apply adversely or ambiguously to automated driving.” Automated Vehicles Are Probably Legal in the United States attempted this nationwide, and now the authors of Georgia’s HB 248 have produced a bill that (while not perfect) reflects a thoughtful effort to do the same in that state.

HB 248 incorporates terms and concepts from SAE J3016, maps a variety of existing legal provisions onto automated driving, and designates the manufacturer of a vehicle operated by an automated driving system as the vehicle’s sole driver.

The authors could have defined this driver as the developer of the automated driving system rather than the manufacturer of the platform vehicle. However, like other “SAVe” acts, this bill would limit its special legal framework to motor vehicle manufacturers that deploy their vehicles as part of fleets within specific geographic areas.

An open question is whether this special framework would preclude other developers of or approaches to automated driving. HB 248 need not — and should not — establish an exclusive legal framework for all automated driving applications in the state. This is because automated driving involves credible developers other than vehicle manufacturers as well as potential approaches other than truly driverless systems (even though these driverless systems remain my favorite).

Michigan actually did enact a “SAVe” law limited to manufacturers, although a companion bill relaxed the limitation somewhat, and in any case, that state’s simultaneous passage of multiple bills seems to suggest that the “SAVe” framework is not the only one under which automated driving could lawfully occur. (But it’s hard to know for sure; those Michigan laws are a collective mess.) A similar bill pending in Maryland, however, would have a clearly restrictive effect.

While you’re browsing legislation, check out Virginia’s bill on “electric personal delivery devices.” Sidewalk delivery robots are exciting in their own right — and they provide an important reminder that transportation automation is much more than just cars.

My publications are at newlypossible.org.

Uber must follow California’s laws (19 December 2016)

Source: Uber

Uber is testing its self-proclaimed “self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law. California’s Department of Motor Vehicles says that Uber is breaking that law; Uber says it’s not. The DMV is correct.

Uber’s argument is textually plausible but contextually untenable. It exploits a drafting problem that I highlighted first when Nevada was developing its automated driving regulations and again when California enacted a statute modeled on those regulations. California’s statute defines “autonomous technology” as “technology that has the capability to drive a vehicle without the active physical control or monitoring by a human operator”—and yet the purpose of testing such a technology is to develop or demonstrate that consistent capability. Indeed, the testing provisions of the statute even require a human to actively monitor a vehicle that, by definition, doesn’t need active human monitoring.

This linguistic loophole notwithstanding, the testing requirements of California’s law were intended to apply to aspirationally automated driving. If not, then those requirements would not reach any vehicle being tested when the law was enacted, any vehicle being tested today, or any “test” vehicle whatsoever.

Uber understandably analogizes its activities to the deployment of Tesla’s Autopilot. In some ways, the two are similar: In both cases, a human driver is (supposed to be) closely supervising the performance of the driving automation system and intervening when appropriate, and in both cases the developer is collecting data to further develop its system with a view toward a higher level of automation.

In other ways, however, Uber and Tesla diverge. Uber calls its vehicles self-driving; Tesla does not. Uber’s test vehicles are on roads for the express purpose of developing and demonstrating its technologies; Tesla’s production vehicles are on roads principally because their occupants want to go somewhere.

The line between testing and deployment is blurry—and will only become more so as over-the-air updates and pilot projects become more common. Nonetheless, Uber’s activities are comfortably (or, for Uber, uncomfortably) on the side of testing.

If I were advising Uber, I’d ask what its end game is and how its current posture in California advances that end game. Is it trying to create facts on the ground to which law may eventually conform? Establish a legal foundation for an argument about the legality in California of remotely monitored vehicles? Signal once again that it won’t be bound by conventional norms (even legal norms)?

Uber has noted that “real world testing on public roads is essential … to gain public trust,” and yet California’s law was in large part about building this trust—in the technologies and their developers and among regulators and the public they serve. The statutory and regulatory testing provisions are far from perfect, but the registration and reporting requirements that Uber has eschewed seek the transparency through which trust can be earned. In contrast, Uber’s current posture is not building trust in its technologies, its practices, or its philosophy.

That posture is also antagonizing California’s DMV and Office of the Attorney General. The state’s lawyers may well ask a judge for an injunction ordering Uber and its employees to stop their testing in the state. If a judge were to issue a temporary or permanent injunction, then Uber and any employees to whom the injunction applied would be held in contempt if they nonetheless continued to test. That could subject them to fines, jail, and a misdemeanor charge.

Remedies beyond an injunction may also be available to the state (and to others with an interest in Uber’s activities). The DMV, Highway Patrol, and other state agencies could conceivably consider how a variety of statutory provisions apply (beyond those specific to automated driving), including revocation of vehicle registration, the obligation to comply with a peace officer’s lawful order, the prohibition against reckless driving, the prohibition against violating any provision of the vehicle code (including the automated driving provisions), the prohibition against causing unlawful operation, the prohibition against driving with certain video displays active, and even the (dubious) penal code prohibition against conspiring to “commit any act injurious to the public health, to public morals, or to pervert or obstruct justice, or the due administration of the laws.”

(A side note: Violations of the vehicle code are punishable in part through driving license points, and the testing regulations at issue here prohibit test drivers with more than one point on their license.)

California directs that provisions of its penal code are to be “construed according to the fair import of their terms, with a view to effect its objects and to promote justice.” If the same construction is applied to the vehicle code, then some of the above statutory provisions may be a poor fit for Uber’s conduct—but then so too is Uber’s strictly textual argument.

A law is often susceptible to a range of reasonable interpretations. Existing law in most states, for example, is probably consistent with many forms of automated driving—but the legal interpretations on which that conclusion rests can depend as much on the public perception of automated driving as they do on the specific language of the relevant laws. (Even so, Michigan in particular should take note here of the pitfalls of muddled laws.) Here, too, trust matters.

Against this flexible background, laws specific to automated driving can actually be more restrictive than permissive. This is certainly true in California, and although I’ve criticized the regulations that unduly limit automated driving testing, that is the state’s prerogative. Uber does not need to test in California, but if it does, it must follow California’s laws.

Michigan’s automated driving bills (26 September 2016)

Michigan’s Senate is reviewing several bills related to automated driving. SB 995, 996, 997, and 998 are now out of committee, and SB 927 and 928 are not far behind. These bills seem to be a mixed bag. Critically, they are in desperate need of clarification followed by thoughtful discussion.

Editor’s note: Please note that this article refers to the versions that came out of the senate before moving to the house, where they have been slightly modified.

What’s good

SB 995 would repeal the state’s express ban on automated driving generally — a three-year-old anachronism that has frustrated officials promoting Michigan to developers of automated driving systems and that could eventually frustrate early efforts to actually deploy those systems. At a minimum, this would return Michigan law to flexible ambiguity on the question of the legality of automated driving in general. The bill probably goes even further by expressly authorizing automated driving: It provides that “[a]n automated motor vehicle may be operated on a street or highway on this state,” and the summary of the bill as reported from committee similarly concludes that SB 995 would “[a]llow an automated motor vehicle to be operated on a street or highway in Michigan.” (This provision is somewhat confusing because it would be added to an existing statutory section that currently addresses only research and testing and because it would seem to subvert many proposed restrictions on research tests and “on-demand automated motor vehicle networks.”) Regardless, this bill would also exempt groups of closely spaced and tightly coordinated vehicles from certain following-distance requirements that are incompatible with platooning. Furthermore, by using key definitions from SAE J3016, SB 995 would also help to align legal language with credible technical language.

What’s weird

SB 995 and 996 may or may not give recognized manufacturers of motor vehicles a special driverless taxi privilege, and they may or may not disadvantage companies that cannot partner with such a manufacturer — but they definitely do add unnecessary confusion. Together, these two bills expressly authorize “on-demand automated motor vehicle networks” that involve a recognized motor vehicle manufacturer in some capacity. Under the bills as originally drafted, only these manufacturers would have been eligible to “participate” in those networks. This would have meant that General Motors could run an “on-demand automated motor vehicle network” while Google and Uber could not. Although nothing in the original bills would have explicitly — or even, at least arguably, implicitly — prohibited other driverless taxi services, these services would not have qualified as “on-demand motor vehicle networks” and would not have benefitted from an express authorization. As revised, however, SB 995 now broadens the scope of “on-demand automated motor vehicle networks” to include both those in which a vehicle manufacturer is the only participant (still called a “SAVE Project”) and those in which such a manufacturer merely “supplie[s] or control[s]” the vehicles used therein (not called a “SAVE Project”).

Making sense of all this is difficult. The currently proposed language could mean that automated driving is lawful only in the context of research and development and “on-demand motor vehicle networks.” Or it could mean that automated driving is lawful generally and that these networks are subject to more restrictive requirements. It could mean that any company could run a driverless taxi service, including motor vehicle manufacturers that might otherwise face unrelated and unspecified legal impediments. Or it could mean that a company seeking to run a driverless taxi service must partner with a motor vehicle manufacturer — or that such a company must at least purchase production vehicles, the modification of which might then be restricted by SB 927 and 928 (see below). It could also mean that municipalities could regulate and tax only those driverless taxi services that do not involve a manufacturer. Or that any vehicle manufacturer that wants to run a “SAVE Project” may not bring on any other project partners. Or, because “on-demand automated motor vehicle network” and “participating fleet” are each defined by circular reference to the other, it could mean something else altogether. Clarifying these provisions is a necessary condition to evaluating them.

What’s rough

Like earlier bills in Michigan and other states, SB 995 and 996 understandably struggle to reconcile an existing vehicle code with automated driving. Under existing Michigan law, a “driver” is “every person who drives or is in actual physical control of a vehicle,” an “operator” is “a person, other than a chauffeur, who “[o]perates” either “a motor vehicle” or “an automated motor vehicle,” and “operate” means either “[b]eing in actual physical control of a vehicle” or “[c]ausing an automated motor vehicle to move under its own power in automatic mode,” which “includes engaging the automated technology of that automated motor vehicle for that purpose.” The new bills would not change this language, but they would further complicate these concepts in several ways:

  • SB 995 makes reference to “operation without a human operator” and to “[a]utomated technology … that has the capability to assist, make decisions for, or replace a human operator.”
  • The bills specify that “[w]hen engaged, an automated driving system” (in SB 995) or “an automated driving system or any remote or expert-controlled assist activity” (in SB 996) “shall be considered the driver or operator of the [or a] vehicle for purposes of determining conformance to any applicable traffic or motor vehicle laws.”
  • The bills further provide that this system (plus, in the case of SB 996, “expert-controlled assist activity”) “shall be deemed to satisfy electronically all physical acts required by a driver or operator of the vehicle.”
  • SB 995 provides both that “[a] person may operate a platoon … if the person files a plan” with the state and that “the operator of a truck or truck tractor that is in a platoon shall allow reasonable access for other vehicles.”
  • The bill further provides that “an on-demand motor vehicle network may be operated.”
  • The bill additionally provides that developers of any “technology that allows a motor vehicle to operate without a human operator” shall ensure that the vehicle “is operated only by an employee” or other authorized person and imposes requirements on “[t]he individual operating the vehicle … and the individual who is monitoring the vehicle” — individuals who are, definitionally, the same individual.
  • The bill also provides that a current prohibition on using mobile communications devices “while operating a motor vehicle that is moving” does not apply either to “a person using an on-demand automated motor vehicle network” or, in a particularly striking sentence, to “an individual who is using” that device to “operate … an automated motor vehicle while … operating the automated motor vehicle without a human operator.” (Read that a few times.)

This is, collectively, a mess.

If these bills are enacted, drivers and operators could conceivably include companies running driverless taxi services, engineers who start automated vehicles, passengers who merely ride in them (since otherwise a mobile device exception would be unnecessary), companies that file platooning permits, and the automated driving systems themselves. The bills accordingly complicate rather than clarify the meaning of these two critical terms. These terms are critical because Michigan’s current vehicle code places a wide array of rights and responsibilities on the driver or operator of a motor vehicle. Provisions such as the basic seatbelt requirement or the entire licensing regime make no sense when applied to something other than a natural person. And provisions that impose criminal penalties make no sense when applied to something, like an “automated driving system,” that is not even a legal person. This is an important distinction between state vehicle codes, which necessarily treat drivers as legal entities, and the National Highway Traffic Safety Administration’s Federal Motor Vehicle Safety Standards, which do not. (This is why NHTSA’s suggestion that a “self-driving system” could be the “driver” in the limited context of the FMVSS is not as revolutionary as popularly reported.)

Consider the provision that “an automated driving system … shall be considered the driver or operator … for purposes of determining conformance to any applicable traffic or motor vehicle laws.” This provision says nothing about who or what the driver is for purposes of determining liability for a violation of those laws, particularly when there is no crash. SB 996 does provide that “a motor vehicle manufacturer shall assume liability for each incident in which the automated driving system is at fault,” subject to the state’s existing insurance code — but only for SAVE projects. (The additional qualification — “during the time that an automated driving system is in control of a vehicle” — is both unnecessary and insufficiently broad.)

Moreover, the provision that the automated driving system (or possibly the “expert-controlled assist activity”) “shall be deemed to satisfy electronically all physical acts required by a driver or operator of the vehicle” is unclear. The drafters may have intended this to establish that an automated driving system that accomplishes the same ends as a human driver is not unlawful merely because it uses different means. The most natural reading of the actual words, however, is that an automated driving system is deemed to satisfy any and every requirement for physical action, even if it does not achieve an equivalent end.

Applying the existing vehicle code (and other related codes) in the context of automated driving requires much more careful thought. Potential approaches range from wholly revising these codes to accommodate both automated and conventional driving to wholly exempting automated driving and regulating it under a separate regime. Michigan’s bills — as well as inchoate efforts in other states — attempt to take the middle ground by categorically mapping existing law onto automated driving. In 2012, I also attempted this by defining key terms such as driver and by specifying particular canons of interpretation; the result was far more systematic than Michigan’s effort — but still far from perfect. If the state’s legislature wishes to continue with a middling approach, it should provide (or empower an agency to provide) much more clarity on the questions of who is and isn’t a driver and how existing codes actually apply.

What’s alarming

SB 927 provides that a person who “intentionally access[es] or cause[s] access to be made to an electronic system of a motor vehicle to willfully … alter … the motor vehicle” is “guilty of a felony punishable by imprisonment for life or any term of years.” SB 928 accordingly, but inconsistently, amends the code of criminal procedure to specify that “access[ing] electronic systems of motor vehicle to obtain data [!] or control of vehicle” is a class A felony punishable by a statutory maximum of life imprisonment. The primary intent of these bills is, I would hope, to prohibit malicious interference with a vehicle. However, the broad language of SB 927 (“A person shall not intentionally access or cause access to be made to an electronic system of a motor vehicle to intentionally destroy, damage, impair, alter, or gain unauthorized control of the motor vehicle”) goes far beyond any such aim. A literal interpretation would make criminals out of manufacturers that send over-the-air updates to their vehicles, vehicle owners who accept such updates, repair shops that run diagnostics checks while fixing vehicles, owners who install new stereos, automated driving startups that modify production vehicles, researchers who test the safety of vehicle electronics, and many others. These bills are particularly troublesome in light of the assertion by some automakers that they alone “own” the software on vehicles that they have already sold. If these bills move forward, they should be limited to instances in which a person acts in willful or wanton disregard for the safety of others.

What’s left

Other provisions could, at a minimum, benefit from careful review. SB 995 adds to an existing requirement that any developer of relevant technologies submit proof of insurance even before “beginning research,” which seems a bit premature. A provision in the same bill for academic and public research references earlier provisions in a way that makes their application unclear. On at least one reading of SB 995, the bill would not prohibit the wholly unsupervised operation of a lone commercial vehicle but would require a driver behind the wheel if that same vehicle is part of a platoon. Many issues like these might be caught and corrected in the normal legislative process (or not), but the cumulative effect at this point is to create unnecessary confusion about the actual content and effect of a potentially historic set of bills.

For steps that governments can take now to encourage the development, deployment, and use of automated driving systems, please see How Governments Can Promote Automated Driving, available at newlypossible.org.

]]>
US Department of Transportation issues guidance for automated driving https://robohub.org/us-department-of-transportation-issues-guidance-for-automated-driving/ Wed, 21 Sep 2016 12:53:35 +0000 http://robohub.org/us-department-of-transportation-issues-guidance-for-automated-driving/

Source: US Department of Transportation.

With the recent announcement, the US Department of Transportation is enthusiastically embracing automated driving. It’s saying that self-driving vehicles are coming in some form (or many forms) and that the agency can play a role not only in supervising but also in assisting this transportation transformation. The DOT is recognizing the wide range of relevant technologies, applications, and business models and is striving to address them more quickly and flexibly through its wide range of prospective and retrospective regulatory tools. (To be pithy: It’s not your father’s Oldsmobile; it can’t be your father’s DOT.)

https://www.youtube.com/watch?v=YSYjxXfdBcs

As always, the devil will be in the details. Those details will become clearer when the agency releases its full guidance on Tuesday and then as that guidance is (1) revised after public comment (for which the DOT wisely provided), (2) augmented by a related report by the American Association of Motor Vehicle Administrators, and (3) actually applied. For example, the DOT has not yet indicated whether its model state policy will be permissive or restrictive, particularly with respect to actual demonstration projects and even commercial deployments.

I would also expect that this guidance will be the starting point for more thoughtful legislative discussions — not only at the state level but also, for the first time, at the federal level. It will be interesting to see which developers carry the DOT’s implicit requests for new authorities and resources to Congress. The model state policy does not bind states, and some may well decide not to follow it. The performance guidance likewise does not bind developers of automated driving systems, but I would expect few of these developers to deviate from it. This soft guidance could become even more influential if states incorporate it in legislation, if the DOT’s National Highway Traffic Safety Administration (NHTSA) considers it in the course of exemption or enforcement decisions, or if courts look to it to understand how a reasonable developer should act. In other words, DOT is establishing expectations.

You can read the Federal Automated Vehicles Policy – September 2016 here.

Accordingly, I’m especially pleased to see the DOT’s recommendation that these developers undertake — and share — a 15-point safety assessment. Given its limited resources and the speed of technological change, NHTSA cannot hope to come up with all the answers for automated driving in its many forms. But NHTSA can focus on asking the important questions. To that end, this safety assessment encourages developers to share what they are doing, why they think it is safe, and why they should be believed. This is what I have recommended on many occasions (including here, here, and here).

Finally, a note on “driver.” I’ve noted a lot of confusion about NHTSA’s earlier recognition that an automated driving system can be a “driver” for the purpose of the federal motor vehicle safety standards. That limited recognition did not change and could not change how states define drivers for the purpose of their laws or what legal obligations states place on these drivers. Similarly, DOT’s statements today about software do not represent some dramatic change. Because of its authority over motor vehicle design, NHTSA has always had authority over motor vehicle software, which is already ubiquitous on vehicles. (Indeed, in the aftermath of Toyota’s sudden unintended acceleration debacle, the National Academies released a major report on this very topic.) NHTSA would not change state law by continuing to exercise this authority. For example, NHTSA might recommend (and may one day require) that automated driving software be designed to generally obey speed limits, but states would still set those speed limits.

For DOT’s actual guidance, check out transportation.gov/AV. For more thoughts on what governments can do, check out How Governments Can Promote Automated Driving at newlypossible.org.



]]>
Two New Year’s resolutions for developers of automated vehicles https://robohub.org/new-years-resolutions-for-developers-of-automated-vehicles/ Tue, 12 Jan 2016 20:39:57 +0000 http://robohub.org/new-years-resolutions-for-developers-of-automated-vehicles/ In the spirit of the New Year, and especially in the wake of California’s draft rules for the (theoretical) operation of automated motor vehicles, I offer two resolutions for any serious developer of an automated driving (or flying) system.

Such a developer should:

1) Detail the specific changes to existing law, if any, that the developer needs in order to deploy its system in each jurisdiction of interest. Serious developers should already be conducting legal research and development commensurate with their technical research and development; sharing their legal conclusions (rather than merely making generalized statements about the need for uniformity or clarity) will complement the comprehensive legal audits that I have urged governments to conduct.

2) Make what I call a “public safety case” that candidly explains how the developer (a) defines reasonable safety, (b) will satisfy itself that its system is reasonably safe, and (c) will continue to do so over the lifetime of the system. This public safety case will be crucial to appropriately managing public expectations in both directions, to easing the concerns of regulators who are understandably wary about asserting the safety of novel systems, and to establishing norms against which amateur efforts at automation might be measured.

In many cases, an automated system may not be mature enough for its developer to understand the specific legal implications or to articulate a specific safety philosophy. If so, that is also a valuable message. As automated systems become more visible on public roads (and in public airspace), their developers should step more fully into the broader public sphere.

]]>
Automated vehicle crashes https://robohub.org/automated-vehicle-crashes/ Fri, 29 May 2015 09:07:24 +0000 http://robohub.org/automated-vehicle-crashes/

Photo source: Wikipedia [Flickr user jurvetson (Steve Jurvetson) CC BY-SA 2.0]

Earlier this month, the Associated Press reported on several past crashes involving automated vehicles. (Per SAE Standard J3016, I use the term “automated vehicle” instead of “autonomous vehicle” or “self-driving car” or “driverless car.”) A few thoughts.

1) As I wrote in 2012, we would need more information — about the crashes themselves, the conditions under which each company’s automated vehicles are tested, and the situations in which each company’s test drivers intervene — to provide statistical context for these incidents.

2) In some ways, the AP’s inquiry gave us a preview of how public and private actors might respond to future automated vehicle crashes that actually result in injury or death. It may be instructive to view the reactions of Google, Delphi, and the California DMV in this light.

3) Over the last few years, I have advised both developers and regulators of automated systems to put in place specific plans for responding, both publicly and privately, to the first high-profile incidents involving these systems. My sense, however, is that many organizations still have not created these “break-the-glass” or “break-glass” plans.

4) Earlier this semester, my impressive Law of the Newly Possible students did develop two thoughtful break-glass plans: one for the developers of automated driving systems and another for the regulators of these systems. Interestingly, although the private-sector group and the public-sector group each recognized the need to communicate with each other in the event of a crash, each also hesitated in reaching out to the other in the course of planning. In the real world, a broad range of stakeholders should be coordinating these plans sooner rather than later.

5) My book chapter on Regulation and the Risk of Inaction, also released this week, identifies eight public-sector strategies for managing risks related to automated driving. It can be freely downloaded here. A key point is that we must expect more of conventional drivers as well as automated vehicles. To paraphrase myself: I’m concerned about computer drivers, but I’m terrified about human drivers.

6) As always, please visit newlypossible.org for additional materials.

]]>
Why Tesla’s purported liability ‘fix’ is technically and legally questionable https://robohub.org/why-teslas-purported-liability-fix-is-technically-and-legally-questionable/ Tue, 26 May 2015 07:04:06 +0000 http://robohub.org/why-teslas-purported-liability-fix-is-technically-and-legally-questionable/

Tesla Motors autopilot (photo:Tesla)

An interesting article in last week’s Wall Street Journal spawned a series of unfortunate headlines (in a variety of publications) suggesting that Tesla had somehow “solved” the “problem” of “liability” by requiring that human drivers manually instruct the company’s autopilot to complete otherwise-automated lane changes.

(I have not asked Tesla what specifically it plans for its autopilot or what technical and legal analyses underlie its design decisions. The initial report may not and should not be the full story.)

For many reasons, these are silly headlines.

First, “liability” is a word that nonlawyers often use in clumsy reference to either (a) the existence of vague legal or policy questions related to the increasing automation of driving or (b) the specific but unhelpful question of “who is liable when an automated vehicle crashes.”

If this term is actually meant to refer, however imperfectly, to “driver compliance with the rules of the road,” then it is not clear to what problem or for what reason this lane-change requirement is a “solution.” For a foundational analysis of legality, including my recommendation that both developers and regulators carefully examine applicable vehicle codes, please see Automated Vehicles Are Probably Legal in the United States.

This term probably refers instead to some concept of fault following a crash. As a forthcoming paper explains, liability (or the broader notion of “responsibility”) is multifaceted:

[Figure: Responsibility]

Moreover, although liability is frequently posited as an either/or proposition (“either the manufacturer or the driver”), it is rarely binary. In terms of just civil (that is, noncriminal) liability, in a single incident a vehicle owner could be liable for failing to properly maintain her vehicle, a driver could be liable for improperly using the vehicle’s automation features, a manufacturer could be liable for failing to adequately warn the user, a sensor supplier could be liable for poorly manufacturing a spotty laser scanner, and a mapping data provider could be liable for providing incorrect roadway data. Or not.

The relative liability of various actors will depend on the particular facts of the particular incident. Although automation will certainly impact civil liability in both theory and practice, merely asking “who is liable in tomorrow’s automated crashes” in the abstract is like asking “who is liable in today’s conventional crashes.” The answer is: “It depends.”

Second, the only way to truly “solve” civil liability is to prevent the underlying injuries from occurring. If automated features actually improve traffic safety, they will help in this regard. As a technical matter, however, it is doubtful that requiring drivers merely to initiate lane changes will ensure that they are engaged enough in driving that they are able to quickly respond to the variety of bizarre situations that routinely occur on our roads. This is one of the difficult human-factors issues present in what I call the “mushy middle” of automation, principally SAE levels 2 and 3.

These levels of automation help ground discussions about the respective roles of the human driver and the automated driving system.

At SAE level 2, the human driver is still expected, as a technical matter, to monitor the driving environment and to immediately respond as needed. If this is Tesla’s expectation as well, then the lane-change requirement alone is manifestly inadequate: Many drivers will almost certainly consciously or unconsciously disengage from actively driving for long stretches without lane changes.

At SAE level 3, the human driver is no longer expected, as a technical matter, to monitor the driving environment but is expected to resume actively driving within an appropriate time after the automated system requests that she do so. This level raises difficult questions about ensuring and managing this human-machine transition. But it is technically daunting in another way as well: The vehicle must be able to drive itself for the several seconds it may take the human driver to effectively reengage. If this is ultimately Tesla’s claim, then it represents a significant technological breakthrough—and one reasonably subject to requests for substantiation.

Third, these two levels of automation raise specific liability issues that are not necessarily clarified by a lane-change requirement.

At SAE level 2, driver distraction is entirely foreseeable, and in some states Tesla might be held civilly liable for facilitating this distraction, for inadequately warning against it, or for inadequately designing against it. Victims of a crash involving a distracted Tesla driver would surely point to more robust (though hardly infallible) ways of monitoring driver attention—like requiring occasional contact with the steering wheel or monitoring eye movements—as proof that arguably safer designs are readily available.
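
As a purely illustrative sketch (not a description of Tesla’s or any other manufacturer’s actual implementation), a hands-on-wheel check of the kind mentioned above might escalate roughly as follows; the timing thresholds and function name are invented for the example.

    def attention_action(seconds_since_hands_on_wheel: float) -> str:
        """Toy escalation policy for a level 2 driver-monitoring scheme.

        Thresholds are hypothetical; a real system would tune them to speed
        and context, and might rely on eye tracking instead of wheel contact.
        """
        if seconds_since_hands_on_wheel < 10:
            return "no action"
        elif seconds_since_hands_on_wheel < 20:
            return "visual warning"
        elif seconds_since_hands_on_wheel < 30:
            return "audible warning"
        else:
            return "restrict automation and require manual control"

    print(attention_action(25))  # "audible warning"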

At SAE level 3, Tesla would be asserting that its autopilot system could handle the demands of driving—from the routine to the rare—without a human driver monitoring the roadway or intervening immediately when needed. If this is the case, then Tesla may nonetheless face significant liability, particularly in the event of crashes that could have been prevented either by human drivers or by better automated systems.

Fourth, this potential liability does not preclude the development or deployment of these technologies—for reasons that I discuss here and here and here.

In summary, merely requiring drivers to initiate lane changes raises both technical and legal questions.

However, the idea that there can be technological fixes to legal quandaries is a powerful one that is explored in Part III of Proximity-Driven Liability.

A related example illustrates this idea. In the United States, automakers are generally required to design their vehicles to be reasonably safe for unbelted as well as belted occupants. (In contrast, European regulators assume that anyone who cares about their safety will wear their seatbelt.) In addition, some US states restrict whether or for what purpose a defendant automaker can introduce evidence that an injured plaintiff was not wearing her seatbelt.

An understandable legal response would be to update these laws for the 21st Century. Regardless, some developers of automated systems seem poised to adopt a technological response: Their automated systems will simply refuse to operate unless all of the vehicle’s occupants are wearing their seatbelts. Given that seatbelt use can cut the chance of severe injury or death by about half, this is a promising approach that must not be allowed to suffer the same fate as the interlocks of 40 years ago.
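
A minimal sketch of that interlock logic, assuming the vehicle can sense seat occupancy and buckle status; the function and data layout are invented for illustration.

    def may_engage_automation(seats: list[dict]) -> bool:
        """Hypothetical interlock: refuse automated operation unless every
        occupied seat reports a fastened seatbelt."""
        return all(seat["belt_fastened"] for seat in seats if seat["occupied"])

    cabin = [
        {"occupied": True, "belt_fastened": True},    # driver
        {"occupied": True, "belt_fastened": False},   # unbelted passenger
        {"occupied": False, "belt_fastened": False},  # empty seat
    ]
    print(may_engage_automation(cabin))  # False: the system refuses to operate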

More broadly, any automaker must consider the complex interactions among design, law, use, and safety. As I argue in a forthcoming paper by the same name, Lawyers and Engineers Should Speak the Same Robot Language.

]]>
Slow down that runaway ethical trolley https://robohub.org/slow-down-that-runaway-ethical-trolley/ Tue, 13 Jan 2015 20:04:23 +0000 http://robohub.org/slow-down-that-runaway-ethical-trolley/

Source: Wikipedia

The runaway trolley has chased automated motor vehicles into the new year.

In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become more the exception than the norm. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

When automated vehicles are good enough to reliably replace human drivers across a wide range of driving environments (and we are not there yet), the kinds of incidents that compel a choice among victims will, one hopes, be remarkably rare. In many cases, the prudent strategy for such scenarios will be to drive carefully enough that they either (a) do not arise at all or (b) can be mitigated if they do arise (by, for example, stopping quickly). This is because poor decisions by human drivers — driving too fast, while drunk, while texting, while tired, without braking quickly enough, etc. — contribute to the vast majority of today’s crashes.

In the near term, some crashes might be addressed by automated emergency intervention systems (AEISs) that automatically brake or steer when the human driver fails to act. Because these systems are designed to engage just before a crash (sometimes to lessen rather than to negate the impact), they could conceivably face the kind of dilemmas that are posited for automated vehicles. Nonetheless, some of these systems have already reached the market and are saving lives — as are airbags and electronic stability control and other technologies that necessarily involve safety tradeoffs. At the same time, these intervention systems occasionally detect objects that don’t actually exist (false positives) or fail to detect objects that actually do exist (false negatives).
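
To make the tradeoff between false positives and false negatives concrete, here is a toy sketch of an emergency-braking trigger keyed to estimated time to collision; the threshold, names, and structure are invented for illustration and do not reflect any production system.

    def should_brake(estimated_time_to_collision_s: float, threshold_s: float = 1.5) -> bool:
        """Toy automated-emergency-braking trigger.

        Raising the threshold catches more real threats (fewer false negatives)
        but brakes more often for harmless objects (more false positives);
        lowering it does the reverse.
        """
        return estimated_time_to_collision_s < threshold_s

    print(should_brake(1.2))  # True: intervene
    print(should_brake(3.0))  # False: leave the driver alone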

This is a critical point in itself: Automation does not mean an end to uncertainty.  How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

For this reason, a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. As I note in a forthcoming book chapter (“Regulation and the Risk of Inaction“), this simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.
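
As a rough illustration of what weighting general rules of behavior could look like in practice, the sketch below scores candidate emergency maneuvers against a few weighted rules; the rules, weights, and maneuvers are all hypothetical.

    # Hypothetical weights: a larger number marks a more important rule.
    RULE_WEIGHTS = {
        "decelerate": 1.0,
        "avoid_humans": 10.0,
        "avoid_obstacles": 5.0,
        "stay_in_lane": 2.0,
    }

    def score(maneuver: dict) -> float:
        """Sum the weights of the rules that a candidate maneuver satisfies."""
        return sum(weight for rule, weight in RULE_WEIGHTS.items() if maneuver["rules"][rule])

    candidates = [
        {"name": "hard brake in lane",
         "rules": {"decelerate": True, "avoid_humans": True, "avoid_obstacles": False, "stay_in_lane": True}},
        {"name": "swerve onto shoulder",
         "rules": {"decelerate": False, "avoid_humans": True, "avoid_obstacles": True, "stay_in_lane": False}},
    ]
    print(max(candidates, key=score)["name"])  # whichever maneuver scores highest under the weights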


]]>
Deploying automated vehicles https://robohub.org/deploying-automated-vehicles/ Wed, 16 Jul 2014 19:00:01 +0000 http://robohub.org/?p=34659
In 2012, the Nevada DMV issued the first license to test autonomous vehicles on public roads in the US. The red license plates are only for autonomous test vehicles and are meant to make the cars easily identifiable.
This article is an excerpt from a book chapter by Bryant Walker Smith titled A Legal Perspective on Three Misconceptions in Vehicle Automation appearing in Road Vehicle Automation (Springer, 2014, Eds. Gereon Meyer and Sven Beiker), and considers only one of the three misconceptions addressed in the full chapter.

Casual consumers of automated vehicle news—though not the many casual producers of this news—could be forgiven for concluding that driverless cars are ready to be sold to ordinary drivers. States are passing laws, companies are testing cars on public roads, and commentators are declaring that “the technology is here.” The corollary of this belief is that, if such vehicles are not yet ready, then fault must lie elsewhere—with consumers for not accepting them, with governments for not “legalizing” them [1], or with lawyers for outright blocking them.

A 2013 radio interview is illustrative: The first guest, a reporter, asserted that “the technology is here” and that “right here and now we can have driverless cars.”

I replied that the research vehicles under discussion were neither designed nor demonstrated to operate at a reasonable level of risk under a full range of unsupervised driving scenarios.

[1] Referring to the “legalization” of automated vehicles is misleading; see:

Smith, Bryant Walker, Automated Vehicles Are Probably Legal in the United States, Center for Internet and Society, 2012, forthcoming Texas A&M Law Review 2014.

After a short break, the host resumed the discussion by reminding listeners that “the technology for driverless cars is in fact available, and we’re trying to figure out why we don’t then have them.”

Automotive experts recognize that the path from research to product is long—and that there is a tremendous difference between, on one hand, a research system that well-trained technicians carefully maintain, update, and operate exclusively on certain roads in certain conditions and, on the other hand, a production system that poorly trained consumers neglect and abuse for two decades in almost any conceivable driving scenario. For this reason, production vehicles take years to be developed, tested, and certified to a complex array of highly detailed public and private standards.

[2] For a discussion on the deployment of driverless specialty vehicles, see: Smith, Bryant Walker, A Legal Perspective on Three Misconceptions in Vehicle Automation (January 1, 2014). Road Vehicle Automation, Lecture Notes in Mobility, Springer, p. 85, June 2014.

Recent state laws regarding automated driving embrace this important distinction between research-and-development testing and consumer operation: Nevada’s infinity-styled “autonomous vehicle” license plates, for example, are red for test vehicles and, one day, will be green for all others.

However, a yellow license plate may, at least metaphorically, be most appropriate for a set of potential deployments that do not fit comfortably in either category. The first deployments of low-speed shuttles [2] are likely to be pilot projects.

[3] Similarly, as part of the US Department of Transportation’s field study of dedicated short-range communications (DSRC)—a related but distinguishable set of technologies—nearly three thousand ordinary vehicles in Ann Arbor, Michigan, were retrofitted with DSRC equipment.

Volvo Cars intends to place 100 automated vehicles on public roads in the Swedish city of Gothenburg by 2017 [3]. Internet companies that are comfortable with invitation-only beta rollouts of their software and hardware may adopt a similar approach for their updatable automotive products. And an individual who uses a vehicle that she herself has built or modified may likewise straddle the divide between testing and operating.

These hybrid deployments may push up against state and federal regimes that assume a more straightforward product path for research, development, production, sale, resale, and disposal. For example, while automakers currently self-certify that their vehicles as originally manufactured meet federal safety standards, this date of original manufacture may be less determinative of the safety of vehicles subsequently modified. Similarly, while state tort law often looks to the date that a product is originally sold to a consumer, as a practical matter this date may become less clear or less relevant to alleged harms.

Indeed, automakers concerned about the post-sale modification of their vehicles by third parties have lobbied successfully in Florida and Michigan (and unsuccessfully in California) to expressly limit their liability for injuries caused by such modification. These statutory provisions, however, largely restate existing principles of tort law, which makes both the insistence on and the opposition to them rather striking.

The complete lifecycle of early automated vehicles does present significant concerns. The mechanical life of these vehicles may be much longer than the functional life of their automation systems. Consumers in the secondary market may face a hodgepodge of proprietary driver assistance systems with different capabilities and limitations that cannot be easily intuited. And vehicles may long outlive some of the companies—whether small startups or legacy behemoths—responsible for their design, sale, and ongoing support.

For these reasons, what I have called “planning for the obsolescence of technologies not yet invented” should be a key consideration for automakers, regulators, and insurers.


]]>
SAE defines six levels of driving automation https://robohub.org/sae-defines-six-levels-of-driving-automation/ Fri, 07 Feb 2014 16:33:00 +0000 http://robohub.org/?guid=a01d0dcbd93fdbdfcb20198eb4116a6b
Photo: Osvaldo Gago

SAE International‘s On-Road Automated Vehicle Standards Committee, on which I serve along with experts from industry and government, has released an information report defining key concepts related to the increasing automation of on-road vehicles.

Central to this 12-page report are six levels of driving automation: 0 (no automation), 1 (driver assistance), 2 (partial automation), 3 (conditional automation), 4 (high automation), and 5 (full automation). The table below (available here in PDF) summarizes these levels.

[Image: table summarizing the six levels of driving automation; PDF version linked above.]
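
For readers who want the taxonomy in a compact, machine-readable form, the six levels can be captured in a simple enumeration; this is an illustrative paraphrase, not the report’s own wording.

    from enum import IntEnum

    class DrivingAutomationLevel(IntEnum):
        """Illustrative paraphrase of the six SAE J3016 levels."""
        NO_AUTOMATION = 0           # the human driver does everything
        DRIVER_ASSISTANCE = 1       # the system assists with steering or speed, not both
        PARTIAL_AUTOMATION = 2      # the system handles steering and speed; the human monitors
        CONDITIONAL_AUTOMATION = 3  # the system drives; the human must respond when asked
        HIGH_AUTOMATION = 4         # the system drives and handles fallback within its domain
        FULL_AUTOMATION = 5         # the system drives under all conditions a human could

    print(DrivingAutomationLevel(3).name)  # CONDITIONAL_AUTOMATION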

This article was originally posted on CIS Blog on 18/12/2013: SAE Levels of Driving Automation.

 


]]>
Human error as a cause of vehicle crashes https://robohub.org/human-error-as-a-cause-of-vehicle-crashes/ Fri, 07 Feb 2014 15:15:33 +0000 http://robohub.org/?guid=21c8f5be1df6bf0f7f8e825dcec5e442

Source: Wonderlane.

Some ninety percent of motor vehicle crashes are caused at least in part by human error. This intuitive claim is a fine place to start discussions about the safety potential of vehicle automation. (It is not an appropriate place to end these discussions. After all, humans can be amazing drivers, the performance of advanced automation systems is still unclear, automated vehicles can be misused, and automation shifts some error from driver to designer.) And since the claim is often made without sufficient citation, I’ve compiled several relevant sources.

(1) The most thorough analysis of crash causation, the Tri-Level Study of the Causes of Traffic Accidents published in 1979, found that “human errors and deficiencies” were a definite or probable cause in 90-93% of the incidents examined. The executive summary is here (see vii), and the two-part final report is here and here.  The study continues to be cited, and the Venn diagram on this page provides a useful summary.

(2) A UK study published in 1980 (summarized here at 88-89) likewise identifies driver error, pedestrian error, or impairment as the “main contributory factor[]” in 95% of the crashes examined.

(3) Another US study published in 2001 (available here) found that “a driver behavioral error caused or contributed to” 99% of the crashes investigated.

(4) An annual UK crash report (available here with updated data here in table RAS50002) identifies 78 potential contributing factors, most of which suggest human error. However, multiple factors were recorded for some crashes, and none were recorded for others.

(5) NHTSA’s 2008 National Motor Vehicle Crash Causation Survey is probably the primary source for the common assertion by NHTSA officials that “[h]uman error is the critical reason for 93% of crashes” (at least if “human error” and “driver error” are conflated). The 93% figure is absent from the report itself (probably intentionally) but calculable from the totals given on page 24.
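
The arithmetic behind such a figure is straightforward; with hypothetical totals standing in for the numbers on page 24, the driver share is simply a ratio of counts.

    # Hypothetical counts (NOT the report's actual figures), shown only to
    # illustrate how such a percentage is derived.
    crashes_with_driver_critical_reason = 930
    total_crashes_surveyed = 1000

    share = crashes_with_driver_critical_reason / total_crashes_surveyed
    print(f"{share:.0%}")  # 93%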

(Note that this 2008 report identifies a single “critical reason” for each crash analyzed. This critical reason, which is “often the last failure in the causal chain,” is “an important element in the sequence of events leading up to a crash” but “may not be the cause of the crash” and does not “imply the assignment of fault to a vehicle, driver, or environment, in particular.” Indeed, several NHTSA officials have told me that they deliberately avoid any reference to causation in this context.)

One final note: Human error, which could mean many things, should certainly encompass drinking and texting. Please don’t do either.

This article was originally posted on CIS Blog on 18/12/2013: Human error as a cause of vehicle crashes.
]]>