Over the past decade, there has been significant technical development of increasingly autonomous vehicles that are physically capable of operating in the national air-, sea-, and ground-space. These advances include the use of formal methods, designed to ensure that these systems operate as expected in a variety of circumstances. Except in specific conditions, however, these vehicles cannot practically or legally operate absent certification from a regulatory agency that they are safe and at least minimally effective (i.e., capable of completing their assigned tasks). Such certification will require a fundamental shift in how vehicle operators are judged legally competent: from a regime focused on human knowledge, experience, and efficacy as judged by their peers (i.e., a written test, supervised hours requirement, and driving test) to one focused on disaggregating and testing a complex, algorithmic system.
This work computationally and experimentally explores multiple approaches to certifying autonomous vehicles. First, we demonstrate a decomposition approach: translating the steps used by qualified helicopter pilots to safely land their vehicles into discrete specifications, which are then formally validated and verified. Second, we demonstrate that although these decision engines have achieved adequate results under real-world testing conditions, they cannot substitute for human judgment during off-nominal conditions, and thus any effective certification regime will require some evaluation of atypical conditions. Finally, in order to better consider such atypical situations, regulators must be able to objectively define off-nominal conditions, and we demonstrate how certification standards can provide objective measures for subjective characteristics (e.g., an objective definition of situation awareness) that can ultimately be useful in determining when autonomous systems are no longer operating safely. Ultimately, these techniques can be used to demonstrate objectively defined, formally verifiable indicators of vehicle safety and effectiveness under both nominal and off-nominal conditions.
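To give a flavor of the kind of discrete specification the decomposition approach produces, consider a hypothetical landing requirement expressed in linear temporal logic (LTL), a common formalism in this line of work. The predicates (approach, touchdown, gear_down) and the 50 ft threshold below are illustrative assumptions, not properties drawn from the actual helicopter-landing specifications discussed in the talk:

```latex
% G = "always" (globally), F = "eventually"
% Property 1: once an approach is initiated, touchdown must eventually occur.
% Property 2: below 50 ft altitude, the landing gear must always be down.
\[
\mathbf{G}\,\bigl(\mathit{approach} \rightarrow \mathbf{F}\,\mathit{touchdown}\bigr)
\;\wedge\;
\mathbf{G}\,\bigl(\mathit{altitude} < 50\,\text{ft} \rightarrow \mathit{gear\_down}\bigr)
\]
```

Specifications of this form can be checked against a model of the vehicle's decision engine by standard model-checking tools, which is what makes the formal validation and verification step mechanizable.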
Dr. Huan "Mumu" Xu is an assistant professor of Aerospace Engineering at the University of Maryland, with a joint appointment at the Institute for Systems Research. She received her B.S. in mechanical engineering and materials science from Harvard University in 2007, and her M.S. and Ph.D. in mechanical engineering from the California Institute of Technology in 2008 and 2013, respectively. Her doctoral work focused on the use of control synthesis and timed specification languages for the design and analysis of distributed cyber-physical systems. Her current research interests include control and dynamical systems, formal methods with applications in autonomy, real-time coordination, and system identification.