Learning-Based Observer Design for Nonlinear Systems with Convergence Guarantees

This thesis advances nonlinear estimation by proposing learning-based nonlinear observers with convergence guarantees. The proposed approaches overcome the limitations of existing methods: they apply to a wide range of nonlinear systems while providing global convergence guarantees. Furthermore, the developed observers are robust to disturbances, sensor delays, and measurement noise.

Overview

Nonlinearity is a fundamental feature of most physical, biological, and engineering systems, and it produces behaviors that are difficult to predict and measure directly. Estimating the internal variables of such systems from the available measurements is vital for decision-making, controller design, and fault detection. Designing observers for nonlinear systems, however, is not straightforward, and it becomes even more challenging when additional performance requirements, such as non-asymptotic convergence, must be met. Most existing observers either target a specific class of nonlinear systems or offer a generic design approach for a wide range of nonlinear systems while guaranteeing only local convergence. Designers are therefore often forced to trade off the generality of the approach against the strength of its convergence guarantees.
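To make the estimation problem concrete, the following minimal sketch (a classic textbook construction, not a method from this thesis) implements a Luenberger-style observer for a simple nonlinear system with a globally Lipschitz nonlinearity, where only the first state is measured. The system, the output-injection gains, and all numerical parameters are illustrative assumptions chosen by hand.

```python
import numpy as np

# Illustrative system (an assumption, not from the thesis):
#   x1' = x2,  x2' = -x1 + 0.5*sin(x1),  measured output y = x1.
# The observer copies the dynamics and adds an output-injection
# term L*(y - x1_hat) that drives the estimate toward the true state.

def f(x):
    return np.array([x[1], -x[0] + 0.5 * np.sin(x[0])])

def simulate(x0, xhat0, L=(4.0, 3.0), dt=1e-3, steps=20000):
    x = np.array(x0, dtype=float)      # true (unmeasured) state
    xhat = np.array(xhat0, dtype=float)  # observer estimate
    L = np.array(L)
    errs = []
    for _ in range(steps):
        y = x[0]                                   # measured output
        innov = y - xhat[0]                        # output-injection term
        xhat = xhat + dt * (f(xhat) + L * innov)   # observer (Euler step)
        x = x + dt * f(x)                          # true system (Euler step)
        errs.append(float(np.linalg.norm(x - xhat)))
    return errs

errs = simulate(x0=[1.0, 0.0], xhat0=[0.0, 0.0])
print(errs[0], errs[-1])  # the estimation error decays toward zero
```

Because the nonlinearity sin(x1) is globally Lipschitz and the gains place the linear error dynamics well into the stable half-plane, the estimation error converges here; for general nonlinear systems no such simple gain choice works globally, which is exactly the gap the thesis addresses.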

Furthermore, most systems are subject to additional challenges, including modeling uncertainties, external disturbances, and measurement delays, which considerably degrade closed-loop performance if not accounted for in the observer design. Most existing model-based observers fall short in addressing these challenges. In contrast, learning-based techniques are powerful tools for uncertain, complex, and highly nonlinear settings: owing to the universal approximation theorem, neural networks are remarkably effective at approximating nonlinear functions and solving complex equations. Learning-based methods, however, typically lack convergence guarantees. In this thesis, we propose learning-based observers for nonlinear systems with convergence guarantees, combining the rigor of model-based techniques with the power and flexibility of learning-based approaches.
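The universal-approximation argument above can be illustrated with a minimal sketch (an assumption-laden toy, not the thesis architecture): a one-hidden-layer tanh network fit to a nonlinear function by plain full-batch gradient descent. The target function, network width, learning rate, and iteration count are all arbitrary illustrative choices.

```python
import numpy as np

# Toy illustration of universal approximation (not the thesis method):
# fit a one-hidden-layer tanh network to sin(x) on [-pi, pi]
# using full-batch gradient descent with hand-picked hyperparameters.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200)[:, None]
Y = np.sin(X)

H = 32                                   # hidden width (assumption)
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
lr, iters = 0.05, 10000

for _ in range(iters):
    A = np.tanh(X @ W1 + b1)             # hidden activations
    E = (A @ W2 + b2) - Y                # prediction error
    # backpropagation for the mean-squared-error loss
    gW2 = A.T @ E / len(X); gb2 = E.mean(axis=0)
    dA = (E @ W2.T) * (1.0 - A**2)       # tanh derivative
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - Y) ** 2))
print(mse)  # small residual error on the training grid
```

The approximation is accurate only on the sampled compact set and comes with no convergence certificate, which is the gap the hybrid model-based/learning-based observers in this thesis are designed to close.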

Presenter

Brief Biography

Yasmine is a Ph.D. candidate in Electrical and Computer Engineering, supervised by Prof. Eric Feron and Prof. Taous-Meriem Laleg-Kirati. Before joining KAUST as a doctoral student, she was an intern in the Estimation, Modeling, and Analysis Group (EMANG). Her research focuses on developing hybrid model-based and learning-based estimation algorithms for diverse classes of nonlinear systems with convergence guarantees. During her Ph.D., Yasmine completed an internship at the University of California, Berkeley, under the supervision of Prof. Alexandre Bayen, and visited several universities, including Stanford, UC Santa Barbara, and UC Irvine.