Reinforcement Learning and Path Planning for Urban Air Mobility

We focus on the development of safe guidance strategies for aerial vehicles capable of routine and emergency maneuvers, leveraging recent progress in reinforcement learning, graph neural networks, and transfer learning. Our study will complement the original automaton concept with “emergency maneuvers” adapted to the various kinds of failures that an aerial vehicle can encounter. We will formulate trajectory planning as a reinforcement learning problem. In this context, we will explore how value functions learned in a context-free environment may be properly adapted and reused in operational, obstacle-ridden environments containing both risk-prone and risk-free areas. One targeted application is urban air mobility.
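To make the value-function transfer idea concrete, the sketch below illustrates it on a deliberately simplified grid-world abstraction: a tabular Q-learning value function is first learned in an obstacle-free ("context-free") grid and then used to warm-start learning in a grid containing obstacles. All names, rewards, and parameters are illustrative assumptions, not the project's actual vehicle model or formulation.

```python
import numpy as np

# Minimal sketch (illustrative only): learn a value table in an obstacle-free
# grid, then reuse it to initialize learning in an obstacle-ridden grid.
RNG = np.random.default_rng(0)
SIZE = 10                       # grid is SIZE x SIZE
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right


def step(state, action, obstacles):
    """Move unless the target cell lies outside the grid or is an obstacle."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in obstacles:
        nxt = state                         # blocked moves leave the state unchanged
    reward = 0.0 if nxt == GOAL else -1.0   # per-step cost encourages short paths
    return nxt, reward, nxt == GOAL


def q_learning(obstacles, q_init=None, episodes=500,
               alpha=0.5, gamma=0.95, epsilon=0.1, max_steps=500):
    """Tabular Q-learning; q_init lets a previously learned table be transferred."""
    q = np.zeros((SIZE, SIZE, len(ACTIONS))) if q_init is None else q_init.copy()
    for _ in range(episodes):
        state, done, t = (0, 0), False, 0
        while not done and t < max_steps:
            if RNG.random() < epsilon:                 # epsilon-greedy exploration
                a = int(RNG.integers(len(ACTIONS)))
            else:
                a = int(np.argmax(q[state]))
            nxt, r, done = step(state, ACTIONS[a], obstacles)
            # One-step temporal-difference update of the action-value table.
            q[state][a] += alpha * (r + gamma * np.max(q[nxt]) - q[state][a])
            state = nxt
            t += 1
    return q


if __name__ == "__main__":
    # 1) Learn in the obstacle-free environment.
    q_free = q_learning(obstacles=set())
    # 2) Transfer: initialize learning in an obstacle-ridden environment with
    #    the table from step 1 instead of zeros, using fewer episodes.
    obstacles = {(4, c) for c in range(1, SIZE)}       # a wall with one gap
    q_transfer = q_learning(obstacles, q_init=q_free, episodes=100)
    print("Greedy value at the start state after transfer:", q_transfer[0, 0].max())
```

The operational setting described above would of course replace the grid with vehicle dynamics, obstacle maps, and risk-weighted rewards; the sketch only shows the mechanism of reusing a learned value function as the starting point for further learning.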

Investigator: