Attackability of Machine Learning Models with Applications in the Discovery of Safety-critical Aerial Operations



In safety-critical application domains, it is crucial to assess the attackability of deployed machine learning models and to design models that are fault-tolerant against noise and attacks. This talk will first present our work on characterizing the attackability of a targeted classifier on categorical sequences and of a targeted multi-label classifier under evasion attack. A fault-tolerant model is then presented for handling noisy inputs to machine learning models. Our application goal is the discovery of safety-critical aerial operations: we will describe the development of a deep learning model for rapid airspace-configuration safety assessment, based on an LSTM that learns from sequences of air traffic configurations.

Brief Biography

Dr. Xiangliang Zhang is an Associate Professor of Computer Science and directs the MINE group at KAUST, Saudi Arabia. She earned her Ph.D. in computer science from INRIA-University Paris-Sud, France, in July 2010, and received her M.S. and B.S. degrees from Xi’an Jiaotong University, China, in 2006 and 2003, respectively. Dr. Zhang's research focuses on learning from complex, large-scale streaming data and graph data, with applications in recommendation systems, biomedical knowledge discovery, and social media data analysis. She has published over 160 research papers in refereed international journals and conference proceedings, including TKDE, SIGKDD, AAAI, IJCAI, NeurIPS, and ICDM. She regularly serves on the program committees of premier conferences such as SIGKDD (Senior PC), AAAI (Senior PC), IJCAI (Area Chair, Senior PC), ICDM, NeurIPS, and ICML, and was invited to deliver an Early Career Spotlight talk at IJCAI-ECAI 2018.
