On Natural Gradient Descent

Location
https://kaust.zoom.us/j/92353712311

Abstract

Numerous problems in scientific computing can be formulated as the optimization of suitable parametric models over their parameter spaces. Neural network and deep learning methods provide unique capabilities for building and optimizing such models, especially in high-dimensional settings. Nevertheless, these techniques are often opaque, and their mathematical properties are difficult to control precisely through choices of architecture, hyperparameters, and so on. Consequently, optimizing neural network models can devolve into a laborious hyperparameter tuning process that does not necessarily generalize to new problems and settings. Natural gradient descent is an optimization technique that mitigates some of these shortcomings by exploiting the geometry of the model space. I will introduce the natural gradient technique and discuss some of its computational and theoretical aspects. I will also discuss some open questions and possible future research directions.
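As a concrete illustration (not part of the talk itself), natural gradient descent replaces the Euclidean update theta <- theta - eta * grad L(theta) with theta <- theta - eta * F(theta)^{-1} grad L(theta), where F is a metric induced by the model, typically the Fisher information matrix. The sketch below shows one common instance under an assumed least-squares setting: for a Gaussian observation model, the Fisher matrix coincides with the Gauss-Newton matrix J^T J, where J is the Jacobian of the model outputs with respect to the parameters. The toy model, data, and constants are illustrative assumptions, not material from the abstract.

    import numpy as np

    # Sketch of natural gradient descent for least-squares fitting of a
    # parametric model y ~ f(x; theta). For Gaussian noise, the Fisher
    # information matrix reduces to the Gauss-Newton matrix J^T J.
    # All names and values below are illustrative assumptions.

    def f(x, theta):
        # Toy model: a * exp(-b * x) with theta = (a, b).
        a, b = theta
        return a * np.exp(-b * x)

    def jacobian(x, theta):
        # Analytic Jacobian of f with respect to theta; one row per data point.
        a, b = theta
        e = np.exp(-b * x)
        return np.column_stack([e, -a * x * e])

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 3.0, 50)
    y = 2.0 * np.exp(-0.5 * x) + 0.02 * rng.standard_normal(x.size)

    theta = np.array([1.0, 1.0])
    lr, damping = 1.0, 1e-3  # step size and Tikhonov damping for stability

    for _ in range(50):
        r = f(x, theta) - y              # residuals
        J = jacobian(x, theta)           # N x 2 Jacobian
        grad = J.T @ r / x.size          # Euclidean gradient of 0.5 * mean(r^2)
        fisher = J.T @ J / x.size        # Fisher / Gauss-Newton matrix
        # Natural gradient step: precondition by the damped inverse Fisher.
        theta -= lr * np.linalg.solve(fisher + damping * np.eye(2), grad)

    print(theta)  # should approach the true parameters (2.0, 0.5)

With lr = 1 this coincides with damped Gauss-Newton; for general probabilistic models the Fisher matrix is instead estimated from the model's score functions, and the preconditioning makes the update invariant to smooth reparameterizations of the model.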

Brief Biography

Levon Nurbekyan is an Assistant Professor in the Department of Mathematics at Emory University. He previously held postdoctoral and visiting positions at UCLA, McGill, KAUST, the National Academy of Sciences of Armenia, and the Technical University of Lisbon. Nurbekyan obtained a Ph.D. in Mathematics from the Technical University of Lisbon through the UT Austin–Portugal CoLab program. His research interests include calculus of variations, optimal control, game theory, artificial intelligence and machine learning, optimal transportation theory, shape optimization problems, and dynamical systems.
