Randomized multilevel Monte Carlo for inference
- Kody J.H. Law, Professor, Applied Mathematics in the Department of Mathematics, University of Manchester and Manchester Institute of Data Science and AI
KAUST
Overview
Abstract
Often in the context of data-centric science and engineering applications, one endeavours to learn complex systems in order to make more informed predictions and high-stakes decisions under uncertainty. Some key challenges which must be met in this context are robustness, generalizability, and interpretability. The Bayesian framework addresses these three challenges, while bringing with it a fourth, undesirable feature: it is typically far more expensive than its deterministic counterparts. In the 21st century, and increasingly over the past decade, a growing number of methods have emerged which allow one to leverage cheap low-fidelity models in order to precondition algorithms for performing inference with more expensive models, making Bayesian inference tractable in the context of high-dimensional and expensive models. Some notable examples are multilevel Monte Carlo (MLMC), multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC), which provably achieve the dimension-independent (including infinite-dimensional) canonical complexity rate: a cost of order 1/MSE, where MSE is the target mean squared error. Some parallelizability is typically lost in an inference context, but recently this has been largely recovered via novel double randomization approaches. Such an approach delivers i.i.d. samples of quantities of interest which are unbiased with respect to the infinite-resolution target distribution. This talk will describe the general approach, with a focus on a Markov chain Monte Carlo (MCMC) method. Time permitting, some sequential Monte Carlo methods will be discussed. Over the coming decade, this family of algorithms has the potential to transform data-centric science and engineering, as well as classical machine learning applications such as deep learning, by scaling up and scaling out fully Bayesian inference.
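To make the randomization idea concrete, the following is a minimal Python sketch of a single-term randomized estimator in the spirit of rMLMC (Rhee-Glynn style), applied to a toy problem rather than the MCMC construction of the talk. Everything here is an illustrative assumption: the function names (approx, single_term_rmlmc), the trapezoidal toy approximation, and the geometric level distribution. The mechanism is the point: draw a random level L, form the coupled increment between consecutive resolutions using shared randomness, and reweight by 1/P(L = l); this removes the discretization bias, so each sample is i.i.d. and unbiased with respect to the infinite-resolution quantity.

    import numpy as np

    rng = np.random.default_rng(0)

    def approx(level, u):
        # Level-l approximation of a toy quantity of interest:
        # trapezoidal estimate of int_0^1 exp(-u*x) dx on 2**level cells.
        x = np.linspace(0.0, 1.0, 2 ** level + 1)
        y = np.exp(-u * x)
        return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

    def single_term_rmlmc(p=0.6, n_samples=50_000):
        # Single-term randomized MLMC: each sample draws a random level
        # L ~ Geometric(p), forms the coupled increment f_L - f_{L-1}
        # (the same random input u is used at both levels), and reweights
        # by 1/P(L = l). The result is an i.i.d., unbiased sample of the
        # infinite-resolution expectation E[int_0^1 exp(-U*x) dx], U ~ Exp(1).
        samples = np.empty(n_samples)
        for i in range(n_samples):
            u = rng.exponential()        # shared randomness couples the levels
            L = rng.geometric(p) - 1     # random level in {0, 1, 2, ...}
            prob = p * (1.0 - p) ** L    # P(L = l)
            delta = approx(L, u) - (approx(L - 1, u) if L > 0 else 0.0)
            samples[i] = delta / prob
        return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

    mean, stderr = single_term_rmlmc()
    print(f"unbiased estimate: {mean:.5f} +/- {stderr:.5f}")
    # Exact value for this toy problem: ln(2) ~= 0.69315.

Since the exact value for this toy problem is ln(2), the absence of discretization bias can be checked directly. In general the level distribution must balance the variance decay of the coupled increments against the growth of per-level cost; here any p > 1/2 keeps the expected cost per sample finite while the estimator variance remains bounded.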
Brief Biography
Kody J.H. Law is a professor of Applied Mathematics in the Department of Mathematics at the University of Manchester and the Manchester Institute of Data Science and AI, and a fellow of The Alan Turing Institute, specializing in computational applied mathematics and statistics. He received his PhD in Mathematics in 2010 from the University of Massachusetts, Amherst, and subsequently held positions as a postdoctoral researcher at the University of Warwick and as a senior mathematician at King Abdullah University of Science and Technology and Oak Ridge National Laboratory. He has published in the areas of computational applied mathematics, statistics, scientific computing, and physics. His current research interests are focused on the fertile intersection of mathematics and statistics, in particular (a) data assimilation and inverse methodology, which merge physical/engineering models with data, and (b) data-driven methodology, which learns directly from data alone, for example to infer a model when one does not exist.