Post-Doc André Carlon's participation at MCQMC2022 in Linz

Simulation of an experiment with a linear elastic beam model


From July 17 to 22, 2022, the 15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing was held in Linz, Austria. André Carlon, a postdoctoral fellow in our group, presented joint work with Joakim Beck and Prof. Raúl Tempone titled "Adaptive stochastic gradient descent for Bayesian optimal experimental design."

Abstract

Experiments play a central role in many fields of science, and it is usually in the investigators' interest to perform them as efficiently as possible. However, finding the optimal design for an experiment can be a cumbersome task. In Bayesian optimal experimental design, the mutual information between the experimental observations and the parameters of interest, known as the Expected Information Gain (EIG), is commonly used as a measure of the quality of an experiment. In a gradient-based approach to maximizing the EIG, one must therefore compute the gradient of the EIG at every iteration. Here, we propose an adaptive Stochastic Gradient Descent (SGD) method that uses a double-loop Monte Carlo (DLMC) estimator of the gradient of the EIG. At every optimization iteration, we compute the DLMC sample sizes necessary to keep the relative statistical error and the relative bias uniformly bounded. Under the assumption of strong convexity of the EIG, we prove that our method attains linear convergence iteration-wise in the L2 sense. Our error analysis of the DLMC estimator of the gradient of the EIG incorporates the discretization error of the model, making the method suitable for cases where the experiment is described by a partial or ordinary differential equation. The performance of SGD with our DLMC estimator is validated with numerical results.
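To make the abstract concrete, below is a minimal sketch, in JAX, of a double-loop Monte Carlo estimator of the EIG and a plain stochastic gradient ascent loop on a one-dimensional design. This is not the authors' implementation: the forward model g, the Gaussian prior and noise, the step size, and the fixed sample sizes n_outer and n_inner are all illustrative assumptions; in the presented work, the sample sizes are chosen adaptively at each iteration to bound the relative statistical error and bias, and the analysis also accounts for model discretization error.

```python
# Toy DLMC estimator of the Expected Information Gain (EIG) and an SGD-style
# ascent loop over a scalar design xi. Model and constants are assumptions.
import jax
import jax.numpy as jnp

SIGMA = 0.5  # observation-noise standard deviation (assumed)

def g(theta, xi):
    # Hypothetical forward model: experiment response at design xi.
    return jnp.sin(xi) * theta + 0.1 * theta**2

def log_likelihood(y, theta, xi):
    # Gaussian log-likelihood log p(y | theta, xi).
    r = y - g(theta, xi)
    return -0.5 * (r / SIGMA) ** 2 - jnp.log(SIGMA * jnp.sqrt(2.0 * jnp.pi))

def eig_dlmc(xi, key, n_outer=256, n_inner=256):
    # Outer loop: draw (theta_i, y_i) ~ p(theta) p(y | theta, xi).
    k1, k2, k3 = jax.random.split(key, 3)
    theta = jax.random.normal(k1, (n_outer,))
    y = g(theta, xi) + SIGMA * jax.random.normal(k2, (n_outer,))
    # Inner loop: fresh prior samples estimate the evidence p(y_i | xi).
    theta_in = jax.random.normal(k3, (n_inner,))
    ll_in = jax.vmap(lambda yi: log_likelihood(yi, theta_in, xi))(y)
    log_evidence = jax.scipy.special.logsumexp(ll_in, axis=1) - jnp.log(n_inner)
    # DLMC estimate of the EIG: mean of log p(y|theta,xi) - log p(y|xi).
    return jnp.mean(log_likelihood(y, theta, xi) - log_evidence)

# Gradient of the DLMC estimator w.r.t. the design, via autodiff.
grad_eig = jax.grad(eig_dlmc)

def optimize_design(xi0, key, steps=200, lr=0.05):
    xi = xi0
    for _ in range(steps):
        key, sub = jax.random.split(key)
        # The paper adapts n_outer/n_inner every iteration; here they are fixed.
        xi = xi + lr * grad_eig(xi, sub)  # ascent step: maximize the EIG
    return xi

xi_opt = optimize_design(jnp.array(0.3), jax.random.PRNGKey(0))
print("optimized design:", xi_opt)
```

Because the observations y are generated by the reparameterization y = g(theta, xi) + noise, automatic differentiation propagates the design sensitivity through g, yielding a pathwise Monte Carlo estimate of the EIG gradient at each iteration.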