Derivative-Free Global Minimization: Relaxation, Monte Carlo and Sampling
Overview
Abstract
We develop a derivative-free global minimization algorithm that is based on a gradient flow of a relaxed functional. We combine relaxation ideas, Monte Carlo methods, and resampling techniques with advanced error estimates. Compared with well-established algorithms, the proposed algorithm has a high success rate in a broad class of functions, including convex, non-convex, and non-smooth functions, while keeping the number of evaluations of the objective function small.
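The abstract combines Monte Carlo sampling with resampling to avoid gradients. As a rough illustration of that general idea (not the speaker's algorithm, whose relaxed-functional gradient flow and error estimates are specific to the talk), here is a minimal cross-entropy-style sketch: draw candidate points, keep an elite subset, and refit the sampler around them. All names and parameters are illustrative assumptions.

```python
import numpy as np

def sample_resample_minimize(f, x0, sigma=1.0, n_samples=100,
                             n_elite=10, n_iter=100, seed=0):
    """Generic sampling-with-resampling minimizer (illustrative sketch only):
    sample candidates, evaluate f, resample around the best points."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    std = np.full_like(mean, sigma)
    for _ in range(n_iter):
        # Monte Carlo step: draw candidate points around the current mean
        pts = rng.normal(mean, std, size=(n_samples, mean.size))
        vals = np.apply_along_axis(f, 1, pts)
        # Resampling step: keep the elite fraction and refit the sampler
        elite = pts[np.argsort(vals)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mean

# Non-convex, non-smooth-friendly test objective (global minimum at 0)
f = lambda x: np.sum(x**2 - np.cos(3 * x)) + x.size
xmin = sample_resample_minimize(f, x0=[2.0, -2.0])
```

Note that each iteration costs only `n_samples` evaluations of the objective and no derivatives, which is the trade-off the abstract highlights: broad applicability (convex, non-convex, non-smooth) at a controlled evaluation budget.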
Brief Biography
Diogo Gomes received his Ph.D. in Mathematics from the University of California at Berkeley in 2000 and was awarded a Habilitation in Mathematics from Universidade Técnica de Lisboa in 2006. Before joining the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, where he is Professor and AMCS program chair, he was a faculty member at the Instituto Superior Técnico from 2001. His postdoctoral research included positions at the Institute for Advanced Study in Princeton in 2000 and the University of Texas at Austin in 2001. Professor Gomes is known for his integration of mean-field game theory with price models and for his contributions to regularity theory and monotonicity methods for mean-field games. His work encompasses both theoretical and numerical aspects, and his achievements in these areas have significantly advanced the understanding of these models.