Towards trustworthy use of scientific machine learning in large-scale numerical simulations

  • Alena Kopanicakova, Researcher, Brown University

Building 9, Level 3, Room 3128


Overview

Abstract

Recently, scientific machine learning (SciML) has expanded the capabilities of traditional numerical approaches by simplifying computational modeling and providing cost-effective surrogates. However, SciML models suffer from the absence of explicit error control, a computationally intensive training phase, and a lack of reliability in practice. In this talk, we will take the first steps toward addressing these challenges by exploring two research directions. First, we will demonstrate how advanced numerical methods, such as multilevel and domain-decomposition solution strategies, can contribute to efficient training and produce more accurate SciML models. Second, we propose to hybridize SciML models with state-of-the-art numerical solution strategies. This approach allows us to take advantage of the accuracy and reliability of standard numerical methods while harnessing the efficiency of SciML. The effectiveness of the proposed training and hybridization strategies will be demonstrated through several numerical experiments, encompassing the training of DeepONets and the solution of linear systems of equations arising from high-fidelity discretizations of linear parametric PDEs using DeepONet-enhanced multilevel and domain-decomposition methods.
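To give a rough sense of the hybridization idea, the sketch below shows a classical two-level iteration for a linear system A u = f arising from a PDE discretization: damped Jacobi smoothing on the fine level plus a Galerkin coarse-level correction. This is only a minimal illustration under stated assumptions, not the speaker's method: here the coarse solve is exact, whereas in a DeepONet-enhanced variant a trained operator network would stand in for (or accelerate) that coarse component. All function names and parameters are illustrative.

```python
import numpy as np

def jacobi_smooth(A, u, f, sweeps=2, omega=2.0 / 3.0):
    """Damped Jacobi relaxation: the cheap fine-level smoother."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        u = u + omega * d_inv * (f - A @ u)
    return u

def linear_interpolation(n_c):
    """1D linear interpolation from n_c coarse points to 2*n_c + 1 fine points."""
    P = np.zeros((2 * n_c + 1, n_c))
    for j in range(n_c):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    return P

def coarse_correction(A, r, P):
    """Exact Galerkin coarse-level solve. In a DeepONet-enhanced variant, a trained
    operator network would approximate this residual-to-correction map (hypothetical)."""
    A_c = P.T @ A @ P
    return P @ np.linalg.solve(A_c, P.T @ r)

def two_level_solve(A, f, P, tol=1e-8, max_iter=50):
    """Stationary two-level iteration: pre-smoothing, coarse correction, post-smoothing."""
    u = np.zeros_like(f)
    for k in range(max_iter):
        u = jacobi_smooth(A, u, f)
        u = u + coarse_correction(A, f - A @ u, P)
        u = jacobi_smooth(A, u, f)
        if np.linalg.norm(f - A @ u) < tol * np.linalg.norm(f):
            return u, k + 1
    return u, max_iter

# Toy usage: 1D Poisson problem (tridiagonal stiffness matrix) with 63 fine unknowns.
n_c = 31
A = 2.0 * np.eye(2 * n_c + 1) - np.eye(2 * n_c + 1, k=1) - np.eye(2 * n_c + 1, k=-1)
f = np.ones(2 * n_c + 1)
u, iters = two_level_solve(A, f, linear_interpolation(n_c))
print(f"two-level iteration stopped after {iters} cycles, relative residual "
      f"{np.linalg.norm(f - A @ u) / np.linalg.norm(f):.2e}")
```

The design point the talk addresses is where the learned component sits in such a cycle: replacing the expensive or hard-to-scale pieces (e.g., the coarse solve) with a surrogate while the smoothing and residual checks of the classical method retain explicit error control.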

Brief Biography

Alena Kopanicakova is a researcher at Brown University (USA). Before joining Brown University, she was a postdoctoral researcher at Università della Svizzera italiana (Switzerland), where she also obtained her PhD in Computational Science. Her research focuses on designing multilevel and domain-decomposition methods for large-scale non-convex optimization problems. Currently, she is developing scalable training algorithms for (scientific) machine learning (ML) applications and exploring the potential of ML approaches for enhancing the convergence and robustness of classical iterative methods.

Presenters

Alena Kopanicakova, Researcher, Brown University