Overview

Abstract

Coffee Time: 15:30 - 16:00

Kernel matrices arise in computational physics, chemistry, statistics, and machine learning. Fast algorithms for matrix-vector multiplication with kernel matrices have been developed and are a subject of continuing interest, including here at KAUST. One also often needs fast algorithms to solve systems of equations involving large kernel matrices. Fast direct methods can sometimes be used, for example, when the physical problem is two-dimensional. In this talk, we address preconditioning for the iterative solution of kernel matrix systems. The spectrum of a kernel matrix depends significantly on the parameters of the kernel function used to define it, e.g., a length scale. This makes it challenging to design a preconditioner for a (regularized) kernel matrix that is robust across different parameter values. We will first discuss the Nyström approximation to a kernel matrix, which is very effective when the kernel matrix has low rank. For kernel matrices with moderate rank, we propose a correction to the Nyström approximation. The resulting preconditioner has a block-factorized form and is efficient for kernel matrices with large numerical ranks. Important issues are the estimation of the rank of the kernel matrix and the selection of the landmark points in the Nyström approximation. The preconditioner also has applications in stochastic trace estimation.
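
For context, the standard Nyström construction picks a set S of landmark points and approximates K ≈ C W⁺ Cᵀ, where C = K(X, X_S) and W = K(X_S, X_S); the Woodbury identity then makes (K + μI)⁻¹ cheap to apply as a preconditioner. The NumPy sketch below illustrates this textbook construction only, not the speaker's corrected block-factorized method; the Gaussian kernel, the eigenvalue cutoff, and the landmark choice are assumptions made for the example.

```python
# Minimal sketch of a Nystrom preconditioner for (K + mu*I) x = b.
# Illustrative only: RBF kernel, cutoff, and landmarks are example choices.
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    # Gaussian (RBF) kernel matrix between point sets X (n x d) and Y (m x d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def nystrom_preconditioner(X, landmarks, mu, length_scale=1.0):
    # Nystrom: K ~ C W^+ C^T with C = K(X, X_S), W = K(X_S, X_S).
    C = rbf_kernel(X, X[landmarks], length_scale)
    W = rbf_kernel(X[landmarks], X[landmarks], length_scale)
    # Stable pseudoinverse of W via eigendecomposition, dropping tiny modes.
    lam, U = np.linalg.eigh(W)
    keep = lam > 1e-10 * lam.max()
    F = C @ (U[:, keep] / np.sqrt(lam[keep]))  # K ~ F F^T
    # Woodbury: (F F^T + mu I)^{-1} = (I - F (mu I + F^T F)^{-1} F^T) / mu.
    r = F.shape[1]
    M = np.linalg.cholesky(mu * np.eye(r) + F.T @ F)
    def apply(b):
        # Solve the small r x r system with the Cholesky factor M (M M^T).
        y = np.linalg.solve(M.T, np.linalg.solve(M, F.T @ b))
        return (b - F @ y) / mu
    return apply

# Usage: the returned function applies the inverse of the preconditioner,
# e.g., as M^{-1} r inside preconditioned CG.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
landmarks = rng.choice(500, size=50, replace=False)  # naive random landmarks
apply_Pinv = nystrom_preconditioner(X, landmarks, mu=1e-2)
z = apply_Pinv(rng.standard_normal(500))
```

Landmark selection here is uniform random purely for brevity; as the abstract notes, choosing the landmark points well (and estimating the rank) is one of the important issues the talk addresses.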

This is joint work with Yuanzhe Xi, Shifan Zhao, and Tianshi Xu.

Brief Biography

Edmond Chow is Professor and Associate Chair in the School of Computational Science and Engineering at Georgia Institute of Technology. His research is in developing numerical methods specialized for high-performance computers and applying these methods to enable the solution of large-scale physical simulation problems in science and engineering. Dr. Chow previously held positions at D. E. Shaw Research and Lawrence Livermore National Laboratory. He was chair of the 2022 ACM Gordon Bell Prize committee and was co-chair of the 2022 SIAM Annual Meeting. He is currently the program director of the SIAM Activity Group on Computational Science and Engineering. Dr. Chow is a Fellow of SIAM.

Presenters

Edmond Chow, Professor and Associate Chair, School of Computational Science and Engineering, Georgia Institute of Technology