The Role and Importance of Sparse Matrices in Statistics
Abstract
The common tool for dealing with dependence in statistics is linear dependence, described through covariance or its scaled version, correlation. In this framework, a zero in the correlation matrix indicates independence, which is a very strong property and not something we would typically aim for in statistical models. A weaker notion is conditional independence, which corresponds to the inverse of the covariance matrix, called the precision matrix: a zero in the precision matrix corresponds to conditional independence. These matrices can be very sparse even though all the random variables are dependent. Moving on to additive (generalised) regression models and the class of latent Gaussian models, precision matrices have an amazing sparsity-preserving property which we can exploit both when we construct models and when we do estimation/inference. Within this class, we can construct very accurate approximations using nested Laplace approximations, whose success depends critically on (parallel) numerical methods for large sparse symmetric matrices. The operations needed are (sparse) Cholesky decomposition, linear solves of various types, log-determinants, and selected inversion. I will show some applications, the typical structures of the sparse matrices arising in them, discuss the link to solving PDEs with FEMs, and outline future plans.
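To make the contrast between sparse precision and dense covariance concrete, here is a minimal sketch in R (the language of the R-INLA project) using the Matrix package. The tridiagonal AR(1)-type precision matrix and all variable names are illustrative choices, not taken from the talk; the sketch only exercises the operations the abstract lists (sparse Cholesky, linear solve, log-determinant).

library(Matrix)

## Illustrative tridiagonal (AR(1)-type) precision matrix Q: each variable is
## conditionally independent of all non-neighbours given its neighbours.
n <- 6
Q <- bandSparse(n, k = c(0, 1),
                diagonals = list(rep(2.1, n), rep(-1, n - 1)),
                symmetric = TRUE)

## Sparse Cholesky factorisation: Q = t(R) %*% R with R upper triangular.
R <- chol(Q)

## Log-determinant of Q, read off the Cholesky factor.
logdet_Q <- 2 * sum(log(diag(R)))

## Sparse linear solve Q x = b.
b <- rnorm(n)
x <- solve(Q, b)

## The covariance matrix solve(Q) is essentially dense even though Q is
## sparse: zeros in Q encode conditional independence, not independence.
Sigma <- solve(Q)
print(nnzero(Q))      # few non-zeros in the precision matrix
print(nnzero(Sigma))  # nearly all entries non-zero in the covariance

The last two lines illustrate the abstract's point that the precision matrix can be very sparse while every pair of variables remains (marginally) dependent.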
Brief Biography
Haavard Rue has been a professor of Statistics in the CEMSE Division at KAUST, Saudi Arabia, since 2017. He was named a Highly Cited Researcher in 2019 and 2020, gave the Bahadur Memorial Lectures at the University of Chicago in 2018, and in 2021 was awarded the Royal Statistical Society Guy Medal in Silver. His research is mainly centred around the R-INLA project; see www.r-inla.org