There are several Bayesian models where the posterior density is not available in closed and tractable form. In these situations, Markov chain Monte Carlo algorithms and approximate methods, such as variational Bayes and expectation propagation, provide common solutions for posterior inference. However, in high-dimensional studies, with a large or even huge number of predictors p, these approaches still face open problems in terms of scalability and quality of the posterior approximation. Notably, such issues also arise in basic predictor-dependent models for binary data, which appear in a wide variety of formulations. For example, in this talk I will prove that the joint posterior mode, at which classical mean-field variational Bayes and Laplace approximations for probit models are centered, is a poor point estimator and substantially degrades predictive performance. Motivated by these issues, I will introduce a novel variational approximation for the posterior distribution of the coefficients in a probit regression with Gaussian priors. This method leverages a representation with global and local variables but, unlike classical mean-field assumptions, it crucially avoids a fully factorized approximation and instead relies on a variational family in which only the joint density of the local variables is factorized. I will prove that the approximate posterior arising from this assumption belongs to a tractable class of unified skew-normal distributions that preserves skewness and, unlike state-of-the-art variational Bayes solutions, converges to the exact posterior in high-dimensional settings as p goes to infinity. A scalable coordinate ascent variational algorithm is proposed to obtain the optimal parameters of the approximating densities.
As I will show through theoretical results and an application to Alzheimer's disease data, such a routine requires a number of iterations that converges to one as p goes to infinity, and it easily scales to large-p settings where expectation propagation and state-of-the-art Markov chain Monte Carlo algorithms are impractical.
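As background to the abstract's claim about mode-centered approximations: classical Laplace and mean-field variational Bayes approximations for probit models are centered at the joint posterior mode. The sketch below (not the speaker's method) illustrates how that mode can be computed for a probit regression with a Gaussian prior via Newton's method; the simulated data, sample sizes, and prior variance `nu2` are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: probit regression y_i ~ Bernoulli(Phi(x_i' beta)) with
# Gaussian prior beta ~ N(0, nu2 * I). Laplace and mean-field variational
# approximations are centered at the joint posterior mode (MAP), computed
# here by Newton's method on the strictly concave log-posterior.
# Data and prior variance are illustrative assumptions, not from the talk.

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)
nu2 = 25.0  # assumed prior variance

beta = np.zeros(p)
for _ in range(50):
    u = (2 * y - 1) * (X @ beta)        # signed linear predictor s_i * x_i' beta
    lam = norm.pdf(u) / norm.cdf(u)     # inverse Mills ratio
    grad = X.T @ ((2 * y - 1) * lam) - beta / nu2       # log-posterior gradient
    W = lam * (lam + u)                 # curvature weights (nonnegative)
    H = X.T @ (W[:, None] * X) + np.eye(p) / nu2        # negative Hessian
    step = np.linalg.solve(H, grad)     # Newton direction
    beta = beta + step
    if np.linalg.norm(step) < 1e-10:    # converged
        break

beta_map = beta
print("MAP estimate:", np.round(beta_map, 3))
```

Because the log-posterior is strictly concave (log-concave probit likelihood plus a Gaussian prior), plain Newton iterations converge quickly for small p; the talk's point is that this mode, however cheap to compute, can be a poor centering in high dimensions.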
Daniele Durante is an Assistant Professor of Statistics in the Department of Decision Sciences at Bocconi University, Italy, and a Research Affiliate at the Bocconi Institute for Data Science and Analytics (BIDSA). His research is characterized by an interdisciplinary approach at the intersection of methods, computation, theory, and applications, with a particular interest in the Bayesian approach to inference. He has been awarded the Laplace Prize (SBSS Section of the ASA, 2013), the Byar Award (Biometrics Section of the ASA, 2015), the Mitchell Prize (ISBA, 2018), and the Best Ph.D. Thesis Award (Italian Statistical Society, 2018). He is currently an Associate Editor of Biometrika and of the Journal of Computational and Graphical Statistics. In 2018 he served as chair of the j-ISBA and y-SIS sections.
Light refreshments will be served around 15:15 HRS