Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024

About the event

This scientific meeting will concentrate on stochastic algorithms and their rigorous numerical analysis for various problems, including statistical learning, optimization, and approximation. 

Stochastic algorithms are valuable tools for addressing challenging computational problems. Machine learning, stochastic optimal control, data assimilation, and Bayesian statistics are important research areas where these algorithms exhibit their strength. The range of applications is immense and of great interest to KAUST and the Kingdom.

The meeting will include contributions that provide mathematical foundations for algorithmic analysis or showcase relevant applications.

Some of the external speakers confirmed for the event:

  • Prof. Mike Giles (University of Oxford);
  • Prof. Terry Lyons (University of Oxford);
  • Prof. Eric Moulines (Ecole Polytechnique);
  • Prof. Cristopher Salvi (Imperial College London);
  • Prof. Kengo Kamatani (Institute of Statistical Mathematics);
  • Prof. Antoine (Jack) Jacquier (Imperial College London);
  • Prof. Alexandros Beskos (University College London / UCL);
  • Prof. Fabio Nobile (Ecole Polytechnique Fédérale de Lausanne / EPFL);
  • Prof. Peter Friz (Technical University of Berlin).


Please log in HERE to register for the event.

Poster Sessions

Two poster sessions will be held on May 20th and 27th. To participate, submit a title and abstract via the registration form. The key dates are as follows:

  • March 7 - Title and abstract submission opens
  • April 7 - Title and abstract submission closes
  • April 14 - Poster upload link sent to authors of accepted abstracts
  • May 14 - Deadline for poster upload


As part of the workshop, we will offer two minicourses: "Quantum Computing for Finance" by Prof. Antoine Jacquier and "Scaling limits of random neural networks" by Prof. Cris Salvi.

About the minicourse "Quantum Computing for Finance" by Prof. Antoine Jacquier

Quantum Computing, relegated for decades to the status of a spooky, distant myth, is now becoming a reality: quantum computers, albeit small in scale, are already available, developed by the likes of IBM, Rigetti, D-Wave, Google, and Microsoft. However, a quantum computer is not simply a bigger, more powerful computer; it requires a whole new set of algorithms to perform useful tasks. These algorithms, and the underlying technology, draw on the laws of quantum mechanics and are fundamentally different from our usual numerical toolbox. The goal of this course is to provide a mathematical introduction to Quantum Computing and to highlight applications in Quantitative Finance, in particular to Monte Carlo simulation, machine learning, and optimisation. Numerical examples (in Python) will also be presented to make the material tangible.
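The abstract names Monte Carlo simulation as a target application. As a flavour of the kind of Python numerical example involved, the sketch below prices a European call option by plain classical Monte Carlo under Black–Scholes dynamics, the sort of computation that quantum amplitude-estimation methods aim to accelerate. This is an illustrative sketch by the editors, not course material, and all parameter values are invented.

```python
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=0):
    """Plain Monte Carlo price of a European call under Black-Scholes dynamics."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol**2) * maturity   # risk-neutral log-drift
    diffusion = vol * math.sqrt(maturity)      # volatility of the terminal log-price
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                      # standard normal increment
        s_t = s0 * math.exp(drift + diffusion * z)   # terminal asset price
        total += max(s_t - strike, 0.0)              # call payoff
    return math.exp(-rate * maturity) * total / n_paths  # discounted average

# Hypothetical parameters chosen purely for illustration
price = mc_european_call(s0=100, strike=100, rate=0.05, vol=0.2,
                         maturity=1.0, n_paths=100_000)
```

The Monte Carlo error decays like 1/sqrt(n_paths); the quadratic speed-up of quantum amplitude estimation over this rate is one of the applications the course advertises.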

About the minicourse "Scaling limits of random neural networks" by Prof. Cris Salvi

Deep neural networks are typically initialised with random parameters. In this mini-course I will present a number of results for computing moments and quantifying spectral properties of random neural networks as the number of neurons diverges. I will explain how the insights gained from these results bear on training dynamics, the design of effective backpropagation strategies, and phenomena such as exploding/vanishing gradients and feature learning. Throughout the course we will use the running example of randomly initialised ResNets and their continuous-depth counterparts, Neural ODEs, and study how scaling limits for these architectures can be analysed using tools from rough path theory.
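As a toy illustration of the kind of moment computation the abstract describes (an editors' sketch, not the course's material): for a two-layer ReLU network with i.i.d. standard Gaussian weights and 1/sqrt(width) output scaling, the output variance is width-independent, but the excess kurtosis decays like 15/width, so the output distribution becomes Gaussian as the number of neurons diverges, the simplest instance of a scaling limit.

```python
import math
import random

def random_relu_network(width, x, rng):
    """One output of a random two-layer ReLU network with 1/sqrt(width) scaling."""
    total = 0.0
    for _ in range(width):
        w = rng.gauss(0.0, 1.0)           # input weight
        a = rng.gauss(0.0, 1.0)           # output weight
        total += a * max(w * x, 0.0)      # ReLU hidden unit
    return total / math.sqrt(width)       # CLT scaling

def excess_kurtosis(width, x=1.0, n_samples=20_000, seed=0):
    """Empirical excess kurtosis of the network output; zero for a Gaussian."""
    rng = random.Random(seed)
    samples = [random_relu_network(width, x, rng) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    mu4 = sum((s - mean) ** 4 for s in samples) / n_samples
    return mu4 / var**2 - 3.0

# A single unit has excess kurtosis 15 here; summing `width` i.i.d. units
# divides it by width, so the output approaches a Gaussian as width grows.
kurt = {m: excess_kurtosis(m) for m in (1, 10, 100)}
```

The course goes far beyond this single-layer central limit theorem, to deep ResNets and Neural ODEs, where the depth scaling limit is captured by rough path theory rather than a plain Gaussian.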


YouTube Channel

Previous talks and recorded poster presentations are available on the workshop's YouTube channel.


Contact Person