Prof. Alessandro Astolfi, Electronic Engineering, University of Rome Tor Vergata
Wednesday, November 06, 2024, 12:00 - 13:00
Auditorium between Buildings 2 and 3

Abstract

The interplay between Pontryagin’s Minimum Principle and Bellman’s Principle of Optimality.
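For orientation, the two principles in their standard textbook form, for a problem of minimizing an integral running cost l(x, u) subject to dynamics x' = f(x, u) (generic notation, not taken from the talk):

    % Bellman / dynamic programming: the value function V satisfies the HJB equation
    0 = \min_{u} \Big[ \ell(x,u) + \nabla V(x)^{\top} f(x,u) \Big]
    % Pontryagin: with Hamiltonian H(x,u,\lambda) = \ell(x,u) + \lambda^{\top} f(x,u),
    % the optimal control minimizes H pointwise along the optimal trajectory
    u^{*}(t) = \arg\min_{u} H\big(x^{*}(t), u, \lambda(t)\big), \qquad \dot{\lambda} = -\partial_{x} H
    % Where V is differentiable, the costate satisfies \lambda(t) = \nabla V(x^{*}(t)),
    % which is one concrete bridge between the two principles.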

Tuesday, October 29, 2024, 11:00 - 12:30
Building 2, Level 5, Room 5209; https://kaust.zoom.us/j/95703237916
Modeling data distributions is a fundamental aspect of machine learning, encompassing both discriminative modeling, which focuses on building predictive models, and generative modeling, which aims to synthesize new data that mirrors existing distributions.
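For orientation only (standard definitions, not specific to this talk), the two regimes can be written as:

    % Discriminative modeling: learn the conditional distribution of targets given inputs,
    p_{\theta}(y \mid x)
    % Generative modeling: learn the data distribution itself (possibly jointly with labels),
    p_{\theta}(x) \quad \text{or} \quad p_{\theta}(x, y)
    % so that new samples x' \sim p_{\theta}(x) resembling the training data can be drawn.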
Sunday, October 27, 2024, 13:00 - 15:00
Building 1, Level 2, Room 2202
Existing molecular visualization techniques are limited to small biological entities like viruses and bacteria due to hardware limitations.
Monday, October 07, 2024, 12:00 - 13:00
Building 9, Level 2, Room 2325
In recent years, the rapid expansion of model and data scales has notably enhanced the performance of AI systems. However, this growth has sharply increased GPU memory demands, and current memory capacities constrain further scaling.
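As a rough illustration of why memory becomes the bottleneck, here is a back-of-the-envelope estimate under common mixed-precision assumptions (fp16 weights and gradients, fp32 master weights and Adam moments, activations excluded); the parameter count is hypothetical:

    # Back-of-the-envelope GPU memory estimate for training (illustrative assumptions:
    # fp16 weights and gradients, fp32 master weights and Adam moments; activations excluded).
    def training_memory_gb(num_params: float) -> float:
        bytes_per_param = (
            2      # fp16 weights
            + 2    # fp16 gradients
            + 4    # fp32 master copy of weights
            + 4    # fp32 Adam first moment
            + 4    # fp32 Adam second moment
        )
        return num_params * bytes_per_param / 1e9

    # A hypothetical 7e9-parameter model already needs ~112 GB for weights, gradients,
    # and optimizer state, exceeding a single 80 GB accelerator before activations are counted.
    print(f"{training_memory_gb(7e9):.0f} GB")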
Monday, September 09, 2024, 12:00 - 13:00
Building 9, Level 2, Room 2325
Currently, the attention mechanism is a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the outstanding performance it delivers, but also because it offers a plausible innate explanation for the behavior of neural architectures, which is notoriously difficult to analyze.
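For reference, a minimal sketch of the standard scaled dot-product attention these models build on (generic textbook form, not code from the talk); the returned weights are what is typically inspected when attention is used as an explanation:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Textbook scaled dot-product attention (generic sketch, not the speaker's code).

        Q: (n, d_k), K: (m, d_k), V: (m, d_v). Returns attended values and the
        attention weights, which are often inspected to explain model behavior.
        """
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                      # pairwise query-key similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
        return weights @ V, weights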
Monday, September 02, 2024, 12:00 - 13:00
Building 9, Level 2, Room 2325
The design of efficient parallel/distributed optimization methods and tight analysis of their theoretical properties are important research endeavors. While minimax complexities are known for sequential optimization methods, the theory of parallel optimization methods is surprisingly much less explored, especially in the presence of data, compute and communication heterogeneity.
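As a baseline for the setting described here, a minimal sketch of synchronous parallel SGD under data heterogeneity, where each worker holds its own local objective and the server averages their gradients (a textbook scheme with illustrative numbers, not a method from the talk):

    import numpy as np

    # Generic synchronous parallel-SGD step with data heterogeneity: each worker
    # holds its own local objective f_i, and the server averages their gradients.
    def parallel_sgd_step(x, local_grads, lr=0.1):
        """local_grads: list of gradient callables, one per worker (heterogeneous data)."""
        g = np.mean([grad(x) for grad in local_grads], axis=0)  # aggregate (all-reduce)
        return x - lr * g

    # Example: two workers whose local minimizers disagree (data heterogeneity).
    grads = [lambda x: 2 * (x - 1.0), lambda x: 2 * (x + 3.0)]
    x = np.zeros(1)
    for _ in range(100):
        x = parallel_sgd_step(x, grads)
    print(x)  # converges to the average of the local minimizers, -1.0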
Prof. Rio Yokota, Tokyo Institute of Technology
Thursday, August 29, 2024, 16:00 - 17:00
Building 1, Level 3, Room 3119
Large language models (LLMs) have become part of our daily life and are now indispensable tools for conducting research as well. The performance of LLMs is known to increase as model size and data size are scaled up.
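One commonly cited way to express this scaling behavior is the Chinchilla-style parametric fit below, shown for orientation only; the talk may use a different formulation:

    % N = number of parameters, D = number of training tokens, L = pretraining loss;
    % E, A, B, \alpha, \beta are constants fitted to empirical training runs.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}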
Marcin Rogowski, PhD Student, Computer Science
Wednesday, August 28, 2024, 14:30 - 15:30
Building 3, Level 5, Room 5220
Over the past three decades, high-performance computing has undergone significant transformations. Tremendous advances have been made in per-node compute capability, memory bandwidth, and high-performance interconnects and storage. However, these technologies have evolved at disparate rates.
Monday, August 26, 2024, 12:00 - 13:00
Building 9, Level 2, Room 2325
The overarching goal of the KAUST Computational Sciences Group is enabling accurate and efficient physics-based simulations for applications in Visual Computing. Towards this goal, the group develops new principled computational methods based on solid theoretical foundations.
Wednesday, July 17, 2024, 12:00 - 14:00
Building 1, Level 3, Room 3119
Monocular depth estimation, the task of inferring depth information from a single RGB image, is a fundamental yet challenging problem in computer vision due to its inherently ill-posed nature. This dissertation presents a series of approaches that significantly advance the state-of-the-art in depth estimation.
Thursday, June 06, 2024, 13:00 - 15:00
Building 1, Level 4, Room 4214
Currently, the acquisition of accurate cryogenic electron microscopy data is hampered by complex and time-consuming processes, a low signal-to-noise ratio, and the missing-wedge problem, leading to a lack of highly accurate imaging data. Such data are necessary for developing computational methods and visualizations, and essential for training the deep learning models used to solve inverse problems.
António Casimiro is an Associate Professor at the Department of Informatics of the University of Lisboa Faculty of Sciences (FCUL)
Thursday, May 30, 2024, 15:30 - 16:30
Building 4, Level 5, Room 5220
With the ever-increasing number of cyberthreats, securing IT and OT infrastructures against them has become not only desirable but fundamental. Network Intrusion Detection Systems (NIDS) are key assets for system protection, providing early alerts of network attacks. An important class of NIDS are those based on ML techniques, around which a substantial amount of research is currently being done. Unfortunately, being ML-based, these NIDS can be targeted by adversarial evasion attacks (AEA), which malicious parties exploit to carry out network attacks without being detected.
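To make the threat concrete, a minimal sketch of the idea behind an evasion attack on a differentiable detector: an FGSM-style perturbation of continuous flow features, with hypothetical weights and feature values. Real traffic adds protocol and feasibility constraints that this toy example ignores:

    import numpy as np

    # Evasion sketch against a logistic detector: score = sigmoid(w.x + b), label 1 = "attack".
    # Perturb x within an L-infinity ball of radius eps to reduce the attack score.
    def fgsm_evasion(x, w, eps=0.1):
        grad = w            # gradient of the score w.r.t. x is proportional to w
        return x - eps * np.sign(grad)

    w = np.array([2.0, -1.0, 0.5])   # hypothetical detector weights
    x = np.array([1.2, 0.3, 0.8])    # hypothetical malicious flow features
    x_adv = fgsm_evasion(x, w)
    print(x @ w, x_adv @ w)          # the logit (attack score) drops after perturbation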
Thursday, May 30, 2024, 11:00 - 14:00
Building 3, Level 5, Room 5220
The first part of the dissertation presents a study on the convergence properties of Stein Variational Gradient Descent (SVGD), a sampling algorithm with applications in machine learning. The research delves into the theoretical analysis of SVGD in the population limit, focusing on its behavior under various conditions, including Talagrand's T1 inequality and the (L0, L1)-smoothness condition. The study also introduces an improved version of SVGD with importance weights, demonstrating its potential to accelerate convergence and enhance stability.
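For readers unfamiliar with the algorithm, a plain SVGD sketch with an RBF kernel, following Liu and Wang's original formulation; the dissertation's convergence analysis and its importance-weighted variant are not reproduced here:

    import numpy as np

    def svgd_step(X, grad_log_p, step=0.1, h=1.0):
        """One SVGD update. X: (n, d) particles; grad_log_p: callable returning (n, d) scores."""
        n = X.shape[0]
        diffs = X[:, None, :] - X[None, :, :]                    # (n, n, d), x_i - x_j
        K = np.exp(-np.sum(diffs**2, axis=-1) / (2 * h**2))      # RBF kernel k(x_i, x_j)
        grad_K = diffs * (K / h**2)[..., None]                   # grad_{x_j} k(x_j, x_i), repulsive term
        phi = (K @ grad_log_p(X) + grad_K.sum(axis=1)) / n       # Stein variational direction
        return X + step * phi

    # Example: particles flow toward a standard Gaussian target.
    rng = np.random.default_rng(0)
    X = rng.normal(loc=5.0, size=(100, 1))
    for _ in range(500):
        X = svgd_step(X, lambda X: -X)                           # grad log N(0, I) = -x
    print(X.mean(), X.std())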
Tuesday, May 28, 2024, 15:00 - 17:00
Building 2, Level 5, Room 5209
Deep Learning and generative Artificial Intelligence have grown rapidly during the past few years thanks to advances in computing power and parallel, distributed training algorithms. As a result, it has become common practice to use hundreds or thousands of machines to train very large Deep Neural Networks.
Konstantin Mishchenko
Sunday, May 05, 2024, 11:00 - 13:00
Building 9, Level 3, Room 3128, https://kaust.zoom.us/j/95768114437
In this talk, I will present some work in progress on practical optimization methods for deep learning. We will start with a discussion of several empirical techniques that enable training of large-scale models in language and vision tasks, including weight decay, averaging, and schedulers. We will then look at a new approach that we call schedule-free due to its ability to work without a pre-defined time horizon. I will share some details about the theory for these methods, explain why they might be useful in practice and then shed some light on their limitations. This talk will be oriented towards people who already have some knowledge of optimization methods.
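As background for two of the ingredients mentioned (weight decay and averaging), a generic sketch of decoupled weight decay plus running iterate averaging around plain SGD; this is explicitly not the schedule-free method itself, and all numbers are illustrative:

    import numpy as np

    # Generic illustration of two building blocks from the abstract: decoupled weight
    # decay and running (Polyak) iterate averaging, wrapped around plain SGD.
    def sgd_with_decay_and_averaging(grad, x0, lr=0.01, wd=1e-2, steps=1000):
        x = x0.copy()
        x_avg = x0.copy()
        for t in range(1, steps + 1):
            g = grad(x)
            x = x - lr * g - lr * wd * x      # decoupled (AdamW-style) weight decay
            x_avg += (x - x_avg) / t          # running average of the iterates
        return x, x_avg

    # Toy quadratic objective with hypothetical numbers.
    grad = lambda x: 2 * (x - 3.0)
    last, avg = sgd_with_decay_and_averaging(grad, np.zeros(1))
    print(last, avg)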
Dr. Jehad Abed, Postdoctoral Researcher, Fundamental AI Research at Meta
Tuesday, April 30, 2024, 11:00 - 12:00
Building 1, Level 3, Room 3426
In this talk, I will discuss our progress in advancing the discovery of catalysts for green hydrogen production and carbon dioxide conversion, as well as designing novel metal-organic frameworks for direct air capture.
Prof. Sven Dietrich, Computer Science, City University of New York
Monday, April 29, 2024, 11:30 - 12:30
Building 9, Level 2, Room 2325
To improve data transmission speed, HTTP/2 extends HTTP/1.1 with features such as stream multiplexing. Along with its wide deployment in popular web servers, numerous vulnerabilities have been exposed. Denial of service, one of the most common HTTP/2 vulnerabilities, is attributed to inappropriate implementations of flow control for stream multiplexing.
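To illustrate the flow-control issue, a toy model of HTTP/2 per-stream windows: a client that opens many streams and never sends WINDOW_UPDATE frames forces the server to buffer undelivered response bytes. The 65,535-byte default initial window follows the protocol; the connection-level window and everything else are simplified away:

    # Toy model of HTTP/2 per-stream flow control (RFC 7540): the sender may only emit
    # DATA bytes while the peer-advertised window is positive, and the window is
    # replenished only by WINDOW_UPDATE frames. The connection-level window is ignored.
    INITIAL_WINDOW = 65_535

    class Stream:
        def __init__(self):
            self.window = INITIAL_WINDOW    # bytes the server may still send
            self.buffered = 0               # response bytes stuck in server memory

        def server_send(self, nbytes):
            sent = min(nbytes, max(self.window, 0))
            self.window -= sent
            self.buffered += nbytes - sent  # remainder held until a WINDOW_UPDATE arrives

    streams = [Stream() for _ in range(100)]   # client multiplexes many streams
    for s in streams:
        s.server_send(1_000_000)               # 1 MB response per stream, no updates sent
    print(sum(s.buffered for s in streams))    # ~93 MB pinned in server memory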