Edmond Chow, Professor and Associate Chair, School of Computational Science and Engineering, Georgia Institute of Technology
Tuesday, June 06, 2023, 16:00
- 17:00
Building 2, Level 5, Room 5220
Contact Person
Coffee Time: 15:30 - 16:00
Kernel matrices arise in computational physics, chemistry, statistics, and machine learning. Fast algorithms for matrix-vector multiplication with kernel matrices have been developed and are a subject of continuing interest, including here at KAUST. One also often needs fast algorithms to solve systems of equations involving large kernel matrices. Fast direct methods can sometimes be used, for example, when the physical problem is 2-dimensional. In this talk, we address preconditioning for the iterative solution of kernel matrix systems. The spectrum of a kernel matrix depends significantly on the parameters of the kernel function used to define it, e.g., a length scale.
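The dependence of the spectrum on the length scale is easy to see numerically. The following sketch is illustrative only (the Gaussian kernel and random point set are assumptions, not taken from the talk): it builds a kernel matrix for several length scales and reports its conditioning.

```python
import numpy as np

# Illustrative sketch: build a Gaussian (RBF) kernel matrix on random points
# and observe how its conditioning varies with the length scale l.
rng = np.random.default_rng(0)
X = rng.random((200, 2))                              # 200 points in the unit square
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances

for l in (0.01, 0.1, 1.0):
    K = np.exp(-d2 / (2 * l**2))          # kernel matrix for length scale l
    print(f"l = {l}: condition number = {np.linalg.cond(K):.2e}")
```

Larger length scales push the kernel matrix toward numerical low rank and far worse conditioning, which is one reason a preconditioner must adapt to the kernel parameters.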
Prof. Gustavo Alonso, Computer Science, ETH Zurich
Monday, March 13, 2023, 12:00
- 13:00
Building 9, Level 2, Room 2325, Hall 2
Contact Person
In this talk I will discuss the shift towards hardware acceleration and show, with several examples from industry and from research, the large role that FPGAs are playing. I will hypothesize that we are in a new era in which most of the established assumptions, rules of thumb, and accumulated wisdom about many aspects of computation in general, and of data processing in particular, no longer hold and need to be revisited.
Tuesday, September 06, 2022, 12:00
- 13:00
Building 9, Level 2, Room 2322
Contact Person
Tile low-rank and hierarchical low-rank matrices can exploit the data sparsity that is discoverable all across computational science. We illustrate these methods in large-scale applications and hybridize them with similarly motivated mixed-precision representations, featuring ECRC research in progress with many collaborators.
Monday, June 20, 2022, 11:00
- 13:00
Building 9, Level 4, Room 4223
Contact Person
Scientific applications from diverse sources rely on dense matrix operations. These operations arise in Schur complements, integral equations, covariances in spatial statistics, ridge regression, radial basis functions from unstructured meshes, and kernel matrices from machine learning, among others. This thesis demonstrates how to extend the problem sizes that may be treated and how to reduce their execution time. Sometimes even forming the dense matrix can be a bottleneck, in computation or in storage.
speakers from KAUST, Birmingham, Graz, Utrecht, Stuttgart, Frankfurt, Buffalo, Linz, Weissach, Lugano, Kaliningrad, Heidelberg, State College, Philadelphia, Torino, Riyadh
Monday, March 21, 2022, 09:00
- 17:30
Building 3, Level 5, Room 5209
Contact Person

The workshop provides a forum for researchers to present and discuss recent progress in modelling and simulation.

Gabriel Ghinita, Associate Professor, University of Massachusetts, Boston
Sunday, November 28, 2021, 12:00
- 13:00
Building 9, Level 2, Room 2322, https://kaust.zoom.us/j/96553196829
Contact Person
Skyline computation is an increasingly popular query, with broad applicability to many domains. Given the trend to outsource databases, and due to the sensitive nature of the data (e.g., in healthcare), it is essential to evaluate skylines on encrypted datasets.
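As background for the talk, a skyline query returns the points not dominated by any other point; the brute-force plaintext version fits in a few lines (the hotel example below is my own illustration, not from the talk, and of course omits the encryption that is the talk's subject):

```python
def skyline(points):
    """Return the skyline (Pareto-optimal set) of points, minimizing every
    coordinate. A point is dominated if some other point is <= in every
    dimension and strictly < in at least one."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. hotels as (price, distance_to_beach): cheaper and closer is better
hotels = [(50, 8), (60, 5), (45, 9), (70, 2), (55, 9), (65, 6)]
print(skyline(hotels))   # -> [(50, 8), (60, 5), (45, 9), (70, 2)]
```

The challenge the talk addresses is evaluating exactly these dominance comparisons when the outsourced data is encrypted.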
Jinchao Xu, Affiliate Professor of Information Sciences and Technology, Penn State University
Wednesday, October 13, 2021, 09:00
- 10:00
Building 9, Level 2, Room 2322
Contact Person
I will give a self-contained introduction to the theory of the neural network function class and its application to image classification and numerical solution of partial differential equations.
Jinchao Xu, Affiliate Professor of Information Sciences and Technology, Penn State University
Tuesday, October 12, 2021, 09:00
- 10:00
BW Building 4 and 5, Level 0, Auditorium 0215
Contact Person
I will give a self-contained introduction to the theory of the neural network function class and its application to image classification and numerical solution of partial differential equations.
Jinchao Xu, Affiliate Professor of Information Sciences and Technology, Penn State University
Monday, October 11, 2021, 09:00
- 10:00
BW Building 4 and 5, Level 0, Auditorium 0215
Contact Person
I will give a self-contained introduction to the theory of the neural network function class and its application to image classification and numerical solution of partial differential equations.
Bilel Hadri, Computational Scientist, Supercomputing Lab, KAUST
Friday, July 02, 2021, 14:00
- 18:00
ISC21 (virtual), Frankfurt, Germany (Time CET)
Contact Person

Abstract

With hardware technology scaling and the trend toward heterogeneous chip design, the exis

Piotr Luszczek, Research Assistant Professor, University of Tennessee
Monday, March 01, 2021, 09:00
- 18:00
vFairs online platform (SIAM CSE21 registration required)
Contact Person

Abstract

This minisymposium brings together experts in numerical simulation who have developed HP

Thursday, October 08, 2020, 12:00
- 13:00
https://kaust.zoom.us/j/95474758108?pwd=WkwrdiszTE1uYTdmR3JRK09LVDErZz09
We present Exascale GeoStatistics (ExaGeoStat) software, a high-performance library implemented on a wide variety of contemporary hybrid distributed-shared supercomputers whose primary target is climate and environmental prediction applications.
Thursday, July 09, 2020, 16:00
- 17:00
https://kaust.zoom.us/j/94054511362
Contact Person
Out-of-core simulation systems often produce a massive amount of data that cannot fit in the aggregate fast memory of the compute nodes, and they also require reading these data back for computation. As a result, I/O data movement can be a bottleneck in large-scale simulations. Advances in memory architecture have made it feasible to integrate hierarchical storage media on large-scale systems, ranging from traditional parallel file systems through intermediate fast disk technologies (e.g., node-local and remote-shared NVMe and SSD-based Burst Buffers) up to the CPU's main memory and the GPU's High Bandwidth Memory. However, while adding faster storage media increases I/O bandwidth, it pressures the CPU, which becomes responsible for managing and moving data between these layers of storage. Simulation systems are thus vulnerable to being blocked by I/O operations. The Multilayer Buffer System (MLBS) proposed in this research demonstrates a general method for overlapping I/O with computation that helps to ameliorate the strain on the processors through asynchronous access. The main idea is to decouple I/O operations from computational phases, using dedicated hardware resources to perform expensive context switches. By continually prefetching up and down across all hardware layers of the memory/storage subsystems, MLBS transforms the original I/O-bound behavior of evaluated applications and shifts it closer to a memory-bound or compute-bound regime.
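The core overlap idea can be sketched in a few lines of Python. This is only a schematic of double buffering with a dedicated I/O thread, under my own simplified assumptions (a fake `read_chunk`, a two-slot queue), not the actual MLBS implementation:

```python
import queue
import threading
import time

# Sketch of I/O-compute overlap: a dedicated thread prefetches the next chunk
# into a bounded buffer while the main thread computes on the current one.
def read_chunk(i):
    time.sleep(0.01)             # stand-in for a slow read from disk/burst buffer
    return list(range(i * 4, (i + 1) * 4))

def prefetcher(n_chunks, buf):
    for i in range(n_chunks):
        buf.put(read_chunk(i))   # blocks when the buffer is full (backpressure)
    buf.put(None)                # sentinel: no more data

buf = queue.Queue(maxsize=2)     # two slots -> classic double buffering
threading.Thread(target=prefetcher, args=(8, buf), daemon=True).start()

total = 0
while (chunk := buf.get()) is not None:
    total += sum(chunk)          # compute overlaps the next chunk's I/O
print(total)                     # sum of 0..31 = 496
```

The bounded queue is the "buffer layer": the producer never runs more than two chunks ahead, and the consumer never waits unless I/O truly falls behind.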
Thursday, March 05, 2020, 12:00
- 13:00
Building 9, Level 2, Room 2322
In this lecture we present a three-dimensional model for the simulation of signal processing in neurons. To handle problems of this complexity, new mathematical methods and software tools are required. In recent years, new approaches such as parallel adaptive multigrid methods and corresponding software tools have been developed, allowing problems of huge complexity to be treated. Part of this approach is a method to reconstruct the geometric structure of neurons from data measured by two-photon microscopy. Being able to reconstruct neural geometries and network connectivities from measured data is the basis for understanding the coding of motoric perceptions and long-term plasticity, which is one of the main topics of neuroscience. Other issues are compartment models and upscaling.
Prof. Dmitri Kuzmin, Applied Mathematics, TU Dortmund University
Monday, February 03, 2020, 14:00
- 15:00
Building 1, Level 4, Room 4214
Contact Person
In this talk, we review some recent advances in the analysis and design of algebraic flux correction (AFC) schemes for hyperbolic problems. In contrast to most variational stabilization techniques, AFC approaches modify the standard Galerkin discretization in a way which provably guarantees the validity of discrete maximum principles for scalar conservation laws and invariant domain preservation for hyperbolic systems. The corresponding inequality constraints are enforced by adding diffusive fluxes, and bound-preserving antidiffusive corrections are performed to obtain nonlinear high-order approximations. After introducing the AFC methodology and the underlying theoretical framework in the context of continuous piecewise-linear finite element discretizations, we present some of the limiting techniques that we use in high-resolution AFC schemes. This presentation is based on joint work with Dr. Manuel Quezada de Luna (KAUST) and other collaborators.
Wednesday, December 11, 2019, 16:00
- 17:00
Building 2, Level 5, Room 5220
Contact Person
The SLATE (Software for Linear Algebra Targeting Exascale) library is being developed to provide fundamental dense linear algebra capabilities for current and upcoming distributed high-performance systems, both accelerated CPU–GPU based and CPU based.
Monday, December 02, 2019, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
Contact Person
This talk will be a gentle introduction to proximal splitting algorithms for minimizing a sum of possibly nonsmooth convex functions. Several such algorithms date back to the 1960s, but the last 10 years have seen the development of new primal-dual splitting algorithms, motivated by the need to solve large-scale problems in signal and image processing, machine learning, and more generally data science. No background will be necessary to attend the talk, whose goal is to present the intuitions behind this class of methods.
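One of the simplest members of this class, forward-backward splitting (proximal gradient), can be sketched for the lasso problem. This is my own illustrative example, with made-up data; the talk itself covers the broader primal-dual family:

```python
import numpy as np

# Forward-backward splitting sketch for the lasso objective
#     f(x) + g(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1,
# alternating a gradient step on the smooth f with the proximal operator of
# the nonsmooth g, which for the l1 norm is soft-thresholding.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 0.5]                  # sparse ground truth
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step <= 1/L with L = ||A||_2^2

x = np.zeros(10)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))                       # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)   # prox of lam*||.||_1

print(np.round(x, 2))   # close to the sparse x_true
```

The split is the whole point: the smooth term is handled by its gradient, the nonsmooth term only through its (cheap, closed-form) proximal map.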
Prof. Ben Zhao, Computer Science, University of Chicago, USA
Monday, November 25, 2019, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
In this talk, I will describe two recent results on detecting and understanding backdoor attacks on deep learning systems. I will first present Neural Cleanse (IEEE S&P 2019), the first robust tool to detect a wide range of backdoors in deep learning models. We use the idea of perturbation distances between classification labels to detect when a backdoor trigger has created shortcuts to misclassification to a particular label. Second, I will summarize our new work on Latent Backdoors (CCS 2019), a stronger type of backdoor attack that is more difficult to detect and survives retraining in commonly used transfer learning systems. Latent backdoors are robust and stealthy, even against the latest detection tools (including Neural Cleanse).
Prof. David L. Donoho, Department of Statistics, Stanford University
Tuesday, November 12, 2019, 15:00
- 16:00
Building 19, MOSTI Auditorium
Contact Person
We consider the problem of recovering a low-rank signal matrix in the presence of a general, unknown additive noise; more specifically, noise where the eigenvalues of the sample covariance matrix have a general bulk distribution. Assuming an upper bound on the rank of the orthogonally invariant signal is given, we develop a selector for hard thresholding of singular values, which adapts to the unknown correlation structure of the noise.
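The basic operation behind the talk, hard thresholding of singular values, is simple to demonstrate. The sketch below uses a fixed, hand-picked threshold on synthetic data of my own choosing; the talk's contribution is precisely the adaptive selector that replaces such a hand-picked value:

```python
import numpy as np

# Illustrative sketch: recover a low-rank signal from a noisy matrix by
# hard-thresholding the singular values of the observation.
rng = np.random.default_rng(2)
n, rank, sigma = 100, 3, 0.1
signal = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))  # low rank
noisy = signal + sigma * rng.standard_normal((n, n))                      # additive noise

U, s, Vt = np.linalg.svd(noisy)
tau = 2.2 * sigma * np.sqrt(n)   # hand-picked threshold just above the noise
                                 # bulk edge ~ 2*sigma*sqrt(n) for white noise
kept = s * (s > tau)             # hard thresholding: zero the small singular values
denoised = U @ np.diag(kept) @ Vt

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
print(err_denoised < err_noisy)  # thresholding reduces the error
```

For white noise the bulk edge is known, so a fixed threshold works; with a general, unknown noise bulk (the setting of the talk), the threshold must be estimated from the data.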
Prof. David L. Donoho, Department of Statistics, Stanford University
Tuesday, November 12, 2019, 12:00
- 13:00
Building 9, Level 2, Hall 2, Room 2325
Contact Person
A variety of intriguing patterns in eigenvalues were observed and speculated about in ML conference papers. We describe the work of Vardan Papyan showing that the traditional subdisciplines, properly deployed, can offer insights about these objects that ML researchers had missed.
Monday, November 11, 2019, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
Contact Person
Adil Salim is mainly interested in stochastic approximation, optimization, and machine learning. He is currently a Postdoctoral Research Fellow working with Professor Peter Richtarik at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology (KAUST).