Thursday, July 09, 2020, 16:00
- 17:00
KAUST
Contact Person
Out-of-core simulation systems often produce massive amounts of data that cannot fit in the aggregate fast memory of the compute nodes, and they also require these data to be read back for computation. As a result, I/O data movement can be a bottleneck in large-scale simulations. Advances in memory architecture have made it feasible to integrate hierarchical storage media on large-scale systems, from the traditional parallel file systems, through intermediate fast disk technologies (e.g., node-local and remote-shared NVMe and SSD-based Burst Buffers), up to the CPU’s main memory and the GPU’s High Bandwidth Memory. However, while adding additional and faster storage media increases I/O bandwidth, it puts pressure on the CPU, which becomes responsible for managing and moving data between these layers of storage. Simulation systems are thus vulnerable to being blocked by I/O operations. The Multilayer Buffer System (MLBS) proposed in this research demonstrates a general method for overlapping I/O with computation that helps ameliorate the strain on the processors through asynchronous access. The main idea is to decouple I/O operations from computational phases by using dedicated hardware resources to perform the expensive context switches. By continually prefetching up and down across all hardware layers of the memory/storage subsystems, MLBS transforms the originally I/O-bound behavior of the evaluated applications and shifts it closer to a memory-bound or compute-bound regime.
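A generic sketch of the producer-consumer overlap that such buffering exploits (an illustration only, not the MLBS implementation): a dedicated prefetch thread fills a small fast buffer from slow storage while the main thread computes on the chunk already in memory. The chunk-reading logic here is hypothetical.

```python
# Illustrative sketch only: overlap I/O with computation using a prefetch thread.
# The read/compute functions are stand-ins, not part of MLBS itself.
import threading
import queue
import numpy as np

def read_chunk(i):
    # Stand-in for an expensive read from a slow storage tier.
    return np.random.rand(1_000_000)

def compute(chunk):
    # Stand-in for the simulation's compute phase.
    return float(np.sum(np.sqrt(chunk)))

def prefetcher(n_chunks, buf: queue.Queue):
    # Dedicated resource: keeps the fast buffer filled ahead of the consumer.
    for i in range(n_chunks):
        buf.put(read_chunk(i))   # blocks when the buffer (fast tier) is full

n_chunks = 8
buf = queue.Queue(maxsize=2)     # models a small, fast intermediate layer
t = threading.Thread(target=prefetcher, args=(n_chunks, buf))
t.start()

total = 0.0
for _ in range(n_chunks):
    chunk = buf.get()            # usually ready: I/O happened during compute
    total += compute(chunk)
t.join()
print(total)
```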
Monday, May 04, 2020, 12:00
- 13:00
KAUST
Contact Person
In my talk, I will present techniques that allow biologists to model a mesoscale entity rapidly, in a timeframe of a few minutes to hours. In this way we have created the first complete atomistic model of the SARS-CoV-2 virion, which we are currently sharing with the worldwide scientific community. The mesoscale represents a scale gap that currently cannot be accurately imaged with either microscopy or X-ray crystallography. For this purpose, scientists characterize it by observations from the surrounding nanoscale and microscale. From this information, it is possible to reconstruct a three-dimensional, fully atomistic model of a biological entity. The problem is that these models are enormously large and cannot be built with traditional computer graphics methods within a reasonable time.
Monday, April 27, 2020, 12:00
- 13:00
KAUST
Contact Person
In this talk, we will discuss a new way of computing with quad meshes. It is based on the checkerboard pattern of parallelograms one obtains by subdividing a quad mesh at its edge midpoints. The new approach is easy to understand and implement. It simplifies the transfer from the familiar theory of smooth surfaces to the discrete setting of quad meshes. This is illustrated with applications to constrained editing of 3D models, mesh design for architecture and digital modeling of shapes which can be fabricated by bending flat pieces of inextensible sheet material.
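The key geometric fact behind the construction can be checked in a few lines (a minimal sketch, not the authors' code): the quad formed by the four edge midpoints of any quad face, planar or not, is a parallelogram (Varignon's theorem), which is what makes the subdivided checkerboard pattern convenient to compute with.

```python
# Minimal illustration (assumed, not from the talk): the quad of edge midpoints
# of any quad face -- planar or not -- is a parallelogram (Varignon's theorem).
import numpy as np

# A non-planar quad face given by its four vertices.
A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.3])
C = np.array([1.2, 1.1, 0.0])
D = np.array([0.1, 1.0, -0.2])

# Edge midpoints, in order around the face.
M = [(A + B) / 2, (B + C) / 2, (C + D) / 2, (D + A) / 2]

# Opposite sides of the midpoint quad are equal vectors => parallelogram.
side1 = M[1] - M[0]
side2 = M[2] - M[3]
print(np.allclose(side1, side2))  # True
```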
Monday, April 20, 2020, 12:00
- 13:00
KAUST
Contact Person
In the lecture, we present a three-dimensional model for the simulation of signal processing in neurons. Part of this approach is a method to reconstruct the geometric structure of neurons from data measured by two-photon microscopy. Being able to reconstruct neural geometries and network connectivities from measured data is the basis for understanding the coding of motoric perceptions and long-term plasticity, which is one of the main topics of neuroscience. Further issues are compartment models and upscaling.
Monday, April 13, 2020, 12:00
- 13:00
KAUST
Contact Person
Dynamic programming is an efficient technique for solving optimization problems. It is based on decomposing the initial problem into simpler sub-problems and solving these sub-problems starting from the simplest ones. A conventional dynamic programming algorithm returns an optimal object from a given set of objects. We developed extensions of dynamic programming which allow us (i) to describe the set of objects under consideration, (ii) to perform multi-stage optimization of objects relative to different criteria, (iii) to count the number of optimal objects, (iv) to find the set of Pareto optimal points for the bi-criteria optimization problem, and (v) to study the relationships between two criteria.
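As a toy illustration of extension (iii), counting optimal objects, the following sketch (an invented example, not from the talk) computes both the minimum number of coins that sum to a target and the number of optimal coin sequences achieving that minimum.

```python
# Toy example: classic dynamic programming extended to also COUNT optimal
# objects. Here: fewest coins summing to `target`, plus the number of
# optimal (order-sensitive) coin sequences that achieve it.
INF = float("inf")

def min_coins_and_count(coins, target):
    best = [0] + [INF] * target      # best[a] = fewest coins summing to a
    count = [1] + [0] * target       # count[a] = number of optimal sequences
    for a in range(1, target + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
                count[a] = count[a - c]
            elif c <= a and best[a - c] + 1 == best[a]:
                count[a] += count[a - c]
    return best[target], count[target]

print(min_coins_and_count([1, 3, 4], 6))   # (2, 1): only 3+3 uses two coins
```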
Monday, April 06, 2020, 19:30
- 21:30
KAUST
Contact Person
We developed and expanded novel methods for representation learning and for predicting protein functions and their loss-of-function phenotypes. We use deep neural network algorithms and combine them with symbolic inference into neural-symbolic algorithms. Our work significantly improves previously developed methods for predicting protein functions through methodological advances in machine learning, incorporation of broader data types that may be predictive of function, and improved systems for neural-symbolic integration. The methods we developed are generic and can be applied to other domains in which similar types of structured and unstructured information exist. In the future, our methods can be applied to the prediction of protein functions in metagenomic samples in order to evaluate the potential for discovery of novel proteins of industrial value. They can also be applied to the prediction of loss-of-function phenotypes in human genetics, with the results incorporated into a variant prioritization tool for diagnosing patients with Mendelian disorders.
Monday, April 06, 2020, 12:00
- 13:00
KAUST
Contact Person
In this seminar, I will present some of the work I have done on continual deep learning, one of the research topics at the Vision-CAIR group. Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity, as is typical of modern deep learning techniques. To narrow the gap towards human-level continual learning, we extended continual deep learning from multiple perspectives. Hebb's learning theory from biology is famously summarized as "cells that fire together wire together." Inspired by this theory, we proposed Memory Aware Synapses (ECCV 2018) to quantify and reduce machine forgetting in a way that enables leveraging unlabeled data, which was not possible in earlier techniques. We later developed a Bayesian approach, appearing at ICLR 2020, in which we explicitly model uncertainty parameters to orchestrate forgetting in continual learning. We showed in our ICLR 2019 and ACCV 2018 works that task descriptors/language can be leveraged in continual-learning visual tasks to improve learning efficiency and enable zero-shot task transfer. Beyond computer vision tasks, we recently developed an approach, also appearing at ICLR 2020, that we call "Compositional Language Continual Learning". We showed that disentangling syntax from semantics enables better compositional Seq2Seq learning and can significantly alleviate forgetting in tasks such as machine translation. In the talk, I will go over these techniques and shed some light on future research possibilities.
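A simplified numerical sketch of the Memory Aware Synapses idea, under the assumption of a linear model F(x) = Wx so that the gradient of the squared output norm is analytic: each weight's importance is its average sensitivity to that norm over unlabeled samples, and changes to important weights are penalized when learning the next task. This is a rough reading of the ECCV 2018 formulation, not the released code.

```python
# Simplified sketch of the Memory Aware Synapses (MAS) idea, assuming a linear
# model F(x) = W x so the gradient of the squared output norm is analytic:
#   d ||W x||^2 / dW = 2 (W x) x^T
# Importance = average absolute sensitivity over (unlabeled) samples.
import numpy as np

rng = np.random.default_rng(0)
W_old = rng.normal(size=(3, 5))            # weights after the previous task
X = rng.normal(size=(100, 5))              # unlabeled samples from that task

omega = np.zeros_like(W_old)
for x in X:
    grad = 2.0 * np.outer(W_old @ x, x)    # sensitivity of ||F(x)||^2 to W
    omega += np.abs(grad)
omega /= len(X)                            # per-weight importance

def continual_penalty(W_new, lam=1.0):
    # Added to the new task's loss: discourages changing important weights.
    return lam * np.sum(omega * (W_new - W_old) ** 2)

print(continual_penalty(W_old + 0.1))
```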
Suhaib Fahmy, Reader, School of Engineering, University of Warwick, UK
Wednesday, April 01, 2020, 12:00
- 13:00
KAUST
Contact Person
This talk discusses a body of work on exploiting the DSP Blocks in modern FPGAs to construct high-performance datapaths including the concept of FPGA overlays. It outlines work that established FPGAs as a viable virtualized cloud acceleration platform, and how the industry has adopted this model. Finally, it discusses recent work on incorporating accelerated processing in network controllers and the emerging concept of in-network computing with FPGAs. These strands of work come together to demonstrate the value of thinking about computing beyond the CPU-centric view that still dominates.
Sunday, March 29, 2020, 12:00
- 13:00
KAUST
In this talk, I will present a line of work done at the Image and Video Understanding Lab (IVUL), which focuses on developing deep graph convolutional networks (DeepGCNs). A GCN is a deep learning network that processes generic graph inputs, thus extending the impact of deep learning to irregular grid data including 3D point clouds and meshes, social graphs, protein interaction graphs, etc. By adapting architectural operations from the CNN realm and reformulating them for graphs, we were the first to show that GCNs can go as deep as CNNs. Developing such a high capacity deep learning platform for generic graphs opens up many opportunities for exciting research, which spans applications in the field of computer vision and beyond, architecture design, and theory. In this talk, I will showcase some of the GCN research done at IVUL and highlight some interesting research questions for future work.
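To make the "as deep as CNNs" point concrete, here is a minimal sketch (an illustration, not the DeepGCN implementation) of one residually connected graph convolution layer, H_out = H + ReLU(A_hat H W), where A_hat is the symmetrically normalized adjacency with self-loops; the residual skip is the CNN ingredient that allows many such layers to be stacked.

```python
# Minimal sketch (assumed, not the DeepGCN code) of a residual graph
# convolution layer: H_out = H + ReLU(A_hat @ H @ W).
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def res_gcn_layer(H, A_hat, W):
    return H + np.maximum(A_hat @ H @ W, 0.0)   # residual skip + ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)       # a small 4-node graph
A_hat = normalize_adj(A)
H = rng.normal(size=(4, 8))                     # node features
W = rng.normal(size=(8, 8)) * 0.1               # square so layers can stack

for _ in range(20):                             # stack 20 layers
    H = res_gcn_layer(H, A_hat, W)
print(H.shape)
```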
Di Wang, Ph.D. Student, Computer Science and Engineering, State University of New York at Buffalo
Tuesday, March 24, 2020, 15:00
- 16:00
KAUST
Contact Person
In this talk, I will use the Empirical Risk Minimization (ERM) problem as an example and show how to overcome several challenges in learning with differential privacy (DP). In particular, I will first talk about how to overcome the high-dimensionality challenge of the data for sparse linear regression in the local DP (LDP) model. Then, I will discuss the challenge posed by the non-interactive LDP model and show a series of results that reduce the exponential sample complexity of ERM. Next, I will present techniques for achieving DP for ERM with non-convex loss functions. Finally, I will discuss some future research along these directions.
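For background, a generic central-DP baseline for ERM can be sketched as gradient descent with per-example gradient clipping plus Gaussian noise (a textbook mechanism, not the speaker's LDP algorithms; the (epsilon, delta) accounting is omitted and the noise scale below is illustrative).

```python
# Generic differentially private ERM sketch (not the speaker's LDP methods):
# gradient descent on logistic loss with per-example clipping + Gaussian noise.
# The noise scale `sigma` is illustrative; a real run needs privacy accounting.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def clipped_noisy_grad(w, clip=1.0, sigma=1.0):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    per_example = (p - y)[:, None] * X                 # per-example gradients
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    per_example *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noise = sigma * clip * rng.normal(size=d)          # Gaussian mechanism
    return (per_example.sum(axis=0) + noise) / n

w = np.zeros(d)
for _ in range(200):
    w -= 0.5 * clipped_noisy_grad(w)
print(np.mean((X @ w > 0) == y))                       # training accuracy
```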
Riyadh Baghdadi, Postdoctoral Associate, Computer Science, MIT
Sunday, March 22, 2020, 15:00
- 16:00
KAUST
Contact Person
This talk is about building compilers for high-performance code generation. It has three parts. The first part is about Tiramisu (http://tiramisu-compiler.org/), a polyhedral compiler designed for generating highly efficient code for multicores and GPUs. It is the first polyhedral compiler that can match the performance of highly hand-optimized industrial libraries such as Intel MKL and cuDNN. The second part is about applying Tiramisu to accelerate deep learning (DNN) inference. In comparison to other DNN compilers, Tiramisu has two unique features: (1) it supports sparse DNNs; and (2) it can express and optimize general RNNs (Recurrent Neural Networks). The third part will present recent work on the problem of automatic code optimization. In particular, it will focus on using deep learning to build a cost model to explore the search space of code optimizations.
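To give a flavor of the schedule transformations such compilers generate, here is a hand-written loop-tiling example (purely illustrative; Tiramisu is a C++ DSL that emits optimized native code, whereas Python is used here only to show the reordered iteration space).

```python
# Hand-written illustration of loop tiling, one of the schedule transformations
# polyhedral compilers such as Tiramisu apply automatically (in compiled code,
# where it actually pays off; Python only demonstrates the iteration order).
import numpy as np

def matmul_tiled(A, B, tile=32):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # Work on one tile at a time to keep operands cache-resident.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

A = np.random.rand(96, 96)
B = np.random.rand(96, 96)
print(np.allclose(matmul_tiled(A, B), A @ B))   # same result, tiled iteration
```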
Monday, March 16, 2020, 12:00
- 13:00
KAUST
Contact Person
In this talk, I will discuss a wide range of ideas on studying and doing research in computer science. Peter Wonka is the program chair of the Computer Science program. His research interests are deep learning, computer graphics, computer vision, and remote sensing.
Tuesday, March 03, 2020, 10:00
- 11:30
Building 9, Level 2, Hall 2, Room 2325
Contact Person
In my research, I aim to understand how formalized knowledge bases can be used to systematically structure and integrate biological knowledge, and how to utilize these knowledge bases as background knowledge to improve scientific discovery in biology and biomedicine. To achieve these aims, I develop methods for representing, integrating, and analyzing data and knowledge, with the specific aim of making the combination of data and formalized knowledge accessible to data analytics and machine learning in bioinformatics. Biomedicine, and the life sciences in general, are an ideal domain for knowledge-driven data analysis methods due to the large number of formal knowledge bases that have been developed to capture their broad, diverse, and heterogeneous data and knowledge.
Monday, March 02, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
Contact Person
A traditional goal of algorithmic optimality, squeezing out operations, has been superseded by the evolution of computer architecture: arithmetic operations no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra operations on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high computational intensity in approximating a wide class of formally dense operators that arise in applications for which exascale computers are being constructed. We describe modules of a KAUST-built software toolkit, Hierarchical Computations on Manycore Architectures (HiCMA), that illustrate these features and are building blocks of KAUST mission applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Early modules of this open-source project have undergone industrial-rigor testing and are distributed in the software libraries of major vendors.
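The storage savings behind hierarchically low-rank formats can be illustrated on a single off-diagonal block of a smooth kernel matrix (a generic numpy sketch, not HiCMA's tile low-rank kernels): the block is replaced by two thin factors with a controllable approximation error.

```python
# Generic illustration (not HiCMA itself): compress one off-diagonal block of a
# smooth kernel matrix with a truncated SVD and compare storage vs. accuracy.
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n)
y = np.linspace(2.0, 3.0, n)                       # well-separated point sets
block = 1.0 / np.abs(x[:, None] - y[None, :])      # smooth, far-field kernel block

U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))                   # rank needed for ~1e-8 accuracy
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

approx = (Uk * sk) @ Vtk
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
dense_storage = block.size
lowrank_storage = Uk.size + sk.size + Vtk.size
print(k, rel_err, lowrank_storage / dense_storage)
```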
Charalambos (Harrys) Konstantinou, Assistant Professor of Electrical and Computer Engineering with Florida A&M University and Florida State University (FAMU-FSU) College of Engineering
Monday, February 24, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 1
Contact Person
Election hacking, power grid cyber-attacks, troll farms, fake news, ransomware, and other terms have entered our daily vocabularies and are here to stay. Cybersecurity touches nearly every part of our daily lives. Most importantly, national security and economic vitality rely on a safe, resilient, and stable cyberspace. We rely on cyber-physical systems with hardware devices, software platforms, and network systems to connect, travel, communicate, power our homes, provide health care, run our economy, etc. However, cyber-threats and attacks have grown exponentially over the past years, exposing both corporate and personal data, disrupting critical operations, causing public health and safety impacts, and imposing high costs on the economy. In this talk, we will focus on cyber-physical energy systems (CPES) as the backbone of critical infrastructure, providing a research perspective on red-team security threats and challenges as well as blue-team countermeasures. We will discuss recent approaches to developing low-budget targeted cyberattacks against CPES, designing methods that are resilient against false data, and the need for an accurate assessment environment achieved through hardware-in-the-loop testbeds.
Takashi Gojobori, Distinguished Professor, Bioscience
Wednesday, February 19, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 2 (Room 2325)
In the history of humankind, domestication was invented as an anthropogenic form of evolution that fulfills mankind’s critical food demand. Domestication is simply based on mating processes and subsequent selection processes that pick out better hybrid offspring with advantageous combinations of genomes. Taking advantage of a machine learning classifier, we discovered a number of sub-genomic regions that have been incorporated into rice genomes through the production of hybrid offspring during domestication. This so-called “introgression” event is revealed to be an essential key to the domestication process. This eventually leads to the construction of the AI-aided Smart Breeding Platform, which accumulates the breeding histories of crop species into an Integrated Breeding Knowledgebase.
Sunday, February 16, 2020, 16:00
- 18:00
Building 2, Level 5, Room 5209
In this dissertation, I present the methods I have developed for the prediction of promoters in different organisms. Instead of focusing on the classification accuracy of discriminating between promoter and non-promoter sequences, I predict the exact positions of the transcription start site (TSS) inside genomic sequences, testing every possible location. The developed methods significantly outperform previous promoter prediction programs by considerably reducing the number of false positive predictions. Specifically, to reduce the false positive rate, the models are adaptively and iteratively trained by changing the distribution of samples in the training set based on the false positive errors made in the previous iteration. The new methods are used to gain insights into the design principles of core promoters. Using model analysis, I have identified the most important core promoter elements and their effect on promoter activity. I have also developed a novel general approach for detecting long-range interactions in the input of a deep learning model, which was used to find related positions inside the promoter region. The final model was applied to the genomes of different species without a significant drop in performance, demonstrating the high generality of the developed method.
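The adaptive retraining loop, reshaping the training set around the previous round's false positives, can be sketched generically (synthetic data and logistic regression here, not the thesis's promoter models): each iteration up-weights the negatives that the current model mistakes for positives.

```python
# Generic sketch of iterative retraining driven by false positives
# (synthetic data + logistic regression, not the thesis's promoter models).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 4000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

weights = np.ones(n)
clf = LogisticRegression(max_iter=1000)
for it in range(5):
    clf.fit(X, y, sample_weight=weights)
    pred = clf.predict(X)
    false_pos = (pred == 1) & (y == 0)
    # Emphasize the negatives the model currently mistakes for positives.
    weights[false_pos] *= 2.0
    print(f"iteration {it}: false positives = {false_pos.sum()}")
```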
Prof. Holger Theisel, Visual Computing, Magdeburg University
Monday, February 10, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
Contact Person
In visualization, the success or failure of an analysis often depends on the choice of some subtle parameters or design choices. While simple heuristics are often sufficient, in some cases they make the analysis fail miserably. We present three approaches in visualization where a careful choice of optimal parameters results in completely new algorithms: 1) the choice of a reference frame for finding objective vortices in flow visualization, 2) the choice of a scaling of high-dimensional data sets for finding linear projections to 2D in information visualization, and 3) the choice of a feature definition, along with numerical extraction methods, for visualizing recirculation phenomena in flows.
Wednesday, February 05, 2020, 12:00
- 13:00
Building 9, Hall 1, Room 2322
Contact Person
The Machine Learning Hub Seminar Series presents “Optimization and Learning in Computational Imaging” by Dr. Wolfgang Heidrich, Professor in Computer Science at KAUST. He leads the AI Initiative and is the Director of the KAUST Visual Computing Center. Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end learned imaging systems, blurring the boundary between imaging hardware and software.
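A minimal example of the optimization-based reconstruction theme (a generic sketch, not a specific system from the talk): recover a signal from blurred, noisy measurements by minimizing a least-squares data term plus a smoothness prior with gradient descent.

```python
# Generic computational-imaging sketch (not a specific system from the talk):
# recover x from y = A x + noise by minimizing ||A x - y||^2 + lam * ||D x||^2,
# where A is a blur and D is a finite-difference (smoothness) operator.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.zeros(n)
x_true[60:140] = 1.0                              # a simple piecewise signal

# Blur operator A (moving average) and noisy measurements.
A = np.zeros((n, n))
for i in range(n):
    A[i, max(0, i - 3):i + 4] = 1.0 / 7.0
y = A @ x_true + 0.02 * rng.normal(size=n)

D = np.eye(n) - np.eye(n, k=1)                    # finite differences
lam = 0.5

x = np.zeros(n)
step = 0.5
for _ in range(2000):
    grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
    x -= step * grad
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```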
Monday, February 03, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 1, Room 2322
Contact Person
In this talk, I will introduce several ongoing projects at the networking lab at KAUST. I will start by highlighting our research profile along with our mission. Then, I will walk you through our projects and contributions in the domains of the Internet of Things, visible light communication, underwater communication, and future 6G networks. I will describe the challenges facing each project, highlight our solution methodology, and discuss some performance evaluation results. In particular, I will focus on our work on Aqua-Fi, which aims at bringing the Internet into the underwater environment, and on our recent project on communication via breath.
Yasser Shalabi, Graduate Student, Computer Science, University of Illinois Urbana-Champaign
Sunday, February 02, 2020, 12:00
- 13:00
Building 9, Level 2, Hall 2
Contact Person
Indirect execution attacks – e.g., Return-Oriented Programming and transient-execution-based side channels – threaten all modern computing platforms, and standard security policies are unable to eliminate these threats. What is the role of hardware in mitigating these threats? Why are the latest processor designs no longer proactively eliminating threats? In this talk, we will explore these questions and reconsider the role of hardware in securing systems. I will present Record-and-Replay as a fundamental solution that can enable a hardware-software co-design to strengthen the security of modern computing platforms.
Prof. Zhongmin Cai, Automation Department, Xi’an Jiaotong University, China
Wednesday, January 29, 2020, 12:00
- 13:00
Building 1, Level 4, Room 4214
Contact Person

Our lives and society are digitized from every perspective by computers, smart devices an
Monday, January 27, 2020, 17:00
- 18:30
Building 1, Level 2, Room 2202
Contact Person
In this thesis, a variety of computer vision and graphics applications of inverse problems using tomographic imaging modalities will be presented: (i) the first application focuses on CT reconstruction, with a specific emphasis on recovering thin 1D and 2D manifolds embedded in 3D volumes; (ii) the second application is about space-time tomography; (iii) based on the second application, the third one aims to improve the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations by a warp-and-project strategy. Finally, with a physically plausible divergence-free prior for motion estimation, as well as a novel view synthesis technique, we present applications to dynamic fluid imaging, which further demonstrate the flexibility of our optimization frameworks.
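For context on the reconstruction machinery, the following is a generic SIRT-style algebraic reconstruction iteration on a toy system y = Ax (an illustration only; the thesis's space-time and warp-and-project solvers go well beyond this).

```python
# Generic SIRT-style iteration for a toy tomography system y = A x
# (illustration only; not the thesis's space-time / warp-and-project solvers).
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_rays = 64, 200
A = (rng.random((n_rays, n_pixels)) < 0.1).astype(float)  # toy projection matrix
x_true = rng.random(n_pixels)
y = A @ x_true

# Row/column sums used by SIRT to normalize the back-projected residual.
R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-ray
C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-pixel

x = np.zeros(n_pixels)
for _ in range(500):
    residual = y - A @ x
    x += C * (A.T @ (R * residual))
    x = np.clip(x, 0.0, None)                # simple non-negativity prior
print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))
```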