Monday, September 20, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
Classical imaging systems are characterized by the independent design of optics, sensors, and image processing algorithms. In contrast, computational imaging systems are based on a joint design of two or more of these components, which allows for greater flexibility in the type of captured information beyond classical 2D photos, as well as for new form factors and domain-specific imaging systems. In this talk, I will describe how numerical optimization and learning-based methods can be used to achieve truly end-to-end optimized imaging systems that outperform classical solutions.
Monday, September 13, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
In this seminar, I will trace our journey in underwater networks research. In particular, I will highlight our recent work on bringing the Internet to underwater environments by deploying a low-power, compact underwater optical wireless system, called Aqua-Fi, that supports today’s Internet applications.
Monday, September 06, 2021, 16:00 - 17:00
KAUST
Computational imaging differs from traditional imaging systems by integrating an encoded measurement system and a tailored computational algorithm to extract interesting scene features. This dissertation demonstrates two approaches that apply computational imaging methods to the fluid domain. In the first approach, we study the problem of reconstructing time-varying 3D-3C fluid velocity vector fields. We extend 2D Particle Image Velocimetry to three dimensions by encoding depth into color.
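As a minimal sketch of the depth-from-color idea (the linear hue-to-depth mapping and the depth range are illustrative assumptions, not the dissertation's actual optical coding or calibration):

```python
import colorsys

def encode_depth_to_hue(depth, z_min=0.0, z_max=10.0):
    """Map a depth value to a hue in [0, 1) -- an assumed linear coding."""
    return (depth - z_min) / (z_max - z_min)

def decode_depth_from_rgb(r, g, b, z_min=0.0, z_max=10.0):
    """Recover a particle's depth from its imaged RGB color via its hue."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return z_min + h * (z_max - z_min)

# A particle illuminated at depth 2.5 receives hue 0.25; decoding its
# color recovers the depth:
r, g, b = colorsys.hsv_to_rgb(encode_depth_to_hue(2.5), 1.0, 1.0)
print(round(decode_depth_from_rgb(r, g, b), 2))  # 2.5
```

With depth recovered per particle from a single color camera, standard 2D PIV correlation can then be lifted to all three velocity components.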
Gabriel Ghinita, Associate Professor, University of Massachusetts, Boston
Monday, September 06, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
The mobile revolution of the past decade led to the ubiquitous presence of location data in all application domains, ranging from public safety and healthcare to urban planning, transportation and commercial applications. Numerous services rely on location data to provide customized service to their users. At the same time, there are serious concerns with respect to protecting individual privacy, as location traces can disclose sensitive details to an untrusted service.
Monday, August 30, 2021, 12:00 - 13:00
Building 9, Room 2322, Lecture Hall 1
This talk will give an overview of the research of the High-Performance Visualization research group (vccvisualization.org) at the KAUST Visual Computing Center (VCC). Interactive visualization is crucial to exploring, analyzing, and understanding large-scale scientific data, such as the data acquired in medicine or neurobiology using computed tomography or electron microscopy, and data resulting from large-scale simulations such as fluid flow in the Earth’s atmosphere and oceans. The amount of data in data-driven science is increasing rapidly toward the petascale and further.
Thursday, August 12, 2021, 14:00 - 16:00
KAUST
This dissertation tackles the problem of entanglement in Generative Adversarial Networks (GANs). The key insight is that disentanglement in GANs can be improved by differentiating between the content and the operations performed on that content. For example, the identity of a generated face can be thought of as the content, while the lighting conditions can be thought of as the operations.
Thursday, June 17, 2021, 12:00 - 14:00
KAUST
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware combined with a deep learning algorithm to reconstruct the HDR image. In this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible, yet robust, way to modulate per-pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrated that our method outperforms the state-of-the-art in terms of visual perception quality.
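A toy numerical sketch of why a known per-pixel modulation mask extends dynamic range (the mask pattern, values, and the naive mask-inversion reconstruction below are illustrative assumptions; the thesis reconstructs with a learned network rather than simple inversion):

```python
FULL_WELL = 255.0  # sensor saturates (clips) at this value

def capture(scene, mask):
    """Simulate a capture: per-pixel attenuation by the mask, then clipping."""
    return [min(s * m, FULL_WELL) for s, m in zip(scene, mask)]

def naive_hdr(raw, mask):
    """Invert the known mask: attenuated pixels that escaped clipping
    recover radiance beyond the sensor's native range."""
    return [r / m for r, m in zip(raw, mask)]

scene = [100.0, 1000.0, 100.0, 1000.0]  # true radiance exceeds sensor range
mask  = [1.0, 0.25, 1.0, 0.25]          # alternating attenuation pattern
raw = capture(scene, mask)              # attenuated pixels stay below 255
print(naive_hdr(raw, mask))             # [100.0, 1000.0, 100.0, 1000.0]
```

Without the mask, the 1000.0 pixels would clip to 255 and their radiance would be unrecoverable; the spatially varying modulation trades some spatial sampling for extended dynamic range, which a reconstruction network then fills back in.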
Lenore J. Cowen, Professor, Computer Science Department, Tufts University
Monday, May 03, 2021, 18:30 - 19:30
KAUST
The 2016 DREAM Disease Module Identification Challenge was developed to systematically assess the state of computational module identification methods on a diverse collection of molecular networks. Six different networks were presented, with the gene names anonymized. The goal was to partition the genes into non-overlapping modules of 3 to 100 genes each, based solely on the patterns of network connectivity.
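To make the task concrete, here is a toy baseline that partitions genes into size-bounded modules from connectivity alone, using connected components with a size filter (actual challenge entries used far more sophisticated community-detection methods; this only illustrates the problem setup):

```python
from collections import deque

def modules(n_genes, edges, min_size=3, max_size=100):
    """Partition genes into non-overlapping candidate modules: find connected
    components by BFS, keep those within the challenge's 3-100 size bounds."""
    adj = {i: [] for i in range(n_genes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, out = set(), []
    for start in range(n_genes):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if min_size <= len(comp) <= max_size:
            out.append(sorted(comp))
    return out

# Genes 0-1-2 form a module of size 3; the pair {3,4} and singleton {5}
# fall below the minimum size and are discarded:
print(modules(6, [(0, 1), (1, 2), (3, 4)]))  # [[0, 1, 2]]
```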
Muhammad Shafique, Professor, Division of Engineering, New York University Abu Dhabi (NYU-AD), United Arab Emirates
Monday, April 26, 2021, 12:00 - 13:00
KAUST
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT), and Smart Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Such systems therefore need not only to deliver high-performance capabilities under a tight power/energy envelope, but also to be intelligent/cognitive and robust. This has given rise to a new age of Machine Learning (and, in general, Artificial Intelligence) at different levels of the computing stack, ranging from Edge and Fog to the Cloud. In particular, Deep Neural Networks (DNNs) have shown tremendous improvement over the past years, achieving significantly high accuracy for a certain set of tasks, like image classification, object detection, natural language processing, and medical data analytics. However, these DNNs require highly complex computations, incurring huge processing, memory, and energy costs. To some extent, Moore’s Law helps by packing more transistors into the chip.
Belen Masia, Associate Professor in the Computer Science Department at Universidad de Zaragoza
Monday, April 19, 2021, 12:00 - 13:00
KAUST
Virtual Reality (VR) can dramatically change the way we create and consume content in areas of our everyday life, including entertainment, training, design, communication or advertising. Understanding how people explore immersive virtual environments is crucial for many applications in VR, such as designing content, developing new compression algorithms, or improving the interaction with virtual humans. In this talk, we will focus on how to capture and model visual behavior of users in virtual environments.
Ana Klimovic, Assistant Professor, Systems Group of the Computer Science Department, ETH Zurich.
Monday, April 12, 2021, 12:00 - 13:00
KAUST
Machine learning applications have sparked the development of specialized software frameworks and hardware accelerators. Yet, in today’s machine learning ecosystem, one important part of the system stack has received far less attention and specialization for ML: how we store and preprocess training data. This talk will describe the key challenges for implementing high-performance ML input data processing pipelines.
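One recurring technique in such input pipelines is overlapping preprocessing with training via prefetching. A minimal pure-Python sketch of the idea (the buffer size and the trivial preprocessing stage are illustrative assumptions, not the systems described in the talk):

```python
import queue
import threading

def prefetch(generator, buffer_size=4):
    """Run a data generator in a background thread so preprocessing overlaps
    with consumption, buffering up to `buffer_size` ready items -- a toy
    version of the prefetching that frameworks like tf.data provide."""
    q = queue.Queue(maxsize=buffer_size)
    DONE = object()  # sentinel marking end of the stream

    def producer():
        for item in generator:
            q.put(item)  # blocks when the buffer is full (backpressure)
        q.put(DONE)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item

def preprocess(raw):
    return raw * 2  # stand-in for decode + augment

batches = prefetch(preprocess(x) for x in range(5))
print(list(batches))  # [0, 2, 4, 6, 8]
```

The point of the design is backpressure: the bounded queue keeps the producer at most a few items ahead, so memory stays constant while the training loop never waits for preprocessing unless the buffer drains.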
Monday, April 05, 2021, 12:00 - 13:00
KAUST
Recent research has shown that most existing machine learning algorithms are vulnerable to various privacy attacks. An effective way to defend against these attacks is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, Differential Privacy (DP) has now become a standard for private data analysis. Despite its rapid development in theory, DP's adoption by the machine learning community remains slow due to various challenges from the data, the privacy models, and the learning tasks. In this talk, I will give a brief introduction to DP and, using the Empirical Risk Minimization (ERM) problem as an example, show how to overcome these challenges in the DP model. Particularly, I will first talk about how to overcome the high-dimensionality challenge from the data for Sparse Linear Regression in the local DP (LDP) model. Then, I will discuss the challenge from the non-interactive LDP model and show a series of results to reduce the exponential sample complexity of ERM. Next, I will present techniques for achieving DP for ERM with non-convex loss functions. Finally, I will discuss some future research along these directions.
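As one concrete DP building block, here is a generic DP-SGD-style gradient perturbation sketch: clip each gradient to bound its sensitivity, then add calibrated Gaussian noise. This is a textbook mechanism, not the specific ERM algorithms from the talk, and all constants are illustrative:

```python
import random

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.1):
    """Clip a per-example gradient to L2 norm `clip_norm`, then add Gaussian
    noise scaled to that sensitivity (the Gaussian mechanism). The privacy
    guarantee depends on noise_multiplier and the number of iterations."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_multiplier * clip_norm
    return [g + random.gauss(0.0, sigma) for g in clipped]

# With the noise turned off, clipping alone rescales a gradient of L2 norm 5
# down to unit norm:
print([round(g, 3) for g in dp_gradient([3.0, 4.0], noise_multiplier=0.0)])  # [0.6, 0.8]
```

Clipping bounds how much any single example can influence the update, which is exactly the sensitivity bound the Gaussian noise is calibrated against.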
Maha Al-Aslani, PhD Student, Computer Science, KAUST
Wednesday, March 31, 2021, 16:00 - 17:00
KAUST
In this thesis defense, I will explore the unique characteristics of IoT traffic and examine IoT systems. The work is motivated by the new capabilities offered by modern Software-Defined Networks (SDN) and blockchain technology. We evaluate IoT Quality of Service (QoS) in traditional networking and obtain mathematical expressions to calculate end-to-end delay and packet dropping. Then, we analyze IoT traffic load and propose an intelligent edge that can identify volumetric traffic and address it in real time using an instantaneous detection method for IoT applications (IDIoT). This approach can easily detect a large surge and potential variation in traffic patterns for an IoT application, which may contribute to safer and more efficient operation of the overall system. Our results provide insight into the advantages of an intelligent edge serving as a detection mechanism.
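The thesis derives its own closed-form QoS expressions; as a generic textbook illustration of the kinds of quantities involved (not the thesis model), the mean delay of an M/M/1 queue and the drop probability of a finite M/M/1/K queue can be computed as:

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue: W = 1/(mu - lambda),
    valid only for a stable queue (lambda < mu)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def mm1k_drop(arrival_rate, service_rate, capacity):
    """Blocking (drop) probability of an M/M/1/K queue with rho != 1:
    P_K = (1 - rho) * rho**K / (1 - rho**(K + 1))."""
    rho = arrival_rate / service_rate
    return (1 - rho) * rho ** capacity / (1 - rho ** (capacity + 1))

# At 80 packets/s offered to a 100 packets/s link (80% load):
print(round(mm1_delay(80.0, 100.0), 3))   # 0.05 seconds per packet
print(round(mm1k_drop(80.0, 100.0, 10), 4))  # drop probability with a 10-packet buffer
```

Per-hop delays of this form sum along a path to give an end-to-end delay estimate, which is the style of analysis a traditional-networking QoS evaluation builds on.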
Mosharaf Chowdhury, Morris Wellman Assistant Professor of CSE at the University of Michigan, Ann Arbor
Monday, March 29, 2021, 18:30 - 19:30
KAUST
GPUs have emerged as a popular choice for deep learning. To deal with ever-growing datasets, it is also common to use multiple GPUs in parallel for distributed deep learning. Although achieving cost-effectiveness in these clusters relies on efficient sharing, modern GPU hardware, deep learning frameworks, and cluster managers are not designed for efficient, fine-grained sharing of GPU resources. In this talk, I will present our recent work on efficient GPU resource management, both within a single GPU and across many GPUs in a cluster, for hyperparameter tuning, training, and inference. The common thread across all our works is leveraging the interplay between the short-term predictability and long-term unpredictability of deep learning workloads.
Laura Kovacs, Professor in Computer Science at the TU Wien
Monday, March 22, 2021, 12:00 - 13:00
KAUST
In this talk, I will present recent advances in automated reasoning, in particular computer-supported theorem proving, for generating and proving software properties that prevent programmers from introducing errors while making changes to their software. When testing programs that manipulate computer memory, our initial results show that our work is able to prove that over 80% of test cases are guaranteed to have the expected behavior.
Jesper Tegner, Professor, BESE Division, KAUST
Thursday, March 18, 2021, 12:00 - 13:00
KAUST
In essence, science is about discovering regularities in Nature. It turns out that such regularities (laws) are written in the language of mathematics. In many cases, such laws are formulated and refined from fundamental “first principles.” Yet, in phenomenological areas such as biology, we have an abundance of data but lack “first principles.” Machine learning and deep learning, in particular, are remarkably successful in classification and prediction tasks. However, such systems, when trained on data, do not, as a rule, provide compact mathematical laws or fundamental first principles. Here we ask how we can identify interpretable compact mathematical laws from complex datasets when we don’t have access to first principles. I will give an overview of this problem and provide some vignettes of our ongoing work in attacking this problem.
Monday, March 08, 2021, 12:00 - 13:00
KAUST
We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice.
Monday, March 01, 2021, 12:00 - 13:00
KAUST
In this talk, I will introduce our recent efforts on developing novel computational models in the field of biological imaging. I will start with the examples in electron tomography, for which I will introduce a robust and efficient scheme for fiducial marker tracking, and then describe a novel constrained reconstruction model towards higher resolution sub-tomogram averaging. I will then show our work on developing deep learning methods for super-resolution fluorescence microscopy.
Tuesday, February 23, 2021, 15:00 - 16:30
KAUST
"A picture is worth a thousand words", and by going beyond static images, interactive visualization has become crucial to exploring, analyzing, and understanding large-scale scientific data. This is true for many areas of science and engineering, such as high-resolution imaging in neuroscience or materials science, as well as in large-scale fluid simulations of the Earth’s atmosphere and oceans, or of trillion-cell oil reservoirs. However, the fact that the amount of data in data-driven sciences is increasing rapidly toward the petascale, and further, presents a tremendous challenge to interactive visualization and analysis. Nowadays, an important enabler of interactivity is often the parallel processing power of GPUs, which, however, requires well-designed customized data structures and algorithms. Furthermore, scientific data sets do not only get larger, they also get more and more complex, and thus have become very hard to interpret and analyze. In this talk, I will give an overview of the research of my group in large-scale scientific visualization, from data structures and algorithms that enable petascale visualization on GPUs, to novel visual abstractions for interactive analysis of highly complex structures in neuroscience, to novel mathematical techniques that leverage differential geometric methods for the detection and visualization of features in large, complex fluid dynamics data on curved surfaces such as the Earth.
Manuela Waldner, Assistant Professor at the Research Unit of Computer Graphics of the Institute of Visual Computing and Human-Centered Technology at TU Wien, Austria
Monday, February 22, 2021, 12:00 - 13:00
KAUST
Drawing the user's attention to important items in an image, a complex visualization, or a cluttered graphical user interface is a non-trivial challenge. In the context of visualization, our goal is to effectively attract the user's attention to relevant items in large and complex scenes, while keeping noticeable modifications of the image to a minimum. In this talk, I will give an overview of common highlighting methods and present results from my research on attention guidance in complex, dynamic visualizations.
Ivan Viola, Associate Professor, Computer Science
Wednesday, February 17, 2021, 11:30 - 13:00
KAUST
Life at the micron scale is inaccessible to the naked eye. To aid comprehension of nano- and micro-scale structural complexity, we utilize 3D visualization. Thanks to efficient GPU-accelerated algorithms, we witness a dramatic boost in the sheer size of structures that can be visually explored. As of today, an atomistic model of an entire bacterial cell can be displayed interactively. On top of that, advanced 3D visualizations efficiently convey the multi-scale hierarchical architecture and cope with the high degree of structural occlusion that comes with the dense packing of biological building blocks. To further scale up the size of life forms that can be visually explored, the rendering pipeline needs to integrate runtime construction of the biological structure. Assembly rules define how the body of a certain biological entity is composed. Such rules need to be applied on the fly, depending on where the viewer is currently located in the 3D scene, to generate full structural detail for that part of the scene. We will review how to construct membrane-like structures, soluble protein distributions, and fiber strands through parallel algorithms, resulting in a collision-free, biologically valid scene. Assembly rules that define how a life form is structurally built need to be expressed in a way that is intuitive for the domain scientist, possibly directly in three-dimensional space. Instead of modelers placing one biological element next to another across the entire structure themselves, only the assembly rules need to be specified, and the algorithm will apply those rules to form the entire biological entity. These rules are derived from current scientific knowledge and from all available experimental observations. Cryo-EM tomography is on the rise and shows that we can already reach near-atomistic detail when employing smart algorithms.
Our assembly-rule extraction therefore needs to integrate with microscopic observations to create an atomistic representation of specific, observed life forms, instead of generic models thereof. Such models can then be used in whole-cell simulations and in the context of automated science dissemination.
Monday, February 15, 2021, 12:00 - 13:00
KAUST
In a nutshell, Resilient Computing is a new paradigm based on modelling, architecting, and designing computer systems so that: they have built-in baseline defences; such defences cope with virtually any class of threat, be it accidental faults, design errors, cyber-attacks, or unexpected operating conditions; they provide incremental protection of, and automatically adapt to, a dynamic range of threat severity; and they provide sustainable operation.
Monday, February 08, 2021, 12:00 - 13:00
KAUST
I will review some background of generative adversarial modeling using GANs and discuss how our group has been using GANs for image editing.
Derry Wijaya, Assistant Professor, Computer Science, Boston University
Sunday, February 07, 2021, 15:00 - 16:00
KAUST
State-of-the-art Natural Language Processing (NLP) systems nowadays are dominated by machine learning and deep learning models. However, most of these models often only work well when there are abundant labeled data for training. Furthermore, as these models typically have a large number of parameters, they require large compute resources to train. For the majority of languages in the world and the researchers working on them, however, abundant labeled data are a privilege, and so are compute resources. How can we train generalizable NLP models that are effective even when labeled data are scarce and compute resources are limited? In this talk, I will present some of our solutions that leverage unsupervised or few-shot learning and readily available multilingual resources or multimodal data to improve machine translation and nuanced text classification, such as news framing, under these low-resource settings.