Thursday, June 17, 2021, 12:00 - 14:00
https://kaust.zoom.us/j/95088144914
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware combined with a deep learning algorithm to reconstruct the HDR image. In this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible, yet robust, way to modulate per-pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrated that our method outperforms the state-of-the-art in terms of visual perception quality.
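The core idea of per-pixel exposure modulation can be sketched in a few lines (plain Python; the scene values, mask pattern, and function name are hypothetical, not the thesis's actual design): a spatially varying mask attenuates bright pixels below the sensor's saturation point, so a single low-bit-depth capture retains information that a reconstruction network can later recover.

```python
def apply_modulation_mask(radiance, mask, bit_depth=8):
    """Simulate a snapshot HDR capture: attenuate each pixel's exposure
    by a spatially varying mask, then quantize to the sensor bit depth."""
    saturation = 2 ** bit_depth - 1
    return [[min(round(r * m), saturation) for r, m in zip(r_row, m_row)]
            for r_row, m_row in zip(radiance, mask)]

# A toy scene whose radiance exceeds an 8-bit sensor's range.
scene = [[10.0, 500.0], [1000.0, 4000.0]]
# Hypothetical tiled mask of alternating per-pixel attenuations.
mask = [[1.0, 1.0], [0.25, 0.0625]]
capture = apply_modulation_mask(scene, mask)
# Unmasked bright pixels saturate at 255; masked ones stay measurable.
```

In the masked pixels the true radiance remains recoverable (up to quantization) by dividing out the known mask value, which is the information the reconstruction network exploits.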
Lenore J. Cowen is a Professor in the Computer Science Department at Tufts University
Monday, May 03, 2021, 18:30 - 19:30
https://kaust.zoom.us/j/98889531668
The 2016 DREAM Disease Module Identification Challenge was developed to systematically assess the state of computational module identification methods on a diverse collection of molecular networks. Six different networks were presented, with the gene names anonymized. The goal was to partition the genes into non-overlapping modules of 3-100 genes each, based solely on the patterns of network connectivity.
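As a toy illustration of the task (not a competitive challenge method), the following sketch partitions anonymized genes into candidate modules using only connectivity: plain connected components, filtered to the challenge's 3-100 gene size range.

```python
from collections import defaultdict

def connected_components(edges):
    """Group nodes into components based solely on network connectivity."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def candidate_modules(edges, lo=3, hi=100):
    """Keep only modules within the challenge's 3-100 gene size range."""
    return [c for c in connected_components(edges) if lo <= len(c) <= hi]

edges = [("g1", "g2"), ("g2", "g3"), ("g4", "g5")]  # anonymized gene ids
modules = candidate_modules(edges)
```

Real submissions used far more sophisticated partitioning (modularity optimization, random walks, and so on), but the input/output contract is the same: edges in, size-bounded disjoint gene sets out.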
Muhammad Shafique, Professor, Division of Engineering, New York University Abu Dhabi (NYU-AD), United Arab Emirates
Monday, April 26, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT), and Smart Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only high-performance capabilities under a tight power/energy envelope, but also need to be intelligent/cognitive and robust. This has given rise to a new age of Machine Learning (and, in general, Artificial Intelligence) at different levels of the computing stack, ranging from Edge and Fog to the Cloud. In particular, Deep Neural Networks (DNNs) have shown tremendous improvement over the past years, achieving significantly high accuracy for a certain set of tasks, like image classification, object detection, natural language processing, and medical data analytics. However, these DNNs require highly complex computations, incurring huge processing, memory, and energy costs. To some extent, Moore’s Law helps by packing more transistors into the chip.
Belen Masia, Associate Professor in the Computer Science Department at Universidad de Zaragoza
Monday, April 19, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
Virtual Reality (VR) can dramatically change the way we create and consume content in areas of our everyday life, including entertainment, training, design, communication or advertising. Understanding how people explore immersive virtual environments is crucial for many applications in VR, such as designing content, developing new compression algorithms, or improving the interaction with virtual humans. In this talk, we will focus on how to capture and model visual behavior of users in virtual environments.
Ana Klimovic, Assistant Professor, Systems Group of the Computer Science Department, ETH Zurich.
Monday, April 12, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
Machine learning applications have sparked the development of specialized software frameworks and hardware accelerators. Yet, in today’s machine learning ecosystem, one important part of the system stack has received far less attention and specialization for ML: how we store and preprocess training data. This talk will describe the key challenges for implementing high-performance ML input data processing pipelines.
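The input-pipeline bottleneck can be illustrated with the standard remedy: overlapping preprocessing with consumption. Below is a minimal stdlib-only sketch (a bounded queue fed by a background thread, analogous in spirit to prefetching in frameworks like `tf.data`; the function name and buffer size are illustrative).

```python
import threading
import queue

def prefetch(iterable, buffer_size=4):
    """Run the producer (preprocessing) on a background thread and hand
    items to the consumer (training loop) through a bounded queue, so
    input processing overlaps with model computation."""
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in iterable:
            q.put(item)
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            return
        yield item

# Simulated preprocessing (a decode/augment stand-in) feeding a consumer.
batches = list(prefetch((x * x for x in range(5))))
```

The bounded queue is the key design choice: it applies backpressure so preprocessing can run ahead of training by at most `buffer_size` items instead of exhausting memory.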
Monday, April 05, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
Recent research showed that most existing machine learning algorithms are vulnerable to various privacy attacks. An effective way of defending against these attacks is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, Differential Privacy (DP) has now become a standard for private data analysis. Despite its rapid development in theory, DP's adoption by the machine learning community remains slow due to various challenges from the data, the privacy models, and the learning tasks. In this talk, I will give a brief introduction to DP and use the Empirical Risk Minimization (ERM) problem as an example to show how to overcome these challenges in the DP model. Particularly, I will first talk about how to overcome the high-dimensionality challenge from the data for Sparse Linear Regression in the local DP (LDP) model. Then, I will discuss the challenge from the non-interactive LDP model and show a series of results to reduce the exponential sample complexity of ERM. Next, I will present techniques on achieving DP for ERM with non-convex loss functions. Finally, I will discuss some future research along these directions.
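The basic mechanism for enforcing DP during learning can be sketched as gradient perturbation, in the style of DP-SGD: clip each per-example gradient to bound its sensitivity, then add Gaussian noise calibrated to that bound. This is a generic, simplified sketch, not the speaker's specific constructions; the parameter values are illustrative.

```python
import math
import random

def dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate per-example gradients with clipping and Gaussian noise.
    Clipping bounds each example's contribution (the sensitivity); the
    noise scale sigma = noise_multiplier * clip_norm is what yields the
    privacy guarantee under the Gaussian mechanism."""
    rng = rng or random.Random(0)
    dim = len(grads[0])
    total = [0.0] * dim
    for g in grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm
    return [(t + rng.gauss(0.0, sigma)) / len(grads) for t in total]
```

Setting `noise_multiplier=0` reduces the step to plain clipped averaging, which makes the clipping behavior easy to check in isolation.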
Maha Al-Aslani, PhD Student, Computer Science, KAUST
Wednesday, March 31, 2021, 16:00 - 17:00
https://kaust.zoom.us/j/7665815153
In this thesis defense, I will explore the unique characteristics of IoT traffic and examine IoT systems. The work is motivated by the new capabilities offered by modern Software Defined Networks (SDN) and blockchain technology. We evaluate IoT Quality of Service (QoS) in traditional networking and obtain mathematical expressions to calculate end-to-end delay and packet drops. Then, we analyze IoT traffic load and propose an intelligent edge that can identify volumetric traffic and address it in real time using an instantaneous detection method for IoT applications (IDIoT). This approach can easily detect a large surge and potential variation in traffic patterns for an IoT application, which may contribute to safer and more efficient operation of the overall system. Our results provide insight into the advantages of an intelligent edge serving as a detection mechanism.
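A stripped-down illustration of surge detection at the edge (with hypothetical thresholds; this is not the thesis's IDIoT method itself): keep a smoothed per-application baseline rate and flag samples that jump a factor k above it.

```python
def detect_surge(rates, alpha=0.2, k=3.0):
    """Flag indices whose traffic rate exceeds k times the smoothed
    baseline; anomalous samples are kept out of the baseline update so
    a surge cannot drag the baseline up after itself."""
    baseline = float(rates[0])
    alerts = []
    for t, rate in enumerate(rates[1:], start=1):
        if rate > k * baseline:
            alerts.append(t)
        else:
            baseline = alpha * rate + (1 - alpha) * baseline
    return alerts

# A volumetric surge at index 4 against a ~10-unit steady rate.
alerts = detect_surge([10, 11, 9, 12, 95, 10])
```

The exponential smoothing factor `alpha` trades responsiveness to legitimate drift against sensitivity to short bursts; per-application baselines keep one chatty device from masking another's anomaly.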
Mosharaf Chowdhury, Morris Wellman Assistant Professor of CSE at the University of Michigan, Ann Arbor
Monday, March 29, 2021, 18:30 - 19:30
https://kaust.zoom.us/j/98889531668
GPUs have emerged as a popular choice for deep learning. To deal with ever-growing datasets, it is also common to use multiple GPUs in parallel for distributed deep learning. Although achieving cost-effectiveness in these clusters relies on efficient sharing, modern GPU hardware, deep learning frameworks, and cluster managers are not designed for efficient, fine-grained sharing of GPU resources. In this talk, I will present our recent works on efficient GPU resource management, both within a single GPU and across many GPUs in a cluster for hyperparameter tuning, training, and inference. The common thread across all our works is leveraging the interplay between short-term predictability and long-term unpredictability of deep learning workloads.
Laura Kovacs, Professor in Computer Science at the TU Wien
Monday, March 22, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
In this talk I will present recent advances in automated reasoning, in particular computer-supported theorem proving, for generating and proving software properties that prevent programmers from introducing errors while making changes to the software. When testing programs that manipulate computer memory, our initial results show our work is able to prove that over 80% of test cases are guaranteed to have the expected behavior.
Jesper Tegner, Professor, BESE Division, KAUST
Thursday, March 18, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/94262797011?pwd=ZXBBcnltQ3JvZkdhWFZjTEptL3FmUT09
In essence, science is about discovering regularities in Nature. It turns out that such regularities (laws) are written in the language of mathematics. In many cases, such laws are formulated and refined from fundamental “first principles.” Yet, in phenomenological areas such as biology, we have an abundance of data but lack “first principles.” Machine learning and deep learning, in particular, are remarkably successful in classification and prediction tasks. However, such systems, when trained on data, do not, as a rule, provide compact mathematical laws or fundamental first principles. Here we ask how we can identify interpretable compact mathematical laws from complex data-sets when we don’t have access to first principles. I will give an overview of this problem and provide some vignettes of our ongoing work in attacking this problem.
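The search for compact laws can be caricatured in a few lines: propose a library of candidate terms, fit each one to the data, and keep the term with the smallest residual. This is only a toy stand-in for sparse/symbolic regression; the term library, data, and function names are all invented for illustration.

```python
import math

def best_law(xs, ys, library):
    """Score each candidate term f by a one-parameter least-squares fit
    y ~ c * f(x), and return the (name, coefficient, residual) of the
    term that explains the data best."""
    best = None
    for name, f in library.items():
        fx = [f(x) for x in xs]
        denom = sum(v * v for v in fx)
        if denom == 0:
            continue
        c = sum(v * y for v, y in zip(fx, ys)) / denom
        residual = sum((y - c * v) ** 2 for v, y in zip(fx, ys))
        if best is None or residual < best[2]:
            best = (name, c, residual)
    return best

# Data generated by y = 2 x^2 -- the "hidden law" to recover.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 8.0, 18.0, 32.0]
library = {"x": lambda x: x, "x^2": lambda x: x * x, "exp(x)": math.exp}
law = best_law(xs, ys, library)
```

The output is an interpretable expression ("x^2 with coefficient 2") rather than an opaque set of network weights, which is precisely the property that distinguishes this line of work from standard deep learning prediction.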
Monday, March 08, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice.
Monday, March 01, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
In this talk, I will introduce our recent efforts on developing novel computational models in the field of biological imaging. I will start with the examples in electron tomography, for which I will introduce a robust and efficient scheme for fiducial marker tracking, and then describe a novel constrained reconstruction model towards higher resolution sub-tomogram averaging. I will then show our work on developing deep learning methods for super-resolution fluorescence microscopy.
Tuesday, February 23, 2021, 15:00 - 16:30
https://kaust.zoom.us/s/99564603569
"A picture is worth a thousand words", and by going beyond static images, interactive visualization has become crucial to exploring, analyzing, and understanding large-scale scientific data. This is true for many areas of science and engineering, such as high-resolution imaging in neuroscience or materials science, as well as in large-scale fluid simulations of the Earth’s atmosphere and oceans, or of trillion-cell oil reservoirs. However, the fact that the amount of data in data-driven sciences is increasing rapidly toward the petascale, and further, presents a tremendous challenge to interactive visualization and analysis. Nowadays, an important enabler of interactivity is often the parallel processing power of GPUs, which, however, requires well-designed customized data structures and algorithms. Furthermore, scientific data sets do not only get larger, they also get more and more complex, and thus have become very hard to interpret and analyze. In this talk, I will give an overview of the research of my group in large-scale scientific visualization, from data structures and algorithms that enable petascale visualization on GPUs, to novel visual abstractions for interactive analysis of highly complex structures in neuroscience, to novel mathematical techniques that leverage differential geometric methods for the detection and visualization of features in large, complex fluid dynamics data on curved surfaces such as the Earth.
Manuela Waldner, Assistant Professor at the Research Unit of Computer Graphics of the Institute of Visual Computing and Human-Centered Technology at TU Wien, Austria
Monday, February 22, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
Drawing the user's attention to important items in an image, a complex visualization, or a cluttered graphical user interface is a non-trivial challenge. In the context of visualization, our goal is to effectively attract the user's attention to relevant items in large and complex scenes, while keeping noticeable modifications of the image to a minimum. In this talk, I will give an overview of common highlighting methods and present results from my research on attention guidance in complex, dynamic visualizations.
Ivan Viola, Associate Professor, Computer Science
Wednesday, February 17, 2021, 11:30 - 13:00
https://kaust.zoom.us/s/97630407305
Life at micron-scale is inaccessible to the naked eye. To aid the comprehension of nano- and micro-scale structural complexity, we utilize 3D visualization. Thanks to efficient GPU-accelerated algorithms, we witness a dramatic boost in the sheer size of structures that can be visually explored. As of today, an atomistic model of an entire bacterial cell can be displayed interactively. On top of that, advanced 3D visualizations efficiently convey the multi-scale hierarchical architecture and cope with the high degree of structural occlusion that comes with the dense packing of biological building blocks. To further scale up the size of life forms that can be visually explored, the rendering pipeline needs to integrate runtime construction of the biological structure. Assembly rules define how the body of a certain biological entity is composed. Such rules need to be applied on-the-fly, depending on where the viewer is currently located in the 3D scene, to generate full structural detail for that part of the scene. We will review how to construct membrane-like structures, soluble protein distributions, and fiber strands through parallel algorithms, resulting in a collision-free, biologically valid scene. Assembly rules that define how a life form is structurally built need to be expressed in a way that is intuitive for the domain scientist, possibly directly in three-dimensional space. Instead of modelers placing one biological element next to another for the entire structure, only the assembly rules need to be specified, and the algorithm will apply those rules to form the entire biological entity. These rules are derived from current scientific knowledge and from all available experimental observations. Cryo-EM tomography is on the rise and shows that we can already reach near-atomistic detail when employing smart algorithms.
Our assembly rules extraction, therefore, needs to integrate with microscopic observations, to create an atomistic representation of specific, observed life forms, instead of generic models thereof. Such models can then be used in whole-cell simulations, and in the context of automated science dissemination.
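A minimal flavor of rule-driven, collision-free assembly is rejection sampling: propose a position for each element and accept it only if it overlaps nothing placed so far. The sketch below packs equal spheres into a box (all sizes and counts arbitrary); real pipelines run parallel, GPU-resident versions of such collision tests over far richer shapes and rules.

```python
import random

def place_spheres(n, radius, box=100.0, max_tries=10000, seed=0):
    """Place n equal spheres in a cubic box so that no two overlap,
    by rejection sampling: propose a random position, accept it only
    if it keeps at least 2*radius distance to every placed sphere."""
    rng = random.Random(seed)
    placed = []
    tries = 0
    while len(placed) < n and tries < max_tries:
        tries += 1
        p = [rng.uniform(radius, box - radius) for _ in range(3)]
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= (2 * radius) ** 2
               for q in placed):
            placed.append(p)
    return placed

scene = place_spheres(20, radius=3.0)
```

At the packing densities of a real cell, naive rejection sampling stalls, which is why the on-the-fly construction described above relies on smarter parallel placement algorithms.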
Monday, February 15, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
In a nutshell, Resilient Computing is a new paradigm based on modelling, architecting, and designing computer systems so that: they have built-in baseline defences; such defences cope with virtually any class of threat, be it accidental faults, design errors, cyber-attacks, or unexpected operating conditions; they provide incremental protection of, and automatically adapt to, a dynamic range of threat severity; and they provide sustainable operation.
Monday, February 08, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
I will review some background of generative adversarial modeling using GANs and discuss how our group has been using GANs for image editing.
Derry Wijaya, Assistant Professor, Computer Science, Boston University
Sunday, February 07, 2021, 15:00 - 16:00
https://kaust.zoom.us/j/92314385369
State-of-the-art Natural Language Processing (NLP) systems nowadays are dominated by machine learning and deep learning models. However, most of these models often only work well when there are abundant labeled data for training. Furthermore, as these models typically have a large number of parameters, they require large compute resources to train. For the majority of languages in the world and the researchers working on these languages, however, abundant labeled data are a privilege, and so are compute resources. How can we train generalizable NLP models that are effective even when labeled data are scarce and compute resources are limited? In this talk, I will present some of our solutions that leverage unsupervised or few-shot learning and readily available multilingual resources or multimodal data to improve machine translation and nuanced text classification such as news framing under these low-resource settings.
Monday, February 01, 2021, 12:00 - 13:00
https://kaust.zoom.us/j/98889531668
The overarching goal of Prof. Michels' Computational Sciences Group within KAUST's Visual Computing Center is enabling accurate and efficient simulations for applications in Scientific and Visual Computing.
Simon Peter, Assistant Professor, Computer Science, University of Texas, Austin
Monday, January 25, 2021, 18:30 - 19:30
https://kaust.zoom.us/j/93816047882
In this talk, I focus on the adoption of low latency persistent memory modules (PMMs). PMMs upend the long-established model of remote storage for distributed file systems. Instead, by colocating computation with PMM storage we can provide applications with much higher IO performance, sub-second application failover, and strong consistency. To demonstrate this, I present Assise, a new distributed file system, based on a persistent, replicated coherence protocol that manages client-local PMM as a linearizable and crash-recoverable cache between applications and slower (and possibly remote) storage.
Marios Kogias, Researcher, Computer Science, Microsoft Research, Cambridge
Sunday, January 24, 2021, 10:00 - 11:00
https://kaust.zoom.us/j/97426624669
In the first part of the talk, I will focus on ZygOS [SOSP 2017], a system optimized for μs-scale, in-memory computing on multicore servers. ZygOS implements a work-conserving scheduler within a specialized operating system designed for high request rates and a large number of network connections. ZygOS revealed the challenges associated with serving remote procedure calls (RPCs) on top of a byte-stream oriented protocol, such as TCP. In the second part of the talk, I will present R2P2 [ATC 2019]. R2P2 is a transport protocol specifically designed for datacenter RPCs, that exposes the RPC abstraction to the endpoints and the network, making RPCs first-class datacenter citizens. R2P2 enables pushing functionality, such as scheduling, fault-tolerance, and tail-tolerance, inside the transport protocol, making it application-agnostic. I will show how using R2P2 allowed us to offload RPC scheduling to programmable switches that can schedule requests directly on individual cores.
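The request/response-over-datagrams abstraction that an RPC-aware transport builds on can be shown in miniature with a loopback UDP exchange (this illustrates only the abstraction, not R2P2's actual protocol or in-network scheduling):

```python
import socket
import threading

def serve_one(sock):
    """Answer a single RPC: each datagram is a self-contained request,
    which is what lets the network observe, route, and schedule RPCs
    individually -- unlike requests buried inside a TCP byte stream."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(b"echo:" + data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # bind to any free port
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", server.getsockname())
reply, _ = client.recvfrom(1024)
```

The contrast with TCP in the talk is exactly this: over a byte stream, RPC boundaries must be reconstructed by the endpoint, whereas message-oriented transports keep each request visible as a unit that a switch or scheduler can act on.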
Ahmed Saeed, Postdoctoral Associate, Computer Science, MIT
Sunday, January 17, 2021, 15:00 - 16:00
https://kaust.zoom.us/j/96516650800
This talk covers two research directions that address the shortcomings of existing network stacks. The first is on scalable software network stacks, solving problems in different components of operating systems and applications to allow a single server to handle data flows for tens of thousands of clients. The second is on Wide Area Network (WAN) congestion control, focusing on network-assisted congestion control schemes, where end-to-end solutions fail. The talk will conclude with a discussion of plans for future research in this area.
Thursday, December 03, 2020, 12:00 - 13:00
https://kaust.zoom.us/j/95474758108?pwd=WkwrdiszTE1uYTdmR3JRK09LVDErZz09
Biological systems are distinguished by their enormous complexity and variability. That is why mathematical modeling and computational simulation of those systems is very difficult, in particular for detailed models based on first principles. The difficulties start with geometric modeling, which needs to extract basic structures from highly complex and variable phenotypes while also taking the statistical variability into account. Moreover, the models of the processes running on these geometries are not yet well established, since these are equally complex and often couple many scales in space and time. Thus, simulating such systems always means putting the whole framework to the test, from modeling to the numerical methods and software tools used for simulation. These need to be advanced in connection with validating simulation results by comparing them to experiments.
Monday, November 30, 2020, 14:30 - 16:00
https://kaust.zoom.us/s/94432699270
The overarching goal of Prof. Michels' Computational Sciences Group within KAUST's Visual Computing Center is enabling accurate and efficient simulations for applications in Scientific and Visual Computing. Towards this goal, the group develops new principled computational methods based on solid theoretical foundations. This talk covers a selection of previous and current work presenting a broad spectrum of research highlights ranging from simulating stiff phenomena such as the dynamics of fibers and textiles, over liquids containing magnetic particles, to the development of complex ecosystems and weather phenomena. Moreover, connection points to the growing field of machine learning are addressed and an outlook is provided with respect to selected technology transfer activities.