Monday, September 20, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
Contact Person
Classical imaging systems are characterized by the independent design of optics, sensors, and image processing algorithms. In contrast, computational imaging systems are based on a joint design of two or more of these components, which allows for greater flexibility in the type of captured information beyond classical 2D photos, as well as for new form factors and domain-specific imaging systems. In this talk, I will describe how numerical optimization and learning-based methods can be used to achieve truly end-to-end optimized imaging systems that outperform classical solutions.
Wednesday, September 15, 2021, 16:20 - 18:10
KAUST
Imaging systems have long been designed in separate steps: experience-driven optical design followed by sophisticated image processing. Such a general-purpose approach has been successful in the past, but it leaves open the question of the best compromise between optics and post-processing for specific tasks, as well as how to minimize costs. Motivated by this, a series of works is proposed to bring imaging system design into an end-to-end fashion step by step, from joint optics design, PSF optimization, and phase-map optimization to a general end-to-end complex lens camera.
Monday, September 13, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
In this seminar, I will go over our journey in underwater networks research. Basically, I will highlight our recent work on bringing the Internet to underwater environments by deploying a low-power and compact underwater optical wireless system, called Aqua-Fi, that supports today’s Internet applications.
Monday, September 06, 2021, 16:00 - 17:00
KAUST
Computational imaging differs from traditional imaging systems by integrating an encoded measurement system and a tailored computational algorithm to extract interesting scene features. This dissertation demonstrates two approaches which apply computational imaging methods to the fluid domain. In the first approach, we study the problem of reconstructing time-varying 3D-3C fluid velocity vector fields. We extend 2D Particle Imaging Velocimetry to three dimensions by encoding depth into color.
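The depth-into-color idea can be illustrated with a toy sketch (a hypothetical hue-ramp mapping for illustration only, not the actual optical encoding used in the dissertation): particle depth is mapped onto a color ramp, and the observed color is inverted back to depth.

```python
import colorsys

# Toy sketch of "encoding depth into color": a hypothetical hue ramp
# (red = near, blue = far), invertible from the observed particle color.

def depth_to_color(z):
    """Map a normalized depth z in [0, 1] to an RGB color along a hue ramp."""
    return colorsys.hsv_to_rgb(2.0 / 3.0 * z, 1.0, 1.0)

def color_to_depth(rgb):
    """Invert the mapping: recover z from the hue of an observed color."""
    hue, _, _ = colorsys.rgb_to_hsv(*rgb)
    return hue * 3.0 / 2.0

recovered = color_to_depth(depth_to_color(0.4))  # round-trips to ~0.4
```

In the real system the encoding is performed optically, so the recoverable depth resolution depends on the camera's color discrimination rather than on an exact software inverse.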
Gabriel Ghinita, Associate Professor, University of Massachusetts, Boston
Monday, September 06, 2021, 12:00 - 13:00
Building 9, Room 2322, Hall 1
The mobile revolution of the past decade led to the ubiquitous presence of location data in all application domains, ranging from public safety and healthcare to urban planning, transportation and commercial applications. Numerous services rely on location data to provide customized service to their users. At the same time, there are serious concerns with respect to protecting individual privacy, as location traces can disclose sensitive details to an untrusted service.
Monday, August 30, 2021, 12:00 - 13:00
Building 9, Room 2322 Lecture Hall #1
This talk will give an overview of the research of the High-Performance Visualization research group (vccvisualization.org) at the KAUST Visual Computing Center (VCC). Interactive visualization is crucial to exploring, analyzing, and understanding large-scale scientific data, such as the data acquired in medicine or neurobiology using computed tomography or electron microscopy, and data resulting from large-scale simulations such as fluid flow in the Earth’s atmosphere and oceans. The amount of data in data-driven science is increasing rapidly toward the petascale and further.
Thursday, August 12, 2021, 14:00 - 16:00
KAUST
This dissertation tackles the problem of entanglement in Generative Adversarial Networks (GANs). The key insight is that disentanglement in GANs can be improved by differentiating between the content and the operations performed on that content. For example, the identity of a generated face can be thought of as the content, while the lighting conditions can be thought of as the operations.
Thursday, June 17, 2021, 12:00 - 14:00
KAUST
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware combined with a deep learning algorithm to reconstruct the HDR image. In this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible, yet robust, way to modulate per-pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrated that our method outperforms the state-of-the-art in terms of visual perception quality.
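The spatially varying modulation idea can be illustrated with a small numpy sketch (illustrative values and a naive unmasking step; the thesis uses a learned network for reconstruction, not this toy inversion):

```python
import numpy as np

# Toy illustration: a spatially varying exposure mask lets a single
# saturating capture retain information over a wider dynamic range.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(4, 4))   # radiance exceeds sensor range [0, 1]

# 2x2 tiled mask of per-pixel attenuations (two exposure levels).
mask = np.tile(np.array([[1.0, 0.25], [0.25, 1.0]]), (2, 2))

capture = np.clip(scene * mask, 0.0, 1.0)    # sensor saturates at 1.0

# Naive reconstruction: undo the mask wherever the pixel did not saturate.
# Saturated pixels would be filled from less-exposed neighbors (or, in the
# actual pipeline, predicted by the reconstruction network).
valid = capture < 1.0
hdr = np.where(valid, capture / mask, np.nan)
```

The attenuated pixels (mask value 0.25) stay unsaturated for radiance up to 4.0, which is what extends the effective dynamic range beyond the sensor's native ceiling.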
Monday, March 08, 2021, 12:00 - 13:00
KAUST
We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice.
Tuesday, February 23, 2021, 15:00 - 16:30
KAUST
"A picture is worth a thousand words", and by going beyond static images, interactive visualization has become crucial to exploring, analyzing, and understanding large-scale scientific data. This is true for many areas of science and engineering, such as high-resolution imaging in neuroscience or materials science, as well as in large-scale fluid simulations of the Earth’s atmosphere and oceans, or of trillion-cell oil reservoirs. However, the fact that the amount of data in data-driven sciences is increasing rapidly toward the petascale, and further, presents a tremendous challenge to interactive visualization and analysis. Nowadays, an important enabler of interactivity is often the parallel processing power of GPUs, which, however, requires well-designed customized data structures and algorithms. Furthermore, scientific data sets do not only get larger, they also get more and more complex, and thus have become very hard to interpret and analyze. In this talk, I will give an overview of the research of my group in large-scale scientific visualization, from data structures and algorithms that enable petascale visualization on GPUs, to novel visual abstractions for interactive analysis of highly complex structures in neuroscience, to novel mathematical techniques that leverage differential geometric methods for the detection and visualization of features in large, complex fluid dynamics data on curved surfaces such as the Earth.
Ivan Viola, Associate Professor, Computer Science
Wednesday, February 17, 2021, 11:30 - 13:00
KAUST
Life at micron scale is inaccessible to the naked eye. To aid the comprehension of nano- and micro-scale structural complexity, we utilize 3D visualization. Thanks to efficient GPU-accelerated algorithms, we witness a dramatic boost in the sheer size of structures that can be visually explored. As of today, an atomistic model of an entire bacterial cell can be displayed interactively. On top of that, advanced 3D visualizations efficiently convey the multi-scale hierarchical architecture and cope with the high degree of structural occlusion that comes with the dense packing of biological building blocks. To further scale up the size of life forms that can be visually explored, the rendering pipeline needs to integrate runtime construction of the biological structure. Assembly rules define how the body of a certain biological entity is composed. Such rules need to be applied on the fly, depending on where the viewer is currently located in the 3D scene, to generate full structural detail for that part of the scene. We will review how to construct membrane-like structures, soluble protein distributions, and fiber strands through parallel algorithms, resulting in a collision-free, biologically valid scene. Assembly rules that define how a life form is structurally built need to be expressed in a way that is intuitive for the domain scientist, possibly directly in three-dimensional space. Instead of modelers placing one biological element next to another for the entire biological structure, only the assembly rules need to be specified, and the algorithm will apply those rules to form the entire biological entity. These rules are derived from current scientific knowledge and from all available experimental observations. Cryo-electron tomography is on the rise and shows that we can already reach near-atomistic detail when employing smart algorithms. Our assembly rules extraction therefore needs to integrate with microscopic observations to create an atomistic representation of specific, observed life forms, instead of generic models thereof. Such models can then be used in whole-cell simulations and in the context of automated science dissemination.
Monday, November 30, 2020, 14:30 - 16:00
KAUST
The overarching goal of Prof. Michels' Computational Sciences Group within KAUST's Visual Computing Center is enabling accurate and efficient simulations for applications in Scientific and Visual Computing. Towards this goal, the group develops new principled computational methods based on solid theoretical foundations. This talk covers a selection of previous and current work presenting a broad spectrum of research highlights ranging from simulating stiff phenomena such as the dynamics of fibers and textiles, over liquids containing magnetic particles, to the development of complex ecosystems and weather phenomena. Moreover, connection points to the growing field of machine learning are addressed and an outlook is provided with respect to selected technology transfer activities.
Monday, November 30, 2020, 12:00 - 13:00
KAUST
In this talk, I will give an overview of research done in the Image and Video Understanding Lab (IVUL) at KAUST. At IVUL, we work on topics that are important to the computer vision (CV) and machine learning (ML) communities, with emphasis on three research themes: Theme 1 (Video Understanding), Theme 2 (Visual Computing for Automated Navigation), Theme 3 (Fundamentals/Foundations).
Marios Kogias, Researcher, Microsoft Research
Monday, November 02, 2020, 12:00 - 13:00
KAUST
I’ll cover three different RPC policies implemented on top of R2P2. Specifically, we’ll see how R2P2 enables efficient in-network RPC load balancing based on a novel join-bounded-shortest-queue (JBSQ) policy. JBSQ lowers tail latency by centralizing pending RPCs in the middlebox and ensures that requests are only routed to servers with a bounded number of outstanding requests. Then, I’ll talk about SVEN, an SLO-aware RPC admission control mechanism implemented as an R2P2 policy on P4 programmable switches. Finally, I’ll describe HovercRaft, a new approach to building fault-tolerant generic RPC services by integrating state-machine replication in the transport layer.
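The JBSQ idea can be sketched in a few lines (a simplified single-threaded model for illustration; the real system implements this in the network, and the class and method names here are hypothetical):

```python
from collections import deque

class JBSQRouter:
    """Sketch of join-bounded-shortest-queue, JBSQ(b): the router keeps RPCs
    queued centrally and forwards one only to a server whose number of
    outstanding requests is below the bound b."""

    def __init__(self, n_servers, bound):
        self.bound = bound
        self.outstanding = [0] * n_servers   # in-flight RPCs per server
        self.pending = deque()               # centrally held RPCs

    def submit(self, rpc):
        """Accept a new RPC; return list of (rpc, server) dispatches made."""
        self.pending.append(rpc)
        return self._dispatch()

    def complete(self, server):
        """A server finished an RPC; a held RPC may now be dispatched."""
        self.outstanding[server] -= 1
        return self._dispatch()

    def _dispatch(self):
        sent = []
        while self.pending:
            # Join the shortest queue, but only if it is under the bound.
            server = min(range(len(self.outstanding)),
                         key=self.outstanding.__getitem__)
            if self.outstanding[server] >= self.bound:
                break  # all servers at the bound: keep the RPC centrally queued
            self.outstanding[server] += 1
            sent.append((self.pending.popleft(), server))
        return sent
```

Holding excess RPCs centrally, rather than letting them pile up at one server, is what bounds per-server queueing and thereby tail latency.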
Thursday, May 28, 2020, 16:00 - 18:00
KAUST
One of the main goals in computer vision is to achieve a human-like understanding of images. This understanding has been recently represented in various forms, including image classification, object detection, semantic segmentation, among many others. Nevertheless, image understanding has been mainly studied in the 2D image frame, so more information is needed to relate them to the 3D world. With the emergence of 3D sensors (e.g. the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g. robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes that could be used for self-supervised object segmentation. These methods allow capturing of physical extents of 3D objects, such as their sizes and actual locations within a scene.
Monday, March 30, 2020, 18:00 - 20:00
KAUST
In this dissertation, we aim at theoretically studying and analyzing deep learning models. Since deep models vary substantially in their shapes and sizes, we restrict our work to a single fundamental block of layers that is common to almost all architectures. The block of layers of interest is the composition of an affine layer, followed by a nonlinear activation function, and lastly followed by another affine layer. We study this block of layers from three different perspectives. (i) An Optimization Perspective. We address the following question: Is it possible that the output of the forward pass through this block of layers is an optimal solution to a certain convex optimization problem? As a result, we show an equivalency between the forward pass through this block of layers and a single iteration of certain types of deterministic and stochastic algorithms solving a particular class of tensor-formulated convex optimization problems.
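The affine-nonlinearity-affine block under study can be written out in a few lines (ReLU chosen here as an example activation; the weights and shapes are illustrative, not tied to any particular result in the dissertation):

```python
import numpy as np

def block_forward(x, W1, b1, W2, b2):
    """Forward pass through the studied block:
    affine layer -> elementwise nonlinearity (ReLU) -> affine layer."""
    z = W1 @ x + b1           # first affine layer
    a = np.maximum(z, 0.0)    # nonlinear activation
    return W2 @ a + b2        # second affine layer

# Illustrative shapes: input in R^2, hidden in R^2, output in R^1.
W1 = np.array([[1.0, 0.0], [0.0, -1.0]])
b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]])
b2 = np.array([3.0])
y = block_forward(np.array([1.0, 2.0]), W1, b1, W2, b2)
```

The optimization perspective asks whether the vector `y` produced by this composition coincides with the minimizer of some convex problem determined by the weights, an equivalency the dissertation establishes for certain algorithm classes.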
Wednesday, February 05, 2020, 12:00 - 13:00
Building 9, Hall 1, Room 2322
The Machine Learning Hub Seminar Series presents “Optimization and Learning in Computational Imaging” by Dr. Wolfgang Heidrich, Professor in Computer Science at KAUST. He leads the AI Initiative and is the Director of the KAUST Visual Computing Center. Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end learned imaging systems, blurring the boundary between imaging hardware and software.
Monday, January 27, 2020, 17:00 - 18:30
Building 1, Level 2, Room 2202
In this thesis, a variety of applications of inverse problems in computer vision and graphics using tomographic imaging modalities will be presented: (i) The first application focuses on CT reconstruction, with a specific emphasis on recovering thin 1D and 2D manifolds embedded in 3D volumes. (ii) The second application is about space-time tomography. (iii) Based on the second application, the third aims to improve the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations by a warp-and-project strategy. Finally, with a physically plausible divergence-free prior for motion estimation, as well as a novel view synthesis technique, we present applications to dynamic fluid imaging, which further demonstrate the flexibility of our optimization frameworks.
Mohib Khan, Hesham Abouelmagd, Shijaz Abdulla (AWS)
Monday, January 27, 2020, 08:30 - 16:15
Building 19, Hall 1
The ML Hub, with the support of the AI Initiative, is excited to be hosting the AWS ML Immersion Day! Join us for a full-day immersion tutorial and hands-on lab on Amazon’s ML tools. The program includes an introduction to AWS AI and machine learning services and a hands-on module on Amazon Lex and SageMaker.
Monday, December 02, 2019, 12:00 - 13:00
Building 9, Level 2, Hall 1, Room 2322
This talk will be a gentle introduction to proximal splitting algorithms to minimize a sum of possibly nonsmooth convex functions. Several such algorithms date back to the 60s, but the last 10 years have seen the development of new primal-dual splitting algorithms, motivated by the need to solve large-scale problems in signal and image processing, machine learning, and more generally data science. No background will be necessary to attend the talk, whose goal is to present the intuitions behind this class of methods.
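As a concrete instance of this class of methods, here is a short sketch of proximal gradient descent (forward-backward splitting) applied to the lasso problem, where the proximal operator of the l1 norm is soft-thresholding (parameter values are illustrative; this is one simple member of the family surveyed in the talk):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Proximal gradient for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    Each iteration takes a gradient step on the smooth term, then a
    proximal step on the nonsmooth term."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # smooth term
        x = soft_threshold(x - step * grad, step * lam)   # nonsmooth term
    return x
```

The step size must respect the smoothness of the quadratic term (step no larger than the reciprocal of the largest eigenvalue of A^T A) for the iteration to converge.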
Monday, November 11, 2019, 12:00 - 13:00
Building 9, Level 2, Hall 1, Room 2322
Adil Salim is mainly interested in stochastic approximation, optimization, and machine learning. He is currently a Postdoctoral Research Fellow working with Professor Peter Richtarik at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology (KAUST).
Tuesday, November 05, 2019, 14:00 - 15:00
Building 2, Level 5, Room 5209
Large-scale particle data sets, such as those computed in molecular dynamics (MD) simulations, are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry.