Thursday, August 12, 2021, 14:00
This dissertation tackles the problem of entanglement in Generative Adversarial Networks (GANs). The key insight is that disentanglement in GANs can be improved by differentiating between the content and the operations performed on that content. For example, the identity of a generated face can be thought of as the content, while the lighting conditions can be thought of as the operations.
Thursday, June 17, 2021, 12:00
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware, combined with a deep learning algorithm to reconstruct the HDR image. With this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors and can instead be switched between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible, yet robust, way to modulate per-pixel exposures, and that the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrates that our method outperforms the state of the art in terms of visual perception quality.
Tuesday, February 23, 2021, 15:00
"A picture is worth a thousand words", and by going beyond static images, interactive visualization has become crucial to exploring, analyzing, and understanding large-scale scientific data. This is true for many areas of science and engineering, such as high-resolution imaging in neuroscience or materials science, as well as in large-scale fluid simulations of the Earth’s atmosphere and oceans, or of trillion-cell oil reservoirs. However, the fact that the amount of data in data-driven sciences is increasing rapidly toward the petascale, and further, presents a tremendous challenge to interactive visualization and analysis. Nowadays, an important enabler of interactivity is often the parallel processing power of GPUs, which, however, requires well-designed customized data structures and algorithms. Furthermore, scientific data sets do not only get larger, they also get more and more complex, and thus have become very hard to interpret and analyze. In this talk, I will give an overview of the research of my group in large-scale scientific visualization, from data structures and algorithms that enable petascale visualization on GPUs, to novel visual abstractions for interactive analysis of highly complex structures in neuroscience, to novel mathematical techniques that leverage differential geometric methods for the detection and visualization of features in large, complex fluid dynamics data on curved surfaces such as the Earth.
Ivan Viola, Associate Professor, Computer Science
Wednesday, February 17, 2021, 11:30
Life at micron scale is inaccessible to the naked eye. To aid the comprehension of nano- and micro-scale structural complexity, we utilize 3D visualization. Thanks to efficient GPU-accelerated algorithms, we witness a dramatic boost in the sheer size of structures that can be visually explored. As of today, an atomistic model of an entire bacterial cell can be displayed interactively. On top of that, advanced 3D visualizations efficiently convey the multi-scale hierarchical architecture and cope with the high degree of structural occlusion that comes with the dense packing of biological building blocks. To further scale up the size of life forms that can be visually explored, the rendering pipeline needs to integrate runtime construction of the biological structure. Assembly rules define how the body of a certain biological entity is composed. Such rules need to be applied on the fly, depending on where the viewer is currently located in the 3D scene, to generate full structural detail for that part of the scene. We will review how to construct membrane-like structures, soluble protein distributions, and fiber strands through parallel algorithms, resulting in a collision-free, biologically valid scene. Assembly rules that define how a life form is structurally built need to be expressed in a way that is intuitive for the domain scientist, possibly directly in three-dimensional space. Instead of modelers placing one biological element next to another by hand for the entire structure, only the assembly rules need to be specified, and the algorithm will apply those rules to form the entire biological entity. These rules are derived from current scientific knowledge and from all available experimental observations. Cryo-electron tomography is on the rise and shows that we can already reach near-atomistic detail when employing smart algorithms.
Our assembly-rule extraction therefore needs to integrate with microscopic observations to create an atomistic representation of specific, observed life forms, instead of generic models thereof. Such models can then be used in whole-cell simulations and in the context of automated science dissemination.
Monday, November 30, 2020, 14:30
The overarching goal of Prof. Michels' Computational Sciences Group within KAUST's Visual Computing Center is enabling accurate and efficient simulations for applications in Scientific and Visual Computing. Towards this goal, the group develops new principled computational methods based on solid theoretical foundations. This talk covers a selection of previous and current work presenting a broad spectrum of research highlights ranging from simulating stiff phenomena such as the dynamics of fibers and textiles, over liquids containing magnetic particles, to the development of complex ecosystems and weather phenomena. Moreover, connection points to the growing field of machine learning are addressed and an outlook is provided with respect to selected technology transfer activities.
Monday, November 30, 2020, 12:00
In this talk, I will give an overview of research done in the Image and Video Understanding Lab (IVUL) at KAUST. At IVUL, we work on topics that are important to the computer vision (CV) and machine learning (ML) communities, with emphasis on three research themes: Theme 1 (Video Understanding), Theme 2 (Visual Computing for Automated Navigation), Theme 3 (Fundamentals/Foundations).
Marios Kogias, Researcher, Microsoft Research
Monday, November 02, 2020, 12:00
I’ll cover three different RPC policies implemented on top of R2P2. Specifically, we’ll see how R2P2 enables efficient in-network RPC load balancing based on a novel join-bounded-shortest-queue (JBSQ) policy. JBSQ lowers tail latency by centralizing pending RPCs in the middlebox and ensures that requests are only routed to servers with a bounded number of outstanding requests. Then, I’ll talk about SVEN, an SLO-aware RPC admission control mechanism implemented as an R2P2 policy on P4 programmable switches. Finally, I’ll describe HovercRaft, a new approach to building fault-tolerant generic RPC services by integrating state-machine replication in the transport layer.
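The JBSQ(k) dispatching logic described above can be sketched as a toy model (an illustrative sketch only, not the R2P2 in-network implementation; the class and method names here are hypothetical): each server may hold at most k outstanding requests, and excess requests wait in a central queue at the dispatcher.

```python
from collections import deque

class JBSQDispatcher:
    """Toy join-bounded-shortest-queue (JBSQ(k)) dispatcher: each server
    holds at most `bound` outstanding requests; excess requests are kept
    in a central queue instead of piling up behind a busy server."""

    def __init__(self, num_servers, bound):
        self.outstanding = [0] * num_servers  # per-server in-flight count
        self.bound = bound
        self.pending = deque()                # centrally queued requests

    def dispatch(self, request):
        # Pick the server with the fewest outstanding requests.
        server = min(range(len(self.outstanding)),
                     key=self.outstanding.__getitem__)
        if self.outstanding[server] < self.bound:
            self.outstanding[server] += 1
            return server      # routed immediately
        self.pending.append(request)
        return None            # held centrally until a slot frees up

    def complete(self, server):
        # A server finished a request; backfill from the central queue.
        self.outstanding[server] -= 1
        if self.pending:
            self.pending.popleft()
            self.outstanding[server] += 1
```

With a tight bound such as k = 1, a request is only ever routed to an idle server, and the rest wait centrally, which is what keeps a slow server from inflating tail latency.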
Thursday, May 28, 2020, 16:00
One of the main goals in computer vision is to achieve a human-like understanding of images. This understanding has recently been represented in various forms, including image classification, object detection, and semantic segmentation, among many others. Nevertheless, image understanding has mainly been studied in the 2D image frame, so more information is needed to relate it to the 3D world. With the emergence of 3D sensors (e.g., the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g., a robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single-frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes, which could be used for self-supervised object segmentation. These methods allow capturing the physical extents of 3D objects, such as their sizes and actual locations within a scene.
Monday, March 30, 2020, 18:00
In this dissertation, we aim to theoretically study and analyze deep learning models. Since deep models vary substantially in their shapes and sizes, we restrict our work to a single fundamental block of layers that is common to almost all architectures: the composition of an affine layer, followed by a nonlinear activation function, followed by another affine layer. We study this block of layers from three different perspectives. (i) An Optimization Perspective. We address the following question: can the output of the forward pass through the block of layers highlighted above be an optimal solution to a certain convex optimization problem? We show an equivalence between the forward pass through this block of layers and a single iteration of certain types of deterministic and stochastic algorithms solving a particular class of tensor-formulated convex optimization problems.
Wednesday, February 05, 2020, 12:00
Building 9, Hall 1, Room 2322
The Machine Learning Hub Seminar Series presents “Optimization and Learning in Computational Imaging” by Dr. Wolfgang Heidrich, Professor in Computer Science at KAUST. He leads the AI Initiative and is the Director of the KAUST Visual Computing Center. Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end learned imaging systems, blurring the boundary between imaging hardware and software.
Monday, January 27, 2020, 17:00
Building 1, Level 2, Room 2202
In this thesis, a variety of applications in computer vision and graphics of inverse problems using tomographic imaging modalities will be presented: (i) The first application focuses on CT reconstruction, with a specific emphasis on recovering thin 1D and 2D manifolds embedded in 3D volumes. (ii) The second application is about space-time tomography. (iii) Building on the second application, the third aims to improve the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations via a warp-and-project strategy. Finally, with a physically plausible divergence-free prior for motion estimation, as well as a novel view-synthesis technique, we present applications to dynamic fluid imaging, which further demonstrate the flexibility of our optimization frameworks.
Mohib Khan, Hesham Abouelmagd, Shijaz Abdulla (AWS)
Monday, January 27, 2020, 08:30
Building 19, Hall 1
The ML Hub, with the support of the AI Initiative, is excited to be hosting the AWS ML Immersion Day! Join us for a full-day immersion tutorial and hands-on lab on Amazon’s ML tools. The program includes an introduction to AWS AI and machine learning services and hands-on module on Amazon Lex and SageMaker. For details, please see the tutorial page on the ML Hub website. Registration is free but required. Please complete this form to register. Participants are encouraged to bring their fully-charged laptops and have a working Internet connection.
Monday, December 02, 2019, 12:00
Building 9, Level 2, Hall 1, Room 2322
This talk will be a gentle introduction to proximal splitting algorithms to minimize a sum of possibly nonsmooth convex functions. Several such algorithms date back to the 60s, but the last 10 years have seen the development of new primal-dual splitting algorithms, motivated by the need to solve large-scale problems in signal and image processing, machine learning, and more generally data science. No background will be necessary to attend the talk, whose goal is to present the intuitions behind this class of methods.
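As one concrete member of this class of methods (an illustrative sketch, not material from the talk), forward-backward splitting, also known as the proximal gradient method, minimizes a smooth term plus a nonsmooth term whose proximity operator is cheap. For the lasso problem, the nonsmooth l1 term has soft-thresholding as its prox:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of t*||.||_1: shrink each coordinate toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient step on the smooth term, then the prox of the nonsmooth term."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x
```

The split is what makes the iteration simple: the quadratic term is handled by plain gradient descent, while the nonsmooth l1 term is handled exactly through its closed-form prox rather than by (nonexistent) gradients.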
Monday, November 11, 2019, 12:00
Building 9, Level 2, Hall 1, Room 2322
Adil Salim is mainly interested in stochastic approximation, optimization, and machine learning. He is currently a Postdoctoral Research Fellow working with Professor Peter Richtarik at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology (KAUST).
Prof. Anders Ynnerman, Linköping University, Sweden
Wednesday, November 06, 2019, 09:00
B9, Lecture Hall 2, Room 2325
Science communication is facing a p
Tuesday, November 05, 2019, 14:00
Building 2, Level 5, Room 5209
Large-scale particle data sets, such as those computed in molecular dynamics (MD) simulations, are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry.
Graham Johnson, Allen Institute for Cell Science, USA
Tuesday, November 05, 2019, 09:00
B9, Lecture Hall 2, Room 2325
The visual analysis, assembly, and
Prof. Arthur Olson, The Scripps Research Institute, USA
Monday, November 04, 2019, 09:00
B9, Lecture Hall 2, Room 2325
The ability to create structural mo
Pieter Barendrecht, PhD Student, Computer Science, University of Groningen, The Netherlands
Thursday, October 24, 2019, 14:00
Building 1, Level 4, Room 4214
There are many intriguing aspects a
Dr. Jos Lenders, Deputy Editor, Advanced Materials, Wiley
Tuesday, July 09, 2019, 14:00
B3 L5 Room 5209
Materials science is a multidisciplinary field of research, with many different scientists and engineers of various backgrounds active in it. Consequently, the literature landscape is currently populated by a wide range of journals that differ greatly in purpose, scope, quality, and readership. Jos Lenders, Deputy Editor of Advanced Materials, Advanced Functional Materials, and Advanced Optical Materials, will track some of the most important developments and trends in the research field and the Advanced journals program. Last year, Advanced Materials reached an Impact Factor of 21.95 and received over 8,300 submissions – and Advanced Functional Materials over 9,200. Only around 15% of all those papers made it to publication in the journal, and this rate is similar for all other Advanced journals. So, what do editors do to select the very best papers, and what can authors do to optimize their chances of having their manuscripts accepted?
Prof. Liching Chiu, Graduate Program of Teaching Chinese as a Second Language (TCSL), National Taiwan University
Tuesday, July 02, 2019, 10:00
B3 L5 Room 5209
This series of lectures guides students through the preparation and analysis of a well-organized abstract. We will discuss the proper language (tense, voice, and person) for abstract writing and learn how to meet the purposes of different abstracts. Finally, students will have a chance to compose and evaluate their own writing. Topics: Overview of abstract writing; Conference abstract vs. journal abstract; Organization of an abstract; Language conventions of abstract writing; Disciplinary abstract analysis; Frequent mistakes in abstract writing.
Tuesday, June 11, 2019, 15:00
B3, L5, 5220
We are facing a tsunami of video flooding our communication channels.
Tong Zhang, Professor of Computer Science and Mathematics, HKUST
Wednesday, May 29, 2019, 12:00
Building 9, Hall 1
Many problems in machine learning rely on statistics and optimization. To solve these problems, new techniques are needed. I will show some of these new techniques through selected machine learning problems I have recently worked on, such as nonconvex stochastic optimization, distributed training, adversarial attack, and generative models.
Tuesday, May 14, 2019, 16:00
B2 L5 Room 5220
This work investigates the problem of transfer from simulation to the real world in the context of autonomous navigation. To this end, we first present Sim4CV, a photo-realistic training and evaluation simulator that enables several applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator features cars and unmanned aerial vehicles (UAVs) with realistic physics simulation and diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning.