Learning for Visual Data Synthesis and Analysis

Abstract

In this talk, I will present our recent advances in deep learning for synthesizing and analyzing visual data, such as images, volumes, and point clouds. The presented approaches are loosely aligned along the classical computer graphics rendering pipeline, whereby both structured and unstructured data are handled. I will first present concepts for learning in object space, i.e., directly on the data to be rendered. To address different visual tasks, such as normal estimation and segmentation, I will discuss how Monte Carlo integration can be used to realize convolutions on point cloud data, which represents the meshes to be rendered. Based on this unstructured learning approach, I will further show how this technology can be modified to replace the conventional rendering process and generate shaded images. Finally, I will discuss a structured learning approach, which enables us to invert the image synthesis process, i.e., to generate a volumetric data set from a synthesized image. For all three approaches, I will discuss training data generation, network architectures, and the obtained testing results.
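For readers unfamiliar with the idea of Monte Carlo convolutions on point clouds, the following is a minimal illustrative sketch, not the speaker's implementation: a continuous convolution over an irregular point neighborhood is estimated by averaging kernel-weighted neighbor features while compensating for non-uniform sampling density. The function names, the density estimate, and the fixed Gaussian kernel (which, in learned variants, would typically be replaced by a small trainable network over the relative offsets) are all assumptions made for this example.

import numpy as np

def estimate_density(points, radius):
    # Crude density estimate: number of neighbors within the radius (includes the point itself).
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (dists < radius).sum(axis=1).astype(float)

def mc_point_convolution(points, features, radius, kernel):
    # Monte Carlo estimate of a continuous convolution at every point x:
    # (f * g)(x) ~ 1/|N(x)| * sum_{y in N(x)} f(y) * g((x - y) / radius) / rho(y)
    density = estimate_density(points, radius)
    out = np.zeros_like(features)
    for i, x in enumerate(points):
        offsets = (x - points) / radius            # normalized relative positions
        dists = np.linalg.norm(offsets, axis=1)
        neigh = np.where(dists < 1.0)[0]           # receptive field N(x)
        if len(neigh) == 0:
            continue
        weights = kernel(offsets[neigh])           # stand-in for a learned kernel
        contrib = (features[neigh] * weights[:, None]) / density[neigh, None]
        out[i] = contrib.mean(axis=0)
    return out

# Stand-in kernel; learned approaches would use a small MLP over the offset instead.
gaussian_kernel = lambda offs: np.exp(-np.sum(offs**2, axis=1))

pts = np.random.rand(256, 3)      # toy point cloud
feats = np.random.rand(256, 4)    # per-point features f(y)
conv = mc_point_convolution(pts, feats, radius=0.2, kernel=gaussian_kernel)
print(conv.shape)                 # (256, 4)

Dividing each contribution by the estimated local density is what makes the estimate robust to non-uniformly sampled point clouds, which is the key motivation for casting the convolution as a Monte Carlo integral.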

To watch the recorded seminar, please click here.

Brief Biography

Timo Ropinski is a Professor in Visual Computing at Ulm University, Germany, where he heads the Visual Computing Research Group. Before his time in Ulm, he was a Professor in Interactive Visualization at Linköping University, Sweden. Timo holds a Ph.D. from the University of Münster, Germany, where he also completed his Habilitation. His research interests lie in data visualization and visual data analysis. Together with his research group, Timo works on biomedical visualization techniques, rendering algorithms, and deep learning models for spatial data.

Contact Person