Model-Based and Learned Inverse Rendering for 3D Scene Reconstruction and View Synthesis
Overview
Abstract
Recent advances in inverse rendering have demonstrated promising results for 3D representation, novel view synthesis, scene parameter reconstruction, and direct graphical asset generation and editing.
Inverse rendering attempts to recover the scene parameters of interest from a set of camera observations by minimizing the photometric error between the rendering model's output and the true observations, subject to appropriate regularization.
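Concretely, a minimal form of this objective (our notation, for illustration only) is

$$
\theta^{*} \;=\; \arg\min_{\theta} \; \sum_{i=1}^{N} \big\| \mathcal{R}(\theta; c_i) - I_i \big\|_2^2 \;+\; \lambda\, \Psi(\theta),
$$

where $\mathcal{R}$ is the differentiable rendering model, $c_i$ and $I_i$ are the $i$-th camera pose and its observed image, and $\Psi$ is a regularizer with weight $\lambda$.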
The objective of this dissertation is to study inverse problems from several perspectives: (1) Software Framework: a general differentiable pipeline for solving physically-based or neural rendering problems, (2) Closed-Form Solutions: efficient closed-form solutions for inverse problems under specific conditions, (3) Representation Structure: hybrid 3D scene representations for efficient training and adaptive resource allocation, and (4) Robustness: enhanced robustness and accuracy through controlled lighting.
We aim to address the following tasks:
- How can we render and optimize scene parameters such as geometry, texture, and lighting across multiple viewpoints, for both physically-based and neural 3D representations? To this end, we present a comprehensive software toolkit that supports diverse ray-based sampling and tracing schemes, enabling the optimization of a wide range of target scene parameters. Our approach emphasizes maintaining differentiability throughout the entire pipeline to ensure efficient and effective optimization of the desired parameters (a minimal sketch of such a loop follows this list).
- Is there a 3D representation with fixed computational complexity, or a closed-form solution for forward rendering, when the target has specific geometry or a simplified lighting configuration, so that the computational burden of the inverse problem can be relaxed? We consider multi-bounce reflection inside a planar transparent medium and design differentiable polarization simulation engines that jointly optimize the medium's parameters as well as the polarization states of the reflected and transmitted light (see the Fresnel sketch after this list).
- How can we use our hybrid, learned 3D scene representations to solve inverse rendering problems for scene reconstruction and novel view synthesis, with particular interest in several scientific fields, using representations such as density fields, radiance fields, and signed distance functions (a volume-rendering sketch follows this list)?
- Unknown lighting conditions significantly influence object appearance. To enhance the robustness of inverse rendering, we adopt invisible co-located (near-infrared) lighting to control illumination and suppress unknown ambient light, jointly optimizing the separated RGB and near-infrared channels to enable accurate reconstruction of all scene parameters in a wider range of application environments (a sketch of the joint objective closes this list).
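To make the first task concrete, here is a minimal sketch of such a differentiable optimization loop in PyTorch. All names (`render`, `scene`, `cameras`, `images`) are illustrative placeholders, not the toolkit's actual API, and the L2 regularizer stands in for whatever scene-specific prior is appropriate.

```python
import torch

def optimize_scene(render, scene, cameras, images, lam=1e-3, steps=1000):
    """Fit scene parameters (geometry, texture, lighting) to multi-view observations."""
    opt = torch.optim.Adam(scene.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.zeros(())
        for cam, img in zip(cameras, images):
            pred = render(scene, cam)                  # differentiable forward render
            loss = loss + (pred - img).pow(2).mean()   # per-view photometric error
        reg = sum(p.pow(2).sum() for p in scene.parameters())
        (loss + lam * reg).backward()                  # gradients flow through the renderer
        opt.step()
    return scene
```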
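For the second task, the multi-bounce behavior of a planar transparent medium admits a textbook closed form: single-interface Fresnel coefficients plus a geometric series over the internal bounces. The sketch below implements that standard optics result (incoherent, non-absorbing slab assumed); it is not the dissertation's actual polarization simulation engine.

```python
import numpy as np

def fresnel_rs_rp(n1, n2, theta_i):
    """Amplitude reflection coefficients for s- and p-polarized light (no TIR, n1 < n2)."""
    sin_t = n1 * np.sin(theta_i) / n2                 # Snell's law
    cos_i, cos_t = np.cos(theta_i), np.sqrt(1.0 - sin_t**2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s, r_p

def slab_reflectance(n_medium, theta_i, n_air=1.0):
    """Total reflectance of a transparent slab, summing all internal bounces.
    The bounce series R + T^2 R + T^2 R^3 + ... collapses to the closed form
    R + T^2 R / (1 - R^2) per polarization channel (s and p do not mix at
    planar interfaces; by Stokes relations the internal interface has the
    same intensity reflectance R)."""
    out = []
    for r in fresnel_rs_rp(n_air, n_medium, theta_i):
        R = r**2                    # single-interface intensity reflectance
        T = 1.0 - R                 # no absorption assumed
        out.append(R + T**2 * R / (1.0 - R**2))
    return out                      # [R_s_total, R_p_total], equal to 2R/(1+R) each
```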
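For the third task, density- and radiance-field representations are typically rendered with the standard volume-rendering quadrature (alpha compositing along a ray), sketched below; the tensor shapes and names are our own, not the dissertation's code.

```python
import torch

def composite(sigmas, colors, deltas):
    """NeRF-style quadrature along one ray: alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i = prod_{j<i} (1 - alpha_j), pixel = sum_i T_i * alpha_i * c_i.
    sigmas, deltas: (N,) sample densities and step sizes; colors: (N, 3)."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                 # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas[:-1]]), dim=0)  # accumulated transmittance
    weights = trans * alphas                                   # contribution per sample
    return (weights[:, None] * colors).sum(dim=0)              # rendered pixel color
```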
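For the last task, one plausible shape of the joint objective is a per-channel photometric loss in which the near-infrared channel, lit by a known co-located source, constrains geometry and reflectance while the RGB channel is fit under unknown ambient light. The `channel` keyword and the weighting below are assumptions for illustration, not the method's actual interface.

```python
import torch

def rgb_nir_loss(render, scene, cam, rgb_obs, nir_obs, w_nir=1.0):
    """Joint photometric loss over separated RGB and near-infrared channels."""
    pred_rgb = render(scene, cam, channel="rgb")    # unknown ambient lighting
    pred_nir = render(scene, cam, channel="nir")    # known co-located NIR source
    loss_rgb = (pred_rgb - rgb_obs).abs().mean()
    loss_nir = (pred_nir - nir_obs).abs().mean()    # constrains geometry/reflectance
    return loss_rgb + w_nir * loss_nir
```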
Brief Biography
Rui is a Ph.D. student at KAUST in the Computational Imaging Group, working with Prof. Wolfgang Heidrich. His research mainly focuses on using various image measurements (e.g., light fields, polarized images, RGB images, CT) to solve higher-level inverse problems or application-driven problems, such as photo-realistic 3D scene reconstruction/representation, viewpoint synthesis, segmentation, depth estimation, and reflection removal.