From Images to Neural Scenes: Efficient Rendering, Robust Editing, and Human-Aware Reconstruction
Overview
Neural scene representations enable photorealistic 3D reconstruction from casual multi-view images, but practical deployment remains limited by efficiency, editability, and robustness in human-centric scenes. This thesis addresses these challenges by introducing efficient mesh-based distillation for real-time rendering, view-consistent text-guided scene editing, and human-aware reconstruction that improves geometry and camera estimation. Together, these contributions make neural 3D reconstruction more practical for real-world, device-constrained applications.
Presenter
Brief Biography
Sara Rojas Martinez is a Ph.D. student in the KAUST Image and Video Understanding Lab (IVUL) under the supervision of Professor Bernard Ghanem. Before joining KAUST, Sara obtained a master’s degree in Biomedical Engineering from Universidad de Los Andes, Bogotá, Colombia.
Sara completed a research internship at Naver Labs Europe, where she worked on extending MASt3R to better understand humans in the wild, advised by Gregory Rogez, Matthieu Armando, and Vincent Leroy.
Prior to that, Sara interned at Adobe Research, where she worked under the guidance of Kalyan Sunkavalli. She also collaborated with Reality Labs at Meta in Zurich, mentored by Albert Pumarola and Ali Thabet. Earlier, she conducted research at the University of Southern California with Autumn Kulaga.