CRS4

Automatic 3D modeling and editing of immersive indoor environments from a single omnidirectional image

PhD Thesis - PhD Programme in Information and Communications Technology, University of A Coruña, Spain - November 2024
Download the publication: 2024-phd-pintore-indoor-omni.pdf [2.5MB]
Over the past few years, there has been significant research interest in the automatic 3D reconstruction and modeling of indoor scenes, resulting in a well-defined emerging field. Within this context, 360-degree panoramic acquisition has emerged as an effective solution for indoor environments. It offers rapid and comprehensive coverage, even from a single viewpoint, and is compatible with a wide range of professional and consumer acquisition devices, making indoor data capture efficient and cost-effective.

Panoramic images have also become integral to creating immersive content directly from real-world scenes and to supporting various Virtual Reality (VR) applications. Notably, virtual tours based on spherical images have gained popularity in the real estate industry, especially during the pandemic period. To fully support immersion, a system must therefore also respond to viewpoint translation. While many solutions have been proposed for multi-view capture setups, performing view synthesis from single-shot panoramas is of primary importance, given the convenience and widespread availability of sparse capture with monocular 360-degree cameras.

However, view synthesis relies on estimating the geometric model of the imaged environment, explicitly or implicitly, in order to perform occlusion-aware reprojection and to synthesize disoccluded content. This aspect is even more crucial if the goal is also to derive non-obvious information from the original view, such as a model of the permanent structure without clutter. Achieving immersive visualization and effective editing in indoor 3D reconstruction requires addressing several fundamental research questions related to depth estimation, layout estimation, and novel view synthesis. In our research project, we proposed to extend the state of the art in these fundamental tasks, and particularly in their combination, aimed at immersive indoor exploration and editing starting from a single 360-degree image.
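As background for the occlusion-aware reprojection mentioned above, here is a minimal sketch (not the thesis method; the function names and the equirectangular conventions are illustrative assumptions) of warping an equirectangular panorama with a known per-pixel depth map to a translated viewpoint. Each pixel is lifted to a 3D point along its viewing ray, expressed relative to the new camera position, and mapped back to panoramic coordinates; the recomputed range allows z-buffered splatting so nearer surfaces correctly occlude farther ones:

```python
import numpy as np

def equirect_rays(h, w):
    # Unit view directions for each pixel of an equirectangular panorama.
    # Longitude spans [-pi, pi) left to right, latitude [pi/2, -pi/2] top to bottom.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)          # both shaped (h, w)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

def reproject(depth, t):
    # depth: (h, w) per-pixel range map for the source panorama.
    # t: 3-vector translation of the new viewpoint in the source frame.
    # Returns, for every source pixel, its (u, v) position in the target
    # panorama and its new range r (used as a z-buffer key when splatting).
    h, w = depth.shape
    pts = equirect_rays(h, w) * depth[..., None]   # 3D points in source frame
    rel = pts - np.asarray(t, dtype=float)         # same points seen from t
    r = np.linalg.norm(rel, axis=-1)
    lon = np.arctan2(rel[..., 0], rel[..., 2])
    lat = np.arcsin(np.clip(rel[..., 1] / np.maximum(r, 1e-9), -1.0, 1.0))
    u = (lon + np.pi) / (2 * np.pi) * w - 0.5
    v = (np.pi / 2 - lat) / np.pi * h - 0.5
    return u, v, r
```

With zero translation every pixel maps back onto itself, which is a convenient sanity check; pixels that no point maps to after a real translation are the disoccluded regions whose content must be synthesized.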
To this end, we researched novel approaches that exploit indoor architectural priors, taking into account the very specific features of man-made environments, together with effective data-driven solutions that learn hidden relations from large-scale examples. Our contributions comprise several innovative end-to-end solutions: a novel methodology for 3D scene synthesis of Atlanta-world interiors from a single omnidirectional image, a novel approach for deep synthesis and exploration of omnidirectional stereoscopic environments from a monoscopic panoramic image, and an innovative end-to-end technique for instant automatic emptying of panoramic indoor scenes. This thesis presents the results obtained during this research.


BibTeX References

@PhdThesis{Pin24,
  author       = {Pintore, G.},
  title        = {Automatic 3D modeling and editing of immersive indoor environments from a single omnidirectional image},
  school       = {PhD Programme in Information and Communications Technology, University of A Coruña, Spain},
  month        = {November},
  year         = {2024},
  keywords     = {visual computing, data-intensive computing},
  url          = {https://publications.crs4.it/pubdocs/2024/Pin24},
}

Other publications in the database

» Giovanni Pintore