CRS4

SliceNet: deep dense depth estimation from a single indoor panorama using a slice-based representation

Giovanni Pintore, Marco Agus, Eva Almansa, Jens Schneider, Enrico Gobbetti
Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11536--11545, 2021
Download the publication: cvpr2021-slicenet.pdf [4.4MB]
We introduce a novel deep neural network to estimate a depth map from a single monocular indoor panorama. The network works directly on the equirectangular projection, exploiting the properties of indoor 360-degree images. Since gravity plays an important role in the design and construction of man-made indoor scenes, we propose a compact representation of the scene as vertical slices of the sphere, and we exploit long- and short-term relationships among slices to recover the equirectangular depth map. Our design makes it possible to maintain high-resolution information in the extracted features even with a deep network. The experimental results demonstrate that our method outperforms current state-of-the-art solutions in prediction accuracy, particularly for real-world data.
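The slice-based representation described above can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the authors' implementation: it splits an equirectangular feature map into vertical slices (one per longitude column) and collapses each slice into a compact feature vector, standing in for the learned per-slice features the network would produce before modeling long- and short-term relationships among slices.

```python
import numpy as np

def to_slices(equirect):
    """Split an equirectangular feature map of shape (H, W, C) into
    W vertical slices of shape (H, C), ordered by longitude.
    (Illustrative only; SliceNet operates on learned feature maps.)"""
    H, W, C = equirect.shape
    return [equirect[:, i, :] for i in range(W)]

def slice_features(slices):
    """Collapse each vertical slice to one compact feature vector.
    Here we simply average over height; the actual network uses
    learned convolutional features instead."""
    return np.stack([s.mean(axis=0) for s in slices])  # shape (W, C)

# Toy panorama: 64 pixels of height, 128 longitude columns, 3 channels.
pano = np.random.rand(64, 128, 3)
slices = to_slices(pano)          # 128 slices, each (64, 3)
feats = slice_features(slices)    # (128, 3) sequence over longitude
print(len(slices), slices[0].shape, feats.shape)
```

The resulting `(W, C)` sequence, ordered by longitude, is the kind of input a sequence model (e.g. an LSTM) could consume to capture long- and short-term relationships among slices.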


BibTeX reference

@InProceedings{PAASG21,
  author       = {Pintore, G. and Agus, M. and Almansa, E. and Schneider, J. and Gobbetti, E.},
  title        = {SliceNet: deep dense depth estimation from a single indoor panorama using a slice-based representation},
  booktitle    = {Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages        = {11536--11545},
  year         = {2021},
  note         = {Selected as oral presentation.},
  keywords     = {depth map estimation, neural network, indoor 3D layout},
  url          = {https://publications.crs4.it/pubdocs/2021/PAASG21},
}

Other publications in the database

» Giovanni Pintore
» Marco Agus
» Eva Almansa
» Jens Schneider
» Enrico Gobbetti