
Virtual Staging of Indoor Panoramic Images via Multi-task Learning and Inverse Rendering

Uzair Shah, Sara Jashari, Muhammad Tukur, Mowafa Househ, Jens Schneider, Giovanni Pintore, Enrico Gobbetti, and Marco Agus

2025

Abstract

Capturing indoor environments with 360-degree images provides a cost-effective method for creating immersive content. However, virtual staging – removing existing furniture and inserting new objects with realistic lighting – remains challenging. We present VISPI (Virtual Staging Pipeline for Single Indoor Panoramic Images), a framework that enables interactive restaging of indoor scenes from a single panoramic image. Our approach combines multi-task deep learning with real-time rendering to extract geometric, semantic, and material information from cluttered scenes. The system includes: i) a vision transformer that simultaneously predicts depth, normals, semantics, albedo, and material properties; ii) spherical Gaussian lighting estimation; iii) real-time editing for interactive object placement; iv) stereoscopic Multiple-Center-of-Projection generation for Head-Mounted Display exploration. The framework processes input through two pathways: extracting clutter-free representations for virtual staging and estimating material properties, including metallic and roughness signals. We evaluate VISPI on the Structured3D and FutureHouse datasets, demonstrating applications in real estate visualization, interior design, and virtual environment creation.
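The spherical Gaussian (SG) lighting estimation mentioned in the abstract represents environment illumination as a small mixture of smooth lobes on the sphere. As background for readers, a standard SG lobe evaluates to a_k * exp(lambda_k * (v . xi_k - 1)) for a unit direction v, lobe axis xi_k, sharpness lambda_k, and RGB amplitude a_k. The sketch below shows such a mixture evaluation; it is illustrative only, not the paper's implementation, and all names and parameters are assumptions.

```python
import numpy as np

def sg_radiance(v, axes, sharpness, amplitudes):
    """Evaluate a spherical-Gaussian environment-lighting mixture.

    Illustrative sketch (not the VISPI implementation). Each lobe k
    contributes amplitudes[k] * exp(sharpness[k] * (dot(v, axes[k]) - 1)),
    which peaks at the lobe axis and falls off smoothly over the sphere.

    v          : (3,)   query direction (normalized internally)
    axes       : (K, 3) unit lobe axes xi_k
    sharpness  : (K,)   lobe sharpness lambda_k (larger = tighter lobe)
    amplitudes : (K, 3) RGB amplitude a_k per lobe
    returns    : (3,)   RGB radiance along v
    """
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    cos = np.asarray(axes, dtype=float) @ v            # (K,) cosine to each axis
    weights = np.exp(np.asarray(sharpness) * (cos - 1.0))  # (K,) lobe falloff
    return weights @ np.asarray(amplitudes, dtype=float)   # (3,) summed RGB
```

Querying the mixture along a lobe axis returns (approximately) that lobe's amplitude, since exp(lambda * 0) = 1; a typical estimator would fit a handful of such lobes (K of roughly 12 to 128) to the panorama and use them for fast shading of inserted objects.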

Reference and download information

Uzair Shah, Sara Jashari, Muhammad Tukur, Mowafa Househ, Jens Schneider, Giovanni Pintore, Enrico Gobbetti, and Marco Agus. Virtual Staging of Indoor Panoramic Images via Multi-task Learning and Inverse Rendering. IEEE Computer Graphics and Applications, 2025. DOI: 10.1109/MCG.2025.3605806. To appear.


Bibtex citation record

@article{Shah:2025:VSI,
    author = {Uzair Shah and Sara Jashari and Muhammad Tukur and Mowafa Househ and Jens Schneider and Giovanni Pintore and Enrico Gobbetti and Marco Agus},
    title = {Virtual Staging of Indoor Panoramic Images via Multi-task Learning and Inverse Rendering},
    journal = {IEEE Computer Graphics and Applications},
    year = {2025},
    abstract = {Capturing indoor environments with 360-degree images provides a cost-effective method for creating immersive content. However, virtual staging – removing existing furniture and inserting new objects with realistic lighting – remains challenging. We present VISPI (Virtual Staging Pipeline for Single Indoor Panoramic Images), a framework that enables interactive restaging of indoor scenes from a single panoramic image. Our approach combines multi-task deep learning with real-time rendering to extract geometric, semantic, and material information from cluttered scenes. The system includes: i) a vision transformer that simultaneously predicts depth, normals, semantics, albedo, and material properties; ii) spherical Gaussian lighting estimation; iii) real-time editing for interactive object placement; iv) stereoscopic Multiple-Center-of-Projection generation for Head-Mounted Display exploration. The framework processes input through two pathways: extracting clutter-free representations for virtual staging and estimating material properties, including metallic and roughness signals. We evaluate VISPI on the Structured3D and FutureHouse datasets, demonstrating applications in real estate visualization, interior design, and virtual environment creation.},
    doi = {10.1109/MCG.2025.3605806},
    note = {To appear},
    url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Shah:2025:VSI'},
}