A new article by CRS4 researchers Giovanni Pintore and Enrico Gobbetti of the Visual and Data-intensive Computing (ViDiC) sector, written in collaboration with colleagues at Hamad Bin Khalifa University (Qatar), has been published in the journal IEEE Computer Graphics and Applications.
The paper, entitled “Virtual Staging of Indoor Panoramic Images via Multi-task Learning and Inverse Rendering”, introduces VISPI, an innovative framework for recreating and modifying indoor environments from a single 360° panoramic image.
Capturing indoor spaces with panoramic images offers an affordable and effective way to create immersive content. However, virtual staging—removing existing furniture and inserting new objects with realistic lighting—remains a challenging task.
VISPI addresses this challenge by combining multi-task deep learning with real-time rendering. Specifically, the system uses a vision transformer to simultaneously estimate depth, geometry, and material properties; a spherical Gaussian model to compute realistic lighting; interactive editing tools to place digital objects in real time; and stereoscopic Multi-Center-Of-Projection rendering for VR exploration.
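To make the lighting component concrete, the sketch below shows how environment lighting can be represented as a mixture of spherical Gaussian lobes and evaluated in an arbitrary direction. This is a minimal illustration of the general technique, not code from VISPI: the function name, the lobe parameters, and the example values are all hypothetical.

```python
import numpy as np

# Minimal sketch: environment lighting as a mixture of spherical Gaussians (SGs).
# Each lobe G(v) = a * exp(lambda * (dot(mu, v) - 1)) peaks along its axis mu,
# with sharpness lambda and RGB amplitude a. All values below are illustrative
# placeholders, not parameters estimated by the paper's network.

def eval_sg_lighting(direction, axes, sharpness, amplitudes):
    """Return the RGB radiance arriving from `direction` under an SG mixture.

    axes:       (K, 3) unit lobe axes
    sharpness:  (K,)   lobe concentration (higher = tighter lobe)
    amplitudes: (K, 3) RGB lobe amplitudes
    """
    v = np.asarray(direction, dtype=float)
    v /= np.linalg.norm(v)                            # normalize query direction
    cos_theta = axes @ v                              # alignment of each lobe with v
    weights = np.exp(sharpness * (cos_theta - 1.0))   # per-lobe falloff, (K,)
    return weights @ amplitudes                       # blend lobes into one RGB value

# Example (hypothetical): a bright overhead lobe and a dim side-window lobe.
axes = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])
sharpness = np.array([20.0, 4.0])
amplitudes = np.array([[1.0, 0.95, 0.8],
                       [0.3, 0.3, 0.35]])

print(eval_sg_lighting([0.2, 0.9, 0.1], axes, sharpness, amplitudes))
```

The appeal of this representation is compactness: a handful of lobes yield a closed-form, differentiable approximation of the scene's lighting that can be evaluated in real time, which is what makes relighting inserted objects interactive.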
The framework was validated on two widely used datasets of indoor environments: Structured3D, which provides computer-generated but realistic-looking indoor scenes available in both furnished and unfurnished versions, and FutureHouse, which includes photorealistic panoramas of complete houses enriched with detailed material information.
The method opens the door to several practical applications. In the real estate sector, it can show potential buyers an “empty” apartment and instantly propose multiple furnishing solutions, without physically altering the space. In interior design, it allows designers and architects to experiment with combinations of furniture, materials, and colors, immediately visualizing the result. In the field of virtual reality and immersive environments, it enables interactive exploration of spaces with a VR headset, offering a first-person experience of how virtual objects integrate into real scenes.
This research builds on the results of the AIN2 project (funded by the Qatar National Research Fund – QNRF) and the HPCCN project (Italian National Center on HPC, Big Data and Quantum Computing, funded by the PNRR).
🔗 More details on the article are available on IEEE Xplore.