
Web-based Exploration of Annotated Multi-Layered Relightable Image Models

Alberto Jaspe Villanueva, Moonisa Ahsan, Ruggero Pintus, Andrea Giachetti, and Enrico Gobbetti

May 2021

Abstract

We introduce a novel approach for exploring image-based shape and material models registered with structured descriptive information fused in multi-scale overlays. We represent the objects of interest as a series of registered layers of image-based shape and material data. These layers, represented at different scales, can come out of a variety of pipelines and can include both RTI representations and spatially-varying normal and BRDF fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations to the various scales. The overlay pyramid of each layer is created at data preparation time in one of three ways: (1) by importing it from other pipelines; (2) by creating it with the simple annotation drawing toolkit available within the viewer; or (3) with external image editing tools. This makes it easy for the user to seamlessly draw annotations over the region of interest. At run-time, clients access an annotated multi-layered dataset from a standard web server. Users can explore these datasets on a variety of devices, ranging from small mobile devices to large-scale displays used in museum installations. On all these platforms, JavaScript/WebGL2 clients running in browsers perform layer selection, interactive relighting, enhanced visualization, and annotation display. We address the problem of clutter by embedding interactive lenses: this focus-and-context-aware, multi-layer exploration tool supports the exploration of more than one representation in a single view, allowing presentation modes and annotation display to be mixed and matched. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models.
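
To give a concrete flavor of the client-side relighting mentioned above, the sketch below shows how a JavaScript/WebGL2 client could perform simple Lambertian relighting of a normal-and-albedo layer, with the light direction derived from the pointer position. This is an illustrative sketch only, not the paper's actual implementation; the texture and uniform names (u_normals, u_albedo, u_lightDir) and the two-texture layer layout are assumptions.

    // Fragment shader: diffuse relighting from a per-pixel normal map and albedo map.
    // Assumes a vertex shader that passes texture coordinates in v_uv.
    const fragmentShaderSource = `#version 300 es
    precision highp float;
    uniform sampler2D u_normals;   // normals encoded in [0,1] (hypothetical layer texture)
    uniform sampler2D u_albedo;    // diffuse albedo (hypothetical layer texture)
    uniform vec3 u_lightDir;       // normalized light direction
    in vec2 v_uv;
    out vec4 outColor;
    void main() {
      vec3 n = normalize(texture(u_normals, v_uv).xyz * 2.0 - 1.0);
      vec3 albedo = texture(u_albedo, v_uv).rgb;
      float diffuse = max(dot(n, u_lightDir), 0.0);
      outColor = vec4(albedo * diffuse, 1.0);
    }`;

    // Map the pointer position over the canvas to a light direction on a virtual hemisphere,
    // so dragging the pointer interactively moves the light.
    function lightDirFromPointer(event, canvas) {
      const rect = canvas.getBoundingClientRect();
      const x = ((event.clientX - rect.left) / rect.width) * 2.0 - 1.0;
      const y = -(((event.clientY - rect.top) / rect.height) * 2.0 - 1.0);
      const z = Math.sqrt(Math.max(1.0 - (x * x + y * y), 0.0));
      const len = Math.hypot(x, y, z) || 1.0;
      return [x / len, y / len, z / len];
    }

    // At render time (gl context and linked program assumed already set up):
    // gl.uniform3fv(gl.getUniformLocation(program, "u_lightDir"),
    //               lightDirFromPointer(pointerEvent, canvas));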

Reference and download information

Alberto Jaspe Villanueva, Moonisa Ahsan, Ruggero Pintus, Andrea Giachetti, and Enrico Gobbetti. Web-based Exploration of Annotated Multi-Layered Relightable Image Models. ACM Journal on Computing and Cultural Heritage (JOCCH), 14(2): 24:1-24:31, May 2021. DOI: 10.1145/3430846.

Related multimedia productions

Moonisa Ahsan, Fabio Bettio, Enrico Gobbetti, Fabio Marton, Ruggero Pintus, and Antonio Zorcolo
EVOCATION: Reconstruction and exploration with an interactive lens of an annotated Nora Stone
CRS4 Video n. 183 - Date: July, 2022
Moonisa Ahsan, Fabio Bettio, Enrico Gobbetti, Fabio Marton, Ruggero Pintus, and Antonio Zorcolo
EVOCATION: Acquisition, reconstruction, and exploration of paintings from retable of San Bernardino.
CRS4 Video n. 182 - Date: July, 2022

Bibtex citation record

@Article{Jaspe:2021:WEA,
    author = {Alberto {Jaspe Villanueva} and Moonisa Ahsan and Ruggero Pintus and Andrea Giachetti and Enrico Gobbetti},
    title = {Web-based Exploration of Annotated Multi-Layered Relightable Image Models},
    journal = {ACM Journal on Computing and Cultural Heritage (JOCCH)},
    volume = {14},
    number = {2},
    pages = {24:1--24:31},
    month = {May},
    year = {2021},
    abstract = { We introduce a novel approach for exploring image-based shape and material models registered with structured descriptive information fused in multi-scale overlays. We represent the objects of interest as a series of registered layers of image-based shape and material data. These layers, represented at different scales, can come out of a variety of pipelines and can include both RTI representations and spatially-varying normal and BRDF fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations to the various scales. The overlay pyramid of each layer is created at data preparation time in one of three ways: (1) by importing it from other pipelines; (2) by creating it with the simple annotation drawing toolkit available within the viewer; or (3) with external image editing tools. This makes it easy for the user to seamlessly draw annotations over the region of interest. At run-time, clients access an annotated multi-layered dataset from a standard web server. Users can explore these datasets on a variety of devices, ranging from small mobile devices to large-scale displays used in museum installations. On all these platforms, JavaScript/WebGL2 clients running in browsers perform layer selection, interactive relighting, enhanced visualization, and annotation display. We address the problem of clutter by embedding interactive lenses: this focus-and-context-aware, multi-layer exploration tool supports the exploration of more than one representation in a single view, allowing presentation modes and annotation display to be mixed and matched. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models. },
    doi = {10.1145/3430846},
    url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Jaspe:2021:WEA'},
}