Audio-visual Annotation Graphs for Guiding Lens-based Scene Exploration

Moonisa Ahsan, Fabio Marton, Ruggero Pintus, and Enrico Gobbetti

2022

Abstract

We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual and audio markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation's importance. Directed graph edges, instead, represent preferred ordering relations in the presentation of annotations: each node points to the set of nodes that should be seen before its own annotation is presented, and a scalar associated with each edge determines the strength of this constraint. At run-time, users explore the scene with the lens, and the graph is exploited to select the annotations to be presented at a given time. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. The best annotation under the lens is presented by playing the associated audio clip and showing the visual markup in overlay. When the user releases control, requests guidance, or opts for automatic touring, or when no annotation is available under the lens, the system guides the user towards the next best annotation using glyphs, and may move the lens towards it if the user remains inactive. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models.
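Illustrative sketch

The annotation graph described in the abstract lends itself to a compact representation. The Python sketch below is illustrative only and is not taken from the paper: the type names (Annotation, Node) and the scoring rule (importance minus the summed strength of unmet ordering constraints, restricted to annotations whose optimal lens center falls under the current lens) are our assumptions, while the actual selection criterion in the paper also accounts for the full view and lens parameters and the navigation history.

import math
from dataclasses import dataclass, field

@dataclass
class Annotation:
    audio_clip: str                    # audio markup played when the annotation is presented
    visual_markup: str                 # visual overlay shown under the lens
    lens_center: tuple[float, float]   # optimal lens position for this area of interest
    lens_radius: float                 # optimal lens radius for this area of interest
    importance: float                  # scalar annotation importance

@dataclass(eq=False)                   # identity-based hashing, so nodes can key dicts/sets
class Node:
    annotation: Annotation
    # predecessor node -> strength of the "see this first" ordering constraint
    prerequisites: dict["Node", float] = field(default_factory=dict)

def unmet_constraint_penalty(node: Node, seen: set) -> float:
    """Sum the strengths of the ordering constraints whose predecessor was not seen yet."""
    return sum(w for pred, w in node.prerequisites.items() if pred not in seen)

def best_annotation_under_lens(nodes, seen, lens_center, lens_radius):
    """Pick the unseen annotation under the lens with the highest net score."""
    best, best_score = None, -math.inf
    for node in nodes:
        if node in seen:
            continue
        ann = node.annotation
        if math.dist(lens_center, ann.lens_center) > lens_radius:
            continue                   # annotation anchor is not under the lens
        score = ann.importance - unmet_constraint_penalty(node, seen)
        if score > best_score:
            best, best_score = node, score
    return best                        # None when nothing suitable is under the lens

When best_annotation_under_lens returns None, a guidance step would kick in, pointing the user (or moving the lens) towards the highest-scoring annotation anywhere in the scene rather than only under the lens.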

Reference and download information

Moonisa Ahsan, Fabio Marton, Ruggero Pintus, and Enrico Gobbetti. Audio-visual Annotation Graphs for Guiding Lens-based Scene Exploration. Computers & Graphics, 105: 131-145, 2022. DOI: 10.1016/j.cag.2022.05.003.

Related multimedia productions

Moonisa Ahsan, Fabio Bettio, Enrico Gobbetti, Fabio Marton, Ruggero Pintus, and Antonio Zorcolo
EVOCATION: Reconstruction and exploration with an interactive lens of an annotated Nora Stone
CRS4 Video n. 183 - Date: July 2022

Bibtex citation record

@Article{Ahsan:2022:AAG,
    author = {Moonisa Ahsan and Fabio Marton and Ruggero Pintus and Enrico Gobbetti},
    title = {Audio-visual Annotation Graphs for Guiding Lens-based Scene Exploration},
    journal = {Computers \& Graphics},
    volume = {105},
    pages = {131--145},
    year = {2022},
    abstract = { We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual and audio markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation's importance. Directed graph edges, instead, represent preferred ordering relations in the presentation of annotations: each node points to the set of nodes that should be seen before its own annotation is presented, and a scalar associated with each edge determines the strength of this constraint. At run-time, users explore the scene with the lens, and the graph is exploited to select the annotations to be presented at a given time. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. The best annotation under the lens is presented by playing the associated audio clip and showing the visual markup in overlay. When the user releases control, requests guidance, or opts for automatic touring, or when no annotation is available under the lens, the system guides the user towards the next best annotation using glyphs, and may move the lens towards it if the user remains inactive. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models. },
    doi = {10.1016/j.cag.2022.05.003},
    url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Ahsan:2022:AAG'},
}