
HistoContours: a framework for visual annotation of histopathology whole slide images

Khaled Al-Thelaya, Faaiz Hussain Kahn Joad, Nauman Ullah Gilal, William Mifsud, Giovanni Pintore, Enrico Gobbetti, Marco Agus, and Jens Schneider

September 2022

Abstract

We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep learning-based localization and classification of cell nuclei, followed by spatial data aggregation to propagate the classes of sparsely distributed nuclei across the entire slide. We use YOLO (“You Only Look Once”) for localization instead of more costly segmentation approaches and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei with good accuracy, but its classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between the highest and second-highest softmax activations of the classifier, we use YOLO’s output as a secondary vote. The result is a sparse annotation of the WSI, which we turn dense using kernel density estimation. This yields, per pixel, a full vector of probabilities for each class of nucleus we consider, allowing us to visualize our results using both color-coding and iso-contouring while reducing visual clutter. Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study.
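
To make the pipeline concrete, below is a minimal Python sketch (not the authors’ implementation; the 0.2 softmax margin, the Gaussian bandwidth, and the function names are illustrative assumptions) of the two steps described above: using YOLO’s class as a secondary vote when the patch classifier’s top-two softmax margin is small, and densifying the resulting sparse per-nucleus labels into per-pixel class probabilities via kernel density estimation.

    # Minimal sketch, assuming nucleus centers and classes are already available.
    # All thresholds and names below are illustrative, not from the paper.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_labels(softmax_probs, yolo_class, margin=0.2):
        """Use the patch classifier's top class unless its top-two softmax
        margin is below `margin`; then let YOLO's class act as a secondary vote."""
        order = np.argsort(softmax_probs)[::-1]
        top, second = order[0], order[1]
        if softmax_probs[top] - softmax_probs[second] >= margin:
            return top
        # Ambiguous case: accept YOLO's class if it matches one of the top two.
        return yolo_class if yolo_class in (top, second) else top

    def densify(nuclei_xy, nuclei_cls, n_classes, shape, sigma=32.0):
        """Kernel density estimation: splat one impulse per nucleus into its
        class channel, blur with a Gaussian kernel, and normalize per pixel so
        the channels form a probability vector wherever any density exists."""
        maps = np.zeros((n_classes,) + tuple(shape), dtype=np.float32)
        for (x, y), c in zip(nuclei_xy, nuclei_cls):
            maps[c, int(y), int(x)] += 1.0
        for c in range(n_classes):
            maps[c] = gaussian_filter(maps[c], sigma)
        total = maps.sum(axis=0, keepdims=True)
        return np.divide(maps, total, out=np.zeros_like(maps), where=total > 0)

Iso-contours could then be drawn on each per-class probability map at fixed levels (for instance with skimage.measure.find_contours) to obtain the reduced-clutter contour rendering described above.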

Reference and download information

Khaled Al-Thelaya, Faaiz Hussain Kahn Joad, Nauman Ullah Gilal, William Mifsud, Giovanni Pintore, Enrico Gobbetti, Marco Agus, and Jens Schneider. HistoContours: a framework for visual annotation of histopathology whole slide images. In Proc. Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM). Pages 99-109, September 2022. DOI: 10.2312/vcbm.2022119. Best full paper award.

Bibtex citation record

@inproceedings{Al-Thelaya:2022:FVA,
    author = {Khaled Al-Thelaya and {Faaiz Hussain Kahn} Joad and {Nauman Ullah} Gilal and William Mifsud and Giovanni Pintore and Enrico Gobbetti and Marco Agus and Jens Schneider},
    title = {HistoContours: a framework for visual annotation of histopathology whole slide images},
    booktitle = {Proc. Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM)},
    pages = {99--109},
    month = {September},
    year = {2022},
    abstract = { We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep learning-based localization \& classification of cell nuclei followed by spatial data aggregation to propagate classes of sparsely distributed nuclei across the entire slide. We use YOLO (“You Only Look Once”) for localization instead of more costly segmentation approaches and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei at good accuracy, but the classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between highest and second-highest softmax activation of the classifier, we use YOLO’s output as a secondary vote. The result is a sparse annotation of the WSI which we turn dense by using kernel density estimation. The result is a full vector of probabilities, per pixel, for each class of nucleus we consider. This allows us to visualize our results using both color-coding and iso-contouring, reducing visual clutter. Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study. },
    doi = {10.2312/vcbm.2022119},
    note = {Best full paper award},
    url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Al-Thelaya:2022:FVA'},
}