Class-Based Attention Mechanism for Chest Radiograph Multi-Label Categorization

David Sriker, Hayit Greenspan, Jacob Goldberger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

This work presents a new methodology for class-based attention, an extension of the more common image-based attention mechanism. The class-based attention mechanism learns a different attention mask for each class. This makes it possible to apply a different localization procedure to each pathology in the same image simultaneously, which is important for multi-label categorization. We apply the method to detect and localize a set of pathologies in chest radiographs. The proposed network architecture was evaluated on publicly available X-ray datasets and yielded improved classification results compared to standard image-based attention.
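As a rough illustration of the idea described in the abstract, the sketch below shows one plausible way to implement a per-class spatial attention head in PyTorch: a 1x1 convolution produces one attention map per class, each map pools the backbone features, and a per-class classifier scores the pooled vector. All names, dimensions, and design choices here are assumptions for illustration and are not taken from the paper's actual architecture.

# Illustrative sketch only: a per-class spatial attention head in PyTorch.
# The module name, feature dimensions, and pooling scheme are assumptions;
# the authors' exact architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassBasedAttention(nn.Module):
    """Learns a separate spatial attention mask for each class, then pools
    backbone features with each mask to produce per-class logits."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # One 1x1 conv filter per class -> one attention map per class.
        self.attn_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Per-class linear classifiers applied to the attended feature vectors.
        self.classifiers = nn.Parameter(torch.randn(num_classes, in_channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(num_classes))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a CNN backbone.
        attn = self.attn_conv(feats)                    # (B, K, H, W)
        attn = F.softmax(attn.flatten(2), dim=-1)       # normalize over spatial positions
        feats_flat = feats.flatten(2)                   # (B, C, H*W)
        # Attention-weighted pooling: one pooled feature vector per class.
        pooled = torch.einsum('bkn,bcn->bkc', attn, feats_flat)   # (B, K, C)
        logits = (pooled * self.classifiers).sum(-1) + self.bias  # (B, K)
        return logits


# Minimal usage example with random features standing in for a backbone output.
if __name__ == "__main__":
    head = ClassBasedAttention(in_channels=512, num_classes=14)
    dummy_feats = torch.randn(2, 512, 16, 16)
    print(head(dummy_feats).shape)  # torch.Size([2, 14])

Because each class has its own normalized attention map, the per-class maps can also be read out directly as localization cues for the corresponding pathologies, which matches the localization use case described in the abstract.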

Original language: English
Title of host publication: ISBI 2022 - Proceedings
Subtitle of host publication: 2022 IEEE International Symposium on Biomedical Imaging
Publisher: IEEE Computer Society
ISBN (Electronic): 9781665429238
DOIs
State: Published - 2022
Event: 19th IEEE International Symposium on Biomedical Imaging, ISBI 2022 - Kolkata, India
Duration: 28 Mar 2022 - 31 Mar 2022

Publication series

Name: Proceedings - International Symposium on Biomedical Imaging
Volume: 2022-March
ISSN (Print): 1945-7928
ISSN (Electronic): 1945-8452

Conference

Conference: 19th IEEE International Symposium on Biomedical Imaging, ISBI 2022
Country/Territory: India
City: Kolkata
Period: 28/03/22 - 31/03/22

Bibliographical note

Publisher Copyright:
© 2022 IEEE.

Funding

This research was supported by the Ministry of Science & Technology, Israel.

Funders: Ministry of Science and Technology, Israel

Keywords

• X-ray
• attention mechanism
• chest
• localization
