Confidence Calibration of a Medical Imaging Classification System That is Robust to Label Noise

Coby Penso, Lior Frenkel, Jacob Goldberger

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

A classification model is calibrated if its predicted probabilities of outcomes reflect their accuracy. Calibrating neural networks is critical in medical image analysis applications where clinical decisions rely on the predicted probabilities. Most calibration procedures, such as temperature scaling, operate as a post-processing step using holdout validation data. In practice, it is difficult to collect medical image data with correct labels due to the complexity of the medical data and the considerable variability across experts. This study presents a network calibration procedure that is robust to label noise. We draw on the fact that the confusion matrix of the noisy labels can be expressed as the product of the clean-label confusion matrix and the label-noise transition matrix. The method estimates the noise level as part of a noise-robust training procedure; the noise level is then used to estimate the network accuracy required by the calibration procedure. We show that despite the unreliable labels, we can still achieve calibration results on a par with those of a calibration procedure using data with reliable labels.
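The abstract's key identity can be sketched numerically. Below is a minimal illustration, not the authors' code: under a symmetric noise model with flip rate `eps` over `k` classes, the confusion matrix measured against noisy labels equals the clean-label confusion matrix times the noise transition matrix, so an estimate of `eps` lets one recover the clean accuracy that temperature scaling needs. The names `eps`, `T`, and `C_clean`, and the symmetric-noise and balanced-class assumptions, are illustrative.

```python
import numpy as np

k, eps = 3, 0.2  # illustrative class count and noise level

# Symmetric noise transition matrix: a label is kept with probability
# 1 - eps and flipped uniformly to one of the other k - 1 classes.
T = (1 - eps) * np.eye(k) + (eps / (k - 1)) * (1 - np.eye(k))

# A hypothetical row-normalized clean-label confusion matrix.
C_clean = np.array([[0.90, 0.05, 0.05],
                    [0.10, 0.80, 0.10],
                    [0.05, 0.15, 0.80]])

# The confusion matrix one would measure against the noisy labels is the
# matrix product of the clean confusion matrix and the transition matrix.
C_noisy = C_clean @ T

# With balanced classes, accuracy is trace / k. The traces are linearly
# related, so once eps is estimated the clean accuracy can be recovered
# by inverting that relation and handed to the calibration step.
acc_noisy = np.trace(C_noisy) / k
acc_clean_est = (acc_noisy - eps / (k - 1)) / (1 - eps * k / (k - 1))

print(acc_clean_est)  # matches np.trace(C_clean) / k
```

The recovered value agrees with the true clean accuracy exactly in this synthetic setting; in practice both `eps` and the noisy confusion matrix are estimates, so the recovery is approximate.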

Original language: English
Pages (from-to): 2050-2060
Number of pages: 11
Journal: IEEE Transactions on Medical Imaging
Volume: 43
Issue number: 6
Early online date: 15 Jan 2024
State: Published - 1 Jun 2024

Bibliographical note

Publisher Copyright:
© 1982-2012 IEEE.

Keywords

  • Network calibration
  • medical decision calibration
  • network interpretability
  • noisy labels
  • temperature scaling
