
A deep learning classifier for digital breast tomosynthesis

Staffa, M.;
2021-01-01

Abstract

Purpose: To develop a computerized detection system for the automatic classification of the presence/absence of mass lesions in annotated digital breast tomosynthesis (DBT) exams, based on a deep convolutional neural network (DCNN).

Materials and Methods: Three DCNN architectures working at image level (DBT slice) were compared: two state-of-the-art pre-trained DCNN architectures (AlexNet and VGG19) customized through transfer learning, and one developed from scratch (DBT-DCNN). To evaluate these DCNN-based architectures, we analysed their classification performance on two different datasets provided by two hospital radiology departments. DBT slice images were processed with normalization, background correction and data augmentation procedures. Accuracy, sensitivity, and area-under-the-curve (AUC) values were evaluated on both datasets using receiver operating characteristic curves. A Grad-CAM technique was also implemented, providing an indication of the lesion position in the DBT slice.

Results: Accuracy, sensitivity and AUC for the investigated DCNNs are in line with the best performance reported in the field. The DBT-DCNN network developed in this work showed an accuracy of (90% ± 4%) and a sensitivity of (96% ± 3%), with an AUC of 0.89 ± 0.04. A k-fold cross-validation test (with k = 4) showed an accuracy of 94.0% ± 0.2%, and an F1-score test provided a value of 0.93 ± 0.03. Grad-CAM maps show high activation at pixels within the tumour regions.

Conclusions: We developed a deep learning-based framework (DBT-DCNN) to classify DBT images from clinical exams. We also investigated a possible application of the Grad-CAM technique to identify the lesion position.
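The Grad-CAM maps mentioned above are built from the last convolutional layer's feature maps and the gradients of the class score with respect to them: the gradients are global-average-pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only positively contributing regions. The paper does not give its implementation; the following is a minimal NumPy sketch of that computation, where the `activations` and `gradients` arrays stand in for tensors that a deep learning framework would supply.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from one convolutional layer.

    activations: array of shape (C, H, W) -- feature maps of the layer.
    gradients:   array of shape (C, H, W) -- gradients of the class score
                 with respect to those feature maps.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                 # shape (C,)
    # Weighted sum of the feature maps over the channel axis.
    cam = np.tensordot(weights, activations, axes=1)      # shape (H, W)
    # ReLU: keep only features with a positive influence on the class score.
    cam = np.maximum(cam, 0)
    # Normalize for overlay on the DBT slice.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

On a real model, `activations` and `gradients` would be captured with framework hooks (e.g. PyTorch's `register_forward_hook` / `register_full_backward_hook`); the map is then upsampled to the slice resolution and overlaid to indicate the lesion position.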


Use this identifier to cite or link to this document: https://hdl.handle.net/11367/97646
Citations
  • PMC: ND
  • Scopus: 19
  • Web of Science (ISI): 14