Proceedings ArticleDOI

Towards Automated Breast Mass Classification using Deep Learning Framework

TL;DR: An automated deep CAD system that performs both mass detection and classification is proposed in this work; the whole pipeline, including the extraction of wavelet features, is automated.
Abstract: Due to their high variability in shape, structure and occurrence, non-palpable breast masses are often missed even by experienced radiologists. To aid them with more accurate identification, computer-aided detection (CAD) systems are widely used. Most of the developed CAD systems use complex handcrafted features, which makes further improvement in performance difficult. Deep, high-level features extracted using deep learning models have already proven their superiority over low- and middle-level handcrafted features. In this paper, we propose an automated deep CAD system that performs both functions: mass detection and classification. Our proposed framework is composed of three cascaded stages: suspicious region identification, mass/no-mass detection and mass classification. To detect the suspicious regions in a breast mammogram, we use a deep hierarchical mass prediction network. We then decide whether the predicted lesions contain any abnormal masses using high-level CNN features extracted from the augmented intensity and wavelet features. Afterwards, mass classification is carried out, only for abnormal cases, with the same CNN structure. The whole process of breast mass classification, including the extraction of wavelet features, is automated in this work. We have tested our proposed model on the widely used DDSM and INbreast databases, on which the mass prediction network achieved sensitivities of 0.94 and 0.96, followed by mass/no-mass detection with areas under the receiver operating characteristic (ROC) curve (AUC) of 0.9976 and 0.9922, respectively. Finally, the classification network obtained an accuracy of 98.05% on DDSM and 98.14% on INbreast, which we believe is the best reported so far.
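The abstract describes augmenting intensity features with wavelet features before the mass/no-mass CNN. The paper does not specify the wavelet family, so as an illustrative sketch (not the authors' implementation) here is a single-level 2-D Haar decomposition in plain numpy; the four sub-bands could be stacked with the raw patch as extra CNN input channels:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform of an even-sized image.

    Returns four half-resolution sub-bands (approximation plus
    horizontal, vertical and diagonal details). Sub-band naming
    conventions vary between libraries; this follows the
    rows-then-columns order.
    """
    assert img.shape[0] % 2 == 0 and img.shape[1] % 2 == 0
    s = 1.0 / np.sqrt(2.0)
    # Rows: pairwise average (low-pass) and difference (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) * s
    hi = (img[:, 0::2] - img[:, 1::2]) * s
    # Columns: repeat the filter pair on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) * s   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) * s   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) * s   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) * s   # diagonal detail
    return ll, lh, hl, hh

# Hypothetical 64x64 mammogram patch augmented with its sub-bands.
patch = np.random.rand(64, 64)
ll, lh, hl, hh = haar_dwt2(patch)
```

A real pipeline would likely use a wavelet library (e.g. PyWavelets) and deeper decompositions; this only shows the feature-augmentation idea.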
Citations
Journal ArticleDOI
TL;DR: A novel adaptive channel and multiscale spatial context network for breast mass segmentation in full-field mammograms that can effectively remove false positives, predict difficult samples and achieve state-of-the-art results is proposed.
Abstract: Breast cancer is currently the second most fatal cancer in women, but timely diagnosis and treatment can reduce its mortality. Breast masses are the most obvious means of cancer identification, and thus, accurate segmentation of masses is critical. In contrast to mass-centered patch segmentation, accurate segmentation of breast masses in full-field mammograms is always a challenging topic because of the extremely low signal-to-noise ratio and the uncertainty with respect to the shape, size, and location of the mass. In this study, we propose a novel adaptive channel and multiscale spatial context network for breast mass segmentation in full-field mammograms. A standard encoder-decoder structure is employed, and an elaborate adaptive channel and multiscale spatial context module (ACMSC module) is embedded in a multilevel manner in our network for accurate mass segmentation. The proposed ACMSC module utilizes the self-attention mechanism to adaptively capture discriminative contextual information among channel and spatial dimensions. The multilevel embedding of the ACMSC module enables the network to learn distinguishing features on multiple scales of feature maps. Our proposed model is evaluated on two public datasets, CBIS-DDSM and INbreast. The experimental results show that by adaptively capturing the context of the channel and spatial dimensions, our model can effectively remove false positives, predict difficult samples and achieve state-of-the-art results, with Dice coefficients of 82.81% for CBIS-DDSM and 84.11% for INbreast, respectively. We hope that our work will contribute to the CAD system for breast cancer diagnosis and ultimately improve clinical diagnosis.
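The ACMSC module itself uses self-attention across channel and spatial dimensions; as a much-simplified stand-in (an assumption for illustration, not the paper's module), the channel-recalibration idea can be sketched as a squeeze-and-excitation-style gate in numpy:

```python
import numpy as np

def channel_gate(fmap, w1, w2):
    """Simplified channel-attention gate: globally pool each channel,
    pass the descriptor through a small bottleneck MLP, and rescale
    channels by sigmoid gates in (0, 1).

    fmap: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r),
    where r is the bottleneck reduction ratio.
    """
    squeeze = fmap.mean(axis=(1, 2))                # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # per-channel gates
    return fmap * gates[:, None, None]

rng = np.random.default_rng(0)
C, r = 16, 4
fmap = rng.standard_normal((C, 12, 12))
out = channel_gate(fmap,
                   rng.standard_normal((C // r, C)) * 0.1,
                   rng.standard_normal((C, C // r)) * 0.1)
```

Because every gate lies in (0, 1), each channel is attenuated rather than amplified; a full self-attention module would instead mix information across positions and channels.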

7 citations

Journal ArticleDOI
TL;DR: In this paper, a stacked ensemble of residual neural network (ResNet) models was used to classify the detected and segmented breast masses as malignant or benign, to assign the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and to classify the shape as oval, round, lobulated, or irregular.
Abstract: A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification that are integrated sequentially into one framework to assist the radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e. ResNet50V2, ResNet101V2, and ResNet152V2). The work presents the task of classifying the detected and segmented breast masses into malignant or benign, and diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6 and the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) Pathology classification with an accuracy of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification with an accuracy of 85.38%, 99%, and 96.08% respectively on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that our proposed integrated framework could benefit from all automated stages to outperform the latest deep learning methodologies.
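The stacking scheme described above (base ResNets feeding a second-level XGBoost classifier, compared against a plain average ensemble) can be sketched with hypothetical softmax outputs; the probability values below are invented for illustration:

```python
import numpy as np

def stack_probs(prob_list):
    """Meta-features for a second-level classifier (XGBoost in the
    paper): concatenate each base model's class-probability vectors.
    prob_list: list of (n_samples, n_classes) arrays, one per model.
    """
    return np.concatenate(prob_list, axis=1)

def soft_vote(prob_list):
    """Average-ensemble baseline: mean of the per-model probabilities."""
    return np.mean(prob_list, axis=0)

# Hypothetical softmax outputs of three base models on 4 samples, 2 classes
# (benign vs malignant).
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5], [0.4, 0.6]])
p3 = np.array([[0.7, 0.3], [0.1, 0.9], [0.7, 0.3], [0.2, 0.8]])

meta_X = stack_probs([p1, p2, p3])   # (4, 6) meta-features for the stacker
avg = soft_vote([p1, p2, p3])        # (4, 2) soft-voted probabilities
```

The meta-classifier is trained on `meta_X` against the true labels; the soft vote needs no extra training, which is the trade-off the paper's comparison probes.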

5 citations

Proceedings ArticleDOI
24 May 2021
TL;DR: In this paper, the authors proposed a new CAD scheme for reducing the number of false positives in mammographic mass detection using a deep learning (DL) method, in which image representations that contain rich textural information, including Hilbert's image representation and the forest fire model, are given as input to a CNN for mammogram classification.
Abstract: Breast cancer is a prominent disease affecting women and is associated with a low survival rate. The mammogram is a widely accepted and adopted modality for diagnosing breast cancer. The challenges faced in the early detection of breast cancer include the poor contrast of mammograms, the complex nature of abnormalities and the difficulty in interpreting dense tissues. Computer-Aided Diagnosis (CAD) schemes help radiologists improve sensitivity by rendering an objective diagnosis, in addition to reducing the time and cost involved. Conventional methods for automated diagnosis involve extracting handcrafted features from a Region of Interest (ROI) followed by classification using Machine Learning (ML) techniques. The main challenge faced in CAD is a higher false positive rate, which adds to patient anxiety. This paper proposes a new CAD scheme for reducing the number of false positives in mammographic mass detection using a Deep Learning (DL) method. A Convolutional Neural Network (CNN) can be considered a prospective candidate for efficiently eliminating false positives in mammographic mass detection. More specifically, image representations that include Hilbert's image representation and the forest fire model, which contain rich textural information, are given as input to the CNN for mammogram classification. The proposed system outperforms the ML approach based on handcrafted features extracted from the image representations considered. In particular, the forest fire-CNN combination achieves an accuracy as high as 96%.

2 citations

Journal ArticleDOI
TL;DR: In this paper, a deep learning architecture based on U-Net was proposed for the detection of breast masses and their characterization as benign or malignant, which achieved a true positive rate of 99.64% at 0.25 false positives per image for the INbreast dataset, while the same for DDSM are 97.36% and 0.38 FPs/I, respectively.
Abstract: Breast cancer, though rare in males, is very frequent in females and has a high mortality rate, which can be reduced if it is detected and diagnosed at an early stage. Thus, in this paper, a deep learning architecture based on U-Net is proposed for the detection of breast masses and their characterization as benign or malignant. The evaluation of the proposed architecture in detection is carried out on two benchmark datasets, INbreast and DDSM, achieving a true positive rate of 99.64% at 0.25 false positives per image for INbreast, while the same for DDSM are 97.36% and 0.38 FPs/I, respectively. For mass characterization, an accuracy of 97.39% with an AUC of 0.97 is obtained for INbreast, while the same for DDSM are 96.81% and 0.96, respectively. The measured results are further compared with state-of-the-art techniques, over which the introduced scheme takes an edge.
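U-Net's distinguishing feature is the skip connection that concatenates encoder features with upsampled decoder features. As an illustrative sketch (the convolutions of a real U-Net are omitted), the routing of one encoder/decoder level can be shown in numpy:

```python
import numpy as np

def maxpool2(x):
    """2x2 max pooling on a (C, H, W) feature map with even H, W."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling on a (C, H, W) feature map."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def skip_concat(encoder_fmap):
    """One U-Net level reduced to its routing: pool down, (convolutions
    omitted), upsample back, and concatenate with the encoder features
    along the channel axis -- the 'skip connection'."""
    pooled = maxpool2(encoder_fmap)
    up = upsample2(pooled)
    return np.concatenate([encoder_fmap, up], axis=0)

fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
merged = skip_concat(fmap)   # (4, 4, 4): channels doubled by the skip
```

The skip carries fine spatial detail lost during pooling back to the decoder, which is what makes full-field mass localization at this precision feasible.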
References
Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
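The key arithmetic behind the very small 3x3 filters can be checked directly: stacked 3x3 convolutions match the receptive field of a single larger filter while using fewer weights.

```python
def receptive_field(n_layers, k=3):
    """Receptive field of n stacked stride-1 convolutions with kernel k."""
    rf = 1
    for _ in range(n_layers):
        rf += k - 1
    return rf

def conv_params(k, channels):
    """Weights in a k x k convolution mapping `channels` -> `channels`
    feature maps (biases ignored)."""
    return k * k * channels * channels

# Two 3x3 layers see the same 5x5 window as one 5x5 layer,
# three 3x3 layers the same 7x7 window as one 7x7 layer...
rf2, rf3 = receptive_field(2), receptive_field(3)

# ...with fewer weights (for an illustrative C = 64 channels):
stacked3 = 3 * conv_params(3, 64)   # 3 * 9C^2 = 27C^2
single7 = conv_params(7, 64)        # 49C^2
```

The stacked form also interposes extra non-linearities between the small filters, which the paper argues makes the decision function more discriminative.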

55,235 citations


"Towards Automated Breast Mass Class..." refers methods in this paper

  • ...They have used VGG16 network [24] for the coarse detection....


Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Proceedings ArticleDOI
27 Jun 2016
TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
Abstract: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
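The "detection as regression" framing means the network predicts box parameters per grid cell, which are then decoded into image coordinates. A minimal sketch of YOLO-v1-style decoding (the exact parameterization varies across YOLO versions):

```python
def decode_box(row, col, S, tx, ty, tw, th):
    """YOLO-v1-style box decoding: (tx, ty) in [0, 1] locate the box
    centre inside grid cell (row, col) of an S x S grid; (tw, th) are
    width/height relative to the whole image. Returns normalised
    (x, y, w, h) in image coordinates."""
    x = (col + tx) / S
    y = (row + ty) / S
    return x, y, tw, th

# The centre of the middle cell of a 7x7 grid maps to the image centre.
box = decode_box(3, 3, 7, 0.5, 0.5, 0.2, 0.3)
```

Each cell also emits a confidence score and class probabilities; non-maximum suppression over the decoded boxes produces the final detections.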

27,256 citations


"Towards Automated Breast Mass Class..." refers background in this paper

  • ...Papers like [15], [16] have demonstrated the effectiveness of YOLO based CAD system [17]....


Journal ArticleDOI
TL;DR: The American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival.
Abstract: Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data were collected by the National Center for Health Statistics. In 2017, 1,688,780 new cancer cases and 600,920 cancer deaths are projected to occur in the United States. For all sites combined, the cancer incidence rate is 20% higher in men than in women, while the cancer death rate is 40% higher. However, sex disparities vary by cancer type. For example, thyroid cancer incidence rates are 3-fold higher in women than in men (21 vs 7 per 100,000 population), despite equivalent death rates (0.5 per 100,000 population), largely reflecting sex differences in the "epidemic of diagnosis." Over the past decade of available data, the overall cancer incidence rate (2004-2013) was stable in women and declined by approximately 2% annually in men, while the cancer death rate (2005-2014) declined by about 1.5% annually in both men and women. From 1991 to 2014, the overall cancer death rate dropped 25%, translating to approximately 2,143,200 fewer cancer deaths than would have been expected if death rates had remained at their peak. Although the cancer death rate was 15% higher in blacks than in whites in 2014, increasing access to care as a result of the Patient Protection and Affordable Care Act may expedite the narrowing racial gap; from 2010 to 2015, the proportion of blacks who were uninsured halved, from 21% to 11%, as it did for Hispanics (31% to 16%). Gains in coverage for traditionally underserved Americans will facilitate the broader application of existing cancer control knowledge across every segment of the population. 
CA Cancer J Clin 2017;67:7-30. © 2017 American Cancer Society.

13,427 citations


"Towards Automated Breast Mass Class..." refers background in this paper


  • ...In 2012 nearly 48.45% breast cancer patients died in India [2]....
