Journal ArticleDOI

A review of automatic mass detection and segmentation in mammographic images.

TL;DR: The aim of this paper is to review existing approaches to the automatic detection and segmentation of masses in mammographic images, highlighting the key points and main differences between the strategies used.
About: This article was published in Medical Image Analysis on 2010-04-01. It has received 388 citations to date.
Citations
Journal ArticleDOI
TL;DR: A new mammographic database built with full-field digital mammograms is presented; it covers a wide variability of cases, is made publicly available together with precise annotations, and can serve as a reference for future work centered on or related to breast cancer imaging.

724 citations


Cites background from "A review of automatic mass detectio..."

  • ...As noted by Oliver and colleagues (20), there is no publicly available database made with digital mammograms....


  • ...Good results may have been obtained on databases with "easy" cases, whereas poor accuracies may have been achieved on "difficult" databases (16,18,20)....


  • ...These types of annotations are not considered sufficient for some studies, such as the one by Oliver et al. (20), in which all circumscribed and spiculated lesions had to be manually segmented....


Journal ArticleDOI
TL;DR: This article discusses how analysis of intratumor heterogeneity can provide benefit over simpler biomarkers such as tumor size and average function, and considers how imaging methods can be integrated with genomic and pathology data instead of being developed in isolation.
Abstract: Tumors exhibit genomic and phenotypic heterogeneity which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks. These methods can establish whether one tumor is more or less heterogeneous than another and can identify sub-regions with differing biology. In this article we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, rather than be developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care.
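Texture statistics of the kind cited below are often built from grey-level co-occurrence matrices. As a hedged illustration of one such heterogeneity measure (the toy images, quantisation level, and pixel offset are assumptions, not this article's protocol), a minimal numpy sketch:

```python
import numpy as np

def glcm_contrast(img, levels=8, offset=(0, 1)):
    """Grey-level co-occurrence contrast: higher values indicate
    more local intensity variation (spatial heterogeneity)."""
    mx = img.max()
    # Quantise the image into a small number of grey levels.
    q = (img / mx * (levels - 1)).astype(int) if mx > 0 else np.zeros(img.shape, int)
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()  # normalise to a joint probability
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

# A uniform patch has zero contrast; a checkerboard has maximal contrast.
flat = np.ones((8, 8))
checker = np.indices((8, 8)).sum(axis=0) % 2 * 7.0
print(glcm_contrast(flat), glcm_contrast(checker))  # → 0.0 49.0
```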

471 citations


Cites methods from "A review of automatic mass detectio..."

  • ...Texture analyses have been used extensively in x-ray mammography (52)....


Journal ArticleDOI
TL;DR: Two automated methods are presented to classify masses in mammograms as benign or malignant, and different classifiers (such as random forest, naive Bayes, SVM, and KNN) are used to evaluate the performance of the proposed methods.
Abstract: CNN templates are generated using a genetic algorithm to segment mammograms. An adaptive threshold is computed in the region-growing process by using an ANN and intensity features. In tumor classification, CNN produces better results than region growing. MLP produces the highest classification accuracy among the classifiers. Results on DDSM images are better than those on MIAS. Breast cancer is regarded as one of the most frequent causes of mortality among women. As early detection of breast cancer increases the survival chance, the creation of a system to diagnose suspicious masses in mammograms is important. In this paper, two automated methods are presented to classify masses in mammograms as benign or malignant. In the first proposed method, segmentation is done using an automated region growing whose threshold is obtained by a trained artificial neural network (ANN). In the second proposed method, segmentation is performed by a cellular neural network (CNN) whose parameters are determined by a genetic algorithm (GA). Intensity, textural, and shape features are extracted from segmented tumors. The GA is used to select appropriate features from the set of extracted features. In the next stage, ANNs are used to classify the mammograms as benign or malignant. To evaluate the performance of the proposed methods, different classifiers (such as random forest, naive Bayes, SVM, and KNN) are used. Results of the proposed techniques performed on the MIAS and DDSM databases are promising. The obtained sensitivity, specificity, and accuracy rates are 96.87%, 95.94%, and 96.47%, respectively.
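The region-growing step in this abstract hinges on the choice of intensity threshold, which the authors learn with an ANN. As a hedged illustration of the underlying mechanism only (the fixed tolerance and toy image are assumptions; the trained ANN is not reproduced), a minimal seeded region-growing sketch:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=1.0):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `tol` of the seed intensity."""
    mask = np.zeros(img.shape, dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(img[nr, nc] - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# A bright 3x3 "mass" on a dark background: growing from its centre
# recovers exactly the nine bright pixels.
img = np.zeros((7, 7)); img[2:5, 2:5] = 10.0
mask = region_grow(img, (3, 3), tol=1.0)
print(mask.sum())  # → 9
```

In the paper's pipeline the fixed `tol` would be replaced by the ANN-predicted, image-specific threshold.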

323 citations


Cites methods from "A review of automatic mass detectio..."

  • ...(i) Region-based methods (such as region growing, split/merge using quad-tree decomposition) in which similarities are detected, and (ii) boundary-based methods (such as thresholding, gradient edge detection) in which discontinuities are detected and linked to form region boundaries (Oliver et al., 2010)....


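The boundary-based family contrasted in this excerpt detects discontinuities rather than similarities. A minimal sketch of gradient edge detection under an assumed toy image and threshold (an illustration, not the reviewed pipeline):

```python
import numpy as np

def gradient_edges(img, thresh=1.0):
    """Boundary-based segmentation sketch: mark pixels where the
    finite-difference gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# The same bright 3x3 "mass": the detector fires on the two-pixel band
# straddling the intensity discontinuity, not inside the regions.
img = np.zeros((7, 7)); img[2:5, 2:5] = 10.0
edges = gradient_edges(img, thresh=1.0)
print(edges.sum())  # → 20
```

In a full boundary-based method these edge pixels would then be linked into closed region boundaries.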

Journal ArticleDOI
TL;DR: An integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention is presented, achieving the current state-of-the-art detection, segmentation and classification results on the INbreast dataset.

254 citations


Cites background or methods from "A review of automatic mass detectio..."


  • ...ratio of the mass visualisation, combined with the lack of consistent patterns of shape, size, appearance and location of breast masses (Oliver et al. (2010); Tang et al. (2009))....


  • ...Moreover, recently proposed segmentation methods (Rahmati et al. (2012); Cardoso et al....


Journal ArticleDOI
TL;DR: The latest segmentation methods applied in medical image analysis are described, including the advantages and disadvantages of each method and an examination of each algorithm's application in Magnetic Resonance Imaging and Computed Tomography image analysis.
Abstract: Medical images have made a great impact on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. In this paper, we describe the latest segmentation methods applied in medical image analysis. The advantages and disadvantages of each method are described, together with an examination of each algorithm and its application in Magnetic Resonance Imaging and Computed Tomography image analysis. Each algorithm is explained separately, with its abilities and features for the analysis of grey-level images. In order to evaluate segmentation results, some popular benchmark measurements are presented in the final section.
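Among the benchmark measurements commonly used to evaluate segmentation results is the Dice overlap coefficient; a minimal sketch (the masks below are illustrative):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    1.0 for perfect overlap, 0.0 for disjoint masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # 16 pixels
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True  # 16 pixels, shifted
print(dice(pred, gt))  # → 0.5625  (2 * 9 / 32)
```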

253 citations


Cites methods from "A review of automatic mass detectio..."

  • ...Region growing has been widely used in mammograms in order to extract the potential lesion from its background [49]....


References
01 Jan 1967
TL;DR: The k-means algorithm as mentioned in this paper partitions an N-dimensional population into k sets on the basis of a sample, which is a generalization of the ordinary sample mean, and it is shown to give partitions which are reasonably efficient in the sense of within-class variance.
Abstract: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E^N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then W^2(S) = Σ_{i=1}^{k} ∫_{S_i} |z − u_i|^2 dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4.
The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
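The within-class variance W^2(S) above is what a Lloyd-style k-means iteration drives down in practice: alternate assigning each point to its nearest mean with recomputing each set's conditional mean. A minimal numpy sketch (the toy data, k, and seeding are assumptions):

```python
import numpy as np

def k_means(X, k, iters=100, seed=0):
    """Partition rows of X into k sets, alternating assignment to the
    nearest centre with recomputation of each set's conditional mean."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre (squared distance).
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        # Recompute each centre as the mean of its set; keep empty sets fixed.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

# Two well-separated blobs are recovered exactly.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels, centres = k_means(X, k=2)
print([float(c) for c in sorted(centres[:, 0])])  # → [0.0, 10.0]
```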

24,320 citations


"A review of automatic mass detectio..." refers methods in this paper

  • ...The traditional partitional clustering algorithm is the k-Means algorithm (MacQueen, 1967), which is characterised by simple implementation and low complexity....


Book
Christopher M. Bishop
17 Aug 2006
TL;DR: Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data are studied.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

22,840 citations

Book
01 Jan 1973

20,541 citations

Journal ArticleDOI
TL;DR: This book covers a broad range of topics for regular factorial designs and presents all of the material in very mathematical fashion and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.

18,802 citations


"A review of automatic mass detectio..." refers methods in this paper


  • ...Velthuizen used it to group pixels with similar grey-level values in the original images, while Chen and Lee used it over the set of local features extracted from the application of a multi-resolution wavelet transform and Markov Random Fields (MRF) analysis (Bishop, 2006). Moreover, the output of the FCM was the input of an Expectation Maximisation (EM) algorithm (Dempster et al., 1977) based on Gibbs Random Fields (Bishop, 2006). These final steps are closely related to the algorithm proposed by Comer et al. (1996). In contrast to FCM, which improves k-Means using a fuzzy formulation of the energy function, the Dogs and Rabbit (DaR) algorithm (McKenzie and Alder, 1994) performs a more robust seed placement. The DaR was used by Zheng et al. (1999b) and Zheng and Chan (2001) to obtain an initial set of regions which subsequently were used to initialise an MRF approach. As Li et al. stated (Li et al., 1995), MRFs allow the modelling of joint distributions in terms of local spatial interactions, thus introducing local region information into the algorithm. This information was also introduced in the work of Rogova et al. (1999) using a constrained stochastic relaxation algorithm with a disparity measure function, which estimated the similarity between two blocks of pixels in the feature space. In contrast, Cao et al. (2004a,b) used two information-theory-based clustering algorithms to segment masses. The first approach was the Deterministic Annealing approach (Rose, 1998), which is a global minimisation algorithm and incorporated "randomness" into the energy function to be minimised. In the second approach, they unified fuzzy clustering and Deterministic Annealing to obtain an improved algorithm. In contrast with these approaches, Bruynooghe (2006) segmented an enhanced image instead of the original mammogram.

    The enhanced image was obtained by removing the locally linear fine detail structure using a morphological algorithm based on successive geodesic openings (Davies, 1997) with linear structuring elements at various orientations. One of the earliest approaches to mass detection was the work of Brzakovic et al. (1990), which was based on a multi-resolution fuzzy pyramid linking approach, a data structure in which the input image formed the basis of the pyramid and each subsequent level (of lower resolution) was sequentially constructed. The links between each node and its four parents were propagated to upper levels using a fuzzy function. They demonstrated that this algorithm was directly correlated with the isodata clustering algorithm (Brzakovic et al., 1990). It has to be noted that, with this strategy, spatial information (region information) is taken into account. Like Fu and Mui (1981), we consider threshold methods as partitional clustering methods....

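The FCM algorithm discussed throughout this excerpt softens k-Means' hard assignments into memberships u_ij, alternating membership updates with membership-weighted centre updates. A minimal sketch with fuzziness m = 2 (the toy data and initialisation are assumptions, not the cited implementations):

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: soft memberships u[i, j] of point i in cluster j,
    updated alternately with the membership-weighted cluster centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), k))
    u /= u.sum(axis=1, keepdims=True)  # each row is a membership distribution
    for _ in range(iters):
        w = u ** m                     # fuzzified weights
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centres, axis=2) + 1e-12
        # Standard membership update: u_ij ∝ d_ij^(-2/(m-1)), rows sum to 1.
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.allclose(u_new, u, atol=1e-8):
            break
        u = u_new
    return u, centres

# Two well-separated blobs: each point's highest membership picks its blob.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
u, centres = fcm(X, k=2)
print(u.argmax(axis=1))
```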


