Book ChapterDOI

A texture-based probabilistic approach for lung nodule segmentation

TL;DR: A classification-based approach using pixel-level texture features produces soft (probabilistic) segmentations, which will be useful for representing the uncertainty in nodule boundaries that is manifest in radiological image segmentations.
Abstract: Producing consistent segmentations of lung nodules in CT scans is a persistent problem for image processing algorithms. Many hard-segmentation approaches have been proposed in the literature, but soft segmentation of lung nodules remains largely unexplored. In this paper, we propose a classification approach based on pixel-level texture features that produces soft (probabilistic) segmentations. We tested this classifier on the publicly available Lung Image Database Consortium (LIDC) dataset. We further refined the classification results with a post-processing algorithm based on the variability index. The algorithm performed well on nodules not adjacent to the chest wall, producing a soft overlap of 0.52 between radiologist-based and computer-based segmentations. In the long term, these soft segmentations will be useful for representing the uncertainty in nodule boundaries that is manifest in radiological image segmentations.
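The soft overlap reported above compares a probabilistic computer segmentation against a probabilistic reference built from the radiologists' outlines. The abstract does not spell out the formula, so the sketch below uses a fuzzy Jaccard overlap (sum of pixel-wise minima over sum of pixel-wise maxima) as one plausible reading; the array shapes and example data are illustrative only.

```python
import numpy as np

def soft_overlap(p_map: np.ndarray, reference: np.ndarray) -> float:
    """Fuzzy Jaccard overlap between two probability maps in [0, 1].

    Both arrays must have the same shape; values are membership
    probabilities (e.g. classifier output vs. fraction of radiologists
    who included the pixel in their outline).
    """
    intersection = np.minimum(p_map, reference).sum()
    union = np.maximum(p_map, reference).sum()
    return float(intersection / union) if union > 0 else 1.0

# Example with made-up data: classifier p-map vs. a reference built from 4 outlines
rng = np.random.default_rng(0)
p_map = rng.random((64, 64))
reference = rng.integers(0, 5, size=(64, 64)) / 4.0   # values 0, 0.25, ..., 1.0
print(round(soft_overlap(p_map, reference), 3))
```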


Citations
Journal ArticleDOI
TL;DR: The paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems.
Abstract: This paper overviews one of the most important, interesting, and challenging problems in oncology: the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems.
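Since the abstract above fixes the four-stage structure of a lung cancer CAD system, a minimal Python skeleton of that pipeline is sketched below. All function names and stub bodies are placeholders of my own; the surveyed papers supply the real implementations of each stage.

```python
import numpy as np

# Placeholder stages of the four-step CAD pipeline described in the abstract.
def segment_lung_fields(ct_volume: np.ndarray) -> np.ndarray:
    return np.ones_like(ct_volume, dtype=bool)     # stub lung-field mask

def detect_nodules(lung_mask: np.ndarray, ct_volume: np.ndarray) -> list:
    return []                                      # stub: no candidate nodules

def segment_nodule(candidate, ct_volume: np.ndarray) -> np.ndarray:
    return np.zeros_like(ct_volume, dtype=bool)    # stub nodule mask

def classify_nodule(nodule_mask: np.ndarray, ct_volume: np.ndarray) -> str:
    return "benign"                                # stub diagnosis

def cad_pipeline(ct_volume: np.ndarray) -> list:
    lungs = segment_lung_fields(ct_volume)
    results = []
    for candidate in detect_nodules(lungs, ct_volume):
        mask = segment_nodule(candidate, ct_volume)
        results.append(classify_nodule(mask, ct_volume))   # "benign"/"malignant"
    return results
```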

232 citations


Cites background or methods from "A texture-based probabilistic appro..."

  • ...With such GTs, various segmentation methods have been validated by a number of quantitative accuracy and error measures, such as (1) overlap ratio (a fraction of cardinality of the intersection and the union of voxel sets for a lesion’s segmentation and its GT) [156, 162, 163, 169, 170, 177, 180, 181, 183], (2) percentage voxel error rate (percentage of voxels missegmented with respect to the total number of voxels in a nodule) [160, 163, 172, 180], and (3) percentage volume error rate (percentage of error in volume measurement with respect to the GT’s volume) [154, 162, 176]....

    [...]

  • ...[183] General Discriminative classification Soft segmentation....

    [...]

  • ...[183] proposed a similar soft segmentation method by using a decision-tree classifier with a classification and regression tree (CART) algorithm [266]....

    [...]

  • ...Technical approaches previously reported for volumetric lung nodule segmentation can be roughly classified into the following eleven categories: (1) thresholding [140–144, 146, 154], (2) mathematical morphology [73, 76, 147, 152, 153, 158], (3) region growing [152, 153, 175–178], (4) deformable model [137, 138, 160, 161, 163, 168, 182, 255], (5) dynamic programming [145, 169, 180], (6) spherical/ellipsoidal model fitting [148, 149, 151, 256, 257], (7) probabilistic classification [97, 156, 157, 166, 167, 174, 181], (8) discriminative classification [162, 183], (9) mean shift [150, 151, 170], (10) graph-cuts [172, 173], and (11) watersheds [165]....

    [...]

  • ...Currently two datasets covering many types of nodules with multiple GT segmentations for each case are available through their website [310], which have already been used by many studies since 2005 [162, 163, 169, 176, 177, 180, 183, 186]....

    [...]
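The first quoted passage lists three accuracy measures used to validate nodule segmentations against a ground truth (GT). A minimal sketch of those measures for binary voxel masks is given below; the exact normalisation conventions differ between the cited studies, so treat these as one straightforward reading rather than definitive definitions.

```python
import numpy as np

def overlap_ratio(seg: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index: |seg ∩ gt| / |seg ∪ gt| for binary voxel masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    union = np.logical_or(seg, gt).sum()
    return float(np.logical_and(seg, gt).sum() / union) if union else 1.0

def voxel_error_rate(seg: np.ndarray, gt: np.ndarray) -> float:
    """Percentage of mis-segmented voxels relative to the number of voxels
    in the GT nodule (assumes the GT mask is non-empty)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 100.0 * np.logical_xor(seg, gt).sum() / gt.sum()

def volume_error_rate(seg_volume: float, gt_volume: float) -> float:
    """Percentage error in the measured volume relative to the GT volume."""
    return 100.0 * abs(seg_volume - gt_volume) / gt_volume
```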

Journal ArticleDOI
07 Sep 2017-PLOS ONE
TL;DR: A new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN) is proposed, and the experimental results show that this method rapidly, completely and accurately segments various types of lung nodule image sequences.
Abstract: The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. An adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodule positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, optimized by the strategy of clustering only the lung nodules and an adaptive threshold, is used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences.
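The abstract above combines superpixel over-segmentation with DBSCAN clustering. The generic sketch below shows that combination using off-the-shelf SLIC and scikit-learn's DBSCAN on a single 2-D slice; it only illustrates the idea, not the authors' HMSLIC over-segmentation or their optimized sequence-clustering algorithm, and the feature choice and parameters are assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def superpixel_dbscan(image: np.ndarray, n_segments: int = 400,
                      eps: float = 0.08, min_samples: int = 3) -> np.ndarray:
    """Cluster SLIC superpixels of a grayscale slice (scaled to [0, 1]) with
    DBSCAN, using mean intensity and normalised centroid as features."""
    labels = slic(image, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)
    uniq = np.unique(labels)
    feats = np.array([[image[labels == lab].mean(),
                       np.nonzero(labels == lab)[0].mean() / image.shape[0],
                       np.nonzero(labels == lab)[1].mean() / image.shape[1]]
                      for lab in uniq])
    clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    cluster_map = np.full(labels.shape, -1)
    for lab, cid in zip(uniq, clusters):
        cluster_map[labels == lab] = cid      # -1 marks DBSCAN "noise"
    return cluster_map
```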

94 citations

Proceedings ArticleDOI
01 Apr 2017
TL;DR: This paper presents a detailed literature survey of the various techniques that have been used for pre-processing, nodule segmentation, and classification in lung cancer detection using CT images.
Abstract: Lung cancer is the most common cause of cancer death, and CT is the best modality for imaging lung cancer. A good amount of research work has been carried out in the past toward CAD systems for lung cancer detection using CT images. Such a system is typically divided into four stages: pre-processing or lung segmentation, nodule detection, nodule segmentation, and classification. This paper presents a detailed literature survey of the various techniques that have been used for pre-processing, nodule segmentation, and classification.

17 citations


Cites methods from "A texture-based probabilistic appro..."

  • ...[53] General Soft segmentation with CART algorithm Mean Soft overlap 0....

    [...]

Proceedings ArticleDOI
04 Dec 2013
TL;DR: A CAD system based on multiple computer-derived weak segmentations (WSCAD) is proposed and it is shown that its diagnosis performance is at least as good as the predictions developed using manual radiologist segmentations.
Abstract: Computer-aided diagnosis (CAD) can be used as "second readers" in the imaging diagnostic process. Typically to create a CAD system, the region of interest (ROI) has to be first detected and then delineated. This can be done either manually or automatically. Given that manually delineating ROIs is a time consuming and costly process, we propose a CAD system based on multiple computer-derived weak segmentations (WSCAD) and show that its diagnosis performance is at least as good as the predictions developed using manual radiologist segmentations. The proposed CAD system extracts a set of image features from the weak segmentations and uses them in an ensemble of classification algorithms to predict semantic ratings such as malignancy. These automated results are compared against a reference truth based on ratings and segmentations provided by radiologists to determine if it is necessary to obtain manual radiologist segmentations in order to develop a CAD. By developing a pair of CADs using the Lung Image Database Consortium (LIDC) data, we show that WSCADs are at least as accurate in predicting semantic ratings as CADs based on radiologist segmentation.
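The system above feeds image features extracted from computer-derived weak segmentations into an ensemble of classifiers to predict semantic ratings such as malignancy. The sketch below illustrates that pattern with scikit-learn; the feature matrix is random placeholder data and the particular estimators are my own choice, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression

# Placeholder data: one row per nodule, columns = size/shape/intensity/texture
# features computed from weak segmentations; y = benign (0) vs. malignant (1).
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = rng.integers(0, 2, size=200)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",          # average predicted probabilities across models
)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))   # malignancy probabilities for 3 nodules
```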

16 citations


Cites methods from "A texture-based probabilistic appro..."

  • ...This was used to create a classifier that determined if a particular pixel belonged to the nodule [7]....

    [...]
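The quoted sentence describes the cited approach: a classifier decides, pixel by pixel, whether each pixel belongs to the nodule. Below is a minimal sketch of that idea using scikit-learn's CART-style decision tree on pixel-level feature vectors; the feature set, training data, and tree parameters are placeholders, not the ones used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART-style decision tree

# Placeholder training data: one row per pixel, columns = texture features
# (e.g. intensity, local mean/variance); labels mark whether the pixel lies
# inside a radiologist outline.
rng = np.random.default_rng(1)
X_train = rng.random((5000, 8))
y_train = rng.integers(0, 2, size=5000)

cart = DecisionTreeClassifier(min_samples_leaf=50, random_state=0)
cart.fit(X_train, y_train)

# Probability map for a 64x64 region: per-pixel probability of "nodule".
X_pixels = rng.random((64 * 64, 8))
p_map = cart.predict_proba(X_pixels)[:, 1].reshape(64, 64)
```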

01 Aug 1976
TL;DR: The use of least-squares-fit-to-a-polynomial smoothing of uniformly spaced digital data by convolving the data with a smoothing array is reviewed in this paper.
Abstract: The technique of least-squares-fit-to-a-polynomial smoothing of uniformly spaced digital data by convolving the data with a smoothing array is reviewed. The use of digital computers for this type of numerical filtering and for determining smoothed derivatives was first discussed by Savitzky and Golay (Anal. Chem. 36, 1627 (1964)). The report presents methods for extending the widths of the convolution arrays beyond the 25-point-width maximum of Savitzky and Golay. It also gives corrections to errors in their paper. Three new algebraic equations are derived that can be used to determine the convolution array coefficients for determining the smoothed first, second, and third derivative values by least-squares fitting to a quadratic/cubic polynomial. Two simple tests for determining errors in least-squares-fit convolution arrays are given. The use of these convolution arrays for processing digital data is illustrated by examples that make use of a Rutherford backscattering spectrum from a cobalt molybdate catalyst.
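The report above concerns Savitzky-Golay convolution arrays for smoothing and smoothed derivatives, including window widths beyond the original 25-point tables. The sketch below reproduces that idea with SciPy, which computes the convolution coefficients for arbitrary odd window lengths; the window length and polynomial order shown are arbitrary examples.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

x = np.linspace(0, 4 * np.pi, 501)
noisy = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(x.size)

# Smoothing and smoothed first derivative via a cubic least-squares fit in a
# 51-point window (SciPy is not limited to the original 25-point tables).
smoothed = savgol_filter(noisy, window_length=51, polyorder=3)
d1 = savgol_filter(noisy, window_length=51, polyorder=3, deriv=1, delta=x[1] - x[0])

# The underlying convolution array (coefficients) for the first derivative:
coeffs = savgol_coeffs(51, 3, deriv=1, delta=x[1] - x[0])
```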

12 citations

References
Journal ArticleDOI

17,427 citations


"A texture-based probabilistic appro..." refers methods in this paper

  • ...These p-maps were then passed through a built-in Matlab implementation of a Savitzky-Golay filter [14]. This filter reduces the impact of noise in an image by moving a frame of a specified size over each column of an image and performing a polynomial regression on the pixels in that frame....

    [...]

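The quoted passage says the p-maps were smoothed column by column with Matlab's built-in Savitzky-Golay filter. A rough Python equivalent is sketched below using SciPy with the filter applied along the column axis; the frame size and polynomial order are illustrative assumptions, since the paper's exact settings are not quoted here.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_pmap(p_map: np.ndarray, frame: int = 7, order: int = 2) -> np.ndarray:
    """Apply a Savitzky-Golay filter down each column of a probability map,
    mirroring Matlab's sgolayfilt behaviour on matrices. The frame size and
    polynomial order are illustrative, not the paper's values; the map must
    have at least `frame` rows."""
    smoothed = savgol_filter(p_map, window_length=frame, polyorder=order, axis=0)
    return np.clip(smoothed, 0.0, 1.0)   # keep values in the valid probability range
```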

Journal ArticleDOI
TL;DR: Through a consensus process in which careful planning and proper consideration of fundamental issues have been emphasized, the LIDC database is expected to provide a powerful resource for the medical imaging research community.
Abstract: To stimulate the advancement of computer-aided diagnostic (CAD) research for lung nodules in thoracic computed tomography (CT), the National Cancer Institute launched a cooperative effort known as the Lung Image Database Consortium (LIDC). The LIDC is composed of five academic institutions from across the United States that are working together to develop an image database that will serve as an international research resource for the development, training, and evaluation of CAD methods in the detection of lung nodules on CT scans. Prior to the collection of CT images and associated patient data, the LIDC has been engaged in a consensus process to identify, address, and resolve a host of challenging technical and clinical issues to provide a solid foundation for a scientifically robust database. These issues include the establishment of (a) a governing mission statement, (b) criteria to determine whether a CT scan is eligible for inclusion in the database, (c) an appropriate definition of the term "qualifying nodule," (d) an appropriate definition of "truth" requirements, (e) a process model through which the database will be populated, and (f) a statistical framework to guide the application of assessment methods by users of the database. Through a consensus process in which careful planning and proper consideration of fundamental issues have been emphasized, the LIDC database is expected to provide a powerful resource for the medical imaging research community. This article is intended to share with the community the breadth and depth of these key issues.

386 citations


"A texture-based probabilistic appro..." refers methods in this paper

  • ...Many algorithms are trained on data from the Lung Image Database Consortium (LIDC)[2], which provides a reference truth based on the contours marked by four radiologists....

    [...]
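The LIDC reference truth mentioned above comes from contours drawn by four radiologists. One common way to turn those outlines into a probabilistic reference, consistent with the soft-segmentation theme of the main paper, is to take the per-pixel fraction of readers who included the pixel; the sketch below shows that construction, with this specific convention being an assumption rather than something stated in the quoted text.

```python
import numpy as np

def reference_pmap(radiologist_masks: list) -> np.ndarray:
    """Combine binary outlines from several radiologists (e.g. the four LIDC
    readers) into a reference probability map: each pixel's value is the
    fraction of readers who included it in their outline."""
    stack = np.stack([np.asarray(m, dtype=float) for m in radiologist_masks], axis=0)
    return stack.mean(axis=0)

# Usage with four hypothetical 64x64 binary masks:
masks = [np.zeros((64, 64), dtype=np.uint8) for _ in range(4)]
pmap = reference_pmap(masks)
```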

Journal ArticleDOI
TL;DR: An efficient algorithm for segmenting different types of pulmonary nodules including high and low contrast nodules, nodules with vasculature attachment, and nodules in the close vicinity of the lung wall or diaphragm is presented.
Abstract: This paper presents an efficient algorithm for segmenting different types of pulmonary nodules, including high- and low-contrast nodules, nodules with vasculature attachment, and nodules in the close vicinity of the lung wall or diaphragm. The algorithm performs an adaptive, sphericity-oriented contrast region growing on the fuzzy connectivity map of the object of interest. This region growing is operated within a volumetric mask which is created by first applying a local adaptive segmentation algorithm that identifies foreground and background regions within a certain window size. The foreground objects are then filled to remove any holes, and a spatial connectivity map is generated to create a 3-D mask. The mask is then enlarged to contain the background while excluding unwanted foreground regions. Apart from generating a confined search volume, the mask is also used to estimate the parameters for the subsequent region growing, as well as for repositioning the seed point in order to ensure reproducibility. The method was run on 815 pulmonary nodules. By using randomly placed seed points, the approach was shown to be fully reproducible. As for acceptability, the segmentation results were visually inspected by a qualified radiologist to search for any gross mis-segmentation. 84% of the initial segmentation results were accepted by the radiologist, while for the remaining 16% of nodules, alternative segmentation solutions provided by the method were selected.
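The algorithm above grows a region on a fuzzy connectivity map with adaptive, sphericity-oriented contrast criteria inside a volumetric mask. As a much simpler point of reference, the sketch below implements plain 6-connected, intensity-tolerance region growing from a seed voxel; it is not the cited adaptive algorithm, only the basic operation it builds on.

```python
import numpy as np
from collections import deque

def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Plain 6-connected region growing: include neighbours whose intensity is
    within `tol` of the seed intensity. A simplified stand-in for the adaptive,
    sphericity-oriented growing described in the paper."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```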

251 citations


"A texture-based probabilistic appro..." refers methods in this paper

  • ...Dehmeshki et al. employed region growing and fuzzy connectivity and evaluated segmentation results subjectively with the help of radiologists [6]....

    [...]

Journal ArticleDOI
TL;DR: A computer system was developed that automatically identifies nodules at chest computed tomography, quantifies their diameter, and assesses change in size at follow-up; its assessment of nodule size change matched that of the thoracic radiologist (Spearman rank correlation coefficient, 0.932).
Abstract: The authors developed a computer system that automatically identifies nodules at chest computed tomography, quantifies their diameter, and assesses change in size at follow-up. The automated nodule detection system identified 318 (86%) of 370 nodules in 16 studies (eight initial and eight follow-up studies) obtained in eight oncology patients with known nodules. Assessment of change in nodule size by the computer matched that by the thoracic radiologist (Spearman rank correlation coefficient, 0.932).
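The agreement figure quoted above is a Spearman rank correlation between computer and radiologist assessments of nodule size change. The sketch below shows how such a coefficient is computed with SciPy on made-up paired measurements; the numbers are placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical size changes (mm) measured by the computer and by the
# thoracic radiologist for the same set of nodules.
computer = np.array([1.2, -0.4, 3.1, 0.0, 2.2, 5.4, -1.0, 0.8])
radiologist = np.array([1.0, -0.2, 2.8, 0.1, 2.5, 5.0, -1.3, 0.7])

res = spearmanr(computer, radiologist)
print(f"Spearman rho = {res.correlation:.3f} (p = {res.pvalue:.3g})")
```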

222 citations


"A texture-based probabilistic appro..." refers background in this paper

  • ...An effective way of measuring the malignancy of a lung nodule is by taking repeated computed tomography (CT) scans at intervals of several months and measuring the change in the nodule’s volume[1]....

    [...]
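The quoted sentence motivates measuring nodule volume change between repeated CT scans. A standard growth summary for such measurements is the exponential volume doubling time; the short sketch below computes it as an illustrative aside, not the exact metric used in the cited study [1].

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days_between: float) -> float:
    """Exponential-growth volume doubling time (days) between two CT
    measurements; an illustrative growth measure, assuming v2 > v1 > 0."""
    return days_between * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Example: a nodule growing from 500 mm^3 to 650 mm^3 over 90 days
print(round(volume_doubling_time(500.0, 650.0, 90.0), 1))
```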