Author

M. Nelson

Bio: M. Nelson is an academic researcher from Aerojet Rocketdyne. The author has contributed to research on the topics of digital image processing and image segmentation, has an h-index of 3, and has co-authored 4 publications receiving 1,635 citations.

Papers
Journal ArticleDOI
TL;DR: The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in retinal images, and the results are compared to those obtained with other methods.
Abstract: Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yields results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates that are used to search for vessel segments along all possible directions are constructed. Various issues related to the implementation of these matched filters are discussed. The results are compared to those obtained with other methods.
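As a concrete illustration of the approach this abstract describes, the sketch below builds a zero-mean template whose cross-section is a Gaussian, rotates it over 12 orientations, and keeps the maximum response per pixel. It assumes NumPy and SciPy; the width, kernel length, and threshold are illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np
from scipy import ndimage

def matched_filter_response(image, sigma=2.0, length=9, n_angles=12):
    """Maximum response over rotated Gaussian matched-filter templates
    (illustrative parameters, not those of the paper)."""
    # Base template: inverted Gaussian profile across the vessel,
    # repeated along the vessel direction, then made zero-mean.
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-x**2 / (2 * sigma**2))   # vessels are darker than background
    kernel = np.tile(profile, (length, 1))
    kernel -= kernel.mean()

    responses = []
    for k in range(n_angles):
        angle = k * 180.0 / n_angles            # 12 orientations in 15-degree steps
        rotated = ndimage.rotate(kernel, angle, reshape=True)
        responses.append(ndimage.convolve(image.astype(float), rotated))
    return np.max(responses, axis=0)

# Usage: threshold the maximum response to obtain a binary vessel map, e.g.
# vessels = matched_filter_response(fundus_green_channel) > threshold
```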

1,692 citations

Proceedings ArticleDOI
08 Jun 1988
TL;DR: Algorithms used to identify markedly different objects, and to distinguish between objects that appear very similar to the trained eye, are discussed; their implementation has been very successful when applied to color images of the retina.
Abstract: We are developing a system designed around an IBM PC-AT to perform automatic diagnosis of diseases from images of the retina. The system includes hardware for color image capture and display. We are developing software for performing image enhancement, image analysis, pattern recognition and artificial intelligence. The design goal of the system is to automatically segment a digitized photograph of the retina into its normal and abnormal structures, identifying these objects by various features such as color, size, shape, texture, orientation, etc., and ultimately to provide a list of possible diagnoses with varying degrees of probability. We will discuss algorithms used to identify markedly different objects and to distinguish between those objects which appear very similar to the trained eye. Implementation of these algorithms, which are typically applied to areas such as remote sensing, terrain mapping and robotics, has been very successful when applied to color images of the retina.
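The object identification step described above relies on features such as color, size, shape, and orientation. Below is a minimal sketch of that kind of per-object feature extraction, assuming scikit-image and a labeled segmentation as input; the feature set is illustrative, not the one used in the original system.

```python
import numpy as np
from skimage import measure

def region_features(label_image, rgb_image):
    """Per-object size, shape, orientation, and mean-colour features,
    of the kind listed in the abstract (illustrative feature set)."""
    features = []
    for region in measure.regionprops(label_image):
        mask = label_image == region.label
        features.append({
            "label": region.label,
            "area": region.area,                         # size
            "eccentricity": region.eccentricity,         # shape
            "orientation": region.orientation,           # radians
            "mean_color": rgb_image[mask].mean(axis=0),  # average RGB
        })
    return features

# Usage: the label image could come from any segmentation step, e.g.
# label_image = measure.label(binary_mask)
# feats = region_features(label_image, fundus_rgb)
```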

35 citations

Proceedings Article
08 Nov 1989
TL;DR: For the final step in interpreting the image, a backpropagation neural network is found to be able to learn to diagnose a set of diseases from the type of information in the coded description of the image.
Abstract: Interpretation of images of the ocular fundus by the STARE (STructured Analysis of the REtina) system requires many steps, including image enhancement, object segmentation, object identification, and scene analysis. We describe how these steps are performed and linked, and we demonstrate some success with the STARE system in each of these steps. We are currently able to segment the blood vessels, optic nerve, fovea, bright lesions, and dark lesions automatically. We describe the methods for these tasks and the development underway to complete the production of a database of objects that forms a coded description of the image. For the final step in interpreting the image, we found the backpropagation neural network to be able to learn to diagnose a set of diseases from the type of information in the coded description of the image.
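The final diagnosis step in this abstract uses a backpropagation neural network trained on the coded description of the image. The sketch below is a generic one-hidden-layer backpropagation network in NumPy, not the STARE network itself; the layer sizes, learning rate, and squared-error loss are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, Y, n_hidden=8, lr=0.5, epochs=5000):
    """Plain backpropagation for a one-hidden-layer sigmoid network.
    X: (n_samples, n_features) coded image descriptions.
    Y: (n_samples, n_diseases) binary disease targets."""
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                   # forward pass
        P = sigmoid(H @ W2)
        dP = (P - Y) * P * (1 - P)            # backward pass, squared-error loss
        dH = (dP @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dP / len(X)          # gradient-descent updates
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)      # per-disease scores in [0, 1]
```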

32 citations


Cited by
Journal ArticleDOI
TL;DR: A method is presented for automated segmentation of vessels in two-dimensional color images of the retina based on extraction of image ridges, which coincide approximately with vessel centerlines, which is compared with two recently published rule-based methods.
Abstract: A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN-classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p<0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer.
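The pixel classification step described above pairs a kNN classifier with sequential forward feature selection. A minimal sketch of that combination using scikit-learn is given below; the random feature matrix, neighbour count, and number of selected features are stand-ins, not the values from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Stand-in data: rows are pixels, columns are ridge/patch-based features,
# labels mark vessel (1) versus background (0) pixels.
rng = np.random.default_rng(0)
X = rng.random((2000, 12))
y = rng.integers(0, 2, size=2000)

knn = KNeighborsClassifier(n_neighbors=11)

# Sequential forward feature selection, here via scikit-learn's generic
# selector rather than the authors' own implementation.
selector = SequentialFeatureSelector(knn, n_features_to_select=6,
                                     direction="forward")
X_selected = selector.fit_transform(X, y)

knn.fit(X_selected, y)
vessel_probability = knn.predict_proba(X_selected)[:, 1]  # soft vessel map
```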

3,416 citations

Journal ArticleDOI
TL;DR: An automated method to locate and outline blood vessels in images of the ocular fundus that uses local and global vessel features cooperatively to segment the vessel network is described.
Abstract: Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.
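The evaluation in this abstract is expressed as true and false positive rates of a binary segmentation against a hand-labeled ground truth. Below is a minimal sketch of how such rates can be computed, and how sweeping a threshold over a matched filter response (MFR) traces an operating characteristic; the variable names are assumptions for illustration.

```python
import numpy as np

def tpr_fpr(prediction, ground_truth):
    """True/false positive rates of a binary vessel map against a
    hand-labeled ground truth segmentation."""
    pred = prediction.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.sum(pred & gt)                 # vessel pixels correctly marked
    fp = np.sum(pred & ~gt)                # background pixels marked as vessel
    tpr = tp / max(gt.sum(), 1)
    fpr = fp / max((~gt).sum(), 1)
    return tpr, fpr

# Sweeping a threshold over a matched filter response (mfr) and collecting
# (fpr, tpr) pairs traces the operating characteristic discussed above:
# points = [tpr_fpr(mfr > t, gt) for t in np.linspace(mfr.min(), mfr.max(), 50)]
```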

2,206 citations

Journal ArticleDOI
TL;DR: The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in retinal images, and the results are compared to those obtained with other methods.
Abstract: Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yields results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates that are used to search for vessel segments along all possible directions are constructed. Various issues related to the implementation of these matched filters are discussed. The results are compared to those obtained with other methods.

1,692 citations

Journal ArticleDOI
TL;DR: In this paper, a method for automated segmentation of the vasculature in retinal images is presented, which produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector.
Abstract: We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that of state-of-the-art approaches. We are making our implementation available as open-source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
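The classifier described above is a Bayesian classifier whose class-conditional likelihoods are Gaussian mixtures. The sketch below shows that construction with scikit-learn; the feature matrix is assumed to hold pixel intensity plus multi-scale Gabor responses, and the number of mixture components is an illustrative choice, not the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_bayes(X, y, n_components=4):
    """Fit one Gaussian mixture per class (vessel / non-vessel) as the
    class-conditional likelihood, plus empirical class priors."""
    models, priors = {}, {}
    for c in np.unique(y):
        gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        gmm.fit(X[y == c])
        models[c] = gmm
        priors[c] = np.mean(y == c)
    return models, priors

def predict_gmm_bayes(X, models, priors):
    """Assign each pixel to the class with the largest posterior,
    comparing log-likelihood plus log-prior across classes."""
    classes = sorted(models)
    scores = np.stack([models[c].score_samples(X) + np.log(priors[c])
                       for c in classes], axis=1)
    return np.array(classes)[np.argmax(scores, axis=1)]
```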

1,435 citations