
Showing papers on "Edge detection published in 1981"


Journal ArticleDOI
TL;DR: This survey summarizes proposed techniques for biomedical image segmentation, which fall into the categories of characteristic feature thresholding or clustering and edge detection.

1,160 citations


Journal ArticleDOI
TL;DR: In this correspondence, an operator is derived that finds the best oriented plane at each point in the image, which complements other approaches that are either interactive or heuristic extensions of 2-D techniques.
Abstract: Modern scanning techniques, such as computed tomography, have begun to produce true three-dimensional imagery of internal structures. The first stage in finding structure in these images, like that for standard two-dimensional images, is to evaluate a local edge operator over the image. If an edge segment in two dimensions is modeled as an oriented unit line segment that separates unit squares (i.e., pixels) of different intensities, then a three-dimensional edge segment is an oriented unit plane that separates unit volumes (i.e., voxels) of different intensities. In this correspondence we derive an operator that finds the best oriented plane at each point in the image. This operator, which is based directly on the 3-D problem, complements other approaches that are either interactive or heuristic extensions of 2-D techniques.

272 citations


Journal ArticleDOI
01 Sep 1981
TL;DR: A method of evaluating edge detector output based on the local good form of the detected edges; it combines two desirable qualities of well-formed edges, good continuation and thinness, and has the advantage of not requiring ideal edge positions to be known.
Abstract: A method of evaluating edge detector output is proposed, based on the local good form of the detected edges. It combines two desirable qualities of well-formed edges-good continuation and thinness. It yields results generally similar to those obtained with measures based on discrepancy of the detected edges from their known ideal positions, but it has the advantage of not requiring ideal positions to be known. It can be used as an aid to threshold selection in edge detection (pick the threshold that maximizes the measure), as a basis for comparing the performances of different detectors, and as a measure of the effectiveness of various types of preprocessing operations facilitating edge detection.
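As a sketch of how a form-based measure can guide threshold selection, the NumPy fragment below scores an edge map by a thinness criterion (fraction of edge pixels with at most two edge-pixel 8-neighbours) and picks the gradient threshold that maximizes the score. The thinness definition here is our own illustrative stand-in, not the measure defined in the paper, and the good-continuation term is omitted for brevity.

```python
import numpy as np

def neighbor_count(e):
    # For each pixel, count edge pixels among its 8-neighbours.
    p = np.pad(e.astype(int), 1)
    h, w = e.shape
    return sum(p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def thinness(edge_map):
    # Fraction of edge pixels with at most 2 edge neighbours: thin,
    # well-formed curves score near 1, thick blobs score near 0.
    e = edge_map.astype(bool)
    if not e.any():
        return 0.0
    return float((neighbor_count(e)[e] <= 2).mean())

def best_threshold(gradient_mag, thresholds):
    # Pick the threshold whose edge map maximizes the quality measure.
    scores = [thinness(gradient_mag > t) for t in thresholds]
    return thresholds[int(np.argmax(scores))]
```

The same selection loop works with any no-reference quality measure in place of `thinness`, which is the point the paper makes: no ground-truth edge positions are needed.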

170 citations


Journal ArticleDOI
TL;DR: Hierarchical template matching as discussed by the authors allows both a savings in computation time (by a problem-dependent amount) and a considerable degree of insensitivity to noise, a robustness that would be difficult to enforce, and even to express, in ordinary template matching.

107 citations


Journal ArticleDOI
TL;DR: Three-dimensional edge detectors applicable to multidimensional arrays of data-e.g., three-dimensional arrays obtained by reconstruction from projections-by locally fitting hypersurfaces to the data are defined.
Abstract: One way to define operators for detecting edges in digital images is to fit a surface (plane, quadric,...) to a neighborhood of each image point and take the magnitude of the gradient of the surface as an estimate of the rate of change of gray level in the image at that point. This approach is extended to define edge detectors applicable to multidimensional arrays of data-e.g., three-dimensional arrays obtained by reconstruction from projections-by locally fitting hypersurfaces to the data. The resulting operators, for hypersurfaces of degree 1 or 2, are closely analogous to those in the two-dimensional case. Examples comparing some of these three-dimensional operators with their two-dimensional counterparts are given.
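The surface-fitting idea is easy to sketch in the 2-D, degree-1 case: fit a plane a + b*x + c*y to each neighbourhood by least squares and take |(b, c)| as the gradient estimate. This is an illustrative NumPy version (neighbourhood size and border handling are our choices, not the authors'); the hypersurface extension to 3-D arrays follows the same pattern with one more coordinate.

```python
import numpy as np

def plane_fit_gradient(image, radius=1):
    # Least-squares fit of a plane a + b*x + c*y to each (2r+1)^2
    # neighbourhood; |(b, c)| estimates the local rate of change.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    norm = (xs * xs).sum()          # equals (ys * ys).sum() by symmetry
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = image[i - radius:i + radius + 1,
                          j - radius:j + radius + 1]
            b = (xs * patch).sum() / norm   # d/dx coefficient
            c = (ys * patch).sum() / norm   # d/dy coefficient
            out[i, j] = np.hypot(b, c)      # gradient magnitude
    return out
```

With radius 1 the fitted slopes reduce, up to a constant factor, to Prewitt-style differences, which is why these fitted operators look so similar to the classical masks.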

100 citations


Journal ArticleDOI
TL;DR: A detailed study of some sequential detectors of change in mean is presented, including comparisons based on simulations and new theoretical results concerning the best of them.
Abstract: The first part of this paper has been devoted to the presentation of a sequential edge detection algorithm, the local detector of which uses sequential estimators of change in mean grey level. A detailed study of some sequential detectors of change in mean is presented in this second part; the study includes comparisons based on simulations and new theoretical results concerning the best of these detectors.

87 citations


Journal ArticleDOI
TL;DR: In this paper, a sequential edge detection algorithm is presented, using a line-by-line detector of edge elements connected to a recursive edge-following scheme; the edge-following problem is solved by a Kalman filter, the state model corresponding to a noisy straight line.
Abstract: A sequential algorithm for edge detection using a line-by-line detector of edge elements connected to a recursive edge-following scheme is presented. On each line, edge elements are detected by means of a filtering operation in order to follow the slow variations of the gray level and some sequential and recursive estimators for locating jumps in this level. The edge-following problem is solved by a Kalman filter, the state model corresponding to a noisy straight line. In this first part, the complete edge detection algorithm is presented after a brief survey of edge detection methods available in the literature. Two main examples of applications are given: detection of white and black targets in the landscape in order to perform automatic driving of vehicles and detection of blood vessels in stereographic images of the brain. In the second part, a detailed study of the sequential estimators for change in mean, which are used in the line-by-line detection, will be found.
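The edge-following step can be illustrated with a textbook constant-slope Kalman filter: the state [position, slope] models a noisy straight edge running down the image, and each row's detected edge column is a noisy measurement of the position. This is a generic sketch of the idea, not the authors' filter; the noise variances are arbitrary illustrative values.

```python
import numpy as np

def follow_edge(measurements, meas_var=4.0, process_var=0.01):
    # Track an edge's column position down an image, one row at a time.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # position += slope per row
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = process_var * np.eye(2)              # model (line) noise
    R = np.array([[meas_var]])               # detector noise
    x = np.array([measurements[0], 0.0])     # initial [position, slope]
    P = np.eye(2) * 10.0                     # initial uncertainty
    track = [float(x[0])]
    for z in measurements[1:]:
        x = F @ x                            # predict next row
        P = F @ P @ F.T + Q
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()              # correct with measurement
        P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0]))
    return track
```

For an exactly straight edge the constant-velocity model matches the data, so the filtered track converges onto the line while smoothing out detector noise on each row.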

68 citations


Journal ArticleDOI
TL;DR: Hueckel's edge detector finds the best-fitting ideal step edge to a given picture neighborhood by expanding the neighborhood and step edge in terms of a set of nine basis functions.
Abstract: Hueckel's edge detector finds the best-fitting ideal step edge to a given picture neighborhood by expanding the neighborhood and step edge in terms of a set of nine basis functions. A very simple case of this approach uses a 2 × 2 neighborhood and three basis functions. This case is solved explicitly using elementary methods. The magnitude of the best-fitting step edge for the 2 × 2 neighborhood with top row A, B and bottom row C, D turns out to be the Roberts operator max(|A - D|, |B - C|).
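The closed-form result is easy to state in code: over every 2 x 2 neighbourhood with top row A, B and bottom row C, D, the best-fitting step-edge magnitude is max(|A - D|, |B - C|), i.e. the Roberts operator. A small NumPy sketch:

```python
import numpy as np

def roberts_magnitude(image):
    # Best-fitting step-edge magnitude on each 2x2 neighbourhood
    #   A B
    #   C D
    # which (per the paper) equals max(|A - D|, |B - C|).
    A = image[:-1, :-1]
    B = image[:-1, 1:]
    C = image[1:, :-1]
    D = image[1:, 1:]
    return np.maximum(np.abs(A - D), np.abs(B - C))
```

The output is one value per 2 x 2 window, so it is one row and one column smaller than the input image.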

62 citations


Journal ArticleDOI
TL;DR: A comparative study of generalized cooccurrence texture analysis tools is presented and three experiments are discussed - the first based on a nearest neighbor classifier, the second on a linear discriminant classifier, and the third on the Bhattacharyya distance figure of merit.
Abstract: A comparative study of generalized cooccurrence texture analysis tools is presented. A generalized cooccurrence matrix (GCM) reflects the shape, size, and spatial arrangement of texture features. The particular texture features considered in this paper are 1) pixel-intensity, for which generalized cooccurrence reduces to traditional cooccurrence; 2) edge-pixel; and 3) extended-edges. Three experiments are discussed - the first based on a nearest neighbor classifier, the second on a linear discriminant classifier, and the third on the Bhattacharyya distance figure of merit.
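The pixel-intensity case, to which the generalized matrix reduces, can be sketched directly: count how often grey level i occurs at a fixed offset from grey level j. The offset convention and unnormalized counts below are our choices, not the paper's.

```python
import numpy as np

def cooccurrence(image, dy, dx, levels):
    # Traditional grey-level cooccurrence matrix: image must hold
    # integer labels in [0, levels). gcm[i, j] counts pairs where
    # level i has level j at offset (dy, dx).
    image = np.asarray(image)
    gcm = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                gcm[image[y, x], image[y2, x2]] += 1
    return gcm
```

The generalized version of the paper replaces "grey level at a pixel" with richer features (edge pixels, extended edges); the counting machinery stays the same.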

61 citations


Proceedings ArticleDOI
12 Nov 1981
TL;DR: A method of evaluating edge detector output based on the local good form of the detected edges; it combines two desirable qualities of well-formed edges -- good continuation and thinness -- and yields results generally similar to those obtained with measures based on discrepancy from known ideal positions.
Abstract: A method of evaluating edge detector output is proposed, based on the local good form of the detected edges. It combines two desirable qualities of well-formed edges -- good continuation and thinness. The measure has the expected behavior for known input edges as a function of their blur and noise. It yields results generally similar to those obtained with measures based on discrepancy of the detected edges from their known ideal positions, but it has the advantage of not requiring ideal positions to be known. It can be used as an aid to threshold selection in edge detection (pick the threshold that maximizes the measure), as a basis for comparing the performances of different detectors, and as a measure of the effectiveness of various types of preprocessing operations facilitating edge detection.

60 citations


Journal ArticleDOI
TL;DR: This paper presents a method developed by the authors that performs well on a large class of targets and uses ROC curves to compare this method with other well-known edge detection operators, with favorable results.
Abstract: Edge detection in the presence of noise is a well-known problem. This paper examines an applications-motivated approach for solving the problem using novel techniques and presents a method developed by the authors that performs well on a large class of targets. ROC curves are used to compare this method with other well-known edge detection operators, with favorable results. A theoretical argument is presented that favors LMMSE filtering over median filtering in extremely noisy scenes. Simulated results of the research are presented.

01 Dec 1981
TL;DR: This thesis presents an algorithm for detecting man-made objects embedded in low resolution imagery, using a modified Kirsch edge operator for initial image enhancement and a normal Kirsch operator for edge detection.
Abstract: This thesis presents an algorithm for detecting man-made objects embedded in low resolution imagery. A modified Kirsch edge operator is used for initial image enhancement. A normal Kirsch operator is then used for edge detection. A two-dimensional threshold for edges and the original intensity detects the pixels on the edges of the objects only. These pixels are then subjected to connectedness and size tests to detect the blobs which most probably represent man-made objects. The algorithm was tried on 325 pictures and a detection probability of 83.3% was achieved. False alarm probability was less than 10%. (Author)
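For reference, the standard Kirsch compass operator takes the maximum response over eight rotations of a single 3 x 3 mask. Below is a NumPy sketch of the normal (unmodified) operator; the border handling is our own choice, and the thesis's modified variant is not reproduced here.

```python
import numpy as np

def kirsch(image):
    # Kirsch compass operator: eight rotations of the mask
    #    5  5  5
    #   -3  0 -3
    #   -3 -3 -3
    # the response at each pixel is the maximum over orientations.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise 8-neighbours
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3])
    h, w = image.shape
    pad = np.pad(image.astype(float), 1, mode='edge')
    # neigh[k] holds, for every pixel, the value of its k-th ring neighbour
    neigh = np.stack([pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                      for dy, dx in ring])
    out = np.full((h, w), -np.inf)
    for rot in range(8):
        weights = np.roll(base, rot)            # rotate the mask
        out = np.maximum(out, np.tensordot(weights, neigh, axes=1))
    return out
```

On a constant region all eight responses are zero (the weights sum to zero), so the operator fires only where grey level actually changes.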

Proceedings ArticleDOI
12 Nov 1981
TL;DR: In this article, an intermediate level vision system that uses grey scale levels (typically 8 bits or 256 levels in our case) has been implemented which locates and links intensity discontinuities in a digitized image to subpixel precision.
Abstract: An intermediate level vision system that utilises grey scale levels (typically 8 bits, or 256 levels, in our case) has been implemented which locates and links intensity discontinuities in a digitized image to subpixel precision. The discontinuities are located and localised by utilizing the zero crossings in the laterally inhibited image of the digitized picture.
Introduction: As has recently been pointed out in earlier work in the literature (Nevatia & Babu 1978), the effectiveness of many machine vision systems is often limited by the low level processing that constitutes the first stage of the system. Typically, this stage consists of operations such as edge detection, thinning, thresholding, and linking - in other words, line finding. Given this current state of the art and the inspiration of earlier MIT work (Binford-Horn 1972, Binford 1970, Herskovitz & Binford 1970), we have undertaken the task of seeking an improvement to this stage of vision systems. The processing reported here, a simplification of the Binford-Horn approach, differs markedly from most systems reported in the literature but has similarities to some
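The zero-crossing scheme can be sketched generically: apply a centre-surround (laterally inhibiting) filter and mark sign changes between neighbouring responses. The 3 x 3 centre-minus-local-mean filter below is a crude stand-in for the paper's inhibition filter, and this sketch stops at pixel precision rather than the subpixel localisation described above.

```python
import numpy as np

def lateral_inhibition(image):
    # Centre-surround response: each pixel minus the mean of its 3x3
    # neighbourhood (including itself). Flat regions give 0; the
    # response changes sign across an intensity discontinuity.
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    p = np.pad(image, 1, mode='edge')
    surround = sum(p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return image - surround

def zero_crossings(response):
    # Mark pixels where the sign flips against the right or lower
    # neighbour; these lie on the inhibited image's zero crossings.
    zc = np.zeros(response.shape, dtype=bool)
    zc[:, :-1] |= response[:, :-1] * response[:, 1:] < 0
    zc[:-1, :] |= response[:-1, :] * response[1:, :] < 0
    return zc
```

Subpixel precision, as in the paper, would come from interpolating where the response actually passes through zero between the two flagged samples.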

Journal ArticleDOI
TL;DR: This paper introduces two new edge detection algorithms: one uses multiple difference-based edge detectors and selects the peak center by absolute maximum or center-of-mass techniques, and the other translates intensity images into three-state images and uses multiple three-state edge masks to find edge positions.
Abstract: This paper introduces two new edge detection algorithms. One uses multiple difference-based edge detectors. This scheme selects peak center by absolute maximum or center of mass techniques. The other algorithm is motivated by the observation that second-order enhancement improves human contour extraction, but generally confuses difference-based edge detectors. This algorithm translates intensity images into three state images (plus one, zero, and minus one), then uses multiple three-state edge masks to find edge positions. The second scheme has a multiple hardware implementation and interesting biological analogs. Finally, the two operators introduced are compared to some popular edge detection techniques from the literature.
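The three-state idea is simple to sketch: quantise an enhanced image into {+1, 0, -1} with a dead zone, then correlate with three-state masks, scoring a match wherever mask and image agree. The threshold and example mask below are illustrative choices, not the paper's.

```python
import numpy as np

def three_state(image, tau):
    # Quantise an image into plus one / zero / minus one with
    # dead zone [-tau, tau].
    image = np.asarray(image, dtype=float)
    t = np.zeros(image.shape, dtype=int)
    t[image > tau] = 1
    t[image < -tau] = -1
    return t

def mask_response(tri, mask):
    # Correlate a three-state image with a three-state mask; since all
    # entries are in {-1, 0, +1}, agreement raises the score and
    # disagreement lowers it.
    mh, mw = mask.shape
    h, w = tri.shape
    out = np.zeros((h - mh + 1, w - mw + 1), dtype=int)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = int((tri[y:y + mh, x:x + mw] * mask).sum())
    return out
```

Because the quantised values are only -1, 0, or +1, the correlation needs no multiplies in hardware, which is the "multiple hardware implementation" advantage the abstract alludes to.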

Book ChapterDOI
01 Jan 1981
TL;DR: A number of methods grouped as local, regional, global, sequential, heuristic, dynamic and using relaxation are schematically illustrated with an attempt to give some insight to the ideas which originated each approach.
Abstract: The problem of extracting an edge in real images is briefly described. A number of methods grouped as local, regional, global, sequential, heuristic, dynamic and using relaxation are schematically illustrated, with an attempt to give some insight into the ideas which originated each approach. Evaluation of results and critical aspects of some methods, according to certain authors, are also included, as well as some hints on the latest research trends in this area.

Journal ArticleDOI
TL;DR: A fast boundary finding algorithm is presented which works without threshold operation and without any interactive control and can be easily adapted to other problems by modification of a set of parameters.

Journal ArticleDOI
TL;DR: Two approaches to organ detection in abdominal computerized tomography scans, one local and one global, are developed, which involve an iterative, adaptive boundary-delineating algorithm suitable for single organ detection and amenable to the incorporation of organ-specific knowledge.

Proceedings ArticleDOI
01 Apr 1981
TL;DR: This paper provides a capsular introduction to the theoretical framework and experimental applications of the Polar Exponential Grid (PEG) transformation in the context of image analysis, and presents the PEG transform as a motif for a class of problems in stochastic estimation of object boundaries.
Abstract: This paper provides a capsular introduction to the theoretical framework and experimental applications of the Polar Exponential Grid (PEG) transformation, in the context of image analysis. The PEG transformation is an isomorphic representation of the image intensity array that simplifies, and potentially offers new insights about, a variety of tasks in computational vision. We describe the PEG transform representation and briefly survey its functional precursors in optical computing and image processing. We then give an example of PEG-based image analysis for rotation-and-scale variant template matching and present the PEG transform as a motif for a class of problems in stochastic estimation of object boundaries.
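The PEG representation is in essence a log-polar resampling of the image: rotations about the centre become shifts along the angular axis, and scalings become shifts along the log-radius axis, which is what simplifies template matching under those transformations. A minimal nearest-neighbour sketch (grid size and centring are our own choices):

```python
import numpy as np

def polar_exponential_grid(image, n_rho=32, n_theta=64):
    # Resample an image onto a log-polar grid centred on the image
    # centre, by nearest-neighbour lookup. Output axes: log-radius
    # (rows) and angle (columns).
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))      # radii 1..r_max
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.clip(np.round(cy + rho[:, None] * np.sin(theta)).astype(int),
                 0, h - 1)
    xs = np.clip(np.round(cx + rho[:, None] * np.cos(theta)).astype(int),
                 0, w - 1)
    return image[ys, xs]          # shape (n_rho, n_theta)
```

A production version would interpolate rather than round, since the exponential grid samples the periphery sparsely and the centre densely.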

Journal ArticleDOI
TL;DR: A method is described whereby the edges of computerised radionuclide images can be located by the application of an edge detection algorithm.
Abstract: A method is described whereby the edges of computerised radionuclide images can be located by the application of an edge detection algorithm. Results obtained using computer simulated phantoms and radioactive distributions demonstrate the potential accuracy of incorporating such an algorithm into clinical studies.

Journal ArticleDOI
TL;DR: Davis and Mitiche as mentioned in this paper analyzed the effects of neighborhood size on the computation of local maxima and showed that only small neighborhoods are required to attain reliable local maxima selection, which is consistent with experience with real images.

Journal ArticleDOI
TL;DR: An algorithm is given for generating linked edge boundaries between adjacent regions of different gray levels and relies heavily upon a one-dimensional edge detector that defers the formation of local edge interpretations until more informed decisions can be made by the edge linking procedure.

Proceedings ArticleDOI
01 Apr 1981
TL;DR: A new 3×3 edge operator is presented, based on a suitable classification of binary configurations and corresponding decision table, and the fast implementation through the direct comparison with the reference table for the 256 configurations is described.
Abstract: A new 3×3 edge operator is presented, based on a suitable classification of binary configurations and corresponding decision table. It is shown how, setting the 8 pixels around the central one in a 3×3 block to binary level through comparison-decisions, the resulting 256 configurations are classified in two groups: those corresponding to estimate the central pixel as part of an edge or boundary and the other ones. The fast implementation of the operator through the direct comparison with the reference table for the 256 configurations is described and experimental results obtained by using a microprocessor system are presented.
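The table-driven scheme can be sketched as follows: binarise the 8 neighbours of each pixel by a comparison-decision against the centre, pack the results into an 8-bit code, and look the code up in a precomputed 256-entry table. The classification rule used to fill the table below is a placeholder, not the paper's actual decision table.

```python
import numpy as np

RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
        (1, 1), (1, 0), (1, -1), (0, -1)]   # 8-neighbour offsets

def build_table(is_edge):
    # Precompute the 256-entry decision table: each of the 2^8 binary
    # neighbour configurations is classified once, up front.
    table = np.zeros(256, dtype=bool)
    for code in range(256):
        bits = [(code >> k) & 1 for k in range(8)]
        table[code] = is_edge(bits)
    return table

def classify(image, tau, table):
    # Binarise the 8 neighbours against the centre pixel (differ by
    # more than tau -> 1) and look the 8-bit code up in the table.
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for k, (dy, dx) in enumerate(RING):
                if abs(image[y + dy, x + dx] - image[y, x]) > tau:
                    code |= 1 << k
            out[y, x] = table[code]
    return out
```

At run time the per-pixel cost is eight comparisons and one table lookup, which is what made the operator attractive on the microprocessor system described in the paper.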

Journal ArticleDOI
TL;DR: Only the nearest neighbor algorithm (NNA) was found to produce generally acceptable results at all information densities studied and preprocessing of the images by a Gaussian filter did not substantially alter these conclusions.
Abstract: Edge detection algorithms are critical to many of the current developments in nuclear medicine, but the distortions produced by applying these algorithms have not been adequately quantified for the low spatial resolution and low information density limit characteristic of nuclear medicine images. In this study, eleven edge detection methods were evaluated in terms of area determination, receiver operating characteristic (ROC) analysis, and shape preservation. Furthermore, edge detection is a multistep process in which allowance can be made for numerous variables. In this study, an adaptive approach was used to allow for variations in background and in information density. Only the nearest neighbor algorithm (NNA) was found to produce generally acceptable results at all information densities studied. At high information densities, the Sobel and Kirsch filters produced acceptable results. Preprocessing of the images by a Gaussian filter did not substantially alter these conclusions.

Journal ArticleDOI
TL;DR: Reduction of digitized images to the V-S-S graph format greatly reduces the number of independent pieces of image data (although each piece requires more bits to encode).

Proceedings ArticleDOI
01 Apr 1981
TL;DR: An algorithm for extracting different regional boundaries of x-ray images is described which includes three-stage enhancement of the image in the fuzzy property plane and a smoother before the detection of edges.
Abstract: An algorithm for extracting different regional boundaries of x-ray images is described which includes three-stage enhancement of the image in the fuzzy property plane and a smoother before the detection of edges. The property plane is extracted from the spatial domain using S and π functions and fuzzifiers. The operator 'contrast intensifier' is used as an enhancement tool. The final edge detection is achieved using a 'max' or 'min' operator.
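The fuzzy-enhancement machinery referred to above is standard: a property (membership) plane built with Zadeh's S-function, then repeated application of the contrast intensifier (INT) operator to sharpen region boundaries. A sketch of those two pieces, with illustrative parameter choices:

```python
import numpy as np

def s_function(x, a, b, c):
    # Zadeh's standard S-function: 0 below a, 1 above c, and a smooth
    # quadratic transition in between (b is the crossover point,
    # usually (a + c) / 2).
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    left = (x > a) & (x <= b)
    right = (x > b) & (x < c)
    y[left] = 2.0 * ((x[left] - a) / (c - a)) ** 2
    y[right] = 1.0 - 2.0 * ((x[right] - c) / (c - a)) ** 2
    y[x >= c] = 1.0
    return y

def contrast_intensifier(mu, passes=3):
    # INT operator: pushes membership values above 0.5 towards 1 and
    # below 0.5 towards 0; repeating it sharpens region boundaries.
    mu = np.asarray(mu, dtype=float)
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2,
                      1.0 - 2.0 * (1.0 - mu) ** 2)
    return mu
```

The π function of the abstract is built from two back-to-back S-functions; edge detection then operates on the sharpened membership plane rather than on raw grey levels.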

Book ChapterDOI
Robert M. Haralick1
01 Jan 1981
TL;DR: A facet model for image data is presented which motivates an image processing procedure that simultaneously permits image restoration as well as edge detection, region growing, and texture analysis.
Abstract: In this paper we present a facet model for image data which motivates an image processing procedure that simultaneously permits image restoration as well as edge detection, region growing, and texture analysis. We give a mathematical discussion of the model, the associated iterative processing procedure, and illustrate it with processed image examples.

Proceedings ArticleDOI
07 Dec 1981
TL;DR: In this paper, a general edge detection method is developed as a result of the noise analysis, and a wide class of edge detectors is shown to be insensitive to edge orientation, and an optimal design with respect to noise statistics is found and a comparison made between many common edge operators.
Abstract: Techniques and analyses for improving the signal-to-noise performance of edge detectors are presented. A general edge detection method is developed as a result of the noise analysis, and a wide class of edge detectors is shown to be insensitive to edge orientation. For this class, an optimal design with respect to noise statistics is found and a comparison made between many common edge operators. Edge and noise models characteristic of typical images are presented and used in the analysis of these edge detectors.

Book
01 Jan 1981
TL;DR: This book collects contributions ranging from Digital Image Processing of Remotely Sensed Imagery and a Decision Theory and Scene Analysis Based Approach to Remote Sensing to the application of Complexity of Computations to Signal Processing.
Abstract:
1. Issues of General Interest
- Application of Complexity of Computations to Signal Processing
- Clustering in Pattern Recognition
- Topologies on Discrete Spaces
- A Model of the Receptive Fields of Mammal Vision
- Image Coding System Using an Adaptive DPCM
- Looking for Parallelism in Sequential Algorithm: An Assistance in Hardware Design
2. Feature Detection and Evaluation
- Finding the Edge
- Contour Tracing and Encodings of Binary Patterns
- Optimal Edge Detection in Cellular Textures
- Restoration of Nuclear Images by a Cross-spectrum Equalizer Filter
- Image Texture Analysis Techniques - A Survey
- Texture Features in Remote Sensing Imagery
- Two Dimensional Time Series for Textures
- Segmentation by Shape Discrimination Using Spatial Filtering Techniques
- Relaxation and Optimization for Stochastic Labelings
- Cooperation and Visual Information Processing
3. Scenes and Shapes
- Shape Description
- Structural Shape Description for Two-Dimensional and Three-Dimensional Shapes
- Shape Grammar Compilers
- A Facet Model for Image Data: Regions, Edges, and Texture
- Patched Image Databases
- Map and Line-Drawing Processing
4. Applications
- Digital Image Processing of Remotely Sensed Imagery
- A Decision Theory and Scene Analysis Based Approach to Remote Sensing
- Experiments in Schema-Driven Interpretation of a Natural Scene
- Finding Chromosome Centromeres Using Boundary and Density Information
- Forensic Writer Recognition
- Evaluation of Image Sequences: A Look Beyond Applications
- Automated Image-to-Image Registration, a Way to Multi-Temporal Analysis of Remotely Sensed Data
- Occlusion in Dynamic Scene Analysis
List of Participants

Journal Article
TL;DR: In order to improve the signal-to-noise ratio the images of the recorded heart cycles are treated by the Karhunen-Loeve transformation and an index of cardiac efficiency is calculated which shows high correlation with the ejection fraction of contrast biplane left ventricular angiography.
Abstract: Some of the limitations affecting radioisotopic images are insufficient resolution, poorly defined structure boundaries, and difficulty in accurate subtraction of background noise. In order to improve the signal-to-noise ratio, the images of the recorded heart cycles are treated by the Karhunen-Loeve transformation. Edge detection is based on compression of the image to a bitmap at a chosen level and following of all bits set to one by a pointer. The outlines of the structures as determined by edge tracking are used to define the regions of interest within the enhanced images. Background subtraction applies the principle of interpolative background subtraction by computing an individual background for each point in an irregular region of interest. The resultant images are used to calculate an index of cardiac efficiency which shows high correlation (r = 0.92) with the ejection fraction of contrast biplane left ventricular angiography.
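The Karhunen-Loeve step can be sketched with the snapshot method: diagonalise the frame-to-frame covariance of the image sequence and reconstruct from the leading components, which keeps the dominant cyclic variation and discards uncorrelated noise. An illustrative NumPy version (component count and normalisation are our choices):

```python
import numpy as np

def klt_denoise(frames, n_components):
    # Karhunen-Loeve transform across a sequence of frames: project
    # onto the leading eigenvectors of the (frame x frame) covariance
    # and reconstruct, keeping only the dominant variation.
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc @ Xc.T / n                       # n x n snapshot covariance
    _, vecs = np.linalg.eigh(cov)             # eigenvalues ascending
    top = vecs[:, ::-1][:, :n_components]     # leading eigenvectors
    coeffs = top.T @ Xc                       # project onto components
    recon = top @ coeffs + mean               # reconstruct
    return recon.reshape(n, h, w)
```

The n x n snapshot covariance keeps the eigenproblem small (n frames, not h*w pixels), which matters for gated cardiac studies with many pixels per frame.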

Journal ArticleDOI
Richard Harris1
TL;DR: A brief summary is made of Landsat digital data structures, and two groups of image analysis techniques considered: the first, path sequential analysis by analysing pixels and window areas in series, and the second, segmentation and labelling by the use of methods such as edge detection and iterative relaxation to map Landsat data.
Abstract: A brief summary is made of Landsat digital data structures, and two groups of image analysis techniques considered: the first, path sequential analysis by analysing pixels and window areas in series, and the second, segmentation and labelling by the use of methods such as edge detection and iterative relaxation to map Landsat data.