
Showing papers on "Edge detection published in 1989"


Journal ArticleDOI
TL;DR: The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images and the results are compared to those obtained with other methods.
Abstract: Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yields results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates that are used to search for vessel segments along all possible directions are constructed. Various issues related to the implementation of these matched filters are discussed. The results are compared to those obtained with other methods.
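The template construction the abstract describes can be sketched as follows; the kernel dimensions, the Gaussian σ, and the use of `scipy.ndimage.rotate` are illustrative assumptions rather than the paper's exact parameters:

```python
import numpy as np
from scipy.ndimage import rotate

def gaussian_matched_kernel(sigma=2.0, length=9, width=15):
    """One matched-filter template: a Gaussian-shaped intensity valley
    (the vessel cross-section) replicated along a short line segment."""
    xs = np.arange(width) - width // 2
    profile = -np.exp(-(xs ** 2) / (2 * sigma ** 2))  # dark vessel profile
    profile -= profile.mean()                          # zero-mean kernel
    return np.tile(profile, (length, 1))

def rotated_templates(n=12, **kw):
    """Twelve templates covering 180 degrees in 15-degree steps."""
    base = gaussian_matched_kernel(**kw)
    return [rotate(base, angle, reshape=False, order=1)
            for angle in np.arange(n) * (180.0 / n)]
```

Convolving the image with each template and keeping the maximum response per pixel gives the matched-filter vessel map.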

1,692 citations


Journal ArticleDOI
TL;DR: In this paper, an edge operator based on two-dimensional spatial moments is presented, which can be implemented for virtually any size of window and has been shown to locate edges in digitized images to a twentieth of a pixel.
Abstract: Recent results in precision measurements using computer vision are presented. An edge operator based on two-dimensional spatial moments is given. The operator can be implemented for virtually any size of window and has been shown to locate edges in digitized images to a twentieth of a pixel. This accuracy is unaffected by additive or multiplicative changes to the data values. The precision is achieved by correcting for many of the deterministic errors caused by nonideal edge profiles using a lookup table to correct the original estimates of edge orientation and location. This table is generated using a synthesized edge which is located at various subpixel locations and various orientations. The operator is extended to accommodate nonideal edge profiles and rectangularly sampled pixels. The technique is applied to the measurement of imaged machined metal parts. Theoretical and experimental noise analyses show that the operator has relatively small bias in the presence of noise.
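A minimal sketch of a moment-based edge operator in this spirit; the window centering and the first-moment orientation estimate are simplified stand-ins, not the paper's exact operator or its lookup-table correction:

```python
import numpy as np

def spatial_moments(window):
    """Raw 2-D spatial moments m_pq of an image window, with the
    coordinate origin at the window centre (as moment-based edge
    operators typically assume)."""
    h, w = window.shape
    y, x = np.mgrid[:h, :w]
    x = x - (w - 1) / 2.0
    y = y - (h - 1) / 2.0
    return {(p, q): float(np.sum((x ** p) * (y ** q) * window))
            for p in range(3) for q in range(3) if p + q <= 2}

def edge_orientation(window):
    """Estimate edge orientation from the first-order moments of the
    mean-removed window (a common moment-operator heuristic)."""
    m = spatial_moments(window - window.mean())
    return np.arctan2(m[(0, 1)], m[(1, 0)])
```

For a vertical step the estimate is 0; transposing the window rotates it to π/2.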

311 citations


Journal ArticleDOI
TL;DR: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations.
Abstract: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations. The range image of 3-D objects is divided into surface primitives which are homogeneous in their intrinsic differential geometric properties and do not contain discontinuities in either depth or surface orientation. The method is based on the computation of partial derivatives, obtained by a selective local biquadratic surface fit. Then, by computing the Gaussian and mean curvatures, an initial region-based segmentation is obtained in the form of a curvature sign map. Two additional initial edge-based segmentations are also computed from the partial derivatives and depth values, namely, jump and roof-edge maps. The three image maps are then combined to produce the final segmentation. Experimental results obtained for both synthetic and real range data of polyhedral and curved objects are given.
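The curvature sign map rests on the standard formulas for Gaussian (K) and mean (H) curvature in terms of the first and second partial derivatives delivered by the surface fit; a sketch of that step (the partials themselves would come from the paper's biquadratic fit):

```python
import numpy as np

def hk_signs(zx, zy, zxx, zxy, zyy):
    """Signs of Gaussian (K) and mean (H) curvature from the first and
    second partials of a graph surface z(x, y) -- the per-pixel
    quantities behind a curvature sign map."""
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    H = ((1 + zy ** 2) * zxx - 2 * zx * zy * zxy
         + (1 + zx ** 2) * zyy) / (2.0 * g ** 1.5)
    return np.sign(K), np.sign(H)
```

The (sign K, sign H) pair classifies each pixel into one of the basic surface types (peak, pit, ridge, valley, saddle, planar).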

257 citations


Journal ArticleDOI
TL;DR: This is the first implementation of a relaxation algorithm for edge detection in echocardiograms that compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone.
Abstract: An automatic algorithm has been developed for high-speed detection of cavity boundaries in sequential 2-D echocardiograms using an optimization algorithm called simulated annealing (SA). The algorithm has three stages. (1) A predetermined window of size n*m is decimated to size n'*m' after low-pass filtering. (2) An iterative radial gradient algorithm is employed to determine the center of gravity (CG) of the cavity. (3) 64 radii which originate from the CG defined in stage 2 are bounded by the high-probability region. Each bounded radius is defined as a link in a 1-D, 64-member cyclic Markov random field. This algorithm is unique in that it compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone. This is the first implementation of a relaxation algorithm for edge detection in echocardiograms. Results attained using this algorithm on real data have been highly encouraging.

231 citations


Journal ArticleDOI
TL;DR: It is shown that zero-crossing edge detection algorithms can produce edges that do not correspond to significant image intensity changes, and it is seen that authentic edges are denser and stronger, on the average, than phantom edges.
Abstract: It is shown that zero-crossing edge detection algorithms can produce edges that do not correspond to significant image intensity changes. Such edges are called phantom or spurious. A method for classifying zero crossings as corresponding to authentic or phantom edges is presented. The contrast of an authentic edge is shown to increase and the contrast of phantom edges to decrease with a decrease in the filter scale. Thus, a phantom edge is truly a phantom in that the closer one examines it, the weaker it becomes. The results of applying the classification schemes described to synthetic and authentic signals in one and two dimensions are given. The significance of the phantom edges is examined with respect to their frequency and strength relative to the authentic edges, and it is seen that authentic edges are denser and stronger, on the average, than phantom edges.
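The classification criterion (authentic edges strengthen as the filter scale shrinks, phantom edges fade) can be sketched by comparing Laplacian-of-Gaussian responses at two scales; the slope-based contrast measure and the σ values below are simplifying assumptions, not the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def classify_zero_crossing(signal, idx, sigma_fine=1.0, sigma_coarse=2.0):
    """Label a zero crossing authentic if its local contrast grows as the
    filter scale shrinks, phantom if it fades. The contrast here is the
    absolute slope of the LoG output across the crossing."""
    fine = gaussian_laplace(signal, sigma_fine)
    coarse = gaussian_laplace(signal, sigma_coarse)
    slope_fine = abs(fine[idx + 1] - fine[idx - 1])
    slope_coarse = abs(coarse[idx + 1] - coarse[idx - 1])
    return "authentic" if slope_fine > slope_coarse else "phantom"
```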

222 citations


Journal ArticleDOI
TL;DR: A quantitative evaluation shows that the edge detector, developed to be robust enough to perform well over a wide range of signal-to-noise ratios, performs at least as well as, and in most cases much better than, other edge detectors.
Abstract: An edge detection scheme is developed that is robust enough to perform well over a wide range of signal-to-noise ratios. It is based upon the detection of zero crossings in the output image of a nonlinear Laplace filter. Specific characterizations of the nonlinear Laplacian are its adaptive orientation to the direction of the gradient and its inherent masks which permit the development of approximately circular (isotropic) filters. We have investigated the relation between the locally optimal filter parameters, smoothing size, and filter size, and the SNR of the image to be processed. A quantitative evaluation shows that our edge detector performs at least as well as, and in most cases much better than, other edge detectors. At very low signal-to-noise ratios, our edge detector is superior to all others tested.
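Locating the zero crossings of the Laplacian output is the common final step of such schemes; a minimal sketch that marks sign changes against the right and lower neighbours (the nonlinear, gradient-adaptive Laplacian itself is not reproduced here):

```python
import numpy as np

def zero_crossings(lap):
    """Mark pixels where a Laplacian-filtered image changes sign against
    its right or lower neighbour -- the edge candidate pixels."""
    zc = np.zeros(lap.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(lap[:, :-1]) != np.signbit(lap[:, 1:])
    zc[:-1, :] |= np.signbit(lap[:-1, :]) != np.signbit(lap[1:, :])
    return zc
```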

188 citations


Journal ArticleDOI
TL;DR: A novel method for reconstruction from zero crossings in scale space that is based on minimizing equation error is formulated and presented, and results showing that the reconstruction is possible but can be unstable are presented.
Abstract: In computer vision, the one-parameter family of images obtained from the Laplacian-of-a-Gaussian-filtered version of the image, parameterized by the width of the Gaussian, has proved to be a useful data structure for the extraction of feature data. In particular, the zero crossings of this so-called scale-space data are associated with edges and have been proposed by D. Marr (1982) and others as the basis of a representation of the image data. The question arises as to whether the representation is complete and stable. The authors survey some of the studies and results related to these questions as well as several studies that attempt reconstructions based on this or related representations. They formulate a novel method for reconstruction from zero crossings in scale space that is based on minimizing equation error, and they present results showing that the reconstruction is possible but can be unstable. They further show that the method applies when gradient data along the zero crossings are included in the representation, and they demonstrate empirically that the reconstruction is then stable.

179 citations


Journal ArticleDOI
TL;DR: In this paper, a computer edge-detection algorithm for automatic delineation of mesoscale structure in digital satellite IR (infrared) images of the ocean is developed, which is based on the gray level cooccurrence matrix (GLC), which is commonly used in image texture analysis.
Abstract: A computer edge-detection algorithm for automatic delineation of mesoscale structure in digital satellite IR (infrared) images of the ocean is developed. The popular derivative-based edge operators are shown to be too sensitive to edge fine-structure and to weak gradients to be useful in this application. The edge-detection algorithm is based on the gray level cooccurrence matrix (GLC), which is commonly used in image texture analysis. The cluster shade texture measure derived from the GLC matrix is found to be an excellent edge detector that exhibits the characteristic of fine-structure rejection while retaining edge sharpness. This characteristic is highly desirable for analyzing oceanographic satellite images.
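A sketch of the cluster shade measure computed from a gray-level co-occurrence matrix; the displacement, number of quantization levels, and normalization are assumptions, not the paper's configuration:

```python
import numpy as np

def glc_matrix(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one displacement.
    img must hold integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        P[i, j] += 1
    return P / P.sum()

def cluster_shade(P):
    """Cluster shade: third moment of (i + j) about its mean. Its
    magnitude is large near gray-level boundaries, which is what makes
    it usable as an edge detector."""
    i, j = np.mgrid[:P.shape[0], :P.shape[1]]
    mu_i = np.sum(i * P)
    mu_j = np.sum(j * P)
    return np.sum((i + j - mu_i - mu_j) ** 3 * P)
```

Computing cluster shade over a sliding window and thresholding its magnitude yields the edge map.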

128 citations


Journal ArticleDOI
TL;DR: An analysis of linear edges at different scales in images includes the mutual influence of edges and identifies at what scale neighboring edges start influencing the response of a Laplacian of Gaussian operator.
Abstract: An analysis is presented of the behavior of edges in scale space for deriving rules useful in reasoning. This analysis of linear edges at different scales in images includes the mutual influence of edges and identifies at what scale neighboring edges start influencing the response of a Laplacian of Gaussian operator. Dislocation of edges, false edges, and merging of edges in the scale space are examined to formulate rules for reasoning in the scale space. The theorems, corollaries, and assertions presented can be used to recover edges, and related features, in complex images. The results reported include one lemma, three theorems, a number of corollaries, and six assertions. The rigorous mathematical proofs for the theorems and corollaries are presented. These theorems and corollaries are further applied to more general situations, and the results are summarized in six assertions. A qualitative description as well as some experimental results are presented for each assertion.

121 citations


Journal ArticleDOI
TL;DR: The edge-detection problem is posed as one of detecting step discontinuities in the observed correlated image, using directional derivatives estimated with a random field model, whose parameters are adaptively estimated using a recursive least-squares algorithm.
Abstract: The edge-detection problem is posed as one of detecting step discontinuities in the observed correlated image, using directional derivatives estimated with a random field model. Specifically, the method consists of representing the pixels in a local window by a 2-D causal autoregressive (AR) model, whose parameters are adaptively estimated using a recursive least-squares algorithm. The directional derivatives are functions of parameter estimates. An edge is detected if the second derivative in the direction of the estimated maximum gradient is negatively sloped and the first directional derivative and a local estimate of variance satisfy some conditions. Because the ordered edge detector may not detect edges of all orientations well, the image is scanned in four different directions, and the union of the four edge images is taken as the final output. The performance of the edge detector is illustrated using synthetic and real images. Comparisons to other edge detectors are given. A linear feature extractor that operates on the edges produced by the AR model is presented.

107 citations


Journal ArticleDOI
TL;DR: A method to detect, locate, and estimate edges in a one-dimensional signal is presented which is inherently more accurate than all previous schemes as it explicitly models and corrects interaction between nearby edges.
Abstract: A method to detect, locate, and estimate edges in a one-dimensional signal is presented. It is inherently more accurate than all previous schemes as it explicitly models and corrects interaction between nearby edges. The method is iterative, with initial estimation of edges provided by the zero crossings of the signal convolved with a Laplacian of Gaussian (LoG) filter. The necessary computations require knowledge of this convolved output only in a neighborhood around each zero crossing and in most cases could be performed locally by independent parallel processors. Results on one-dimensional slices extracted from real images, and on images which have been processed independently in the row and column directions, are shown. An analysis of the method is provided, including issues of complexity and convergence, and directions of future research are outlined.

Journal ArticleDOI
TL;DR: The noise present in digital images of typical indoor scenes is small and the signal-to-noise ratio is high; the noise is so small that small filters can be used and the exact shape of the filter is not critical.
Abstract: Two aspects of edge detection are analyzed, namely accuracy of localization and sensitivity to noise. The detection of corners and trihedral vertices is analyzed for gradient schemes and zero-crossing schemes. It is shown that neither scheme correctly detects corners or trihedral vertices, but that the gradient schemes are less sensitive to noise. A simple but important conclusion is that the noise present in digital images of typical indoor scenes is small and the signal-to-noise ratio is high. The noise present in digital images is so small as to make the performances of a variety of filters almost indistinguishable. As a consequence, small filters can be used and the exact shape of the filter is not critical.

Journal ArticleDOI
TL;DR: The authors describe several fundamentally useful primitive operations and routines and illustrate their usefulness in a wide range of familiar vision processes in terms of a vector machine model of parallel computation.
Abstract: The authors describe several fundamentally useful primitive operations and routines and illustrate their usefulness in a wide range of familiar vision processes. These operations are described in terms of a vector machine model of parallel computation. They use a parallel vector model because vector models can be mapped onto a wide range of architectures. They also describe implementing these primitives on a particular fine-grained machine, the Connection Machine. It is found that these primitives are applicable in a variety of vision tasks. Grid permutations are useful in many early vision algorithms, such as Gaussian convolution, edge detection, motion, and stereo computation. Scan primitives facilitate simple, efficient solutions of many problems in middle- and high-level vision. Pointer jumping, using permutation operations, permits construction of extended image structures in logarithmic time. Methods such as outer products, which rely on a variety of primitives, play an important role in many high-level algorithms.

Journal ArticleDOI
TL;DR: Results show that rapid, reliable and accurate measurement of joint space size can be achieved using digital image analysis and the application of image handling computers to radiological scoring is important in the understanding of joint destruction and the progression of the rheumatic diseases.
Abstract: We have developed an automatic system for the measurement of joint space in the knee using computerized analysis of digital stored images of knee radiographs. PA X-rays of the knee are positioned on a conventional viewing box. A video image of the radiograph is converted to digital form, stored and displayed on a closed circuit television monitor. This is edited and analysed by a microcomputer. After careful positioning under a computer generated grid on the television screen, automatic measurements are made using an edge detection facility. Two different edge detection algorithms were compared. Reproducibility is very good. Inter- and intra-observer agreement is also good, with no significant difference between observers on a paired t-test and very good correlations. Results show that rapid, reliable and accurate measurement of joint space size can be achieved using digital image analysis. The application of image handling computers to radiological scoring is important in our understanding of joint destruction and the progression of the rheumatic diseases.

Journal ArticleDOI
TL;DR: An algorithm for unsupervised texture segmentation is developed that is based on detecting changes in textural characteristics of small local regions, and results with images containing regions of natural texture show that the algorithm performs very well.
Abstract: An algorithm for unsupervised texture segmentation is developed that is based on detecting changes in textural characteristics of small local regions. Six features derived from two, two-dimensional, noncausal random field models are used to represent texture. These features contain information about gray-level-value variations in the eight principal directions. An algorithm for automatic selection of the size of the observation windows over which textural activity and change are measured has been developed. Effects of changes in individual features are considered simultaneously by constructing a one-dimensional measure of textural change from them. Edges in this measure correspond to the sought-after textural edges. Experimental results with images containing regions of natural texture show that the algorithm performs very well.

Journal ArticleDOI
01 Nov 1989
TL;DR: A comparative cost function that mathematically captures the intuitive idea of an edge is formulated that uses information from both image data and local edge structure in evaluating the relative quality of pairs of edge configurations.
Abstract: Edge detection is cast as a problem in cost minimization. The concept of an edge that is based on criteria such as accurate localization, thinness, continuity, and length is described. On the basis of this description, a comparative cost function that mathematically captures the intuitive idea of an edge is formulated. The function uses information from both image data and local edge structure in evaluating the relative quality of pairs of edge configurations. The function is a linear combination of weighted cost factors. Computation of the function is performed efficiently by organizing information in the form of a decision tree. Edges are detected using a heuristic iterative search algorithm based on the comparative cost function. The detection process can be implemented largely in parallel. The usefulness of this approach to edge detection is demonstrated by showing experimental results of detected edges for both real and synthetic images.
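The "linear combination of weighted cost factors" can be sketched directly; the factor names below are illustrative stand-ins, not the paper's exact set:

```python
def edge_cost(factors, weights):
    """Comparative cost as a linear combination of weighted cost
    factors; lower cost means a better edge configuration."""
    return sum(weights[name] * value for name, value in factors.items())

def better(config_a, config_b, weights):
    """Return whichever of two edge configurations (given here as their
    factor dictionaries) has the lower total cost."""
    if edge_cost(config_a, weights) <= edge_cost(config_b, weights):
        return config_a
    return config_b
```

In the paper the comparison drives a heuristic iterative search over pairs of candidate edge configurations.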

Journal ArticleDOI
TL;DR: Edge detection schemes based on Prewitt, Sobel, and Roberts operators are realized using optical symbolic substitution and the corresponding optical systems are compared in terms of hardware and performance.
Abstract: Edge detection schemes based on Prewitt, Sobel, and Roberts operators are realized using optical symbolic substitution. The corresponding optical systems are compared in terms of hardware and performance.
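For reference, the digital forms of the three operators realized optically in the paper; this is a conventional software sketch, not the optical symbolic-substitution implementation:

```python
import numpy as np
from scipy.ndimage import convolve

# Horizontal/vertical kernel pairs for the three classic operators.
KERNELS = {
    "sobel":   (np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
                np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])),
    "prewitt": (np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
                np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])),
    "roberts": (np.array([[1, 0], [0, -1]]),
                np.array([[0, 1], [-1, 0]])),
}

def gradient_magnitude(img, op="sobel"):
    """Gradient magnitude of an image under the chosen operator."""
    kx, ky = KERNELS[op]
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), ky)
    return np.hypot(gx, gy)
```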

Patent
08 Nov 1989
TL;DR: In this paper, a mirror image of a curve that represents the log-polar image of the straight line is drawn, and a peak bin is selected to represent the position of the recognized line.
Abstract: Image processing method and apparatus wherein candidate points are first selected in a log-polar image domain. The points may be selected by edge detection, thresholding, or any method suitable for identifying image points which are likely to lie on some feature of interest. For every candidate point there is drawn, also in the log-polar domain, a mirror image of a curve that represents the log-polar image of a straight line. There is accumulated a log-polar domain histogram of "hits" from the drawing process. Finally, a peak bin is selected. The position of the peak bin corresponds to the position of the recognized line.

Journal ArticleDOI
TL;DR: It is shown that the energy feature detector is a true projection and does not proliferate edges when applied to a line-drawing, whereas several of the conventional operators do.

Proceedings ArticleDOI
27 Mar 1989
TL;DR: A system to accomplish reconstruction of image curves by exploiting principles of perceptual organization such as proximity and good continuation to identify co-curving or curvilinear structure is described herein.
Abstract: Image curves often correspond to the bounding contours of objects as they appear in the image. As such, they provide important structural information which may be exploited in matching and recognition tasks. However, these curves often do not appear as coherent events in the image; they must, therefore, be (re)constructed prior to their effective use by higher-level processes. A system (currently being built) to accomplish such reconstruction of image curves is described herein. It exploits principles of perceptual organization such as proximity and good continuation to identify co-curving or curvilinear structure. Components of each such structure are replaced by a single curve, thus making their coherence explicit. The system is iterative, operating over a range of perceptual scales--fine to coarse--and yielding a hierarchy of alternative descriptions. Results are presented for the first iteration, showing the performance of the system at the finest perceptual scale and indicating the reasonableness of the paradigm for subsequent iterations.


Journal ArticleDOI
TL;DR: A technique for determining the distortion parameters (location and orientation) of general three-dimensional objects from a single range image view is introduced, based on an extension of the straight-line Hough transform to three- dimensional space.
Abstract: A technique for determining the distortion parameters (location and orientation) of general three-dimensional objects from a single range image view is introduced. The technique is based on an extension of the straight-line Hough transform to three-dimensional space. It is very efficient and robust, since the dimensionality of the feature space is low and since it uses range images directly (with no preprocessing such as segmentation and edge or gradient detection). Because the feature space separates the translation and rotation effects, a hierarchical algorithm to detect object rotation and translation is possible. The new Hough space can also be used as a feature space for discriminating among three-dimensional objects.

Journal ArticleDOI
TL;DR: A divide-and-conquer Hough transform technique for detecting a given number of straight edges or lines in an image that requires only O(log n) computational steps for an image of size n × n.

Journal ArticleDOI
TL;DR: It is shown that although the generalized equations incorporating detector FOV dependence reduce to the hemispherical FOV equations in some cases, in general integrating sphere behavior is altered through restriction of the detector's FOV.
Abstract: Integrating sphere theory is developed for restricted field of view (FOV) detectors using a simple series solution technique. The sphere throughput, sample reflectance, and sphere wall reflectance are calculated. The effects of the sample's scattering characteristics on sphere measurements are determined. It is shown that although the generalized equations incorporating detector FOV dependence reduce to the hemispherical FOV equations in some cases, in general integrating sphere behavior is altered through restriction of the detector FOV.

Patent
16 Jun 1989
TL;DR: Bar/space margin detection as discussed by the authors performs a ratiometric comparison of each bar and space to determine every occurrence of 10:1 ratios, both bar/space and space/bar, and pointer information indicates precisely where in memory each margin ratio is located.
Abstract: Scanned barcodes represented by an electrical signal are preprocessed in a PLA and delivered to a microprocessor in a form that permits efficient final processing of the data. High speed edge detection produces edge markers for the control logic and counting circuits, coupled with a Johnson state counter to logically sequence the processor. Logarithmic conversion compresses the counted time interval to a single data word that can most efficiently be processed by the host microprocessor. Logic between the preprocessor and the microprocessor transfers digital representations of each bar and space to preassigned respective areas in the microprocessor memory. Bar/space margin detection performs a ratiometric comparison of each bar and space to determine every occurrence of 10:1 ratios of both bar/space and space/bar. Pointer signal information is transferred to the microprocessor indicating precisely where in memory each 10:1 margin ratio is located. A digital filter excludes spurious bar/space widths in the preprocessor.

Journal ArticleDOI
Liren Liu1
TL;DR: An optoelectronic implementation based on optical neighborhood operations and electronic nonlinear feedback is proposed to perform morphological image processing such as erosion, dilation, opening, closing, and edge detection.
Abstract: An optoelectronic implementation based on optical neighborhood operations and electronic nonlinear feedback is proposed to perform morphological image processing such as erosion, dilation, opening, closing, and edge detection. Results of a numerical simulation are given and experimentally verified.

Proceedings ArticleDOI
04 Jun 1989
TL;DR: A cost function that evaluates the quality of edge configurations is formulated and analyzed in terms of the characteristics of the edges in minimum-cost configurations, and a novel set of strategies for generating candidate states and a suitable temperature schedule are presented.
Abstract: Edge detection is analyzed as a problem in cost minimization. A cost function is formulated that evaluates the quality of edge configurations. A mathematical description of edges is given, and the cost function is analyzed in terms of the characteristics of the edges in minimum-cost configurations. The cost function is minimized by the simulated annealing method. A novel set of strategies for generating candidate states and a suitable temperature schedule are presented. Sequential and parallel versions of the annealing algorithm are implemented and compared. Experimental results are presented.
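A generic simulated-annealing skeleton of the kind that drives such a search; the cost and candidate-generation functions are placeholders for the paper's edge-specific ones, and the geometric cooling schedule parameters are assumptions:

```python
import math
import random

def anneal(cost, initial, neighbour, t0=1.0, alpha=0.95, steps=500):
    """Minimize cost(state) by simulated annealing: accept improving
    candidates always, worsening ones with probability exp(-dc/t),
    while the temperature t cools geometrically."""
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        cc = cost(cand)
        if cc < c or random.random() < math.exp((c - cc) / t):
            state, c = cand, cc
            if cc < best_c:
                best, best_c = cand, cc
        t *= alpha
    return best
```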

Journal ArticleDOI
01 Sep 1989
TL;DR: In this paper, a robust window operator is demonstrated that preserves gray-level and gradient discontinuities in digital images as it smooths and estimates derivatives, in contrast to the common practice of convolving constant-coefficient windows with the image.
Abstract: It is a common practice in computer vision and image processing to convolve rectangular constant coefficient windows with digital images to perform local smoothing and derivative estimation for edge detection and other purposes. If all data points in each image window belong to the same statistical population, this practice is reasonable and fast. But, as is well known, constant coefficient window operators produce incorrect results if more than one statistical population is present within a window, for example, if a gray-level or gradient discontinuity is present. This paper shows one way to apply the theory of robust statistics to the data smoothing and derivative estimation problem. A robust window operator is demonstrated that preserves gray-level and gradient discontinuities in digital images as it smooths and estimates derivatives.
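One simple way to realize a robust window operator is a trimmed mean, which discards samples likely to belong to the other statistical population on the far side of a discontinuity; this is a robust-statistics stand-in for illustration, not the paper's exact estimator:

```python
import numpy as np

def robust_smooth(img, half=1, trim=0.25):
    """Discontinuity-preserving smoothing: replace each pixel by a
    trimmed mean of its window, dropping the most extreme samples
    (fraction `trim` from each end of the sorted window)."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            win = img[max(r - half, 0):r + half + 1,
                      max(c - half, 0):c + half + 1].ravel()
            win = np.sort(win)
            k = int(len(win) * trim)
            out[r, c] = win[k:len(win) - k].mean()
    return out
```

Unlike a constant-coefficient mean, the trimmed mean leaves regions away from an edge exactly flat instead of blurring the step across the whole window.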

Proceedings ArticleDOI
04 Jun 1989
TL;DR: The authors present a method to smooth a signal (whether an intensity image, a range image, or a contour) which preserves discontinuities and thus facilitates their detection, and makes it possible to derive a novel scale-space representation of a signal using a small number of scales.
Abstract: The authors present a method to smooth a signal (whether an intensity image, a range image, or a contour) which preserves discontinuities and thus facilitates their detection. This is achieved by repeatedly convolving the signal with a very small averaging filter modulated by a measure of the signal discontinuity at each point. This process is related to the anisotropic diffusion reported by P. Perona and J. Malik (1987) but it has a much simpler formulation and is not subject to instability or divergence. Real examples show how this approach can be applied to the smoothing of various types of signals. The detected features do not move, and thus no tracking is needed. The last property makes it possible to derive a novel scale-space representation of a signal using a small number of scales. Finally, this process is easily implemented on parallel architectures: the running time on a 16K Connection Machine is three orders of magnitude faster than on a serial machine.
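A 1-D sketch of discontinuity-modulated averaging in this spirit; the Gaussian modulation function and the contrast scale `k` are assumptions, not the authors' exact filter:

```python
import numpy as np

def smooth_preserving(signal, iters=20, k=0.1):
    """Repeatedly apply a tiny averaging step whose weights shrink where
    the local jump is large relative to k, so small fluctuations are
    smoothed while big discontinuities survive."""
    s = signal.astype(float).copy()
    for _ in range(iters):
        d_right = np.diff(s, append=s[-1])   # s[i+1] - s[i]
        d_left = np.diff(s, prepend=s[0])    # s[i] - s[i-1]
        w_r = np.exp(-(d_right / k) ** 2)    # discontinuity modulation
        w_l = np.exp(-(d_left / k) ** 2)
        s += 0.25 * (w_r * d_right - w_l * d_left)
    return s
```

A unit step (much larger than `k`) passes through almost unchanged, whereas plain averaging would blur it a little more on every iteration.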

Journal ArticleDOI
TL;DR: Using a digitization model in which the pixels have uniform square receptive fields that tessellate the image plane, the authors provide insights that can guide the application of simple differential edge operators in image understanding and robot vision.
Abstract: For a unit step edge, we calculate the gradient magnitude and direction reported by various simple differential edge operators as a function of the edge's actual orientation and offset with respect to the pixel grid. To be consistent with previous work (of Abdou and Pratt) we have initially chosen to analyse (among others) the Sobel, Prewitt, and Roberts Cross operators, using a digitization model in which the pixels have uniform square receptive fields that tessellate the image plane. Our quantitative results provide insights into the behavior of these commonly used operators, insights that can guide their proper application in problems of image understanding and robot vision. We also suggest novel techniques for improving the performance of these operators.