
Showing papers on "Edge detection published in 1988"


Proceedings ArticleDOI
14 Nov 1988
TL;DR: A computationally efficient recursive filtering structure is presented for smoothing and calculating the first and second directional derivatives and the Laplacian of an image with a fixed number of operations per output element, independently of the size of the neighborhood considered.
Abstract: A computationally efficient recursive filtering structure is presented for smoothing and calculating the first and second directional derivatives and the Laplacian of an image with a fixed number of operations per output element, independently of the size of the neighborhood considered. It is shown how the recursive approach results in an implementation of low-level vision algorithms that is very efficient in terms of computational effort, and how it renders the use of multiresolution techniques very attractive. Applications to the edge detection problem are considered, and a novel edge detector allowing zero-crossings of an image to be extracted with only 14 operations per output element at any resolution is provided. The algorithms have been tested on indoor scenes and noisy images and gave very good results for all of them.

333 citations
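The central idea above, a constant cost per output sample regardless of smoothing scale, can be sketched in one dimension with a first-order recursive filter run forward and then backward. This is only a minimal illustration, not the paper's actual operator; the `alpha` parameter and the two-pass composition are assumptions of this sketch.

```python
import numpy as np

def recursive_smooth(signal, alpha=0.8):
    """Exponential (first-order recursive) smoothing, run forward and then
    backward so the overall response is symmetric. The cost is a fixed
    number of operations per sample, however wide the effective smoothing
    neighborhood (controlled by alpha) becomes."""
    a = np.exp(-alpha)
    fwd = np.empty(len(signal))
    acc = float(signal[0])
    for i, x in enumerate(signal):        # causal pass
        acc = (1.0 - a) * x + a * acc
        fwd[i] = acc
    out = np.empty(len(signal))
    acc = fwd[-1]
    for i in range(len(signal) - 1, -1, -1):   # anti-causal pass
        acc = (1.0 - a) * fwd[i] + a * acc
        out[i] = acc
    return out

step = np.concatenate([np.zeros(20), np.ones(20)])
smoothed = recursive_smooth(step)
```

Note that changing `alpha` changes the smoothing scale without changing the operation count, which is what makes the recursive structure attractive for multiresolution processing.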


Journal ArticleDOI
TL;DR: An algorithm for data compression of grey level images based on coding geometric and grey level information of the contours in the image and using the Laplacian pyramid coding algorithm to give intelligible reconstructed images.

192 citations


Journal ArticleDOI
05 Dec 1988
TL;DR: In this article, a method is presented for describing linked edge points at a range of scales by selecting intervals of the curve and scales of smoothing that are most likely to represent the underlying structure of the scene.
Abstract: While edge detection is an important first step for many vision systems, the linked lists of edge points produced by most existing edge detectors lack the higher level of curve description needed for many visual tasks. For example, they do not specify the tangent direction or curvature of an edge or the locations of tangent discontinuities. In this paper, a method is presented for describing linked edge points at a range of scales by selecting intervals of the curve and scales of smoothing that are most likely to represent the underlying structure of the scene. This multiscale analysis of curves is complementary to any multiscale detection of the original edge points. A solution is presented for the problem of shrinkage of curves during Gaussian smoothing, which has been a significant impediment to the use of smoothing for practical curve description. The curve segmentation method is based on a measure of smoothness minimizing the third derivative of Gaussian convolution. The smoothness measure is used to identify discontinuities of curve tangents simultaneously with selecting the appropriate scale of smoothing. The averaging of point locations during smoothing provides for accurate subpixel curve localization. This curve-description method can be implemented efficiently and should prove practical for a wide range of applications including correspondence matching, perceptual grouping, and model-based recognition.

182 citations


Journal ArticleDOI
21 Oct 1988-Science
TL;DR: A computational technique for integrating different visual cues has now been developed and implemented with encouraging results on a parallel supercomputer.
Abstract: Computer algorithms have been developed for several early vision processes, such as edge detection, stereopsis, motion, texture, and color, that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. Not surprisingly, biological vision systems still greatly outperform computer vision programs. One of the keys to the reliability, flexibility, and robustness of biological vision systems is their ability to integrate several visual cues. A computational technique for integrating different visual cues has now been developed and implemented with encouraging results on a parallel supercomputer.

160 citations


Journal ArticleDOI
TL;DR: The contrast and orientation estimation accuracy of several edge operators that have been proposed in the literature is examined both for the noiseless case and in the presence of additive Gaussian noise.
Abstract: The contrast and orientation estimation accuracy of several edge operators that have been proposed in the literature is examined both for the noiseless case and in the presence of additive Gaussian noise. The test image is an ideal step edge that has been sampled with a square-aperture grid. The effects of subpixel translations and rotations of the edge on the performance of the operators are studied. It is shown that the effect of subpixel translations of an edge can generate more error than moderate noise levels. Methods with improved results are presented for Sobel angle estimates and the Nevatia-Babu operator, and theoretical noise performance evaluations are also provided. An edge operator based on two-dimensional spatial moments is presented. All methods are compared according to worst-case and RMS error in an ideal noiseless situation and RMS error under various noise levels.

148 citations
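For reference, the kind of contrast and orientation estimate being evaluated above can be sketched with the 3x3 Sobel masks on an ideal vertical step edge; the test image and sampling here are illustrative, not the paper's experimental setup.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_at(image, r, c):
    """Gradient magnitude (contrast) and orientation in degrees from the
    3x3 Sobel masks centred on pixel (r, c)."""
    win = image[r - 1:r + 2, c - 1:c + 2]
    gx = np.sum(win * SOBEL_X)
    gy = np.sum(win * SOBEL_Y)
    return np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx))

# ideal vertical step edge: left half 0, right half 1
img = np.zeros((9, 9))
img[:, 5:] = 1.0
mag, angle = sobel_at(img, 4, 4)
```

Shifting the step by a fraction of a pixel changes `mag` and `angle`, which is the subpixel-translation sensitivity the paper quantifies.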


Journal ArticleDOI
A.F. Korn1
TL;DR: The symbolic representation of gray-value variations is studied, with emphasis on the gradient of the image function, and a procedure is proposed to select automatically a suitable scale, and with that, the size of the right convolution kernel.
Abstract: The symbolic representation of gray-value variations is studied, with emphasis on the gradient of the image function. The goal is to relate the results of this analysis to the structure of the picture, which is determined by the physics of the image generation process. Candidates for contour points are the maximal magnitudes of the gray-value gradient for different scales in the direction of the gradient. Based on the output of such a bank of gradient filters, a procedure is proposed to select automatically a suitable scale, and with that, the size of the right convolution kernel. The application of poorly adapted filters, which make the exact localization of gray-value corners or T-, X-, and Y-junctions more difficult, is thus avoided. Possible gaps at such junctions are discussed for images of real scenes, and possibilities for the closure of some of these gaps are demonstrated when the extrema of the magnitudes of the gray-value gradients are used.

141 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider edge detection as the problem of measuring and localizing changes of light intensity in the image and show that the regularized solution that arises is then the solution to a variational principle.

131 citations


Journal ArticleDOI
TL;DR: A systematic framework is given that accommodates existing max-min filter methods and suggests new ones; it can distinguish ramp edges from texture (or noise) edges, and all methods presented come in three versions: for edges, for ramp edges, and for non-ramp (“texture”) edges.

96 citations


Journal ArticleDOI
TL;DR: It is suggested that both filtering and edge detection should take place at the same time, and representation of the neighborhood by its mean and variance can be generalized by Haralick's sloped-facet model, which has a more complete characterization of the local changes of intensities.
Abstract: The conventional way of edge detection is to first filter the image and then use simple techniques to detect edges. However, filtering the noise will also blur the edges, since edges correspond to the high frequencies. Our suggestion is that filtering and edge detection should take place at the same time, by means of the statistical theory of hypothesis testing. A simple form of decision rule is derived, and the generalization of this result to more complicated situations is also discussed in detail. The decision rule can decide whether a given small neighborhood contains an edge, a line, a point, a corner edge, or just a smooth region. During the computation of the decision rule, the by-products are the mean and variance of the neighborhood, and these can be used for split-and-merge analysis. Calculation of the mean acts as filtering of the neighborhood pixels. In fact, representation of the neighborhood by its mean and variance can be generalized by Haralick's sloped-facet model, which gives a more complete characterization of the local changes of intensity.

84 citations
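The flavor of a hypothesis-testing edge decision can be sketched with a toy two-sample test on a small window. This is an illustrative stand-in, not the paper's derived rule; the left/right split and the threshold value are assumptions of this sketch.

```python
import numpy as np

def edge_decision(window, threshold=4.0):
    """Toy two-sample test: split the window into left/right halves and
    compare the difference of the half means against the pooled standard
    deviation. The window mean and variance come out as by-products, as
    the abstract notes."""
    half = window.shape[1] // 2
    left, right = window[:, :half], window[:, half:]
    pooled = np.sqrt((left.var() + right.var()) / 2.0) + 1e-9
    t = abs(left.mean() - right.mean()) / pooled
    label = "edge" if t > threshold else "smooth"
    return label, window.mean(), window.var()

rng = np.random.default_rng(0)
flat = rng.normal(5.0, 0.1, size=(4, 4))       # smooth noisy patch
stepped = flat.copy()
stepped[:, 2:] += 10.0                          # vertical step edge
```

Because the mean is computed anyway, the "filtering" of the neighborhood happens inside the same decision, which is the point of the approach.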


Proceedings ArticleDOI
05 Dec 1988
TL;DR: A robust window operator is demonstrated that preserves gray-level and gradient discontinuities in digital images as it smooths and estimates derivatives.
Abstract: It is a common practice in computer vision and image processing to convolve rectangular constant coefficient windows with digital images to perform local smoothing and derivative estimation for edge detection and other purposes. If all data points in each image window belong to the same statistical population, this practice is reasonable and fast. But, as is well known, constant coefficient window operators produce incorrect results if more than one statistical population is present within a window, for example, if a gray-level or gradient discontinuity is present. This paper shows one way to apply the theory of robust statistics to the data smoothing and derivative estimation problem. A robust window operator is demonstrated that preserves gray-level and gradient discontinuities in digital images as it smooths and estimates derivatives.

80 citations


Proceedings ArticleDOI
05 Dec 1988
TL;DR: An approximation to the full regularization computation in which corresponding points are found by comparing local patches of the images is developed, which leads to dense optical flow fields.
Abstract: We describe a parallel algorithm for computing optical flow from short-range motion. Regularizing optical flow computation leads to a formulation which minimizes matching error and, at the same time, maximizes smoothness of the optical flow. We develop an approximation to the full regularization computation in which corresponding points are found by comparing local patches of the images. Selection among competing matches is performed using a winner-take-all scheme. The algorithm accommodates many different image transformations uniformly, with similar results, from brightness to edges. The optical flow computed from different image transformations, such as edge detection and direct brightness computation, can be simply combined. The algorithm is easily implemented using local operations on a fine-grained computer, and has been implemented on a Connection Machine. Experiments with natural images show that the scheme is effective and robust against noise. The algorithm leads to dense optical flow fields; in addition, information from matching facilitates segmentation.

Book
01 Jan 1988
TL;DR: A novel algorithm for three-dimensional edge detection is proposed, which is an extension to the 3D case of the optimal 2D edge detector recently introduced by R. Deriche (1987).
Abstract: A novel algorithm for three-dimensional edge detection is proposed. This method is an extension to the 3D case of the optimal 2D edge detector recently introduced by R. Deriche (1987). The authors report better theoretical and experimental performance than some classical approaches used previously. Experimental results obtained on magnetic-resonance images and on echographic images are shown. It is pointed out that this approach can be used to detect edges in other multidimensional data, for instance, 2D+t or 3D+t images.

Journal ArticleDOI
TL;DR: It is shown that the second derivative of the contrast function at the critical point is related to the classification of the associated edge as being phantom or authentic: the contrast of authentic edges decreases with filter scale, while the contrast of phantom edges increases with scale.
Abstract: The process of detecting edges in a one-dimensional signal by finding the zeros of the second derivative of the signal can be interpreted as the process of detecting the critical points of a general class of contrast functions that are applied to the signal. It is shown that the second derivative of the contrast function at the critical point is related to the classification of the associated edge as being phantom or authentic. The contrast of authentic edges decreases with filter scale, while the contrast of phantom edges is shown to increase with scale. As the filter scale increases, an authentic edge must either turn into a phantom edge or join with a phantom edge and vanish. The points in the scale space at which these events occur are seen to be singular points of the contrast function. Using ideas from singularity, or catastrophe, theory, the scale map contours near these singular points are found to be either vertical or parabolic.
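The underlying edge model, zeros of the second derivative of a smoothed 1-D signal, can be sketched as follows; Gaussian smoothing and the discrete second difference are assumptions of this sketch, which does not attempt the paper's phantom/authentic classification.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def second_derivative_zero_crossings(signal, sigma):
    """Candidate edge locations: indices where the second difference of
    the Gaussian-smoothed signal changes sign."""
    smooth = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    d2 = np.diff(smooth, 2)   # d2[i] approximates curvature at i + 1
    return [i + 1 for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]

step = np.concatenate([np.zeros(30), np.ones(30)])  # edge between 29 and 30
crossings = second_derivative_zero_crossings(step, sigma=2.0)
```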

Journal ArticleDOI
TL;DR: A generalised technique for selecting thresholds of edge strength maps from theoretical considerations of the known noise statistics of the image is derived and has been extended for use with combinations of edge operators.

Journal ArticleDOI
TL;DR: It is shown that spatial stability analysis leads to an edge detection scheme with good noise resilience characteristics and that it can lead to improvements in “shape from texture” methods.
Abstract: The scale-space S(x, σ) of a signal I(x) is defined as the space of the zero-crossings from {∇²G(σ) ∗ I(x)}, where G is a Gaussian filter. We present a new method for parsing scale-space, spatial stability analysis, that allows the localization of region boundaries from scale space. Spatial stability analysis is based on the observation that zero-crossings of region boundaries remain spatially stable over changes in filter scale. It is shown that spatial stability analysis leads to an edge detection scheme with good noise resilience characteristics and that it can lead to improvements in “shape from texture” methods.

Journal ArticleDOI
TL;DR: Analysis of the optical and electronic parts of modern solid-state cameras shows that it is possible to determine the exact location of an edge to subpixel accuracy, independently of the system's modulation transfer function.
Abstract: A common problem in optical metrology is the determination of the exact location of an edge (a black/white transition). The use of cameras for this task has been restricted in the past because of their limited number of pixels and the lack of methods for subpixel accuracy edge detection. Analysis of the optical and electronic parts of modern solid-state cameras shows that it is possible to determine the exact location of an edge to subpixel accuracy, independently of the system's modulation transfer function. A novel algorithm for this purpose is presented together with an expression for the precision of the edge location as a function of pixel noise and edge step height. Experimental verification was carried out using a modified CCD camera coupled to an intelligent framestore (smart camera). Under optimum conditions the measured accuracy for the edge position was better than 1/140 of the pixel period, corresponding to less than 120 nm on the sensor surface of the camera. Applications of this novel method in metrology and micrometrology are discussed.
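The basic notion of subpixel edge location can be illustrated by interpolating a sampled scan-line profile at its half-amplitude level. This is a deliberately simple stand-in for the paper's algorithm (which accounts for the camera's transfer characteristics); the ramp profile and the half-level criterion are assumptions of the sketch.

```python
import numpy as np

def subpixel_edge(profile, level=None):
    """Locate a black/white transition to subpixel accuracy by linear
    interpolation of the profile at the half-amplitude level. Assumes the
    transition is not at the very first sample."""
    profile = np.asarray(profile, dtype=float)
    if level is None:
        level = 0.5 * (profile.min() + profile.max())
    i = int(np.argmax(profile >= level))    # first sample at/above level
    return (i - 1) + (level - profile[i - 1]) / (profile[i] - profile[i - 1])

# blurred edge sampled on a pixel grid; true edge at x = 4.3
x = np.arange(10)
profile = np.clip((x - 4.3) / 2.0 + 0.5, 0.0, 1.0)   # linear ramp edge
loc = subpixel_edge(profile)
```

For this noiseless ramp the interpolated location is exact; in practice the attainable precision is limited by pixel noise and edge step height, as the abstract's accuracy expression describes.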

Journal ArticleDOI
TL;DR: A technique has been developed which calculates intersection points of edge gradient vectors, storing response weights in one plane and the product of weights and radii in another, effectively eliminating the radius dimension of the parameter space.
Abstract: The Hough transform for circle detection requires the search of a 3-dimensional parameter space for circles of arbitrary radius. A technique has been developed which calculates intersection points of edge gradient vectors, storing response weights in one plane and the product of weights and radii in another, effectively eliminating the radius dimension of the parameter space. The result is a significant saving in both search time and storage in the parameter-space accumulator.
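The two-plane accumulation can be sketched as follows; the synthetic circle, unit vote weights, and the radius range are assumptions of this illustration rather than details from the paper.

```python
import numpy as np

def circle_votes(edge_points, gradients, shape, r_max=20):
    """Each edge point votes along its (unit) gradient direction. One
    plane accumulates vote weights, a second accumulates weight * radius,
    so the mean radius at a peak is radii/weights: the 3-D (x, y, r)
    parameter space collapses to two 2-D planes."""
    weights = np.zeros(shape)
    radii = np.zeros(shape)
    for (y, x), (gy, gx) in zip(edge_points, gradients):
        n = np.hypot(gy, gx)
        for r in range(1, r_max):
            cy = int(np.rint(y + r * gy / n))
            cx = int(np.rint(x + r * gx / n))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                weights[cy, cx] += 1.0
                radii[cy, cx] += r
    return weights, radii

# synthetic circle of radius 5 centred at (15, 15); gradients point inward
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(15 + 5 * np.sin(t), 15 + 5 * np.cos(t)) for t in theta]
grads = [(15 - p[0], 15 - p[1]) for p in pts]
weights, radii = circle_votes(pts, grads, (31, 31))
cy, cx = np.unravel_index(np.argmax(weights), weights.shape)
```

At the peak of the weight plane, `radii[cy, cx] / weights[cy, cx]` recovers the radius, so no radius dimension needs to be stored or searched.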

Journal ArticleDOI
TL;DR: The Sobel operator was found to be superior to the Roberts operator in edge enhancement and a theoretical explanation for the superior performance was developed based on the concept of analyzing the x and y Sobel masks as linear filters.
Abstract: Reference is made to the Sobel and Roberts gradient operators used to enhance image edges. Overall, the Sobel operator was found to be superior to the Roberts operator in edge enhancement. A theoretical explanation for the superior performance of the Sobel operator was developed based on the concept of analyzing the x and y Sobel masks as linear filters. By applying pill-box, Gaussian, or median filtering prior to applying a gradient operator, noise was reduced. The pill-box and Gaussian filters were more computationally efficient than the median filter with approximately equal effectiveness in noise reduction.
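For reference, the two operators being compared can be applied directly. Note that the Sobel x mask factors into a [1, 2, 1] smoothing filter perpendicular to a [-1, 0, 1] difference, which is the linear-filter view underlying the analysis; the sliding-window implementation below is illustrative.

```python
import numpy as np

ROBERTS = [np.array([[1, 0], [0, -1]], dtype=float),
           np.array([[0, 1], [-1, 0]], dtype=float)]
# Sobel x = outer([1, 2, 1], [-1, 0, 1]): smoothing across, differencing along
SOBEL = [np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float),
         np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)]

def gradient_magnitude(image, masks):
    """Gradient magnitude from a pair of orthogonal masks, computed with
    an explicit (slow but transparent) sliding window."""
    k = masks[0].shape[0]
    out = np.zeros((image.shape[0] - k + 1, image.shape[1] - k + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            win = image[r:r + k, c:c + k]
            out[r, c] = np.hypot((win * masks[0]).sum(), (win * masks[1]).sum())
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # vertical unit step edge
sobel_mag = gradient_magnitude(img, SOBEL)
roberts_mag = gradient_magnitude(img, ROBERTS)
```

The built-in [1, 2, 1] smoothing is one reason the Sobel response to a clean step (4.0 here) stands further above noise than the Roberts response (sqrt(2)).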

Patent
Mitsuru Maeda1
22 Apr 1988
TL;DR: In this article, an apparatus for encoding color image data containing luminance information and color information in the unit of each block of predetermined size, by detecting the edge of the colour image data in the block based on the luminance and forming a code of a fixed length in response to the edge detection.
Abstract: An apparatus for encoding color image data containing luminance information and color information in the unit of each block of predetermined size, by detecting the edge of the color image data in the block based on the luminance information and forming a code of a fixed length in response to the edge detection. The luminance information and the color information are coded into codes of different lengths according to the presence or absence of an edge, and codes are formed from the code of fixed length and codes of different lengths.

Journal ArticleDOI
Vinciane Lacroix1
TL;DR: A postprocessing method called learning edges is proposed as a refinement of a three-module strategy whose intermediate module generalizes the nonmaximum-deletion algorithm: it enables one to postpone some deletions to the last module, where contextual information is available, and it transmits the local edge direction in order to guide the contour following.
Abstract: The first module is a parallel process computing local edge strength and direction, while the last module is a sequential process following edges. The originality of the overall method resides in the intermediate module, which is seen as a generalization of the nonmaximum-deletion algorithm. The role of this module is twofold: It enables one to postpone some deletions to the last module, where contextual information is available, and it transmits the local edge direction in order to guide the contour following. A postprocessing method called learning edges is proposed as a refinement of the method. The binary edge images extracted from various gray-level images illustrate the power of the strategy.

Journal ArticleDOI
TL;DR: In this article, a microprocessor-controlled line scan camera system for measuring edges and lengths of steel strips is described, and the problem of subpixel edge detection and estimation in a line image is considered.
Abstract: A microprocessor-controlled line scan camera system for measuring edges and lengths of steel strips is described, and the problem of subpixel edge detection and estimation in a line image is considered. The edge image is assumed to change gradually in its intensity, and the true edge location may be between pixels. Detection and estimation of edges are based on measurement of gray values of the line images at a limited number of pixels. A two-stage approach is presented. At the first stage, a computationally simple discrete-template-matching method is used to place the estimated edge point at the nearest pixel value. Three second-stage methods designed for subpixel estimation are examined. The modified Chebyshev polynomial and the three-point interpolation methods do not require much knowledge of the shape of the edge intensity. If the functional form of the edge is known, a least-squares estimation method may be used for better accuracy. In the case of nonstationary Poisson noise, a recursive maximum-likelihood method for the first-stage edge detection, followed by subpixel estimation, is proposed.
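The three-point interpolation idea in the second stage can be sketched with a parabolic fit around the first-stage pixel. The response function below is an illustrative stand-in; the paper's templates and noise model are not reproduced.

```python
import numpy as np

def parabolic_subpixel(response, i):
    """Second-stage refinement: fit a parabola through the response at
    i-1, i, i+1 and return the subpixel position of its vertex."""
    y0, y1, y2 = response[i - 1], response[i], response[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

# template-matching response peaking between pixels 5 and 6
x = np.arange(10, dtype=float)
response = -(x - 5.4) ** 2            # true peak at 5.4
i = int(np.argmax(response))          # first stage: nearest pixel
refined = parabolic_subpixel(response, i)
```

For an exactly parabolic response the vertex formula is exact; for real edge profiles the residual error depends on how well three samples capture the peak shape, which is why the least-squares variant can do better when the edge's functional form is known.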

Journal ArticleDOI
01 Jan 1988-Scanning
TL;DR: In this paper, the authors demonstrate the magnitude of the errors introduced by beam/specimen interactions and the mode of signal detection at a variety of beam acceleration voltages and discuss their relationship to precise and accurate metrology.
Abstract: The basic premise underlying the use of the scanning electron microscope (SEM) for linewidth metrology in semiconductor research and production applications is that the video image acquired, displayed, analyzed, and ultimately measured accurately reflects the structure of interest. However, it has been clearly demonstrated that image distortions can be caused by the detected secondary electrons not originating at the point of impact of the primary electron beam and by the type and location of the secondary electron detector. These effects and their contributions to the actual image or linewidth measurement have not been fully evaluated. Effects due to uncertainties in the actual location of electron origination do not affect pitch (line center-to-center or similar-edge-location-to-similar-edge-location spacing) measurements as long as the lines have the same edge geometries and similar profiles of their images in the SEM. However, in linewidth measurement applications, the effects of edge location uncertainty are additive and thus give twice the edge detection error to the measured width. The basic intent of this work is to demonstrate the magnitude of the errors introduced by beam/specimen interactions and the mode of signal detection at a variety of beam acceleration voltages and to discuss their relationship to precise and accurate metrology.

Journal ArticleDOI
TL;DR: A new algorithm for smoothing synthetic aperture radar (SAR) images, which is based on the estimation of the most homogeneous neighbourhood around each image pixel, is presented.

Proceedings ArticleDOI
07 Jun 1988
TL;DR: The authors present the major ideas behind the use of scale space and anisotropic diffusion for edge detection, show that anisotropic diffusion can enhance edges, suggest a network implementation of anisotropic diffusion, and provide design criteria for obtaining networks performing scale space and edge detection.
Abstract: Detecting edges of objects in their images is a basic problem in computational vision. The authors present the major ideas behind the use of scale space and anisotropic diffusion for edge detection, show that anisotropic diffusion can enhance edges, suggest a network implementation of anisotropic diffusion, and provide design criteria for obtaining networks performing scale space and edge detection. The results of a software implementation are shown.
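The edge-preserving behavior of anisotropic diffusion can be sketched in the style of the Perona-Malik scheme this line of work is associated with; the conduction function, `kappa`, `lam`, and the iteration count below are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, lam=0.2):
    """Perona-Malik-style diffusion: the conduction coefficient
    g = exp(-(|grad|/kappa)^2) is tiny across strong gradients, so
    smoothing is suppressed at edges while noise in flat regions is
    diffused away. lam <= 0.25 keeps the 4-neighbour update stable."""
    img = img.astype(float).copy()

    def g(d):
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # one-sided differences to the four neighbours (Neumann border)
        dn = np.roll(img, 1, 0) - img; dn[0, :] = 0
        ds = np.roll(img, -1, 0) - img; ds[-1, :] = 0
        dw = np.roll(img, 1, 1) - img; dw[:, 0] = 0
        de = np.roll(img, -1, 1) - img; de[:, -1] = 0
        img += lam * (g(dn) * dn + g(ds) * ds + g(dw) * dw + g(de) * de)
    return img

img = np.zeros((16, 16)); img[:, 8:] = 1.0
img_noisy = img + 0.05 * np.random.default_rng(1).normal(size=img.shape)
out = anisotropic_diffusion(img_noisy)
```

After diffusion the flat regions are visibly smoother while the unit step between the two halves is essentially intact, which is the "smoothing without blurring edges" property the abstract refers to.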

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between information and fidelity in image gathering and restoration, and found that the combined process of image retrieval and reconstruction behaves more as a communication channel in that the informationally optimized design of the image-gathering system tends to maximize the fidelity of optimally restored representations of the input.
Abstract: Image gathering and processing are assessed in terms of information and fidelity, and the relationship between these two figures of merit is examined. It is assumed that the system is linear and isoplanatic and that the signal and noise amplitudes are Gaussian, wide-sense stationary, and statistically independent. Within these constraints, it is found that the combined process of image gathering and reconstruction (which is intended to reproduce the output of the image-gathering system) behaves as optical, or photographic, image formation in that the informationally optimized design of the image-gathering system ordinarily does not maximize the fidelity of the reconstructed image. The combined process of image gathering and restoration (which is intended to reproduce the input of the image-gathering system) behaves more as a communication channel in that the informationally optimized design of the image-gathering system tends to maximize the fidelity of optimally restored representations of the input.

Proceedings ArticleDOI
05 Jun 1988
TL;DR: The authors provide theoretical justification for the use of zero crossings of residuals (between a filtered image and the original) for edge detection in smoothed images obtained by convolution with a Gaussian.
Abstract: The authors provide theoretical justification for the use of zero crossings of residuals (between a filtered image and the original) for edge detection. The smoothed version is obtained by bilinear interpolation as a result of two-dimensional discrete regularization of subsampled images. The method is also applicable to smoothed images obtained by convolution with a Gaussian. Examples of applications of the method are shown for three kinds of pictures: aerial photographs, low-quality pictures of tools, and a high-quality picture of a face. The same parameters are used in all the examples. In addition, they show examples of the results of one of the Canny edge detectors on the same pictures.
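The residual zero-crossing idea can be sketched in one dimension. Here a Gaussian filter stands in for the paper's regularization/bilinear smoothing (the abstract notes a Gaussian smoother is also admissible), so the particular smoother and sigma are assumptions of the sketch.

```python
import numpy as np

def residual_zero_crossings(signal, sigma=2.0):
    """Smooth the signal, form the residual (original - smoothed), and
    mark its sign changes as candidate edge locations."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    smoothed = np.convolve(signal, g, mode="same")
    residual = signal - smoothed
    return [i for i in range(len(residual) - 1)
            if residual[i] * residual[i + 1] < 0]

step = np.concatenate([np.zeros(25), np.ones(25)])  # edge between 24 and 25
crossings = residual_zero_crossings(step)
```

For a step edge the residual is negative just below the transition and positive just above it, so its sign change pins the edge between the two samples.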

Journal ArticleDOI
TL;DR: In this version, curvatures are represented by straight lines connecting tree-, edge-, and endpoints in the right sequence, and the three basic feature points are detected using an efficient algorithm.

Book ChapterDOI
01 Jan 1988
TL;DR: In the course of implementing low-level (image to image) vision algorithms on Warp, the authors have understood the mapping of this class of algorithms well enough so that the programming of these algorithms is now a straightforward and stereotypical task.
Abstract: In computer vision, the first, and often most time-consuming, step in image processing is image-to-image operations. In this step, an input image is mapped into an output image through some local operation that applies to a window around each pixel of the input image. Algorithms that fall into this class include: edge detection, smoothing, convolutions in general, contrast enhancement, color transformations, and thresholding. Collectively, we call these operations low-level vision. Low-level vision is often time-consuming simply because images are quite large—a typical size is 512 × 512 pixels, so the operation must be applied 262,144 times.

Journal ArticleDOI
TL;DR: Three major contributions are reported: a method for sensing object surface patches without having to solve uniquely for stripe labels; the use of both an intensity image and a striped image, allowing scenes to be represented by detected edges along with 3D surface patches; and a pose-clustering algorithm, a uniform technique to accumulate matching evidence for recognition while averaging out substantial errors of pose.
Abstract: Work directed toward the development of a vision system for bin picking of rigid 3D objects is reported. Any such system must have components for sensing, feature extraction, modeling, and matching. A structured-light system which attempts to deliver a rich 2½D representation of the scene is described. Surface patches are evident as connected sets of stripes whose 3D coordinates are computed by means of triangulation and constraint propagation. Object edges are detected by the intersection of surface patches or by backprojecting image edges to intersect with the patches. Two matching paradigms are given for drawing correspondence between structures in the scene representation and structures in models. Three major contributions are reported: a method for sensing object surface patches without having to solve uniquely for stripe labels; the use of both an intensity image and a striped image, allowing scenes to be represented by detected edges along with 3D surface patches; and a pose-clustering algorithm, a uniform technique to accumulate matching evidence for recognition while averaging out substantial errors of pose.

Proceedings ArticleDOI
05 Jun 1988
TL;DR: The authors propose a simple test to detect true edges, propose techniques that combine the results of the convolution of two LoG operators of different deviations to correct localization bias, and present an implementation of these techniques for edges in 2-D images.
Abstract: The Laplacian of Gaussian (LoG) operator is one of the most popular operators used in edge detection. This operator, however, has some problems: zero-crossings do not always correspond to edges, and edges with an asymmetric profile introduce a systematic bias between edge and zero-crossing locations. The authors offer solutions to these two problems. First, for one-dimensional signals, such as slices from images, they propose a simple test to detect true edges, and, for the problem of bias, they propose different techniques: the first one combines the results of the convolution of two LoG operators of different deviations, whereas the others sample the convolution with a single LoG filter at two points besides the zero-crossing. In addition to localization, these methods allow them to further characterize the shape of the edge. The authors present an implementation of these techniques for edges in 2-D images.
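A minimal sketch of LoG filtering and zero-crossing detection on a step edge follows; the kernel sampling, sigma, and sign-change test are illustrative choices, and the sketch does not implement the paper's bias-correction or true-edge test.

```python
import numpy as np

def log_kernel(sigma):
    """Sampled Laplacian-of-Gaussian kernel, adjusted to zero mean so a
    constant region gives (near-)zero response."""
    radius = int(4 * sigma)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def conv2_valid(img, k):
    """Explicit valid-mode 2-D convolution, kept simple for clarity."""
    out = np.zeros((img.shape[0] - k.shape[0] + 1,
                    img.shape[1] - k.shape[1] + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r + k.shape[0], c:c + k.shape[1]] * k).sum()
    return out

img = np.zeros((20, 20)); img[:, 10:] = 1.0   # vertical step edge
response = conv2_valid(img, log_kernel(1.5))
# sign changes between horizontally adjacent responses mark edge candidates
sign_change = response[:, :-1] * response[:, 1:] < 0
```

On this symmetric step the zero crossing sits exactly on the edge; with an asymmetric edge profile it would shift, which is the bias the paper's two-LoG and two-sample techniques are designed to remove.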