
Showing papers on "Edge detection published in 1991"


Journal ArticleDOI
TL;DR: The authors present an efficient architecture to synthesize filters of arbitrary orientations from linear combinations of basis filters, allowing one to adaptively steer a filter to any orientation, and to determine analytically the filter output as a function of orientation.
Abstract: The authors present an efficient architecture to synthesize filters of arbitrary orientations from linear combinations of basis filters, allowing one to adaptively steer a filter to any orientation, and to determine analytically the filter output as a function of orientation. Steerable filters may be designed in quadrature pairs to allow adaptive control over phase as well as orientation. The authors show how to design and steer the filters and present examples of their use in the analysis of orientation and phase, angularly adaptive filtering, edge detection, and shape from shading. One can also build a self-similar steerable pyramid representation. The same concepts can be generalized to the design of 3-D steerable filters.
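The steering property described above can be made concrete with the simplest steerable pair, the x- and y-derivatives of a 2D Gaussian: the derivative filter at any orientation theta is exactly cos(theta) times one basis filter plus sin(theta) times the other. A minimal sketch (filter size and sigma are illustrative choices, not from the paper):

```python
import numpy as np

def gaussian_dx_dy(size=9, sigma=1.5):
    """Basis filters: x- and y-derivatives of a 2D Gaussian."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -x / sigma**2 * g, -y / sigma**2 * g

def steer(gx, gy, theta):
    """Synthesize the derivative-of-Gaussian filter oriented at theta
    as a linear combination of the two basis filters."""
    return np.cos(theta) * gx + np.sin(theta) * gy

gx, gy = gaussian_dx_dy()
g45 = steer(gx, gy, np.pi / 4)  # filter tuned to a 45-degree orientation
```

Because steering is a linear combination, an image needs to be convolved only with the basis filters; the response at any orientation is then the same combination of the basis responses.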

3,365 citations


Journal ArticleDOI
TL;DR: It is shown that if just a small subset of the edge points in the image, selected at random, is used as input for the Hough Transform, the performance is often only slightly impaired, thus the execution time can be considerably shortened.
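The speed-up described above is easy to sketch: draw a random subset of the edge points and run the standard (rho, theta) line Hough accumulation on that subset only. A hedged illustration (sample_frac and the grid resolutions are illustrative parameters, not the paper's):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0, sample_frac=0.2, seed=None):
    """Line Hough transform accumulated over a random subset of the
    edge points (the probabilistic speed-up described above)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    k = max(1, int(sample_frac * len(pts)))
    pts = pts[rng.choice(len(pts), size=k, replace=False)]
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1.0
    acc = np.zeros((int(2 * max_rho / rho_res) + 1, n_theta), int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # normal form of a line
        acc[((rho + max_rho) / rho_res).astype(int), np.arange(n_theta)] += 1
    return acc, thetas, max_rho
```

With sample_frac well below 1, accumulation time drops proportionally while strong peaks in the accumulator survive, which is the paper's observation.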

640 citations


Journal ArticleDOI
TL;DR: Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine.
Abstract: A method to smooth a signal while preserving discontinuities is presented. This is achieved by repeatedly convolving the signal with a very small averaging mask weighted by a measure of the signal continuity at each point. Edge detection can be performed after a few iterations, and features extracted from the smoothed signal are correctly localized (hence, no tracking is needed). This last property allows the derivation of a scale-space representation of a signal using the adaptive smoothing parameter k as the scale dimension. The relation of this process to anisotropic diffusion is shown. A scheme to preserve higher-order discontinuities is proposed, and results on range images are presented. Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine. Various applications of adaptive smoothing such as edge detection, range image feature extraction, corner detection, and stereo matching are discussed.
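A one-dimensional sketch of the iteration described above, assuming a Gaussian continuity weight that vanishes at large gradients (the specific weight function is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def adaptive_smooth(signal, k=5.0, iterations=10):
    """Repeatedly average each sample with its neighbours, weighted by a
    continuity measure that vanishes at large gradients, so strong
    discontinuities survive the smoothing."""
    s = np.asarray(signal, float).copy()
    for _ in range(iterations):
        w = np.exp(-np.gradient(s)**2 / (2 * k**2))  # continuity weight
        num = np.convolve(w * s, [1.0, 1.0, 1.0], mode='same')
        den = np.convolve(w, [1.0, 1.0, 1.0], mode='same')
        s = num / den  # weighted local average
    return s
```

Smooth regions are averaged as usual, while a large step receives near-zero weight on both sides and stays in place across iterations, which is why no edge tracking across scales is needed.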

436 citations


Journal ArticleDOI
TL;DR: An extension of edge detectors based on second-order differential operators to the case of multiple band (color) images is proposed, and a local directional measure of multispectral contrast is defined.

289 citations


Journal ArticleDOI
TL;DR: It is argued that the best way to model an edge is by assuming an ideal mathematical function passed through a low-pass filter and immersed in noise.
Abstract: It is argued that the best way to model an edge is by assuming an ideal mathematical function passed through a low-pass filter and immersed in noise. Using techniques similar to those developed by J. Canny (1983, 1986) and L.A. Spacek (1986), optimal filters are derived for ramp edges of various slopes. The optimal nonrecursive filter for ideal step edges is then derived as a limiting case of the filters for ramp edges. Because there are no true step edges in real images, edge detection is improved when the ramp filter is used instead of the filters developed for step edges. For practical purposes, some convolution masks are given which can be used directly for edge detection without the need to go into the details of the subject.

214 citations


Journal ArticleDOI
TL;DR: Two decomposition theorems for the z-transform of least squares approximation systems are presented and one facilitates the determination of their impulse response, while the other allows an efficient implementation through successive causal and anticausal recursive filtering.
Abstract: Least squares approximation problems that are regularized with specified highpass stabilizing kernels are discussed. For each problem, there is a family of discrete regularization filters (R-filters) which allow an efficient determination of the solutions. These operators are stable symmetric lowpass filters with an adjustable scale factor. Two decomposition theorems for the z-transform of such systems are presented. One facilitates the determination of their impulse response, while the other allows an efficient implementation through successive causal and anticausal recursive filtering. A case of special interest is the design of R-filters for the first- and second-order difference operators. These results are extended for two-dimensional signals and, for illustration purposes, are applied to the problem of edge detection. This leads to a very efficient implementation (8 multiplies+10 adds per pixel) of the optimal Canny edge detector based on the use of a separable second-order R-filter.
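The causal/anticausal decomposition strategy can be illustrated with the simplest case: a first-order exponential smoother run forward and then backward over the signal. This stands in for the paper's R-filters (it is not their design) and shows why the cost per sample is constant regardless of the effective smoothing scale:

```python
import numpy as np

def smooth_recursive(x, alpha=0.25):
    """Symmetric lowpass smoothing implemented as a causal pass followed
    by an anticausal pass; a first-order exponential filter is used
    purely as an illustration of the recursive-filtering idea."""
    x = np.asarray(x, float)
    y = np.empty_like(x)
    acc = x[0]
    for n in range(len(x)):              # causal (left-to-right) recursion
        acc = alpha * x[n] + (1 - alpha) * acc
        y[n] = acc
    acc = y[-1]
    for n in range(len(x) - 1, -1, -1):  # anticausal (right-to-left) pass
        acc = alpha * y[n] + (1 - alpha) * acc
        y[n] = acc
    return y
```

Each pass costs one multiply-add per sample whatever alpha is, unlike a convolution mask whose length grows with scale; this is the source of the fixed per-pixel operation count quoted above.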

202 citations


Journal ArticleDOI
TL;DR: The purpose was to compare measurements of endocardial area manually traced from conventional echocardiograms with those obtained with the real-time automated edge detection system in normal subjects.
Abstract: BACKGROUNDAutomated edge detection of endocardial borders in echocardiograms provides objective, reproducible estimation of cavity area; however, most methods have required off-line analysis. A recently developed prototype echocardiographic imaging system permits real-time automated edge detection during imaging and thus, the potential for measurement of cyclic changes in cavity area and the assessment of left ventricular function on-line. Our purpose was to compare measurements of endocardial area manually traced from conventional echocardiograms with those obtained with the real-time automated edge detection system in normal subjects.METHODS AND RESULTSTwo training sets of images were used to establish optimal methods of gain setting; the settings were then evaluated in a test set of images. In the high-gain training group (n = 8 subjects, 119 images), gain settings were adjusted sufficiently high to display at least 90% of the endocardial border. Manually drawn and real-time area measurements correlate...

194 citations


Book
01 Aug 1991
TL;DR: An introduction to computer vision illumination and fixturing sensors image acquisition and representation fundamentals of digital image processing image analysis the segmentation problem 2-D shape description and recognition 3-D object representations robot programming and robot vision bin picking trends and aspirations.
Abstract: An introduction to computer vision illumination and fixturing sensors image acquisition and representation fundamentals of digital image processing image analysis the segmentation problem 2-D shape description and recognition 3-D object representations robot programming and robot vision bin picking trends and aspirations.

178 citations


Journal ArticleDOI
TL;DR: The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours, and a technique is introduced for partitioning such contours into constant curvature segments.
Abstract: The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours. A technique is introduced for partitioning such contours into constant curvature segments. A nonlinear 'blip' filter matched to the impairment signature of the curvature computation process, an overlapped voting scheme, and a sequential contiguous segment extraction mechanism are used. This technique is insensitive to reasonable changes in algorithm parameters and robust to noise and minor viewpoint-induced distortions in the contour shape, such as those encountered between stereo image pairs. The results vary smoothly with the data, and local perturbations induce only local changes in the result. Robustness and insensitivity are experimentally verified.

143 citations


Journal ArticleDOI
TL;DR: This method is an extension to the 3D case of the optimal 2D edge detector recently introduced by R. Deriche and presents better theoretical and experimental performance than some classical approaches used to date.
Abstract: This paper proposes a new algorithm for three-dimensional edge detection. This method is an extension to the 3D case of the optimal 2D edge detector recently introduced by R. Deriche (Int. J. Comput. Vision 1, 1987). It presents better theoretical and experimental performance than some classical approaches used to date. Experimental results obtained on magnetic resonance images and on echographic images are shown. We stress that this approach can be used to detect edges in other multidimensional data, for instance 2D − t or 3D − t images.

142 citations



Journal ArticleDOI
TL;DR: A new technique for directional analysis of linear patterns in images is proposed based on the notion of scale space, and is illustrated through applications to synthetic patterns and to scanning electron microscope images of collagen fibrils in rabbit ligaments.
Abstract: In this paper a new technique for directional analysis of linear patterns in images is proposed based on the notion of scale space. A given image is preprocessed by a sequence of filters which are second derivatives of 2-D Gaussian functions with different scales. This gives a set of zero crossing maps (the scale space) from which a stability map is generated. Significant linear patterns are detected from measurements on the stability map. Information regarding orientation of the linear patterns in the image and the area covered by the patterns in specific directions is then computed. The performance of the method is illustrated through applications to synthetic patterns and to scanning electron microscope images of collagen fibrils in rabbit ligaments.

01 Jan 1991
TL;DR: This thesis proposes that the canonical way to construct a scale-space for discrete signals is by convolution with a kernel called the discrete analogue of the Gaussian kernel, or equivalently by solving a semi-discretized version of the diffusion equation.
Abstract: This thesis, within the subfield of computer science known as computer vision, deals with the use of scale-space analysis in early low-level processing of visual information. The main contributions comprise the following five subjects: The formulation of a scale-space theory for discrete signals. Previously, the scale-space concept has been expressed for continuous signals only. We propose that the canonical way to construct a scale-space for discrete signals is by convolution with a kernel called the discrete analogue of the Gaussian kernel, or equivalently by solving a semi-discretized version of the diffusion equation. Both the one-dimensional and two-dimensional cases are covered. An extensive analysis of discrete smoothing kernels is carried out for one-dimensional signals and the discrete scale-space properties of the most common discretizations to the continuous theory are analysed. A representation, called the scale-space primal sketch, which gives a formal description of the hierarchical relations between structures at different levels of scale. It is aimed at making information in the scale-space representation explicit. We give a theory for its construction and an algorithm for computing it. A theory for extracting significant image structures and determining the scales of these structures from this representation in a solely bottom-up data-driven way. Examples demonstrating how such qualitative information extracted from the scale-space primal sketch can be used for guiding and simplifying other early visual processes. Applications are given to edge detection, histogram analysis and classification based on local features. Among other possible applications one can mention perceptual grouping, texture analysis, stereo matching, model matching and motion. 
A detailed theoretical analysis of the evolution properties of critical points and blobs in scale-space, comprising drift velocity estimates under scale-space smoothing, a classification of the possible types of generic events at bifurcation situations and estimates of how the number of local extrema in a signal can be expected to decrease as function of the scale parameter. For two-dimensional signals the generic bifurcation events are annihilations and creations of extremum-saddle point pairs. Interpreted in terms of blobs, these transitions correspond to annihilations, merges, splits and creations. Experiments on different types of real imagery demonstrate that the proposed theory gives perceptually intuitive results.
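The discrete analogue of the Gaussian mentioned in the thesis summary has the closed form T(n; t) = e^(-t) I_n(t), with I_n the modified Bessel function of integer order. A short sketch (the truncation radius is an illustrative choice):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_n

def discrete_gaussian(t, radius=None):
    """Discrete analogue of the Gaussian kernel: T(n; t) = exp(-t) * I_n(t),
    the canonical discrete scale-space kernel referred to above."""
    if radius is None:
        radius = int(4 * np.sqrt(t)) + 4  # illustrative truncation
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)  # ive(v, z) = iv(v, z) * exp(-z) for z > 0

kernel = discrete_gaussian(2.0)
```

Unlike a sampled continuous Gaussian, this kernel satisfies the discrete diffusion equation exactly, so repeated convolution never creates new local extrema in the smoothed signal.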

Journal ArticleDOI
TL;DR: It is shown that the three-dimensional edge tracking algorithm extracts additional edges not provided by the filtering stage without introducing spurious edges.

Patent
23 Apr 1991
TL;DR: In this paper, an image pickup apparatus for a television and a depth-of-field control apparatus used in the same are described. The image composition is controlled by a composition control circuit made up of a circuit for detecting individual powers of image signals corresponding to a plurality of different picture images, a circuit that compares the detected powers with each other, and a circuit detecting the position of an edge included in one of the image signals, wherein a control signal for the image composition produced by the power comparison circuit is compensated by the edge position information obtained by the edge detection circuit.
Abstract: An image pickup apparatus for a television and a depth-of-field control apparatus used in the same. Image signals corresponding to a plurality of picture images different in focal point or length position are obtained by a mechanism for changing a focal point or length position to produce a new image signal by composing these image signals through a composition circuit, and motion information of an object is obtained by a circuit for detecting a moving portion in the object to control the image composition by the motion information. The focal point or length position is moved in synchronism with an integer multiple of a vertical scanning period of the television. The image signals corresponding to the plurality of picture images different in focal point or length position are obtained within one vertical scanning period determined by the system of the television. The amount of movement of the focal point or length position is controlled in conjunction with a value of a lens aperture of the camera lens. The image composition is controlled by a composition control circuit made up of a circuit for detecting individual powers of image signals corresponding to a plurality of different picture images, a circuit for comparing the detected powers with each other, and a circuit for detecting the position of an edge included in one of the image signals, wherein a control signal for the image composition produced by the power comparison circuit is compensated by the edge position information obtained by the edge detection circuit.

Journal ArticleDOI
TL;DR: Filter parameters and performance criteria are presented for several designs, and experimental results are presented on a variety of images which demonstrate the behavior in the presence of very adverse noise, with respect to scale, and as compared to other “optimal” IIR filters which have been reported.
Abstract: We present formal optimality criteria and a complete design methodology for a family of zero crossing based, infinite impulse response (recursive) edge detection filters. In particular, we adapt the optimality criteria proposed by Canny (IEEE Trans. Pattern Anal. Mach. IntelligencePAMI-8, 1986, 679–714) to filters designed to respond with a zero crossing in the output at an edge location and additionally to impulse responses which are (allowed to be) infinite in extent. The spurious response criterion is captured directly by an appropriate measure of filter spatial extent for infinite responses. Infinite duration impulse responses may be implemented efficiently with recursive filtering techniques and so require constant computation time with respect to scale. As we will show, we can achieve both superior performance and increased speed by designing directly for an infinite impulse response than by any of the proposed finite duration approaches. We also show that the optimal filter which responds with a zero crossing in its output may not be implemented by designing the optimal peak responding filter (similar to Canny) and taking an additional derivative. It is necessary to formulate the criteria and design for a zero crossing response from the outset, else optimality is sacrificed. Filter parameters and performance criteria are presented for several designs, and experimental results are presented on a variety of images which demonstrate the behavior in the presence of very adverse noise, with respect to scale, and as compared to other “optimal” IIR filters which have been reported.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: In this paper, a new method for detecting dominant points is presented, which does not require any input parameter, and the dominant points obtained by this method remain relatively the same even when the object curve is scaled or rotated.
Abstract: A new method for detecting dominant points is presented. It does not require any input parameter, and the dominant points obtained by this method remain relatively the same even when the object curve is scaled or rotated. In this method, for each boundary point, a support region is assigned to the point based on its local properties. Each point is then smoothed by a Gaussian filter with a width proportional to its determined support region. A significance measure for each point is then computed. Dominant points are finally obtained through nonmaximum suppression. Unlike other dominant point detection algorithms which are sensitive to scaling and rotation of the object curve, the new method will overcome this difficulty. Furthermore, it is robust in the presence of noise. The proposed new method is compared to a well-known dominant point detection algorithm in terms of the computational complexity and the approximation errors. 1. INTRODUCTION It has been suggested from the viewpoint of the human visual system [1] that dominant points along an object contour are rich in information content and are sufficient to characterize the object contour. The dominant points are the high curvature points along a digital curve that have important shape attributes. Many algorithms [2-16] have been suggested for detecting dominant points. They fall into two categories
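A heavily simplified sketch of that pipeline: Gaussian smoothing of the closed curve, a curvature significance measure, then nonmaximum suppression. A fixed smoothing width replaces the per-point support region of the actual method, so this illustrates the structure, not the algorithm itself:

```python
import numpy as np

def circular_gaussian(v, sigma):
    """Gaussian-smooth one coordinate sequence of a closed contour."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    ext = np.concatenate([v[-r:], v, v[:r]])  # wrap around: closed curve
    return np.convolve(ext, g, mode='valid')

def dominant_points(xs, ys, sigma=2.0, window=4, rel_thresh=0.3):
    """Smooth the contour, use |curvature| as the significance measure,
    and keep local maxima above a relative threshold (illustrative)."""
    xs = circular_gaussian(np.asarray(xs, float), sigma)
    ys = circular_gaussian(np.asarray(ys, float), sigma)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5  # curvature
    peaks = []
    for i in range(len(k)):  # nonmaximum suppression over a window
        lo, hi = max(0, i - window), min(len(k), i + window + 1)
        if k[i] == k[lo:hi].max() and k[i] > rel_thresh * k.max():
            peaks.append(i)
    return peaks
```

On a square contour the surviving points cluster at the four corners, the high-curvature points the abstract describes.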

Book
01 Oct 1991
TL;DR: This book discusses the image model, its applications, and some of the implications for decision-making in the domain of Frequency Domain Processing.
Abstract: Introduction The Image Model Image Acquisition Image Presentation Statistical Operations Spatial Operations and Transformations Segmentation and Edge Detection Morphological and Other Area Operations Finding Basic Shapes Labelling Lines and Regions Reasoning, Facts and Inferences Pattern Recognition and Training The Frequency Domain Applications of Frequency Domain Processing Image compression Texture Other Topics Applications Glossary Bibliography Acknowledgements

Journal ArticleDOI
TL;DR: This article proposes a controller that uses a tactile sensor in the feedback loop of a manipulator to track edges in real time, built on a hybrid controller with both position and force set points.
Abstract: Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. In this paper, we describe a controller that utilizes a Lord LTS 210 tactile sensor in the feedback loop of a manipulator to track edges in real time. In our control system the data from the tactile sensor is processed in two stages to determine the location of edges. The parameters of these edges are then used to generate a control signal to drive the manipulator. The edge tracker has been implemented on the CMU Direct Drive Arm II system. We describe both theory and experimental implementation of tactile edge detection and an edge tracking controller.

Journal ArticleDOI
TL;DR: The implementation method that is used is significantly superior to the classical ones in some aspects (e.g. computational time, memory storage) and experimental results show that the use of IIR filter preserves the detector properties.

Journal ArticleDOI
07 Jul 1991
TL;DR: In this paper, a technique for the construction of multi-scale representations of grey-level images is presented, which is based upon connecting singular points in the image with maximum gradient paths.
Abstract: We present a technique for the construction of multi-scale representations of grey-level images. Unlike conventional representations the scales are discrete as opposed to continuous and their level is solely determined by the data. The technique is based upon connecting singular points in the image with maximum gradient paths. We also describe two segmentation methods which use the maximum gradient paths generated during the construction of the multi-scale representation. In both segmentation techniques the paths are used to determine significant ridges and troughs. The first technique operates directly on the image, while the second technique uses the magnitude of the image derivative.

Patent
John L. Groezinger1
07 Feb 1991
TL;DR: In this paper, a system and method are described for delineating the edges of an object in an image frame formed of a two-dimensional array of pixels represented by a plurality of gray scale values.
Abstract: A system and method for delineating the edges of an object in an image frame formed of a two dimensional array of pixels represented by a plurality of gray scale values representing the gray scales of the pixels. A reference contrast level based on the distribution of contrast levels between contiguous pixels is established, and the contrast levels of pairs of pixels in square groupings of four contiguous pixels are compared with the reference contrast level. A two dimensional array of elements corresponding to the arrangement of the square groupings of pixels is formed in which each element has a first value only if the contrast level between a pair of pixels in a square grouping is greater than the reference contrast level.
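The 2x2 grouping test can be sketched as follows. Using the mean absolute contrast between adjacent pixels as the reference level is an illustrative assumption; the patent only requires a reference based on the contrast distribution:

```python
import numpy as np

def edge_map(img, ref=None):
    """Flag each 2x2 grouping of contiguous pixels whose largest pairwise
    contrast exceeds a reference contrast level."""
    img = np.asarray(img, float)
    h = np.abs(np.diff(img, axis=1))  # contrast of horizontal neighbour pairs
    v = np.abs(np.diff(img, axis=0))  # contrast of vertical neighbour pairs
    if ref is None:
        ref = np.concatenate([h.ravel(), v.ravel()]).mean()
    # each 2x2 grouping contains two horizontal and two vertical pairs
    c = np.maximum(np.maximum(h[:-1, :], h[1:, :]),
                   np.maximum(v[:, :-1], v[:, 1:]))
    return c > ref
```

The result is the patent's two-dimensional array of elements, one per 2x2 grouping, set only where a pair of pixels in the grouping exceeds the reference contrast.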


Journal ArticleDOI
TL;DR: A system for automatically determining the contour of the left ventricle (LV) and its bounded area, from transesophageal echocardiographic (TEE) images is presented.
Abstract: A system for automatically determining the contour of the left ventricle (LV) and its bounded area, from transesophageal echocardiographic (TEE) images is presented. It uses knowledge of both heart anatomy and echocardiographic imaging to guide the selection of image processing methodologies for thresholding, edge detection, and contour following and the center-based boundary-finding technique to extract the contour of the LV region. To speed up the processing a rectangular region of interest from a TEE picture is first isolated and then reduced to a coarse version, one-ninth original size. All processing steps, except the final contour edge extraction, are performed on this reduced image. New methods developed for automatic threshold selection, region segmentation, noise removal, and region center determination are described.

Journal ArticleDOI
01 Aug 1991
TL;DR: A novel approach to digital image stabilization (DIS) for video cameras is proposed that is designed mainly for hardware minimization without using additional Micoms in a video camera system.
Abstract: A novel approach to digital image stabilization (DIS) for video cameras is proposed. The proposed DIS system is composed of an edge detection unit, a motion detection unit, and a digital zooming unit. The edge detection unit is based on tristate adaptive linear neurons, the motion detection unit is based on the corresponding binary logical correlation computations instead of multiple-bit pixel-wise subtractions, and the digital zooming unit is based on the approximated bilinear interpolation technique. A motion decision procedure is also proposed for removing unwanted effects on the integrated motion vector. The proposed DIS system is designed mainly for hardware minimization without using additional Micoms in a video camera system. Experimental results show that the proposed system accurately detects and compensates for camera motion.
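The binary logical correlation idea (matching bit maps instead of subtracting multi-bit pixels) can be sketched as a search for the displacement that maximizes agreement between two binary edge maps; the exhaustive search and its range are illustrative choices, not the paper's hardware design:

```python
import numpy as np

def motion_vector(prev_edges, curr_edges, search=3):
    """Estimate the shift between two binary edge maps by maximizing the
    fraction of agreeing bits over candidate displacements."""
    h, w = curr_edges.shape
    best_score, best_d = -1.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # compare curr[y, x] against prev[y - dy, x - dx]
            a = prev_edges[max(0, -dy):h + min(0, -dy),
                           max(0, -dx):w + min(0, -dx)]
            b = curr_edges[max(0, dy):h + min(0, dy),
                           max(0, dx):w + min(0, dx)]
            score = np.mean(a == b)  # binary agreement, no subtractions
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d
```

Because the per-candidate cost is a 1-bit comparison rather than a multi-bit subtraction, this style of matching is cheap to realize in hardware, which is the minimization argument made above.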

Journal ArticleDOI
TL;DR: This work presents an algorithm for enhancement of edge quality through nonstationary filtering based on estimates of edge locations, which yields appreciable improvement when applied to images coded by pixel-domain methods.

01 Jan 1991
TL;DR: In this article, a metric for describing line segments is presented, which measures how well two line segments can be replaced by a single longer one, depending on collinearity and nearness of the line segments.
Abstract: This correspondence presents a metric for describing line segments. This metric measures how well two line segments can be replaced by a single longer one. This depends for example on collinearity and nearness of the line segments. The metric is constructed using a new technique using so-called neighborhood functions. The behavior of the metric depends on the neighborhood function chosen. In this correspondence, an appropriate choice for the case of line segments is presented. The quality of the metric is verified by using it in a simple clustering algorithm that groups line segments found by an edge detection algorithm in an image. The fact that the clustering algorithm can detect long linear structures in an image shows that the metric is a good measure for the groupability of line segments.
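As a hedged stand-in for such a metric (the paper's neighborhood-function construction is not reproduced here), one can score a pair of segments by how well the single segment through their two outermost endpoints accounts for all four endpoints, combining a collinearity term and a gap term:

```python
import numpy as np

def merge_cost(seg_a, seg_b):
    """Illustrative replaceability score for two 2D segments, each given
    as ((x1, y1), (x2, y2)): worst perpendicular deviation of the four
    endpoints from the replacement line (collinearity) plus the gap the
    replacement must bridge (nearness). Lower means more mergeable."""
    pts = np.array([seg_a[0], seg_a[1], seg_b[0], seg_b[1]], float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    i, j = np.unravel_index(d.argmax(), d.shape)  # outermost endpoint pair
    p, q = pts[i], pts[j]
    u = (q - p) / np.linalg.norm(q - p)           # replacement direction
    offs = pts - p
    perp = np.abs(offs[:, 0] * u[1] - offs[:, 1] * u[0])  # distance to line
    gap = d.max() - (np.linalg.norm(pts[0] - pts[1])
                     + np.linalg.norm(pts[2] - pts[3]))
    return perp.max() + max(gap, 0.0)
```

Collinear touching segments score zero, while offset or widely separated segments score higher; a clustering pass can then merge pairs whose cost falls below a threshold, mirroring the grouping experiment described above.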

Proceedings ArticleDOI
03 Jun 1991
TL;DR: The authors determine the uncertainties inherent in edge (and surface) detection in 2D and 3D images by quantitatively analyzing the uncertainty in edge position, orientation, and magnitude produced by the multidimensional (2D and 3D) versions of the Monga-Deriche-Canny recursive separable edge-detector.
Abstract: A theoretical link is established between the 3D edge detection and the local surface approximation using uncertainty. As a practical application of the theory, a method is presented for computing typical curvature features from 3D medical images. The authors determine the uncertainties inherent in edge (and surface) detection in 2D and 3D images by quantitatively analyzing the uncertainty in edge position, orientation, and magnitude produced by the multidimensional (2D and 3D) versions of the Monga-Deriche-Canny recursive separable edge-detector. The uncertainty is shown to depend on edge orientation, e.g. the position uncertainty may vary with a ratio larger than 2.8 in the 2D case, and 3.5 in the 3D case. These uncertainties are then used to compute local geometric models (quadric surface patches) of the surface, which are suitable for reliably estimating local surface characteristics, for example, Gaussian and mean curvature. The authors demonstrate the effectiveness of these methods compared to previous techniques.

Proceedings ArticleDOI
11 Jun 1991
TL;DR: A novel image segmentation technique is presented which combines region growing, edge detection, and a novel edge preserving smoothing algorithm which helps to avoid characteristic segmentation errors which occur when using region growing or edge detection separately.
Abstract: A novel image segmentation technique is presented which combines region growing, edge detection, and a novel edge preserving smoothing algorithm. The combined method helps to avoid characteristic segmentation errors which occur when using region growing or edge detection separately. The method is applied to segment MRI (magnetic resonance imaging) images for subsequent 3D visualization, and experimental results are presented.

Journal ArticleDOI
TL;DR: An approach to the detection of straight lines and circular arcs in images using a measure based on significance proposed by D. G. Lowe (Three-dimensional object recognition from single two-dimensional images, Artificial Intelligence 31, 355–395 (1987).