Journal ArticleDOI

Using Canny's criteria to derive a recursively implemented optimal edge detector

01 Jun 1987 · International Journal of Computer Vision (Kluwer Academic Publishers) · Vol. 1, Iss. 2, pp. 167-187
TL;DR: It is shown that a solution to Canny's precise formulation of detection and localization for an infinite extent filter leads to an optimal operator in one dimension, which can be efficiently implemented by two recursive filters moving in opposite directions.
Abstract: A highly efficient recursive algorithm for edge detection is presented. Using Canny's design [1], we show that a solution to his precise formulation of detection and localization for an infinite extent filter leads to an optimal operator in one dimension, which can be efficiently implemented by two recursive filters moving in opposite directions. In addition to the resulting immunity to noise truncation, the recursive nature of the filtering operations leads, on sequential machines, to a substantial saving in computational effort (five multiplications and five additions per pixel, independent of the size of the neighborhood). The extension to the two-dimensional case is considered and the resulting filtering structures are implemented as two-dimensional recursive filters. Hence, the filter size can be varied by simply changing the value of one parameter without affecting the execution time of the algorithm. Performance measures of this new edge detector are given and compared to Canny's filters. Various experimental results are shown.
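The constant per-pixel cost comes from realizing an infinite-support kernel of the form -c·n·e^(-α|n|) as one causal and one anticausal second-order recursion. The sketch below is a minimal 1-D illustration of that idea, not the paper's exact coefficients; here the filter is simply normalized so that a unit-slope ramp produces an output of 1:

```python
import numpy as np

def deriche_derivative_1d(x, alpha=1.0):
    """1-D derivative filter with impulse response h(n) = -c*n*exp(-alpha*|n|),
    split into a causal and an anticausal second-order recursion. Cost per
    sample is constant regardless of the effective neighborhood size,
    which is controlled only by alpha."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.exp(-alpha)
    # Scale so a unit-slope ramp yields exactly 1
    # (uses sum_{k>=1} k^2 a^k = a(1+a)/(1-a)^3).
    c = (1 - a) ** 3 / (2 * a * (1 + a))
    yp = np.zeros(n)  # causal pass: kernel k*a^k applied to samples on the left
    ym = np.zeros(n)  # anticausal pass: mirror kernel on samples to the right
    for i in range(n):
        yp[i] = ((a * x[i - 1] + 2 * a * yp[i - 1]) if i >= 1 else 0.0) \
                - ((a * a * yp[i - 2]) if i >= 2 else 0.0)
    for i in range(n - 1, -1, -1):
        ym[i] = ((a * x[i + 1] + 2 * a * ym[i + 1]) if i + 1 < n else 0.0) \
                - ((a * a * ym[i + 2]) if i + 2 < n else 0.0)
    return c * (ym - yp)  # antisymmetric combination: right side minus left side
```

For example, `deriche_derivative_1d(np.r_[np.zeros(50), np.ones(50)], alpha=0.8)` peaks at the step, and halving alpha roughly doubles the effective filter width at identical cost; the paper extends this to two-dimensional recursive filtering structures.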
Citations
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Journal ArticleDOI
TL;DR: This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction and the resulting methods are accurate, noise resistant and fast.
Abstract: This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.
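The core measure is easy to prototype: for each pixel, compute the area of the surrounding region whose brightness is similar to that pixel; unusually small areas flag edges and corners, and the same region serves as the smoothing neighbourhood for noise reduction. The sketch below is a simplified version, using a hard similarity threshold in place of the paper's smooth comparison function:

```python
import numpy as np

def usan_area(img, radius=3, t=25.0):
    """For every pixel, the area of the local region whose brightness is
    similar (within t) to that pixel. Small areas indicate edges/corners;
    simplified with a hard threshold rather than a smooth similarity."""
    h, w = img.shape
    img = img.astype(float)
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = ys ** 2 + xs ** 2 <= radius ** 2  # circular mask
    area = np.zeros((h, w))
    for dy, dx in zip(ys[inside], xs[inside]):
        neighbour = pad[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
        area += np.abs(neighbour - img) < t
    return area  # feature response is large where this area is small
```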

3,669 citations


Cites methods from "Using Canny's criteria to derive a ..."

  • ...In (Deriche, 1987; Shen and Castan, 1992) similar analytical approaches to that of Canny are taken, resulting in efficient algorithms which have exact recursive implementations....


Journal ArticleDOI
TL;DR: A model of deformation is presented that solves some of the problems encountered with the original method of energy-minimizing curves by making the curve behave like a balloon inflated by an additional force.
Abstract: The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321–331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.
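The balloon idea itself fits in a few lines: alongside the usual internal smoothing and external image forces, each curve point is pushed along its outward normal, so the initial curve need not start near the contour. A minimal explicit evolution step might look like the sketch below (the paper solves the evolution more robustly and modifies the external forces for stability; `ext_force` is a hypothetical callable returning the image force at each point):

```python
import numpy as np

def balloon_step(pts, ext_force, dt=0.1, alpha=0.5, k_balloon=0.3, k_ext=1.0):
    """One explicit step for a closed discrete curve: internal smoothing,
    external (image) force, and inflation along the outward normal.
    pts: (N, 2) array, counter-clockwise ordering assumed."""
    internal = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    tangent = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)  # outward for CCW
    normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12
    return pts + dt * (alpha * internal + k_balloon * normal + k_ext * ext_force(pts))
```

A positive `k_balloon` inflates the curve so it passes over weak edges and stops only at strong ones; a negative value deflates it instead.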

2,432 citations

Journal ArticleDOI
TL;DR: The main idea is to consider the object boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes.
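This diffusion analogy is usually implemented as a per-voxel displacement update driven by the intensity difference and the gradient of the reference image. The sketch below shows the classic force from this family of methods as an illustration; it is not necessarily this paper's exact update:

```python
import numpy as np

def demons_step(fixed, moving, grad_fixed, eps=1e-9):
    """One displacement update in the diffusion spirit: the local intensity
    difference pushes the deformable image along the reference gradient.
    fixed, moving: same-shape arrays (moving already resampled onto the
    current grid); grad_fixed: array of shape fixed.shape + (ndim,).
    Classic force: du = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
    diff = moving - fixed
    denom = np.sum(grad_fixed ** 2, axis=-1) + diff ** 2 + eps
    return (diff / denom)[..., None] * grad_fixed
```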

2,277 citations

Journal ArticleDOI
TL;DR: The article provides arguments in favor of an alternative to traditional sampling theory that uses splines, which is equally justifiable on a theoretical basis and offers many practical advantages, and brings out the connection with the multiresolution theory of the wavelet transform.
Abstract: The article provides arguments in favor of an alternative approach that uses splines, which is equally justifiable on a theoretical basis, and which offers many practical advantages. To reassure the reader who may be afraid to enter new territory, it is emphasized that one is not losing anything because the traditional theory is retained as a particular case (i.e., a spline of infinite degree). The basic computational tools are also familiar to a signal processing audience (filters and recursive algorithms), even though their use in the present context is less conventional. The article also brings out the connection with the multiresolution theory of the wavelet transform. This article attempts to fulfil three goals. The first is to provide a tutorial on splines that is geared to a signal processing audience. The second is to gather all their important properties and provide an overview of the mathematical and computational tools available; i.e., a road map for the practitioner with references to the appropriate literature. The third goal is to give a review of the primary applications of splines in signal and image processing.
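The recursive algorithms mentioned here are first-order forward/backward filters that convert signal samples into spline coefficients. The sketch below shows the standard cubic case (pole z1 = √3 − 2, overall gain 6), a minimal version of the prefiltering described in this literature:

```python
import numpy as np

def cubic_bspline_prefilter(s):
    """Turn samples s[k] into cubic-B-spline coefficients c[k] such that
    sum_k c[k] * beta3(x - k) interpolates s. The inverse of the B-spline
    filter (z + 4 + 1/z)/6 factors into a causal and an anticausal
    first-order recursion with pole z1 = sqrt(3) - 2."""
    s = np.asarray(s, dtype=float)
    n, z1 = len(s), np.sqrt(3.0) - 2.0
    cp = np.empty(n)
    horizon = min(n, 30)  # |z1|^30 is below double-precision noise
    cp[0] = s[:horizon] @ z1 ** np.arange(horizon)  # causal init (truncated series)
    for k in range(1, n):
        cp[k] = s[k] + z1 * cp[k - 1]
    cm = np.empty(n)
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])  # anticausal init
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm
```

A quick sanity check: a constant signal returns the same constant, since the shifted cubic B-splines form a partition of unity.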

1,732 citations

References
Book
01 Jan 1976
TL;DR: The rapid rate at which the field of digital picture processing has grown in the past five years has necessitated extensive revisions and the introduction of topics not found in the original edition.
Abstract: The rapid rate at which the field of digital picture processing has grown in the past five years has necessitated extensive revisions and the introduction of topics not found in the original edition.

4,231 citations

Book ChapterDOI
01 Jan 1987
TL;DR: Scale-space filtering is a method that describes signals qualitatively, managing the ambiguity of scale in an organized and natural way.
Abstract: The extrema in a signal and its first few derivatives provide a useful general-purpose qualitative description for many kinds of signals. A fundamental problem in computing such descriptions is scale: a derivative must be taken over some neighborhood, but there is seldom a principled basis for choosing its size. Scale-space filtering is a method that describes signals qualitatively, managing the ambiguity of scale in an organized and natural way. The signal is first expanded by convolution with Gaussian masks over a continuum of sizes. This "scale-space" image is then collapsed, using its qualitative structure, into a tree providing a concise but complete qualitative description covering all scales of observation. The description is further refined by applying a stability criterion to identify events that persist over large changes in scale.
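The first stage, expanding a signal into a (scale × position) image and locating the extrema to be tracked across scale, is straightforward to prototype. A minimal sketch using SciPy's Gaussian filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space(signal, sigmas):
    """Expand a 1-D signal into a (scale x position) stack by Gaussian
    smoothing at each sigma, and mark the local extrema of every row.
    Tracking how these extrema appear, merge, and drift across scale is
    the raw material of the qualitative scale-space description."""
    stack = np.array([gaussian_filter1d(np.asarray(signal, float), s)
                      for s in sigmas])
    d = np.diff(stack, axis=1)
    extrema = (d[:, :-1] * d[:, 1:]) < 0  # first difference changes sign
    return stack, extrema
```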

3,008 citations


"Using Canny's criteria to derive a ..." refers methods in this paper

  • ...This property is very useful in a multi-scaled description of a shape as described by Witkin [9] since it allows multi-scale edge detection to be performed with the same computational complexity for all the scales....


Journal ArticleDOI
Robert M. Haralick
TL;DR: The facet model is used to accomplish step edge detection; upon comparison with the Prewitt gradient operator and the Marr-Hildreth zero crossing of the Laplacian operator, the facet-based second directional derivative operator is found to be the best performer, followed by the Prewitt gradient operator.
Abstract: We use the facet model to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying gray tone intensity surface of which the neighborhood pixel values are observed noisy samples. With regard to edge detection, we define an edge to occur in a pixel if and only if there is some point in the pixel's area having a negatively sloped zero crossing of the second directional derivative taken in the direction of a nonzero gradient at the pixel's center. Thus, to determine whether or not a pixel should be marked as a step edge pixel, its underlying gray tone intensity surface must be estimated on the basis of the pixels in its neighborhood. For this, we use a functional form consisting of a linear combination of the tensor products of discrete orthogonal polynomials of up to degree three. The appropriate directional derivatives are easily computed from this kind of a function. Upon comparing the performance of this zero crossing of second directional derivative operator with the Prewitt gradient operator and the Marr-Hildreth zero crossing of the Laplacian operator, we find that it is the best performer; next is the Prewitt gradient operator. The Marr-Hildreth zero crossing of the Laplacian operator performs the worst.
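The pipeline is concrete: fit a cubic polynomial surface to each neighborhood by least squares, then read the gradient and directional derivatives off the fitted coefficients. The sketch below uses plain monomials and `lstsq` over a 5×5 window, which yields the same fitted surface as the paper's discrete orthogonal polynomial basis (the latter is simply better conditioned and cheaper):

```python
import numpy as np

def facet_cubic_fit(patch):
    """Least-squares fit of a bivariate cubic over a 5x5 neighbourhood.
    Returns the 10 coefficients of the fitted gray-tone surface."""
    r, c = np.mgrid[-2:3, -2:3]
    r, c = r.ravel().astype(float), c.ravel().astype(float)
    # basis: r^i * c^j for all i + j <= 3
    A = np.stack([r ** i * c ** j for i in range(4) for j in range(4 - i)], axis=1)
    coef, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
    return coef  # order: 1, c, c^2, c^3, r, r*c, r*c^2, r^2, r^2*c, r^3

# At the centre pixel, gradient = (coef[4], coef[1]); the edge test then asks
# for a negatively sloped zero crossing of the second directional derivative
# taken along that gradient direction within the pixel's area.
```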

1,130 citations


"Using Canny's criteria to derive a ..." refers methods in this paper

  • ...For a large survey of these techniques, one can refer to Davis [2], Grimson and Hildreth [3], Haralick [4], Hildreth [5], and Rosenfeld and Kak [6]....


01 Jun 1983
TL;DR: This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator.
Abstract: The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate first or second derivative over some (usually small) support. Other criteria such as output signal to noise ratio or bandwidth have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift invariant operators. The first criterion is that the detector have low probability of error, i.e., failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the center of the true edge. The third criterion is that there should be low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration are given.
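These criteria can be evaluated numerically for any candidate filter, which is how operators such as the one in the Deriche paper are compared against Canny's. A minimal sketch of the detection (Σ) and localization (Λ) measures for an antisymmetric step-edge filter, following Canny's definitions:

```python
import numpy as np

def canny_scores(f, x):
    """Detection (Sigma) and localization (Lambda) for an antisymmetric
    step-edge filter f sampled on the uniform grid x:
      Sigma  = |int_{-inf}^0 f| / sqrt(int f^2)
      Lambda = |f'(0)|          / sqrt(int f'^2)"""
    dx = x[1] - x[0]
    fp = np.gradient(f, dx)
    sigma = abs(f[x <= 0].sum() * dx) / np.sqrt((f ** 2).sum() * dx)
    lam = abs(fp[np.argmin(np.abs(x))]) / np.sqrt((fp ** 2).sum() * dx)
    return sigma, lam

# Example: compare the first derivative of a Gaussian with the
# infinite-support operator -x*exp(-|x|) derived from the same criteria.
x = np.linspace(-10, 10, 4001)
print(canny_scores(-x * np.exp(-x ** 2 / 2), x))
print(canny_scores(-x * np.exp(-np.abs(x)), x))
```

The product Σ·Λ is invariant to spatial rescaling of the filter, so it serves as a single figure of merit when comparing operators of different widths.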

986 citations

Journal ArticleDOI
TL;DR: Methods of detecting "edges," i.e., boundaries between regions in a picture, are reviewed, covering both parallel and sequential approaches.

849 citations