
Showing papers on "Edge detection published in 1994"


Journal ArticleDOI
TL;DR: A spatially selective noise filtration technique based on the direct spatial correlation of the wavelet transform at several adjacent scales is introduced and can reduce noise contents in signals and images by more than 80% while maintaining at least 80% of the value of the gradient at most edges.
Abstract: Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. They describe a signal by the power at each scale and position. Edges can be located very effectively in the wavelet transform domain. A spatially selective noise filtration technique based on the direct spatial correlation of the wavelet transform at several adjacent scales is introduced. A high correlation is used to infer that there is a significant feature at the position that should be passed through the filter. The authors have tested the technique on simulated signals, phantom images, and real MR images. It is found that the technique can reduce noise contents in signals and images by more than 80% while maintaining at least 80% of the value of the gradient at most edges. The authors did not observe any Gibbs' ringing or significant resolution loss on the filtered images. Artifacts that arose from the filtration are very small and local. The noise filtration technique is quite robust. There are many possible extensions of the technique. The authors see its applications in spatially dependent noise filtration, edge detection and enhancement, image restoration, and motion artifact removal. They have compared the performance of the technique to that of the Wiener filter and found it to be superior.
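A minimal NumPy sketch of the adjacent-scale correlation idea (illustrative only; the simple undecimated decomposition, function names, and the quantile-based mask below are assumptions, not the authors' implementation, which iteratively rescales the correlation power):

```python
import numpy as np

def atrous_decompose(signal, levels=3):
    """Undecimated (a trous) decomposition with a simple B3-spline-like kernel.
    Returns detail signals for each scale and the final smooth approximation."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    approx = np.asarray(signal, dtype=float)
    details = []
    for j in range(levels):
        step = 2 ** j                                  # dilate the kernel at each scale
        kernel = np.zeros((len(h) - 1) * step + 1)
        kernel[::step] = h
        smooth = np.convolve(approx, kernel, mode="same")
        details.append(approx - smooth)
        approx = smooth
    return details, approx

def correlation_denoise(signal, levels=3, keep_fraction=0.2):
    """Keep detail coefficients whose product across adjacent scales is large,
    on the premise that true edges are correlated across scales while noise is not."""
    details, approx = atrous_decompose(signal, levels)
    kept = []
    for j in range(levels):
        partner = details[j + 1] if j + 1 < levels else details[j - 1]
        corr = details[j] * partner                    # direct spatial correlation
        thresh = np.quantile(np.abs(corr), 1.0 - keep_fraction)
        mask = np.abs(corr) >= thresh
        kept.append(np.where(mask, details[j], 0.0))
    return approx + np.sum(kept, axis=0)
```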

793 citations


Journal ArticleDOI
TL;DR: Using physical models for charge-coupled device (CCD) video cameras and material reflectance, the variation in digitized pixel values that is due to sensor noise and scene variation is quantified.
Abstract: Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charge-coupled device (CCD) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras.
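A hedged sketch of one way this kind of camera characterization can look in practice: fit an affine variance-versus-mean noise model from repeated frames of a static scene, then correct fixed-pattern effects with dark and flat frames. The function names and the simple affine model are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def estimate_camera_noise(frames):
    """Estimate a simple affine noise model var = a*mean + b from a stack of
    repeated frames of a static scene (shape: n_frames x H x W).
    'a' relates to shot noise (gain), 'b' to signal-independent noise."""
    frames = np.asarray(frames, dtype=float)
    pixel_mean = frames.mean(axis=0).ravel()
    pixel_var = frames.var(axis=0, ddof=1).ravel()
    a, b = np.polyfit(pixel_mean, pixel_var, deg=1)
    return a, b

def correct_fixed_pattern(image, dark, flat):
    """Subtract a per-pixel dark frame and divide by a normalized per-pixel
    flat-field gain to remove fixed-pattern nonuniformity."""
    flat_norm = flat / flat.mean()
    return (image - dark) / np.clip(flat_norm, 1e-6, None)
```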

775 citations


Journal ArticleDOI
TL;DR: A switching scheme for median filtering is presented that removes impulse noise from digital images with little signal distortion, making it suitable as a prefilter before subsequent processing such as edge detection or data compression.
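A compact sketch of a switching median filter of this general kind, assuming SciPy is available; the 3x3 window and the fixed impulse threshold are illustrative choices rather than the paper's design:

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(image, threshold=30):
    """Replace a pixel by the window median only when it deviates strongly from
    that median (likely impulse noise); otherwise leave it untouched, which
    preserves edges and fine detail better than unconditional median filtering."""
    med = median_filter(image.astype(float), size=3)
    impulse = np.abs(image - med) > threshold
    return np.where(impulse, med, image)
```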

717 citations


Journal ArticleDOI
TL;DR: A modified variational scheme for contour modeling is proposed, which uses no edge detection step, but local computations instead—only around contour neighborhoods—as well as an “anticipating” strategy that enhances the modeling activity of deformable contour curves.
Abstract: The variational method was introduced by Kass et al. (1987) in the field of object contour modeling, as an alternative to the more traditional edge detection-edge thinning-edge sorting sequence. Since the method is based on a pre-processing of the image to yield an edge map, it shares the limitations of the edge detectors it uses. In this paper, we propose a modified variational scheme for contour modeling, which uses no edge detection step, but local computations instead—only around contour neighborhoods—as well as an “anticipating” strategy that enhances the modeling activity of deformable contour curves. Many of the concepts used were originally introduced to study the local structure of discontinuity, in a theoretical and formal statement by Leclerc & Zucker (1987), but never in a practical situation such as this one. The first part of the paper introduces a region-based energy criterion for active contours and examines its implications, as compared to the gradient edge map energy of snakes. Then, a simplified optimization scheme is presented, accounting for internal and external energy in separate steps. This leads to a complete treatment, which is described in the last sections of the paper (4 and 5). The optimization technique used here is mostly heuristic, and is thus presented without a formal proof, but is believed to fill a gap between snakes and other useful image representations, such as split-and-merge regions or mixed line-labels image fields.

694 citations


Journal ArticleDOI
TL;DR: The problem of scale is dealt with, so that the system can locate unknown human faces spanning a wide range of sizes in a complex black-and-white picture.

655 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: The authors propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable, which yields two algorithms, ARTUR and LEGEND, which are applied to the problem of SPECT reconstruction.
Abstract: Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. The authors first give sufficient conditions for the design of such an edge-preserving regularization. Under these conditions, it is possible to introduce an auxiliary variable whose role is twofold. Firstly, it marks the discontinuities and ensures their preservation from smoothing. Secondly, it makes the criterion half-quadratic. The optimization is then easier. The authors propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This yields two algorithms, ARTUR and LEGEND. The authors apply these algorithms to the problem of SPECT reconstruction.
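A toy 1D illustration of half-quadratic alternation (not ARTUR or LEGEND themselves): the potential, the parameter names, and the dense linear solve are simplifying assumptions chosen to keep the example short.

```python
import numpy as np

def half_quadratic_denoise(y, lam=1.0, delta=1.0, iters=50):
    """Minimize ||y - x||^2 + lam * sum(phi(dx_i / delta)) with the edge-preserving
    potential phi(t) = sqrt(1 + t^2) - 1.  The auxiliary variable b marks
    discontinuities (b -> 0 across strong edges) and makes each x-step a
    weighted quadratic, solved here as a small linear system."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = y.copy()
    for _ in range(iters):
        dx = np.diff(x) / delta
        b = 1.0 / (2.0 * np.sqrt(1.0 + dx ** 2))   # b-step: closed-form minimizer
        A = np.eye(n)                              # x-step: (I + weighted Laplacian) x = y
        for i in range(n - 1):
            w = lam * b[i] / delta ** 2
            A[i, i] += w
            A[i + 1, i + 1] += w
            A[i, i + 1] -= w
            A[i + 1, i] -= w
        x = np.linalg.solve(A, y)
    return x
```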

628 citations


Journal ArticleDOI
TL;DR: The system successfully extracts moving edges from dynamic images even when the pan/tilt angles between successive frames are as large as 3.5m, and the use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation.
Abstract: This paper describes a method for real-time motion detection using an active camera mounted on a pan/tilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan/tilt angles between successive frames are as large as 3.
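A rough sketch of the morphological post-filtering step described above, assuming the previous frame has already been warped (mapped) into the current view; the threshold and structuring-element size are arbitrary illustrative values:

```python
import numpy as np
from scipy.ndimage import binary_opening

def moving_region_mask(prev_warped, current, diff_thresh=20, struct_size=3):
    """Difference the current frame against the previous frame warped into the
    current view, threshold, then apply a morphological opening so that thin
    residues from imperfect background compensation are suppressed while
    compact moving regions survive."""
    diff = np.abs(current.astype(float) - prev_warped.astype(float))
    mask = diff > diff_thresh
    structure = np.ones((struct_size, struct_size), dtype=bool)
    return binary_opening(mask, structure=structure)
```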

407 citations


Journal ArticleDOI
TL;DR: In this article, a new class of filters for noise elimination and edge enhancement by using shock filters and anisotropic diffusion was defined, and some nonlinear partial differential equations used as models f...
Abstract: The authors define a new class of filters for noise elimination and edge enhancement by using shock filters and anisotropic diffusion. Some nonlinear partial differential equations used as models f...

378 citations


Journal ArticleDOI
TL;DR: To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions, resulting in an algorithm that is robust with respect to various image characteristics.
Abstract: Proposes a simple, yet general and powerful, region-growing framework for image segmentation. The region-growing process is guided by regional feature analysis; no parameter tuning or a priori knowledge about the image is required. To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions. This results in an algorithm that is robust with respect to various image characteristics. The merge criterion also minimizes the number of merge rejections and results in a fast region-growing process that is amenable to parallelization.
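As a hedged illustration of deciding a merge from feature distributions rather than a fixed mean-difference threshold, one could use a two-sample test such as Welch's t-test; the authors' actual homogeneity assessment differs, so this is only a stand-in:

```python
import numpy as np
from scipy import stats

def should_merge(region_a, region_b, alpha=0.05):
    """Merge two regions when a two-sample test on their pixel-value samples
    cannot reject the hypothesis that both come from the same population,
    i.e. the decision adapts to the regions' distributions rather than using
    a fixed threshold on the difference of means."""
    t_stat, p_value = stats.ttest_ind(np.ravel(region_a), np.ravel(region_b),
                                      equal_var=False)
    return p_value > alpha
```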

321 citations


Book ChapterDOI
07 May 1994
TL;DR: The paper presents a framework for extracting low level features, which leads to new techniques for deriving image parameters, to either the elimination or the elucidation of "buttons", like thresholds, and to interpretable quality measures for the results, which may be used in subsequent steps.
Abstract: The paper presents a framework for extracting low level features. Its main goal is to explicitly exploit the information content of the image as far as possible. This leads to new techniques for deriving image parameters, to either the elimination or the elucidation of "buttons", like thresholds, and to interpretable quality measures for the results, which may be used in subsequent steps. Feature extraction is based on local statistics of the image function. Methods are available for blind estimation of a signal dependent noise variance, for feature preserving restoration, for feature detection and classification, and for the location of general edges and points. Their favorable scale space properties are discussed.

312 citations


Journal ArticleDOI
TL;DR: A hierarchy of image processing steps is described that rapidly detects both the contours of the myocardial boundaries of the left ventricle and the tags within the myocardium; it is currently being used in the analysis of cardiac strain and as a basis for the analysis of alternate tag geometries.
Abstract: Tracking magnetic resonance tags in myocardial tissue promises to be an effective tool for the assessment of myocardial motion. The authors describe a hierarchy of image processing steps which rapidly detects both the contours of the myocardial boundaries of the left ventricle and the tags within the myocardium. The method works on both short axis and long axis images containing radial and parallel tag patterns, respectively. Left ventricular boundaries are detected by first removing the tags using morphological closing and then selecting candidate edge points. The best inner and outer boundaries are found using a dynamic program that minimizes a nonlinear combination of several local cost functions. Tags are tracked by matching a template of their expected profile using a least squares estimate. Since blood pooling, contiguous and adjacent tissue, and motion artifacts sometimes cause detection errors, a graphical user interface was developed to allow user correction of anomalous points. The authors present results on several tagged images of a human. A fully automated run generally finds the endocardial boundary and the tag lines extremely well, requiring very little manual correction. The epicardial boundary sometimes requires more intervention to obtain an acceptable result. These methods are currently being used in the analysis of cardiac strain and as a basis for the analysis of alternate tag geometries.

01 Jan 1994
TL;DR: The approach explored here is guided by the best-basis paradigm, which consists of three steps: select a "best" basis (or coordinate system) for the problem at hand from a library of bases, a fixed yet flexible set of bases that can capture local features and provide an array of tools unifying the conventional techniques.
Abstract: Extracting relevant features from signals is important for signal analysis such as compression, noise removal, classification, or regression (prediction). Often, important features for these problems, such as edges, spikes, or transients, are characterized by local information in the time (space) domain and the frequency (wave number) domain. The conventional techniques are not efficient to extract features localized simultaneously in the time and frequency domains. These methods include: the Fourier transform for signal/noise separation, the Karhunen-Loeve transform for compression, and the linear discriminant analysis for classification. The features extracted by these methods are of global nature either in time or in frequency domain so that the interpretation of the results may not be straightforward. Moreover, some of them require solving the eigenvalue systems so that they are fragile to outliers or perturbations and are computationally expensive, i.e., $O(n^3)$, where $n$ is the dimensionality of a signal. The approach explored here is guided by the best-basis paradigm which consists of three steps: (1) select a "best" basis (or coordinate system) for the problem at hand from a library of bases (a fixed yet flexible set of bases consisting of wavelets, wavelet packets, local trigonometric bases, and the autocorrelation functions of wavelets), (2) sort the coordinates (features) by "importance" for the problem at hand and discard "unimportant" coordinates, and (3) use these survived coordinates to solve the problem at hand. What is "best" and "important" clearly depends on the problem: for example, minimizing a description length (or entropy) is important for signal compression whereas maximizing class separation (or relative entropy among classes) is important for classification. These bases "fill the gap" between the standard Euclidean basis and the Fourier basis so that they can capture the local features and provide an array of tools unifying the conventional techniques. Moreover, these tools provide efficient numerical algorithms, e.g., $O(n (\log n)^p)$, where $p = 0, 1, 2$, depending on the basis. In the present thesis, these methods have been usefully applied to a variety of problems: simultaneous noise suppression and signal compression, classification, regression, multiscale edge detection and representation, and extraction of geological information from acoustic waveforms.

Journal ArticleDOI
TL;DR: Dennis Gabor examined the problem of image deblurring and was the first to suggest a method for edge enhancement based on principles widely accepted today and implemented in advanced image processing systems.

Journal ArticleDOI
01 Apr 1994
TL;DR: A novel technique is presented for rapid partitioning of surfaces in range images into planar patches based on region growing where the segmentation primitives are scan line grouping features instead of individual pixels.
Abstract: A novel technique is presented for rapid partitioning of surfaces in range images into planar patches. The method extends and improves Pavlidis' algorithm (1976), proposed for segmenting images from electron microscopes. The new method is based on region growing where the segmentation primitives are scan line grouping features instead of individual pixels. We use a noise variance estimation to automatically set thresholds so that the algorithm can adapt to the noise conditions of different range images. The proposed algorithm has been tested on real range images acquired by two different range sensors. Experimental results show that the proposed algorithm is fast and robust.

Journal ArticleDOI
TL;DR: A method to locate three vanishing points on an image, corresponding to three orthogonal directions of the scene, based on two cascaded Hough transforms is proposed, which is efficient, even in the case of real complex scenes.
Abstract: We propose a method to locate three vanishing points on an image, corresponding to three orthogonal directions of the scene. This method is based on two cascaded Hough transforms. We show that, even in the case of synthetic images of high quality, a naive approach may fail, essentially because of the errors due to the limitation of the image size. We take into account these errors as well as errors due to detection inaccuracy of the image segments, and provide an efficient method, even in the case of real complex scenes.
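A simplified sketch in the spirit of a second, cascaded voting stage: intersect detected line segments and vote in a coarse accumulator for a single vanishing point. The windowing, bin count, and function name are illustrative assumptions; the paper's method and its error modeling are considerably more elaborate.

```python
import numpy as np

def vanishing_point(segments, img_shape, bins=64):
    """Coarse vanishing-point estimate: intersect every pair of line segments
    (given as ((x1, y1), (x2, y2))) and vote into a 2D accumulator over an
    extended image window; return the centre of the winning cell."""
    h, w = img_shape
    acc = np.zeros((bins, bins))
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
             for (x1, y1), (x2, y2) in segments]      # homogeneous line coefficients
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = np.cross(lines[i], lines[j])
            if abs(p[2]) < 1e-9:                      # (nearly) parallel: meet at infinity
                continue
            x, y = p[0] / p[2], p[1] / p[2]
            if -w <= x < 2 * w and -h <= y < 2 * h:
                bx = int((x + w) / (3 * w) * bins)
                by = int((y + h) / (3 * h) * bins)
                acc[by, bx] += 1
    by, bx = np.unravel_index(np.argmax(acc), acc.shape)
    return ((bx + 0.5) / bins * 3 * w - w, (by + 0.5) / bins * 3 * h - h)
```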

Journal ArticleDOI
TL;DR: A simple and effective method for image contrast enhancement based on the multiscale edge representation of images that offers flexibility to selectively enhance features of different sizes and ability to control noise magnification is presented.
Abstract: Experience suggests the existence of a connection between the contrast of a gray-scale image and the gradient magnitude of intensity edges in the neighborhood where the contrast is measured. This observation motivates the development of edge-based contrast enhancement techniques. We present a simple and effective method for image contrast enhancement based on the multiscale edge representation of images. The contrast of an image can be enhanced simply by stretching or upscaling the multiscale gradient maxima of the image. This method offers flexibility to selectively enhance features of different sizes and ability to control noise magnification. We present some experimental results from enhancing medical images and discuss the advantages of this wavelet approach over other edge-based techniques.
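A loose surrogate for the idea of stretching multiscale detail to raise contrast (not the paper's wavelet gradient-maxima reconstruction): amplify band-pass layers of a Gaussian multiscale decomposition and resynthesize. The gains and scales are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(image, gains=(2.0, 1.5, 1.2), sigma0=1.0):
    """Split the image into band-pass detail layers at increasing scales,
    amplify (stretch) each layer, and resynthesize.  Unlike the published
    gradient-maxima method, amplifying whole detail layers magnifies noise
    more readily, so this is only a rough illustration of the principle."""
    approx = image.astype(float)
    details = []
    sigma = sigma0
    for gain in gains:
        smooth = gaussian_filter(approx, sigma)
        details.append(gain * (approx - smooth))
        approx = smooth
        sigma *= 2.0
    return approx + np.sum(details, axis=0)
```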

Proceedings ArticleDOI
24 Oct 1994
TL;DR: Two applications of ARCADE as the first stage of processing for a lane sensing task are described: the extraction of the locations of the features defining the visible lane structure of the road; and the generation of training instances for an ALVINN-like neural network road follower.
Abstract: The ARCADE (Automated Road Curvature And Direction Estimation) algorithm estimates road curvature and tangential road orientation relative to the camera line-of-sight. The input to ARCADE consists of edge point locations and orientations extracted from an image, and it performs the estimation without the need for any prior perceptual grouping of the edge points into individual lane boundaries. It is able to achieve this through the use of global constraints on the individual lane boundary shapes derived from an explicit model of road structure in the world. The use of the least median squares robust estimation technique allows the algorithm to function correctly in cases where up to 50% of the input edge data points are contaminating noise. Two applications of ARCADE as the first stage of processing for a lane sensing task are described: 1) the extraction of the locations of the features defining the visible lane structure of the road; and 2) the generation of training instances for an ALVINN-like neural network road follower.
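A small sketch of least-median-of-squares fitting, the robust estimator named above, applied to a hypothetical parabolic lane-boundary model x = a + b*y + c*y^2; the model, trial count, and function name are illustrative, not ARCADE's exact formulation.

```python
import numpy as np

def lmeds_parabola(points, trials=200, seed=None):
    """Least-median-of-squares fit of x = a + b*y + c*y**2 to edge points (x, y):
    repeatedly fit minimal 3-point subsets and keep the model whose squared
    residuals have the smallest median, tolerating up to ~50% outliers."""
    rng = np.random.default_rng(seed)
    x, y = points[:, 0], points[:, 1]
    best_params, best_score = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(points), size=3, replace=False)
        params = np.polyfit(y[idx], x[idx], deg=2)     # exact fit to the minimal subset
        residuals = x - np.polyval(params, y)
        score = np.median(residuals ** 2)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```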

Journal ArticleDOI
TL;DR: The genetic algorithm-based cost minimization technique is shown to perform very well in terms of robustness to noise, rate of convergence and quality of the final edge image.

Journal ArticleDOI
TL;DR: A new computationally efficient three-dimensional (3-D) object segmentation technique based on the detection of edges in the image is presented; it can be implemented in parallel, as edge growing from different regions can be carried out independently of each other.
Abstract: In this correspondence, we present a new computationally efficient three-dimensional (3-D) object segmentation technique. The technique is based on the detection of edges in the image. The edges can be classified as belonging to one of the three categories: fold edges, semistep edges (defined here), and secondary edges. The 3-D image is sliced to create equidepth contours (EDCs). Three types of critical points are extracted from the EDCs. A subset of the edge pixels is extracted first using these critical points. The edges are grown from these pixels through the application of some masks proposed in this correspondence. The constraints of the masks can be adjusted depending on the noise present in the image. The total computational effort is small since the masks are applied only over a small neighborhood of critical points (edge regions). Furthermore, the algorithm can be implemented in parallel, as edge growing from different regions can be carried out independently of each other.

Journal ArticleDOI
TL;DR: In all applications, the proposed filter suggested better detail preservation, noise suppression, and edge detection than all other approaches and it may prove to be a useful tool for computer-assisted diagnosis in digital mammography.
Abstract: A new class of nonlinear filters with more robust characteristics for noise suppression and detail preservation is proposed for processing digital mammographic images. The new algorithm consists of two major filtering blocks: (a) a multistage tree-structured filter for image enhancement that uses central weighted median filters as basic sub-filtering blocks and (b) a dispersion edge detector. The design of the algorithm also included the use of linear and curved windows to determine whether variable shape windowing could improve detail preservation. First, the noise-suppressing properties of the tree-structured filter were compared to single filters, namely the median and the central weighted median with conventional square and variable shape adaptive windows; simulated images were used for this purpose. Second, the edge detection properties of the tree-structured filter cascaded with the dispersion edge detector were compared to the performance of the dispersion edge detector alone, the Sobel operator, and the single median filter cascaded with the dispersion edge detector. Selected mammographic images with representative biopsy-proven malignancies were processed with all methods and the results were visually evaluated by an expert mammographer. In all applications, the proposed filter suggested better detail preservation, noise suppression, and edge detection than all other approaches and it may prove to be a useful tool for computer-assisted diagnosis in digital mammography.
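For reference, a plain single-stage center weighted median filter, the basic sub-filtering block mentioned above; the window size and center weight are illustrative, and the paper's multistage tree structure and dispersion edge detector are not reproduced here:

```python
import numpy as np

def center_weighted_median(image, weight=3, size=3):
    """Center weighted median filter: the window median is taken after repeating
    the central pixel 'weight' times in total, which biases the output toward
    the original value and preserves fine detail better than a plain median."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size].ravel().tolist()
            window += [padded[i + pad, j + pad]] * (weight - 1)   # extra copies of the center
            out[i, j] = np.median(window)
    return out
```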

Journal ArticleDOI
TL;DR: A new feature is presented that is more resistant to the blurring process: the image and waveform peaks. The recognition algorithm showed a 43% performance improvement over current commercial bar code reading equipment.
Abstract: Traditionally, zero crossings of the second derivative provide edge features for the classification of blurred waveforms. The accuracy of these edge features deteriorates in the case of severely blurred images. In this paper, a new feature is presented that is more resistant to the blurring process: the image and waveform peaks. In addition, an estimate of the standard deviation σ of the blurring kernel is used to perform minor deblurring of the waveform. Statistical pattern recognition is used to classify the peaks as bar code characters. The noise tolerance of this recognition algorithm is increased by using an adaptive, histogram-based technique to remove the noise. In a bar code environment that requires a misclassification rate of less than one in a million, the recognition algorithm showed a 43% performance improvement over current commercial bar code reading equipment.

Proceedings ArticleDOI
26 Jun 1994
TL;DR: It is shown how FIRE operators can be designed in order to comply with the following two requirements: 1) extraction of edges from a noiseless image by means of the simplest possible rule-base; 2) extraction from a noisy image.
Abstract: FIRE (fuzzy inference ruled by else-action) operators are a recently proposed family of fuzzy operators for image processing. After an introduction of the generalized structure of the FIRE edge extractor, in this paper it is shown how FIRE operators can be designed in order to comply with the following two requirements: 1) extraction of edges from a noiseless image by means of the simplest possible rule-base; 2) extraction of edges from a noisy image. Some experimental results show the performances of the proposed approach.

Journal ArticleDOI
M. Bichsel
TL;DR: A new segmentation algorithm is derived, based on an object-background probability estimate exploiting the experimental fact that the statistics of local image derivatives show a Laplacian distribution, which avoids early thresholding, explicit edge detection, motion analysis, and grouping.
Abstract: A new segmentation algorithm is derived, based on an object-background probability estimate exploiting the experimental fact that the statistics of local image derivatives show a Laplacian distribution. The objects' simple connectedness is included directly into the probability estimate and leads to an iterative optimization approach that can be implemented efficiently. This new approach avoids early thresholding, explicit edge detection, motion analysis, and grouping.

Journal ArticleDOI
TL;DR: A family of optimal DSNR edge detectors based on the expansion filter for several edge models is introduced and the optimal step expansion filter (SEF) is compared with the widely used Canny edge detector (CED).
Abstract: Discusses the application of a newly developed expansion matching method for edge detection. Expansion matching optimizes a novel matching criterion called the discriminative signal-to-noise ratio (DSNR) and has been shown to robustly recognize templates under conditions of noise, severe occlusion and superposition. The DSNR criterion is better suited to evaluate matching in practical conditions than the traditional SNR since it considers as "noise" even the off-center response of the filter to the template itself. We introduce a family of optimal DSNR edge detectors based on the expansion filter for several edge models. For step edges, the optimal DSNR step expansion filter (SEF) is compared with the widely used Canny edge detector (CED). Experimental comparisons show that our edge detector yields better performance than the CED in terms of DSNR even under very adverse noise conditions. As for boundary detection, the SEF consistently yields higher figures of merit than the CED on a synthetic binary image over a wide range of noise levels. Results also show that the design parameters of size or width of the SEF are less critical than the CED variance. This means that a single scale of the SEF spans a larger range of input noise than a single scale of the CED. Experiments on a noisy image reveal that the SEF yields less noisy edge elements and preserves structural details more accurately. On the other hand, the CED output has better suppression of multiple responses than the corresponding SEF output.

Journal ArticleDOI
TL;DR: In this article, the authors present an approach to computing the two-dimensional velocities of moving objects that are occluded and transparent by coarsely segmenting an image into regions of coherent motion, providing an estimate of velocity in each region and actively selecting the most reliable estimates.
Abstract: We present a new approach to computing from image sequences the two-dimensional velocities of moving objects that are occluded and transparent. The new motion model does not attempt to provide an accurate representation of the velocity flow field at fine resolutions but coarsely segments an image into regions of coherent motion, provides an estimate of velocity in each region, and actively selects the most reliable estimates. The model uses motion-energy filters in the first stage of processing and computes, in parallel, two different sets of retinotopically organized spatial arrays of unit responses: one set of units estimates the local velocity, and the second set selects from these local estimates those that support global velocities. Only the subset of local-velocity measurements that are the most reliable is included in estimation of the velocity of objects. The model is in agreement with many of the constraints imposed by the physiological response properties of cells in primate visual cortex, and its performance is similar to that of primates on motion transparency.

Journal ArticleDOI
TL;DR: The new so-called SLIDE (subspace-based line detection) algorithm then exploits the spatial coherence between the contributions of each line in different rows of the image to enhance and distinguish a signal subspace that is defined by the desired line parameters.
Abstract: An analogy is made between each straight line in an image and a planar propagating wavefront impinging on an array of sensors so as to obtain a mathematical model exploited in recent high resolution methods for direction-of-arrival estimation in sensor array processing. The new so-called SLIDE (subspace-based line detection) algorithm then exploits the spatial coherence between the contributions of each line in different rows of the image to enhance and distinguish a signal subspace that is defined by the desired line parameters. SLIDE yields closed-form and high resolution estimates for line parameters, and its computational complexity and storage requirements are far less than those of the standard method of the Hough transform. If unknown a priori, the number of lines is also estimated in the proposed technique. The signal representation employed in this formulation is also generalized to handle grey-scale images as well. The technique has also been generalized to fitting planes in 3-D images. Some practical issues of the proposed technique are given.

Journal ArticleDOI
TL;DR: A new technique for 1D and 2D edge feature extraction to subpixel accuracy using edge models and the local energy approach is described.
Abstract: In this paper we describe a new technique for 1D and 2D edge feature extraction to subpixel accuracy using edge models and the local energy approach. A candidate edge is modeled as one of a number of parametric edge models, and the fit is refined by a least-squared error fitting technique.
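A hedged 1D sketch of local-energy edge detection with subpixel refinement; it uses a derivative-of-Gaussian band-pass and parabolic peak interpolation rather than the parametric edge-model fitting described in the paper, so it illustrates the local-energy idea only.

```python
import numpy as np
from scipy.signal import hilbert

def subpixel_edges_local_energy(signal, sigma=2.0):
    """Band-pass the signal, build the quadrature partner with the Hilbert
    transform, take local energy = band**2 + quad**2, and refine each energy
    maximum to subpixel position by parabolic interpolation."""
    x = np.asarray(signal, dtype=float)
    n = int(6 * sigma) | 1                          # odd kernel length
    t = np.arange(n) - n // 2
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    dg = -t * g / sigma ** 2                        # derivative-of-Gaussian band-pass
    band = np.convolve(x, dg / np.abs(dg).sum(), mode="same")
    quad = np.imag(hilbert(band))                   # 90-degree phase-shifted partner
    energy = band ** 2 + quad ** 2
    edges = []
    for i in range(1, len(energy) - 1):
        if energy[i] > energy[i - 1] and energy[i] >= energy[i + 1]:
            denom = energy[i - 1] - 2 * energy[i] + energy[i + 1]
            offset = 0.0 if denom == 0 else 0.5 * (energy[i - 1] - energy[i + 1]) / denom
            edges.append(i + offset)
    return edges
```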

Proceedings ArticleDOI
24 Oct 1994
TL;DR: A method for detecting and recognizing road signs in grey-level images acquired by a single camera mounted on a moving vehicle that is robust against low-level noise corrupting edge detection and contour following, and works for images of cluttered urban streets as well as country roads and highways is described.
Abstract: This paper describes a method for detecting and recognizing road signs in grey-level images acquired by a single camera mounted on a moving vehicle. An extensive experimentation has shown that the method is robust against low-level noise corrupting edge detection and contour following, and works for images of cluttered urban streets as well as country roads and highways. A further improvement on the detection and recognition scheme has been obtained by means of a Kalman-filter-based temporal integration of the extracted information. The proposed approach can be very helpful for the development of a system for driving assistance.

Journal ArticleDOI
TL;DR: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed and is shown to perform well for a wide variety of shapes, including scaled and rotated ones.
Abstract: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed. The algorithm does not require any parameter tuning and is shown to perform well for a wide variety of shapes, including scaled and rotated ones. Dominant points are first located by a coarse-to-fine detector scheme. They constitute the vertices of a polygon closely approximating the curve. Then, a criterion of perceptual significance is used to repeatedly remove suitable vertices until a stable polygonal configuration, the contour sketch, is reached. A highly compressed hierarchical description of the shape also becomes available.

Journal ArticleDOI
01 Apr 1994
TL;DR: This work analyzes the propagation of the uncertainty in edge point position to the 2D measurements made by the vision system, from 2D curve extraction, through point determination, to measurement.
Abstract: Machine vision systems that perform inspection tasks must be capable of making measurements. A vision system measures an image to determine a measurement of the object being viewed. The image measurement depends on several factors, including sensing, image processing, and feature extraction. We consider the error that can occur in measuring the distance between two corner points of the 2D image. We analyze the propagation of the uncertainty in edge point position to the 2D measurements made by the vision system, from 2D curve extraction, through point determination, to measurement. We extend earlier work on the relationship between random perturbation of edge point position and variance of the least squares estimate of line parameters and analyze the relationship between the variance of 2D points.
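A small worked example of first-order uncertainty propagation from point positions to a distance measurement, in the spirit of the analysis described above; the covariances and coordinates below are made up purely for illustration.

```python
import numpy as np

def distance_with_uncertainty(p1, cov1, p2, cov2):
    """First-order propagation of point-position uncertainty to the distance
    d = ||p1 - p2||: var(d) = J cov1 J^T + J cov2 J^T, where J is the gradient
    of d with respect to p1 (the gradient w.r.t. p2 is -J), assuming the two
    points' errors are independent."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    diff = p1 - p2
    d = np.linalg.norm(diff)
    J = diff / d
    var_d = J @ cov1 @ J + J @ cov2 @ J
    return d, np.sqrt(var_d)

# Example: two corners each located to about +/-0.5 pixel (isotropic), 100 pixels apart
d, sigma_d = distance_with_uncertainty([10, 20], 0.25 * np.eye(2),
                                       [110, 20], 0.25 * np.eye(2))
```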