
Showing papers on "Image gradient published in 1999"


Proceedings ArticleDOI
20 Sep 1999
TL;DR: A nonparametric estimator of density gradient, the mean shift, is employed in the joint, spatial-range (value) domain of gray level and color images for discontinuity preserving filtering and image segmentation and its convergence on lattices is proven.
Abstract: A nonparametric estimator of density gradient, the mean shift, is employed in the joint spatial-range (value) domain of gray level and color images for discontinuity preserving filtering and image segmentation. Properties of the mean shift are reviewed and its convergence on lattices is proven. The proposed filtering method associates with each pixel in the image the closest local mode in the density distribution of the joint domain. Segmentation into a piecewise constant structure requires only one more step, fusion of the regions associated with nearby modes. The proposed technique has two parameters controlling the resolution in the spatial and range domains. Since convergence is guaranteed, the technique does not require user intervention to stop the filtering at the desired image quality. Several examples, for gray and color images, show the versatility of the method, and the results compare favorably with those described in the literature for the same images.
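As a rough illustration of the filtering step described above, the following Python sketch runs mean shift in the joint spatial-range domain of a grayscale image with a uniform kernel. The function name, the bandwidths hs/hr, and the convergence tolerance are illustrative choices, not the paper's notation.

    import numpy as np

    def mean_shift_filter(img, hs=8, hr=16, max_iter=20, eps=0.5):
        """Discontinuity-preserving filtering by mean shift in the joint
        spatial-range domain (grayscale sketch; hs and hr are the spatial
        and range bandwidths)."""
        H, W = img.shape
        out = np.empty_like(img, dtype=float)
        ys, xs = np.mgrid[0:H, 0:W]
        for i in range(H):
            for j in range(W):
                y, x, v = float(i), float(j), float(img[i, j])
                for _ in range(max_iter):
                    # neighbours inside the spatial window
                    y0, y1 = int(max(y - hs, 0)), int(min(y + hs + 1, H))
                    x0, x1 = int(max(x - hs, 0)), int(min(x + hs + 1, W))
                    patch = img[y0:y1, x0:x1].astype(float)
                    py, px = ys[y0:y1, x0:x1], xs[y0:y1, x0:x1]
                    # uniform kernel: keep points close in both space and range
                    mask = ((py - y) ** 2 + (px - x) ** 2 <= hs ** 2) & \
                           (np.abs(patch - v) <= hr)
                    if not mask.any():
                        break
                    ny, nx, nv = py[mask].mean(), px[mask].mean(), patch[mask].mean()
                    shift = abs(ny - y) + abs(nx - x) + abs(nv - v)
                    y, x, v = ny, nx, nv
                    if shift < eps:          # converged to a local mode
                        break
                out[i, j] = v                # pixel takes the mode's range value
        return out

Segmentation would add one more step, merging pixels whose modes are close in the joint domain.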

1,067 citations


Journal ArticleDOI
TL;DR: The main conclusion drawn from the analysis is that the data-closeness constraint improves the efficiency of shape-from-shading and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
Abstract: This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. First, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This not only improves the data closeness of the recovered needle-map, but also removes the necessity for extensive parameter tuning. Second, we exploit the improved ease of control of the new shape-from-shading process to investigate various types of needle-map consistency constraint. The first set of constraints are based on needle-map smoothness. The second avenue of investigation is to use curvature information to impose topographic constraints. Third, we explore ways in which the needle-map is recovered so as to be consistent with the image gradient field. In each case we explore a variety of robust error measures and consistency weighting schemes that can be used to impose the desired constraints on the recovered needle-map. We provide an experimental assessment of the new shape-from-shading framework on both real world images and synthetic images with known ground truth surface normals. The main conclusion drawn from our analysis is that the data-closeness constraint improves the efficiency of shape-from-shading and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
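A minimal Python sketch of what a "hard" image irradiance constraint can look like for a Lambertian image: after any smoothing or consistency update, the unit normal is rotated back onto the cone of directions whose angle to the light source reproduces the observed brightness. This is an assumed reading of the geometric update, not the paper's exact procedure; function and variable names are illustrative.

    import numpy as np

    def project_to_irradiance_cone(n, s, E):
        """Rotate unit normal n (3-vector) in the plane spanned by n and the
        unit light direction s so that n . s = E, i.e. the image irradiance
        equation holds exactly after the update."""
        E = np.clip(E, -1.0, 1.0)
        # component of n orthogonal to s, renormalised
        t = n - (n @ s) * s
        norm = np.linalg.norm(t)
        if norm < 1e-12:               # n parallel to s: pick any orthogonal dir
            t = np.eye(3)[np.argmin(np.abs(s))]
            t = t - (t @ s) * s
            norm = np.linalg.norm(t)
        t /= norm
        theta = np.arccos(E)           # required angle between normal and light
        return np.cos(theta) * s + np.sin(theta) * t

Because the constraint is satisfied exactly at every iteration, no data-closeness weight needs to be tuned, which is the point made in the abstract.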

227 citations


Journal ArticleDOI
TL;DR: A new approach to corner detection, the gradient-direction corner detector, is presented. It is developed from the popular Plessey corner detector and is based on a measure of the gradient module of the image gradient direction, together with constraints for suppressing false corner responses.

199 citations


Proceedings ArticleDOI
23 Jun 1999
TL;DR: The compass operator detects step edges without assuming that the regions on either side have constant color and finds the orientation of a diameter that maximizes the difference between two halves of a circular window.
Abstract: The compass operator detects step edges without assuming that the regions on either side have constant color. Using distributions of pixel colors rather than the mean, the operator finds the orientation of a diameter that maximizes the difference between two halves of a circular window. Junctions can also be detected by exploiting their lack of bilateral symmetry. This approach is superior to a multi-dimensional gradient method in situations that often result in false negatives, and it localizes edges better as scale increases.
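A simplified Python sketch of the idea at a single pixel of an 8-bit grayscale image: for each candidate orientation the circular window is split into two halves and their intensity histograms are compared. A chi-square distance stands in here for the paper's richer color-distribution comparison, so this is an approximation of the compass operator, not the published one; names and defaults are illustrative.

    import numpy as np

    def compass_strength(img, cy, cx, radius=4, n_orient=8, bins=8):
        """Return (edge strength, orientation) at pixel (cy, cx): the
        orientation of the diameter that maximizes the difference between
        the two halves of a circular window."""
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        inside = ys ** 2 + xs ** 2 <= radius ** 2
        vals = img[cy + ys, cx + xs]          # assumes window stays inside image
        best, best_theta = 0.0, 0.0
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            # signed distance to the diameter with orientation theta
            side = xs * np.sin(theta) - ys * np.cos(theta)
            h1, _ = np.histogram(vals[inside & (side > 0)], bins=bins, range=(0, 256))
            h2, _ = np.histogram(vals[inside & (side < 0)], bins=bins, range=(0, 256))
            p, q = h1 / max(h1.sum(), 1), h2 / max(h2.sum(), 1)
            chi2 = 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))
            if chi2 > best:
                best, best_theta = chi2, theta
        return best, best_theta

Because whole distributions are compared rather than means, a textured region on either side of the edge does not wash out the response.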

173 citations


Journal ArticleDOI
TL;DR: Operations based on mathematical morphology which have been developed for binary and grayscale images are extended to color images and a set-theoretic analysis of these vector operations is presented.
Abstract: In this paper, operations based on mathematical morphology which have been developed for binary and grayscale images are extended to color images. We investigate two approaches for "color morphology": a vector approach and a component-wise approach. New vector morphological filtering operations are defined, and a set-theoretic analysis of these vector operations is presented. We also present experimental results comparing the performance of the vector approach and the component-wise approach for multiscale color image analysis and for noise suppression in color images.
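To make the two approaches concrete, here is a hedged Python sketch of color dilation done component-wise versus with a vector (reduced) ordering. The luminance-based ordering is an illustrative choice, not necessarily the ordering used in the paper.

    import numpy as np

    def dilate_marginal(img, k=3):
        """Component-wise (marginal) color dilation: grayscale dilation applied
        independently to each channel. May create colors not present in the input."""
        H, W, C = img.shape
        r = k // 2
        pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode='edge')
        out = np.empty_like(img)
        for i in range(H):
            for j in range(W):
                out[i, j] = pad[i:i + k, j:j + k].reshape(-1, C).max(axis=0)
        return out

    def dilate_vector(img, k=3):
        """Vector color dilation under a reduced ordering (here: luminance).
        The selected pixel is an original color vector, so no new colors appear."""
        H, W, C = img.shape
        r = k // 2
        pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode='edge')
        lum = pad @ np.array([0.299, 0.587, 0.114])   # scalar rank function
        out = np.empty_like(img)
        for i in range(H):
            for j in range(W):
                win = pad[i:i + k, j:j + k].reshape(-1, C)
                idx = lum[i:i + k, j:j + k].reshape(-1).argmax()
                out[i, j] = win[idx]
        return out

The contrast between the two functions is exactly the trade-off the paper studies: marginal processing is simple but can introduce spurious colors, while vector processing preserves the input gamut.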

173 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed operator for color image compression can have output picture quality acceptable to human eyes, and the proposed edge operator can detect the color edge at the subpixel level.
Abstract: This paper presents a new moment-preserving thresholding technique, called the binary quaternion-moment-preserving (BQMP) thresholding, for color image data. Based on representing color data by quaternions, the statistical parameters of color data can be expressed through the definition of quaternion moments. Analytical formulas for the BQMP thresholding can thus be determined by using the algebra of the quaternions. The computation time for the BQMP thresholding is on the order of the data size. By using the BQMP thresholding, quaternion-moment-based operators are designed for color image processing applications such as color image compression, multiclass clustering of color data, and subpixel color edge detection. The experimental results show that the proposed operator for color image compression can produce output picture quality acceptable to the human eye. In addition, the proposed edge operator can detect color edges at the subpixel level. Therefore, the proposed BQMP thresholding can be used as a tool for color image processing.
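For orientation only, a minimal Python sketch of the underlying encoding: each RGB pixel is treated as a pure quaternion, and the first two quaternion moments of a pixel set are computed. The analytical BQMP threshold derived from these moments is in the paper and is not reproduced here; names are illustrative.

    import numpy as np

    def quaternion_moments(pixels):
        """Encode RGB pixels as pure quaternions q = r*i + g*j + b*k and return
        the first moment (a pure quaternion, stored as its 3 imaginary parts)
        and the scalar second moment E[q * conj(q)] = E[|q|^2]."""
        q = np.asarray(pixels, dtype=float)          # shape (N, 3): imaginary parts
        m1 = q.mean(axis=0)                          # first quaternion moment
        m2 = np.mean(np.sum(q * q, axis=1))          # second moment (scalar)
        return m1, m2

The moment-preserving step then chooses two representative colors and a threshold so that these moments are preserved by the binarized data, analogously to Tsai's grayscale moment-preserving thresholding.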

155 citations


Journal ArticleDOI
01 Aug 1999
TL;DR: This work presents a new approach for edge detection, in which the gray level image is locally thresholded using the local mean value to produce a binary image and is also globally thresholded by the variance value of the image.
Abstract: Localization of edge points in images is one of the most important starting steps in image processing. Many varied edge detection techniques have been proposed, and different edge detectors give distinct responses to the same image, revealing different details. This work presents a new approach for edge detection. The gray level image is locally thresholded using the local mean value to make a binary image. The binary image is checked for edges by comparing it with known edge-like patterns, using Boolean algebra. This approach recognizes nearly all real edges, but also edges due to noise. To remove the edges due to noise, we adopt a second approach: the image is globally thresholded by the variance value of the image. The two resulting images are logically ANDed to get the final edge map.
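A hedged Python sketch of the two stages as described above. The four 3x3 step-edge masks are illustrative placeholders rather than the paper's exact pattern set, and the "global threshold at the image variance" is a literal reading of the abstract's wording.

    import numpy as np

    def boolean_edge_map(img, patterns=None):
        """Stage 1: threshold each 3x3 window by its local mean and compare the
        binary pattern against edge-like masks. Stage 2: globally threshold the
        image by its variance value. The AND of the two maps is the edge map."""
        if patterns is None:
            half = np.zeros((3, 3), dtype=bool)
            half[:, :2] = True                         # vertical step-edge mask
            patterns = [half, half.T, ~half, ~half.T]  # 4 step-edge orientations
        H, W = img.shape
        stage1 = np.zeros((H, W), dtype=bool)
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
                binary = patch >= patch.mean()            # local-mean threshold
                stage1[i, j] = any(np.array_equal(binary, p) for p in patterns)
        stage2 = img.astype(float) > img.var()            # global variance threshold
        return stage1 & stage2                            # suppress noise-only edges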

110 citations


Proceedings ArticleDOI
09 May 1999
TL;DR: In this paper, the vector angle between two adjacent pixels is calculated to distinguish differences in chromaticity, independent of luminance or intensity, and the Euclidean distance in RGB space is used for edge detection.
Abstract: This paper introduces a new edge detection approach for color images. The method is based on the calculation of the vector angle between two adjacent pixels. Unlike Euclidean distance in RGB space, the vector angle distinguishes differences in chromaticity, independent of luminance or intensity. It is particularly well suited to applications where differences in illumination are irrelevant. Both metrics were implemented as modified Roberts edge operators to determine their effectiveness on an artificial image. The Euclidean method found edges across both luminance and chromatic boundaries whereas the vector angle method detected only chromatic differences.
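A small Python sketch of both metrics as modified Roberts operators on an RGB image, in the spirit of the comparison described above (the exact operator definition in the paper may differ; names are illustrative).

    import numpy as np

    def roberts_color(img, metric="angle"):
        """Edge strength at each pixel is the larger of the two diagonal pixel
        differences, measured either by Euclidean distance in RGB or by the
        vector angle between the two colors. The angle is insensitive to
        intensity scaling, so it responds mainly to chromatic changes."""
        a = img[:-1, :-1].astype(float)          # p(i, j)
        b = img[1:, 1:].astype(float)            # p(i+1, j+1)
        c = img[:-1, 1:].astype(float)           # p(i, j+1)
        d = img[1:, :-1].astype(float)           # p(i+1, j)

        def dist(u, v):
            if metric == "euclid":
                return np.linalg.norm(u - v, axis=-1)
            cosang = np.sum(u * v, axis=-1) / (
                np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1) + 1e-12)
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        return np.maximum(dist(a, b), dist(c, d))

Run with metric="euclid" the operator fires on both luminance and chromatic boundaries; with metric="angle" a pure brightness change between pixels of the same hue produces (ideally) no response.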

94 citations


Book ChapterDOI
TL;DR: A robust statistical measure of the gradient variation is computed and used in an anisotropic diffusion framework to determine a spatially varying "edge-stopping" parameter σ, and it is shown how to determine this parameter for two edge-stopping functions described in the literature.
Abstract: Edges are viewed as statistical outliers with respect to local image gradient magnitudes. Within local image regions we compute a robust statistical measure of the gradient variation and use this in an anisotropic diffusion framework to determine a spatially varying "edge-stopping" parameter σ. We show how to determine this parameter for two edge-stopping functions described in the literature (Perona-Malik and the Tukey biweight). Smoothing of the image is related to the local texture: in regions of low texture, small gradient values may be treated as edges, whereas in regions of high texture, large gradient magnitudes are necessary before an edge is preserved. Intuitively these results have similarities with human perceptual phenomena such as masking and "popout". Results are shown on a variety of standard images.
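An unoptimized Python sketch of this idea, assuming the robust scale is taken as 1.4826 times the median absolute deviation (MAD) of gradient magnitudes in a local window and plugged into the Perona-Malik stopping function. Window size, step size, and boundary handling (periodic via np.roll) are illustrative choices.

    import numpy as np

    def robust_anisotropic_diffusion(img, n_iter=20, dt=0.2, win=7):
        """Anisotropic diffusion with a spatially varying edge-stopping
        parameter sigma estimated robustly from local gradient magnitudes
        (slow, illustrative implementation)."""
        u = img.astype(float).copy()
        r = win // 2
        for _ in range(n_iter):
            # 4-neighbour finite differences
            dn = np.roll(u, 1, 0) - u
            ds = np.roll(u, -1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            grad = np.sqrt(((dn - ds) / 2) ** 2 + ((de - dw) / 2) ** 2)
            # local robust scale of gradient magnitude (MAD)
            sigma = np.empty_like(u)
            for i in range(u.shape[0]):
                for j in range(u.shape[1]):
                    patch = grad[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                    med = np.median(patch)
                    sigma[i, j] = 1.4826 * np.median(np.abs(patch - med)) + 1e-6
            g = lambda d: np.exp(-(d / sigma) ** 2)       # Perona-Malik stopping fn
            u += dt * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds +
                       g(np.abs(de)) * de + g(np.abs(dw)) * dw)
        return u

In flat regions sigma is tiny, so even weak gradients stop the diffusion; in textured regions sigma is large and only strong gradients survive as edges, matching the masking behaviour discussed in the abstract.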

89 citations


Patent
19 Jan 1999
TL;DR: An image processing system for automatically extracting image frames suitable for printing and/or visual presentation from compressed image data is described; it includes a face detector that detects whether an image frame contains at least a face.
Abstract: An image processing system for automatically extracting image frames suitable for printing and/or visual presentation from compressed image data is described. The image processing system includes a face detector that detects whether an image frame contains at least a face. The image processing system also includes a blur detector that determines the blur indicator value of the image frame directly from the information contained in the compressed image data if the image frame is determined to contain a face. The blur detector indicates that the image frame is suitable for printing and/or visual presentation if the blur indicator value of the image frame is less than a predetermined threshold. The image processing system may also include a motion analyzer that determines whether the image frame is a super-resolution image frame suitable for printing and/or visual presentation if the image frame does not contain any face. The image processing system may also include a face tracker that detects whether the image frame contains a non-frontal face.

83 citations


Patent
04 Aug 1999
TL;DR: In this paper, an imaging system with brightness control includes an image capture subsystem and an image control block and is adapted for use in conjunction with an image processing application to detect roadway lane markings from a moving vehicle.
Abstract: An imaging system with brightness control includes an image capture subsystem and an image control block and is adapted for use in conjunction with an image processing application. The image capture subsystem receives an image and converts this image into digital image data. The digital signal is then stored in a video buffer for access by the image control block. The image control block provides brightness control of an image sensor in the image control subsystem to optimize the brightness of the desired area of interest in the image relative to the background. In one application, the imaging system is used in conjunction with a lane tracking system image processing application to detect roadway lane markings from a moving vehicle.

Journal ArticleDOI
TL;DR: Variations are introduced to both the vector order statistic operators and the difference vector operators to improve noise performance; both demonstrate the ability to attenuate noise with added algorithm complexity.
Abstract: Various approaches to edge detection for color images, including techniques extended from monochrome edge detection as well as vector space approaches, are examined. In particular, edge detection techniques based on vector order statistic operators and difference vector operators are studied in detail. Numerous edge detectors are obtained as special cases of these two classes of operators. The effect of distance measures on the performance of different color edge detectors is studied by employing distance measures other than the Euclidean norm. Variations are introduced to both the vector order statistic operators and the difference vector operators to improve noise performance. They both demonstrate the ability to attenuate noise with added algorithm complexity. Among them, the difference vector operator with adaptive filtering shows the most promising results. Other vector directional filtering techniques are also introduced and utilized for color edge detection. Both quantitative and subjective tests are performed in evaluating the performance of the edge detectors, and a detailed comparison is presented. © 1999 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(99)00904-6)
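For concreteness, a Python sketch of one representative vector order statistic detector, the vector range; this is my choice of representative, and the paper covers many more variants, distance measures, and difference vector operators.

    import numpy as np

    def vector_range_edge(img, win=3):
        """Vector range edge detector: pixels in a window are ranked by their
        aggregate Euclidean distance to all other pixels in the window (reduced
        ordering), and the edge strength is the distance between the highest-
        and lowest-ranked color vectors."""
        H, W, C = img.shape
        r = win // 2
        out = np.zeros((H, W))
        for i in range(r, H - r):
            for j in range(r, W - r):
                vecs = img[i - r:i + r + 1, j - r:j + r + 1].reshape(-1, C).astype(float)
                # aggregate distance of each vector to all others
                dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
                agg = dists.sum(axis=1)
                lo, hi = np.argmin(agg), np.argmax(agg)
                out[i, j] = np.linalg.norm(vecs[hi] - vecs[lo])
        return out

Swapping the Euclidean norm for another distance, or combining several ranked vectors instead of only the extremes, yields the other detectors studied in the paper.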

Journal ArticleDOI
01 May 1999
TL;DR: A new shape-from-shading (SFS) algorithm which replaces the brightness constraint with an intensity gradient constraint is proposed, which obtains the solution by the minimization of an error function over the entire image.
Abstract: We propose a new shape-from-shading (SFS) algorithm which replaces the brightness constraint with an intensity gradient constraint. This is a global approach which obtains the solution by the minimization of an error function over the entire image. Through the linearization of the gradient of the reflectance map and the discretization of the surface gradient, the intensity gradient can be expressed as a linear function of the surface height. A quadratic error function, which involves the intensity gradient constraint and the traditional smoothness constraint, is minimized efficiently by solving a sparse linear system using the multigrid technique. Neither the information at singular points nor the information at occluding boundaries is needed for the initialization of the height. Results for real images are presented to show the robustness of the algorithm, and the execution time is demonstrated to prove its efficiency.
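For orientation, a plausible form of the quadratic objective described above (notation is illustrative, not the paper's): Z is the surface height, p and q its discretized x and y gradients, R the reflectance map linearized about the current estimate, I the image, and λ the smoothness weight:

    \[
    E(Z) \;=\; \sum_{i,j}\big\|\nabla I_{i,j} - \nabla R(p_{i,j}, q_{i,j})\big\|^{2}
    \;+\; \lambda \sum_{i,j}\big(p_x^2 + p_y^2 + q_x^2 + q_y^2\big)_{i,j}
    \]

Because R is linearized in (p, q) and p, q are linear in Z, setting the derivative of E with respect to Z to zero gives the sparse linear system that the paper solves with a multigrid method.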

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A segmentation method and associated file format for storing images of color documents that can produce very highly-compressed document files that nonetheless retain excellent image quality.
Abstract: We describe a segmentation method and associated file format for storing images of color documents. We separate each page of the document into three layers, containing the background (usually one or more photographic images), the text, and the color of the text. Each of these layers has different properties, making it desirable to use different compression methods to represent the three layers. The background layers are compressed using any method designed for photographic images, the text layers are compressed using a token-based representation, and the text color layers are compressed by augmenting the representation used for the text layers. We also describe an algorithm for segmenting images into these three layers. This representation and algorithm can produce very highly-compressed document files that nonetheless retain excellent image quality.

Patent
26 Oct 1999
TL;DR: In this paper, a method for characterizing an image where a number of test areas of predefined shape and size are located on the image is described by statistical descriptions of the frequency distribution of color or texture of the test areas.
Abstract: A method for characterizing an image where a number of test areas of predefined shape and size are located on the image. The color or the texture of the image over each of the test areas is quantified. The image can be characterized by statistical descriptions of the frequency distribution of color or texture of the test areas.

Dissertation
01 Jan 1999
TL;DR: The work in this thesis is motivated from a practical point of view by several shortcomings of current methods, including the inability of all known methods to properly segment objects from the background without interference from object shadows and highlights.
Abstract: This work is based on Shafer's Dichromatic Reflection Model as applied to color image formation. The color spaces RGB, XYZ, CIELAB, CIELUV, rgb, l1l2l3, and the new h1h2h3 color space are discussed from this perspective. Two color similarity measures are studied: the Euclidean distance and the vector angle. The work in this thesis is motivated from a practical point of view by several shortcomings of current methods. The first problem is the inability of all known methods to properly segment objects from the background without interference from object shadows and highlights. The second shortcoming is the non-examination of the vector angle as a distance measure that is capable of directly evaluating hue similarity without considering intensity, especially in RGB. Finally, there is inadequate research on the combination of hue- and intensity-based similarity measures to improve color similarity calculations given the advantages of each color distance measure. These distance measures were used for two image understanding tasks: edge detection, and one strategy for color image segmentation, namely color clustering. Edge …

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This paper presents a methodology to perform edge detection in range images in order to provide a reliable and meaningful edge map, which helps to guide and improve range image segmentation by clustering techniques.
Abstract: Edge detection is an unsolved problem in that, so far, there is no general optimal solution. However, edge detection provides rich information about the scene being observed. This is particularly true in range images, where 3D information is explicit. Many researchers have been taking advantage of edge detection information to improve the segmentation of range images by integrating edge detection with other different segmentation techniques. This paper presents a methodology to perform edge detection in range images in order to provide a reliable and meaningful edge map, which helps to guide and improve range image segmentation by clustering techniques. The obtained edge map leads to three important improvements: (1) the definition of the ideal number of regions to initialize the clustering algorithm; (2) the selection of suitable initial cluster centers; and (3) the successful identification of distinct regions with similar features. Experimental results that substantiate the effectiveness of this work are presented.

Patent
17 Mar 1999
TL;DR: In this article, a boundary between a character and a background is detected in color image data, and if the detected edge exists at the side of a character, it is decided as a true edge, and the image data are subjected to edge emphasis.
Abstract: In order to improve character edge reproducibility when reproducing a color document image, a boundary between a character and its background is detected in the color image data. If the detected edge lies on the character side of the boundary, it is judged to be a true edge, and the image data are subjected to edge emphasis. As a result, when the image is reproduced, undesirable partial whitening within characters can be prevented.

Patent
23 Mar 1999
TL;DR: In this article, an image stabilizer selectively adds image data from a background image to the current image to compensate for missing data in the original image that is missing due to a sudden shift in the current images relative to the previous images.
Abstract: An image stabilizer selectively adds image data from a background image to the current image to compensate for data in the current image that is missing due to a sudden shift in the current image relative to the previous images. The current image is warped into the coordinate system of the background image and then the warped current image is merged with the background image to replace any blank areas in the current image with corresponding pixel values from the background image. The image data from the background image which is to be substituted into the warped current image is subject to a low-pass filtering operation before it is merged with the warped current image. The warped current image is merged with the background image to form a modified background image which is then merged with the warped current image. The background image is, itself, warped to track camera motion in obtaining the current image before the background image is merged with the warped current image.

Patent
Min-Cheol Hong1
24 Jun 1999
TL;DR: In this paper, a hybrid motion compensation discrete cosine transform (hybrid MC/DCT) was used to restore a compressed image by using a hybrid smoothing functional having a smoothing degree of an image and reliability for an original image.
Abstract: The present invention relates to an image processing technique, and in particular to a method for restoring a compressed image by using a hybrid motion compensation discrete cosine transform (hybrid MC/DCT) mechanism, including: a step of defining a smoothing functional having a smoothing degree of an image and reliability for an original image by pixels having an identical property in image block units; and a step of computing a restored image by performing a gradient operation on the smoothing functional in regard to the original image, thereby preventing the blocking artifacts and the ringing effects in regard to the pixels having an identical property in image blocks.
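As a hedged sketch of the kind of functional and gradient step such a claim describes (a standard regularized-restoration form, not necessarily the patent's exact definition): g is the decoded image, Q a high-pass operator measuring the smoothing degree, λ a per-block weight shared by pixels with an identical property, and β the gradient step size:

    \[
    M(f) \;=\; \lambda\,\|Qf\|^{2} + \|g - f\|^{2}, \qquad
    f^{(k+1)} \;=\; f^{(k)} - \beta\,\nabla M\big(f^{(k)}\big)
             \;=\; f^{(k)} - \beta\Big(2\lambda\,Q^{\top}Q\,f^{(k)} + 2\big(f^{(k)} - g\big)\Big)
    \]

Iterating the gradient step smooths block boundaries (blocking) and oscillations near edges (ringing) while keeping the restored image close to the decoded data.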

Proceedings ArticleDOI
27 Sep 1999
TL;DR: A measure for the ambiguity of image points that allows matching of distinctive points first and breaks down the matching task into smaller and separate subproblems is proposed.
Abstract: Stereo correspondence is hard because different image features can look alike. We propose a measure for the ambiguity of image points that allows matching of distinctive points first and breaks down the matching task into smaller and separate subproblems. Experiments with an algorithm based on this measure demonstrate the ensuing efficiency and low likelihood of incorrect matches.

Journal ArticleDOI
TL;DR: Extensive simulations show that multichannel image processing with the proposed algorithms VRFL and VRFd, based on the lp-norm and on directional processing respectively, significantly outperforms linear and some nonlinear techniques, e.g., vector FIR median hybrid filters (VFMH).
Abstract: Rational filters are extended to multichannel signal processing and applied to image interpolation. Two commonly used decimation schemes are considered: a rectangular grid and a quincunx grid. For each decimation lattice, we propose a number of adaptive resampling algorithms based on the vector rational filter (VRF). These algorithms exhibit desirable properties such as edge and detail preservation and accurate chromaticity estimation. In these approaches, color image pixels are considered as three-component vectors in the color space. Therefore, the inherent correlation that exists between the different color components is not ignored. This leads to better image quality compared to that obtained by componentwise or marginal processing. Extensive simulations show that multichannel image processing with the proposed algorithms VRFL and VRFd, based on the lp-norm and on directional processing respectively, significantly outperforms linear and some nonlinear techniques, e.g., vector FIR median hybrid filters (VFMH). Some images interpolated using VRFL and VRFd are presented for qualitative comparison. These images are free from blockiness and jaggedness, confirming the quantitative results. © 1999 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(99)00105-1)

Patent
Makoto Takaoka1
28 May 1999
TL;DR: An image processing apparatus which obtains an output result faithful to an original image independently of characteristics of a device to optically read the original image is described in this paper, where the image data is stored with Profile (color characteristic information) unique to the image scanner.
Abstract: An image processing apparatus which obtains an output result faithful to an original image independently of characteristics of a device to optically read the original image. When the original image is read by an image scanner, layout and the like of the read image are analyzed, recognition is performed on characters, and compression is performed on an image area. The image data is stored with Profile (color characteristic information) unique to the image scanner. When the image is displayed or print-outputted, color matching is performed in accordance with Profile of a display device or printer and the Profile of the image scanner, and the image is reproduced.

Proceedings ArticleDOI
17 Oct 1999
TL;DR: Least squares fitting minimizes the sum of squared errors-of-fit in predefined measures; for the geometric fitting of an ellipse, a robust algorithm is proposed based on the coordinate description of the corresponding point on the ellipse for each given point.
Abstract: Least squares fitting minimizes the sum of squared errors-of-fit in predefined measures. In geometric fitting, the error distances are defined as the shortest distances from the given points to the geometric feature to be fitted. For the geometric fitting of an ellipse, a robust algorithm is proposed. It is based on the coordinate description of the corresponding point on the ellipse for each given point, where the line connecting the two points is the shortest path from the given point to the ellipse. As a practical application example, we show geometric ellipse fitting to the image of circular point targets, where the contour points are weighted with their image gradient across the boundary of the image ellipse.

Patent
25 Oct 1999
TL;DR: A method and apparatus for the control of darkness/lightness in a digital image rendered by a printing system is described in this article, where an original image containing antialiased edges is initially thresholded and filtered to determine an edge map.
Abstract: A method and apparatus for the control of darkness/lightness in a digital image rendered by a printing system. An original image containing antialiased edges is initially thresholded and filtered to determine an edge map. With knowledge of the edge via the edge map, a darkness adjustment is applied to the digital image. Gray-edge compaction is applied thereafter to adjust the position of the edge.

Patent
16 Apr 1999
TL;DR: In this article, a 2×2 matrix called the contrast form is defined comprised of the first derivatives of the n-dimenional image function with respect to the image plane, and a local metric defined on n-dimensional photometric space.
Abstract: A method is presented for the treatment and visualization of local contrast in n-dimensional multispectral images, which directly applies to n-dimensional multisensor images as well. A 2×2 matrix called the contrast form is defined comprised of the first derivatives of the n-dimenional image function with respect to the image plane, and a local metric defined on n-dimensional photometric space. The largest eigenvector of this 2×2 contrast form encodes the inherent local contrast at each point on the image plane. It is shown how a scalar intensity function defined on n-dimensional photometric space is used to select a preferred orientation for this eigenvector at each image point in the n-dimensional image defining the contrast vector field for an n-dimensional image. A grey level visualization of local n-dimensional image contrast is produced by the greylevel image intensity function such that the sum of the square difference between the components of the gradient vector of this intensity function and the components of the contrast vector field is minimized across the image plane. This is achieved by solving the corresponding Euler-Lagrange equations for this variational problem. An m-dimensional image, 1
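A Python sketch of the contrast form itself (essentially a multichannel structure tensor); the sign-selection rule based on a scalar intensity function and the variational grey-level reconstruction described in the patent are not reproduced here, and the function name and defaults are illustrative.

    import numpy as np

    def contrast_form_field(img, metric=None):
        """For an H x W x n image, build at each pixel G = J^T M J, where J holds
        the x/y first derivatives of the n channels and M is a metric on
        photometric space (identity by default). Returns the local contrast
        magnitude (sqrt of the largest eigenvalue of G) and its orientation
        (direction of the largest eigenvector, sign left ambiguous)."""
        H, W, n = img.shape
        if metric is None:
            metric = np.eye(n)
        fx = np.gradient(img.astype(float), axis=1)   # d/dx for every channel
        fy = np.gradient(img.astype(float), axis=0)   # d/dy for every channel
        # entries of G, accumulated over channels with the metric M
        gxx = np.einsum('ijc,cd,ijd->ij', fx, metric, fx)
        gxy = np.einsum('ijc,cd,ijd->ij', fx, metric, fy)
        gyy = np.einsum('ijc,cd,ijd->ij', fy, metric, fy)
        # largest eigenvalue / eigenvector of [[gxx, gxy], [gxy, gyy]]
        tr, det = gxx + gyy, gxx * gyy - gxy ** 2
        lam = 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0)))
        theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)  # orientation of max contrast
        return np.sqrt(np.maximum(lam, 0)), theta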

Proceedings ArticleDOI
24 Oct 1999
TL;DR: In this paper, the PI-surface is projected onto the detector, not quite along a perfect line, but nevertheless with most of the points concentrated along a certain non-horizontal line.
Abstract: Aiming for a more coherent 1D filtering, the authors propose an improvement of the original PI-method. A PI-surface is defined by the object points which enter and exit over the detector edges simultaneously. In between these two events the PI-surface is projected onto the detector, not quite along a perfect line, but nevertheless with most of the points concentrated along a certain non-horizontal line. The authors show that one-dimensional filtering along these slanted lines improves the image quality considerably.

Proceedings ArticleDOI
20 Sep 1999
TL;DR: This work uses both a region model, based on distributions of pixel colors, and an edge model, which removes false positives, to perform corner detection on color images whose regions contain texture.
Abstract: Corner models in the literature have lagged behind edge models with respect to color and shading. We use both a region model, based on distributions of pixel colors, and an edge model, which removes false positives, to perform corner detection on color images whose regions contain texture. We show results on a variety of natural images at different scales that highlight the problems that occur when boundaries between regions have curvature.

Patent
20 Dec 1999
TL;DR: In this article, an image processing apparatus and image processing method capable of reducing a load for drawing, diversifying image representation and smoothing character motion is presented. But it is not possible to generate an image in which a motion of an object coincides with reproduced music with less computation.
Abstract: An object of this invention is to provide an image processing apparatus and image processing method capable of reducing the drawing load, diversifying image representation, and smoothing character motion. An image processing means (2) synthesizes background image data (3d), composed of a movie image, with character data (4d), composed of a solid polygon-data image, and supplies the result to a display (6). An image (60) produced by synthesizing the background image (3) with the character (4) is displayed on the display (6). A simple model (5d), composed of three-dimensional data for determining the precedence in erasing a negative face between the background image data (3d) and the character data (4d), is set in part of the background image data. The image processing means (2) determines the portion in which the character (4) is hidden by the background (3) based on the simple model and erases the corresponding portion. This erase processing enables the screen to be represented three-dimensionally. Meanwhile, it is possible to generate an image in which the motion of an object coincides with reproduced music, with a small amount of computation and without depending on complicated control.

01 Jan 1999
TL;DR: A digital image processing method to measure translation error between color records, computed directly from the acquired image, has proven reliable and can be applied to images acquired from both digital cameras and scanners.
Abstract: identify a region of interest (ROI)fit a linear equation to the centroid locationusing linear fit to line, project the image data along the edge direction to top or bottom edge of ROIapply window and compute edgederivativeyof this arraycompute the centroid of each line (LSF) A compute discrete Fourier transform (DFT) of this array‘bin’ data, sampled at 1/4 of original image samplingnormalize modulus as SFRcompute derivative in the x (pixel) direction using FIR filterreport resultstransform image data using the OECFOECFderive luminance record if data is R, G, B .store equationfor eachcolor recordcompute edge derivative of thisarray, and appl window to resultcompute discrete Fourier We describe a digital image processing method to measure translation error between color records, that can be computed from the acquired image. Several steps in the ISO 12233 standard for measurement of resolution for digital cameras are used to locate an edge for each color plane. The method has proven reliable and can be applied to images acquired from both digital cameras and scanners.