Showing papers in "CVGIP: Graphical Models and Image Processing in 1994"
••
TL;DR: An efficient three-dimensional (3-D) parallel thinning algorithm for extracting both the medial surfaces and the medial axes of a 3-D object and its use in defect analysis of objects produced by casting and forging is discussed.
1,357 citations
••
TL;DR: It is shown that the method of radial basis functions provides a powerful mechanism for processing facial expressions and is applicable to other elastic objects as well.
277 citations
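The radial-basis-function machinery behind such processing reduces to solving a small linear system for kernel weights. A minimal 1-D sketch with a Gaussian kernel (function names and the kernel width are illustrative assumptions, not taken from the paper):

```python
import math

def rbf_fit(xs, ys, width=1.0):
    """Solve for Gaussian-RBF weights w so that sum_j w_j phi(|x_i - x_j|) = y_i."""
    n = len(xs)
    # Interpolation matrix A[i][j] = exp(-((x_i - x_j)/width)^2).
    a = [[math.exp(-((xs[i] - xs[j]) / width) ** 2) for j in range(n)]
         for i in range(n)]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    m = [row[:] + [y] for row, y in zip(a, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (m[r][n] - sum(m[r][c] * w[c] for c in range(r + 1, n))) / m[r][r]
    return w

def rbf_eval(xs, w, x, width=1.0):
    """Evaluate the fitted RBF expansion at x."""
    return sum(wi * math.exp(-((x - xi) / width) ** 2) for wi, xi in zip(w, xs))
```

Because the Gaussian kernel matrix is symmetric positive definite for distinct centers, the fit interpolates the data exactly.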
••
TL;DR: The entropy method for image thresholding suggested by Kapur et al. has been modified and a more pertinent information measure of the image is obtained.
199 citations
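For reference, the unmodified Kapur criterion picks the threshold that maximizes the sum of the entropies of the two class distributions. A minimal sketch (the paper's contribution is a modified information measure, which this does not reproduce):

```python
import math

def kapur_threshold(hist):
    """Return the gray level maximizing foreground + background entropy."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])          # background mass
        w1 = 1.0 - w0            # foreground mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a cleanly bimodal histogram the maximizer falls in the empty band between the two modes.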
••
TL;DR: This method has been shown to reduce the number of binarization failures from 33% to 6% on difficult images and to improve subsequent OCR recognition rates from about 95% to 97.5% on binary images.
165 citations
••
TL;DR: A hill-clustering technique is applied to the image histogram to approximately determine its peak locations; the global minimum of each rational function between adjacent peaks then corresponds to a multilevel threshold value.
114 citations
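Once peak locations are known, each multilevel threshold can be read off as the deepest valley between adjacent peaks. A toy sketch of that step (this stands in for, and does not reproduce, the paper's rational-function minimization):

```python
def valley_thresholds(hist, peaks):
    """Given sorted indices of histogram peaks, return the index of the
    minimum histogram value between each pair of adjacent peaks."""
    thresholds = []
    for a, b in zip(peaks, peaks[1:]):
        valley = min(range(a, b + 1), key=lambda i: hist[i])
        thresholds.append(valley)
    return thresholds
```

With k peaks this yields k-1 thresholds, i.e. a k-level segmentation of the gray scale.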
••
TL;DR: This article addresses the question of how to define boundaries in multidimensional digital spaces so that they are "closed" and connected, and so that they partition the digital space into an interior set that is connected and an exterior set that is connected.
92 citations
••
TL;DR: An algorithm is presented which approximates a sequence of uniformly spaced single-valued data by a sum of Gaussians with a prescribed accuracy.
87 citations
••
TL;DR: If it is not essential to minimize m, simple modifications of the algorithms afford a reduction by a factor of n in both time and space, and all of the authors' algorithms exhibit O(n²) space complexity.
80 citations
••
TL;DR: It is shown that the detection of the zero-crossings and the local extrema of a wavelet transform of the histogram gives a complete characterization of the peaks in the histogram, that is to say, the values at which they start, end, and are extreme.
67 citations
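The flavor of this peak characterization can be imitated with a much cruder tool: smooth the histogram and take sign changes of its first difference as peak locations (a coarse stand-in for the multiscale wavelet analysis, not the paper's method):

```python
def smooth(h, k=2):
    """Moving-average smoothing with window half-width k (clipped at edges)."""
    return [sum(h[max(0, i - k):i + k + 1]) / len(h[max(0, i - k):i + k + 1])
            for i in range(len(h))]

def peak_locations(hist):
    """Peaks = positions where the smoothed first difference crosses from + to -."""
    s = smooth(hist)
    d = [s[i + 1] - s[i] for i in range(len(s) - 1)]
    return [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] <= 0]
```

On a bimodal histogram this recovers both mode locations; the wavelet formulation additionally localizes where each peak starts and ends across scales.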
••
TL;DR: The algorithm is compared with several other smoothing techniques for additively corrupted images and the smoothing of synthetic aperture radar images is used as an example for multiplicative noise.
62 citations
••
TL;DR: The algebraic specification of a prototype interactive geometric modeler for 3-D objects, whose topologies are represented by 3-dimensional generalized maps, is presented; it constitutes an efficient formal framework for developing large pieces of software in the area of boundary representation.
••
TL;DR: A particularly convenient representation of ellipsoids as elements of the vector space of symmetric matrices is presented, which allows the straightforward inclusion of geometric constraints on the reconstructed ellipsoid in the form of inner and outer bounds on recovered ellipsoid shape.
••
TL;DR: A morphological approach to character string extraction from overlapping text/background images that minimizes the shape distortion of characters is described and implemented.
••
TL;DR: The proposed algorithm, called E-GNC, can be considered an extension of the graduated nonconvexity (GNC), first proposed by Blake and Zisserman for noninteracting discontinuities, and is shown to give satisfactory results with a low number of iterations.
••
TL;DR: This algorithm uses a plane sweep to presort the segments; it then operates on a list of slabs that efficiently stores a single level of a segment tree, and it performs well even with inadequate physical memory.
••
TL;DR: These models are based on the 2-D linear phase portrait and consist of a superposition of flow primitives equivalent to the canonical forms of phase portraits; they are employed to compress scalar images that exhibit little or only gradual variation along the flow streamlines.
••
TL;DR: A straightforward and economical iterative procedure is developed which is shown to be stable and have rapid convergence to an unbiased least squares fit on a wide range of synthetic data.
••
TL;DR: By applying the blur estimation algorithm to natural and synthetic images with different amounts of blur and noise, it is shown that the algorithm gives reliable estimates for the spread of the blurring kernel even at low signal-to-noise ratios.
••
TL;DR: This paper presents a framework that, in its most general form, is a statistically optimal technique for the extraction of specific geometric features of objects directly from the noisy projection data, focusing on the tomographic reconstruction of binary polygonal objects from sparse and noisy data.
••
TL;DR: This paper first describes the options and procedures for constructing such a pseudo-hexagonal grid and then demonstrates techniques of resampling digital images on the pseudo hexagonal grid.
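One common way to build such a pseudo-hexagonal grid is the "brick wall" approximation: alternate rows are offset by half a pixel, and values at the offset positions are obtained by linear interpolation along the row. The sketch below uses that construction as an assumption; the paper surveys several options:

```python
def to_pseudo_hex(image):
    """Resample a square-grid image onto a brick-wall pseudo-hex grid:
    even rows are kept, odd rows are shifted half a pixel to the right
    with values linearly interpolated between horizontal neighbors."""
    out = []
    for r, row in enumerate(image):
        if r % 2 == 0:
            out.append(list(row))
        else:
            shifted = []
            for c in range(len(row)):
                right = row[c + 1] if c + 1 < len(row) else row[c]
                shifted.append((row[c] + right) / 2.0)  # value at x = c + 0.5
            out.append(shifted)
    return out
```

The half-pixel stagger gives each pixel six nearest neighbors, approximating true hexagonal connectivity on square-grid hardware.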
••
TL;DR: A new method is proposed for image restoration of a gray-level image blurred by an erroneous point spread function and corrupted by either additive or multiplicative noise based on a Markov random field model with an appropriate line field, whereby it has the ability to restore discontinuities.
••
TL;DR: The proposed method is tested against forms with Code 39 bar codes, and the results of the experiments show the method is very effective and robust across different form types and against scanning imperfections.
••
TL;DR: Algorithms to process off-line Arabic handwriting prior to recognition are presented and special rules to enforce temporal information on the stroke to obtain the most likely traversal that is consistent with Arabic handwriting are applied.
••
TL;DR: The technique exploits the intrinsic ordering of the triangles produced by the surface extraction algorithm by adopting a Back-to-Front visualization technique and permits the rendering of high resolution volumetric datasets in computational environments with limited capabilities in terms of memory and graphics hardware.
••
TL;DR: Simple improvements are proposed to modify the results so that the improved algorithm will make possible robust, flexible, and correct region filling and complete reconstruction of an image.
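The baseline operation being made robust here is seed-based region filling. A minimal iterative flood-fill sketch (illustrative only; the paper's improvements concern correctness on degenerate regions, which this toy version does not address):

```python
def region_fill(grid, seed, new):
    """4-connected flood fill: replace the connected region of grid values
    equal to grid[seed] with the value `new`, starting from seed=(row, col)."""
    rows, cols = len(grid), len(grid[0])
    old = grid[seed[0]][seed[1]]
    if old == new:
        return grid
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old:
            grid[r][c] = new
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return grid
```

The explicit stack avoids the recursion-depth failures that plague naive recursive fills on large regions.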
••
TL;DR: A Fourier-based method is described that accounts for aliasing and that, for a variety of 512 × 512 image pairs, gives misregistration estimates with standard errors quite often less than 1/100th of a pixel in both horizontal and vertical directions.
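The core Fourier idea is phase correlation: a translation between two signals appears as a linear phase ramp in the frequency domain, and normalizing the cross-spectrum turns it into a sharp correlation peak at the shift. A 1-D integer-shift sketch (the paper's method additionally models aliasing and interpolates the peak to subpixel accuracy, neither of which is done here):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def integer_shift(a, b):
    """Estimate the cyclic shift taking signal a to signal b by phase correlation."""
    A, B = dft(a), dft(b)
    # Normalized cross-spectrum: keep only the phase difference.
    cross = [(Bk * Ak.conjugate()) / (abs(Bk * Ak.conjugate()) or 1.0)
             for Ak, Bk in zip(A, B)]
    corr = idft(cross)
    return max(range(len(corr)), key=lambda i: corr[i].real)
```

Because only phase is retained, the correlation peak is a near-delta function, which is what makes subpixel interpolation of its position so accurate.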
••
TL;DR: The local form of actual feature types contained in real images is explored, and it is indicated that the feature forms are self-similar over different images and across scales.
••
TL;DR: Modifications to Kittler's method are presented to estimate better the gray level above the maximum slope and to be less sensitive to the noise over uniform luminance areas.
••
TL;DR: It is shown how the high speed derivative algorithm can also be used to directly generate spline curves and surfaces much more efficiently than linear combination algorithms and forward differencing.
••
TL;DR: An extended EXM formulation that matches multiple templates in the complex image domain is presented that is robust to minor rotation and scale distortions and a new generalized MSE restoration paradigm based on the analogy to multiple-template EXM is introduced.