Showing papers in "Computer Graphics and Image Processing in 1981"
••
TL;DR: This paper develops a statistical technique to define a noise model, and then successfully applies a local statistics noise filtering algorithm to a set of actual SEASAT SAR images, resulting in smoothed images that permit observers to resolve fine detail with an enhanced edge effect.
880 citations
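The local-statistics idea can be sketched as follows: each pixel is shrunk toward its local mean by a gain that depends on the ratio of estimated signal variance to noise variance, so flat regions are smoothed while edges are preserved. A minimal sketch (the 3×3 window, the `noise_var` parameter, and the additive-noise form are illustrative assumptions, not the paper's exact SAR speckle model):

```python
import numpy as np

def lee_filter(img, size=3, noise_var=0.01):
    """Local-statistics noise filter (sketch): replace each pixel by
    local_mean + k * (pixel - local_mean), where
    k = signal_var / (signal_var + noise_var)."""
    r = size // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    # Stack all shifted copies of the window to get per-pixel statistics.
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(size) for j in range(size)])
    mean = windows.mean(axis=0)
    var = windows.var(axis=0)
    signal_var = np.maximum(var - noise_var, 0.0)  # estimated signal variance
    k = signal_var / (signal_var + noise_var)      # 0 in flat areas, ~1 at edges
    return mean + k * (img - mean)
```

In a perfectly flat region the gain `k` is zero and the output is the local mean; near a strong edge the local variance dominates, `k` approaches one, and the pixel passes through almost unchanged.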
••
TL;DR: In this article, an extension of Lee's local statistics method modified to utilize local gradient information is presented, where the local mean and variance are computed from a reduced set of pixels depending on the orientation of the edge.
819 citations
••
TL;DR: An automatic threshold selection method for picture segmentation based on the definition of an anisotropy coefficient, related to the asymmetry of the grey-level histogram; the method has been successfully applied to images having various kinds of histograms.
475 citations
••
TL;DR: A highly efficient recursive algorithm is defined for simultaneously convolving an image (or other two-dimensional function) with a set of kernels which differ in width but not in shape, so that the algorithm generates a set of low-pass or band-pass versions of the image.
442 citations
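The recursive idea, roughly: convolving repeatedly with one small fixed kernel yields successively wider effective kernels of the same shape, and differences of successive low-pass outputs give band-pass versions. A 1-D sketch (the [1, 2, 1]/4 kernel and the number of levels are illustrative assumptions):

```python
import numpy as np

def multiscale_lowpass(signal, levels=4):
    """Repeatedly convolve with a small fixed kernel; each pass widens
    the effective smoothing kernel without changing its shape, yielding
    a set of progressively lower-pass versions of the signal."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    cur = signal.astype(float)
    out = [cur]
    for _ in range(levels):
        cur = np.convolve(cur, kernel, mode="same")
        out.append(cur)
    return out

def bandpass_set(lowpass):
    # Band-pass versions are differences of successive low-pass versions.
    return [a - b for a, b in zip(lowpass[:-1], lowpass[1:])]
```

By construction, summing all band-pass versions plus the coarsest low-pass version reconstructs the original signal exactly.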
••
TL;DR: In this paper, the problem of surface detection is translated into a problem of traversal of a directed graph, G, and it is proven that connected subgraphs of G correspond to surfaces of connected components of Q (i.e., of objects in the scene).
299 citations
••
TL;DR: The segmentation algorithm proposed in this paper is a complex form of thresholding which utilizes multiple thresholds and not only works well for simple images but also produces reasonable segmentations for complex images.
287 citations
••
TL;DR: A facet model for image data is discussed which has the potential for fitting the form of the real idealized image, and for describing how the observed image differs from the idealized form.
248 citations
••
TL;DR: A parallel algorithm for three-dimensional object thinning is presented using two approaches, path connectivity and surface connectivity, and criteria to avoid excessive deletion and preserve connectivity are described.
239 citations
••
TL;DR: This paper reviews box-filtering techniques and describes some useful extensions of them.
229 citations
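The defining property of box filtering is that, via running sums (an integral image), the cost per output pixel is independent of the window size. A minimal 2-D sketch (edge padding is an assumed boundary policy):

```python
import numpy as np

def box_filter(img, size=3):
    """Box (moving-average) filter via an integral image: each window
    sum is four lookups, so per-pixel cost does not grow with size."""
    r = size // 2
    p = np.pad(img.astype(float), r, mode="edge")
    # Integral image with a leading zero row/column:
    # ii[y, x] = sum of p[:y, :x].
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    # Window sum for output (y, x) covers padded rows y..y+size-1,
    # cols x..x+size-1, obtained from four corners of the integral image.
    s = (ii[size:size + h, size:size + w]
         - ii[:h, size:size + w]
         - ii[size:size + h, :w]
         + ii[:h, :w])
    return s / (size * size)
```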
••
TL;DR: In this article, the authors compared the accuracy of four modeling techniques for solid state and vidicon cameras, including linear spline, quadratic, and two-plane models.
207 citations
••
TL;DR: In this article, an iterative technique for gradually updating the local registration of two images, based on a dynamic cooperative model, is described. The method is limited to cases in which registration is expected to be poor and feature measures unreliable.
••
TL;DR: In this article, an image smoothing scheme for improving the quality of noisy pictures is presented, which is an iterative scheme employing a 3 × 3 mask in which the weighting coefficients are the normalized gradient inverse between the center pixel and its neighbors.
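A sketch of the scheme described: within the 3 × 3 mask, each neighbour is weighted by the normalized inverse of its absolute difference from the centre pixel, so averaging is strong in flat regions and weak across edges. The centre weight of 1/2 and the small `eps` guard against division by zero are illustrative assumptions:

```python
import numpy as np

def giw_smooth(img, iterations=1, eps=1e-6):
    """Gradient-inverse-weighted smoothing (sketch): neighbours similar
    to the centre pixel get large weights, dissimilar ones (across an
    edge) get small weights, so edges are not blurred."""
    out = img.astype(float)
    h, w = img.shape
    for _ in range(iterations):
        p = np.pad(out, 1, mode="edge")
        # The eight neighbour planes of every pixel.
        shifts = [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
        weights = [1.0 / (np.abs(s - out) + eps) for s in shifts]
        wsum = sum(weights)
        # Centre pixel keeps weight 1/2; neighbours share the other half.
        out = 0.5 * out + 0.5 * sum(w_ * s for w_, s in zip(weights, shifts)) / wsum
    return out
```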
••
TL;DR: In this article, the optical flow is decomposed into rotational and translational components, and the translational component is extracted implicitly by locating the focus of expansion associated with the translational part of the relative motion.
••
TL;DR: The proposed algorithm takes advantage of this approach both to reduce the computation time needed to obtain the skeleton and to perform contour analysis in order to find the contour regions to be represented by the skeleton branches.
••
TL;DR: Hierarchical template matching as discussed by the authors allows both a savings in computation time (by a problem-dependent amount) and a considerable degree of insensitivity to noise, an important property of template matching that would be difficult to enforce and even to express otherwise.
••
TL;DR: A relaxation method is described to obtain a function such that when this function is subtracted from the original image the seams are eliminated, but the details are not affected or blurred.
••
TL;DR: A method is developed for determining an intensity mapping to linearize the response of a viewed display device and allows the sensible application of intensity mappings meant to improve perception of image information for a particular image or imaging objective.
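For a display whose response is an assumed power law, the linearizing intensity mapping is a lookup table applying the inverse power. A sketch (the nominal gamma of 2.2 and the 8-bit range are assumptions made here for illustration; the paper derives the mapping for the measured device):

```python
def linearize_lut(gamma=2.2, levels=256):
    """Lookup table that linearizes an assumed power-law display
    response: displayed luminance ~ (drive level)^gamma, so the
    corrective mapping raises normalized input to 1/gamma."""
    return [round(((i / (levels - 1)) ** (1.0 / gamma)) * (levels - 1))
            for i in range(levels)]
```

With the display linearized, any subsequent intensity mapping chosen to improve perception operates on a predictable luminance scale.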
••
TL;DR: Algorithms are presented for finding the area, centroid, union, intersection, and complement of binary images, all of which are linear in the number of nodes in the tree.
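The linear-time flavour of such quadtree algorithms can be sketched with the area computation: one visit per node, with each uniform leaf contributing its block size times its colour. The `QNode` structure below is an illustrative encoding, not the paper's:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class QNode:
    """Region quadtree node: a uniform leaf (colour 0 or 1) or an
    internal node with four child quadrants (NW, NE, SW, SE)."""
    colour: Optional[int] = None
    children: Optional[Tuple["QNode", "QNode", "QNode", "QNode"]] = None

def area(node: QNode, size: int) -> int:
    """Black area of a region quadtree whose root covers size x size
    pixels; one visit per node, hence linear in the number of nodes."""
    if node.colour is not None:               # uniform leaf
        return size * size * node.colour
    return sum(area(c, size // 2) for c in node.children)
```

Centroid, union, intersection, and complement admit the same single-traversal pattern, which is why all of them run in time linear in the node count.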
••
TL;DR: In this paper, a unified treatment of the correction of periodic or nonperiodic errors is presented, which provides some insight into the relation of correction algorithms to the type of radiometric degradation.
••
TL;DR: This work studies the effect of noise reduction preprocessing, specifically median filtering and averaging, on the accuracy of edge location estimation using least squares. In the case of white Gaussian noise and binary symmetric channel noise, neither median filtering nor averaging improves the estimation accuracy.
••
TL;DR: Through proper coding of the objects in a binary picture it is shown that the binary and Minkowski operators can be implemented in such a way as to decrease significantly computational complexity.
••
TL;DR: The median (or any other rank-order value) is obtained by examining the bits of the arguments columnwise, starting with the most significant bits.
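The columnwise bit-examination idea can be sketched as radix-style selection: at each bit position from the most significant bit down, count how many remaining candidates have that bit clear and descend into the half that contains the k-th value, so no full sort is ever performed. (A sketch for non-negative integers; `bits` is an assumed word length.)

```python
def rank_order(values, k, bits=8):
    """Select the k-th smallest (0-based; k = len(values) // 2 gives a
    median) of non-negative integers by examining bit columns MSB-first."""
    group = list(values)
    result = 0
    for b in range(bits - 1, -1, -1):
        zeros = [v for v in group if not (v >> b) & 1]
        if k < len(zeros):
            group = zeros                 # the answer has this bit clear
        else:
            k -= len(zeros)               # skip every value with the bit clear
            group = [v for v in group if (v >> b) & 1]
            result |= 1 << b              # the answer has this bit set
    return result
```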
••
TL;DR: Two related methods for the hierarchical representation of curve information are presented and an edge quadtree representation is presented, which is a variable-resolution representation of the linear information in the image.
••
TL;DR: A decision rule for fitting an appropriate random field model to a given image is designed using spectral representations of the random field and standard Bayesian methods.
••
TL;DR: It is shown how one may use local geometric information derived from the contour to aid in the selection and generation of significant pieces of the skeleton for contours or curves of length n.
••
TL;DR: In this paper, a local algorithm creates cones from elliptical regions on adjacent slices and a predictive algorithm hypothesises cones from regions on nonadjacent slices and then tests the hypotheses by intelligently requesting intermediate slices.
••
TL;DR: An algorithm is described for converting region boundaries in an image array into chain-encoded line structures, each described by a set of chain links, which is used for preprocessing an image in scene analysis.
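The core of chain encoding: successive boundary pixels are adjacent, so each link needs only a 3-bit direction code rather than a full coordinate pair. A minimal sketch of the encoding step (the direction numbering below is one common Freeman-code convention, not necessarily the paper's):

```python
# 8-direction Freeman chain codes, x to the right and y up:
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_encode(points):
    """Encode a sequence of 8-adjacent boundary pixels as chain links:
    each link is a 3-bit direction instead of a coordinate pair."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```

Only the starting coordinate plus the chain of directions is needed to reconstruct the whole boundary.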
••
TL;DR: This contribution reviews the specifications of the task and the three main choices for coding: area, contour, and structural coding, and illustrates the last two topics.
••
TL;DR: In this paper, an algorithm is described that traces and labels the boundaries of objects whose images are stored as a condensed linear array of coordinate values defining the successive intersections of a linear raster with the object boundaries.
••
TL;DR: A simple and efficient algorithm has been developed which approximates data points defining planar lines and curves by means of linear segments that are constrained to pass within specified distances of the points.
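As a related, well-known illustration of tolerance-constrained polyline approximation (the paper's own algorithm differs in its details), a Ramer-Douglas-Peucker sketch: segments are kept only when every omitted point lies within the specified distance of them.

```python
import math

def point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def rdp(points, tol):
    """Approximate a curve by line segments whose deviation from the
    omitted points never exceeds tol: keep the farthest point if it is
    out of tolerance and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    return rdp(points[:i + 1], tol)[:-1] + rdp(points[i:], tol)
```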