
Showing papers in "Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing" in 1984


Journal ArticleDOI
TL;DR: The method handles artificial pits introduced by data collection systems and extracts only the major drainage paths and its performance appears to be consistent with the visual interpretation of drainage patterns from elevation contours.
Abstract: The extraction of drainage networks from digital elevation data is important for quantitative studies in geomorphology and hydrology. A method is presented for extracting drainage networks from gridded elevation data. The method handles artificial pits introduced by data collection systems and extracts only the major drainage paths. Its performance appears to be consistent with the visual interpretation of drainage patterns from elevation contours.

2,167 citations
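
The flow-routing core of such methods is easy to sketch. The fragment below is a minimal, hypothetical D8-style illustration (steepest-descent routing to one of eight neighbors plus an accumulation threshold), not the authors' exact procedure; in particular, pit handling is reduced to letting pits trap flow.

```python
import numpy as np

def d8_accumulation(dem, threshold):
    """Minimal D8-style sketch: each cell drains to its steepest lower
    8-neighbor; cells whose accumulated upslope count exceeds
    `threshold` are marked as drainage.  Pits simply trap flow here."""
    rows, cols = dem.shape
    acc = np.ones(dem.shape, dtype=int)            # every cell drains itself
    nbrs = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
    for idx in np.argsort(dem, axis=None)[::-1]:   # process high to low
        r, c = divmod(int(idx), cols)
        best, drop = None, 0.0
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if d > drop:
                    best, drop = (rr, cc), d
        if best is not None:
            acc[best] += acc[r, c]                 # pass flow downslope
    return acc > threshold

dem = np.add.outer(np.arange(6.0), np.arange(6.0))  # a tilted plane
print(d8_accumulation(dem, threshold=4).astype(int))
```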


Journal ArticleDOI
TL;DR: There are many image segmentation techniques, some considered general purpose and some designed for specific classes of images; they can be classified as measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.

2,009 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to generalize these distance transformation families to higher dimensions and to compare the computed distances with the Euclidean distance.
Abstract: In many applications of digital picture processing, distances from certain feature elements to the nonfeature elements must be computed. In two dimensions at least four different families of distance transformations have been suggested, the most popular one being the city block/chessboard distance family. The purpose of this paper is twofold: To generalize these transformations to higher dimensions and to compare the computed distances with the Euclidean distance. All of the four distance transformation families are presented in three dimensions, and the two fastest ones are presented in four and arbitrary dimensions. The comparison with Euclidean distance is given as upper limits for the difference between the Euclidean distance and the computed distances.

870 citations
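
As a concrete reference point, here is a two-pass chamfer transform in two dimensions with the common 3-4 weights, one member of the distance-transformation families compared in the paper; the weights and the two raster passes are the standard textbook formulation, not code from the paper.

```python
import numpy as np

def chamfer34(feature):
    """Two-pass 3-4 chamfer distance transform on a boolean feature mask.
    Weights 3 (edge neighbor) and 4 (diagonal) approximate 3x the
    Euclidean distance; dividing by 3.0 gives a metric estimate."""
    INF = 10**9
    d = np.where(feature, 0, INF).astype(np.int64)
    rows, cols = d.shape
    # forward pass: only neighbors already visited in raster order
    for r in range(rows):
        for c in range(cols):
            for dr, dc, w in ((-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    d[r, c] = min(d[r, c], d[rr, cc] + w)
    # backward pass: the mirrored neighbor mask
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            for dr, dc, w in ((1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    d[r, c] = min(d[r, c], d[rr, cc] + w)
    return d / 3.0

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True                      # a single feature element
print(chamfer34(mask))
```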


Journal ArticleDOI
TL;DR: A study aimed at segmenting a high-resolution black and white image of Sunnyvale, California, shows that the proposed procedure should be useful for land use classifications as well as other problems.
Abstract: A study aimed at segmenting a high-resolution black and white image of Sunnyvale, California, is described. In this study regions were classified as belonging to any one of nine classes: residential, commercial/industrial, mobile home, water, dry land, runway/taxiway, aircraft parking, multilane highway, and vehicle parking. The classes were selected so that they directly relate to the Defense Mapping Agency's Mapping, Charting and Geodesy tangible features. To attack the problem a statistical segmentation procedure was devised. The primitive operators used to drive the segmentation are texture measures derived from cooccurrence matrices. The segmentation procedure considers three kinds of regions at each level of the segmentation: uniform, boundary, and unspecified. At every level the procedure differentiates uniform regions from boundary and unspecified regions. It then assigns a class label to the uniform regions. The boundary and unspecified regions are split to form higher level regions. The methodologies involved are mathematically developed as a series of hypothesis tests. While only a one-level segmentation was performed, studies are described which show the capabilities of each of these hypothesis tests. In particular, an 83% correct classification was obtained in testing the labeling procedure. These studies indicate that the proposed procedure should be useful for land use classifications as well as other problems.

366 citations
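
The texture primitives driving the segmentation can be sketched as follows. This is a minimal cooccurrence-matrix computation with two classic measures (contrast, energy); the paper's hypothesis-test machinery is not reproduced, and the displacement and measures chosen here are illustrative assumptions.

```python
import numpy as np

def cooccurrence(img, dr, dc, levels):
    """Gray-level cooccurrence matrix for displacement (dr, dc),
    symmetrized and normalized to a joint probability table."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[img[r, c], img[r + dr, c + dc]] += 1
    P = P + P.T                                   # direction-symmetric
    return P / P.sum()

def texture_measures(P):
    i, j = np.indices(P.shape)
    return {"contrast": float(((i - j) ** 2 * P).sum()),  # local variation
            "energy": float((P ** 2).sum())}              # uniformity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(texture_measures(cooccurrence(img, 0, 1, levels=4)))
```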


Journal ArticleDOI
TL;DR: It is shown that a two-dimensional curve defined parametrically in terms of rational degree n polynomials in t can be expressed implicitly as a degree n polynomial in x and y, and it is demonstrated that a "bi-m-ic" parametric surface (e.g., m = 3 for bicubic) can be expressed implicitly as a polynomial of degree 2m².
Abstract: The following two problems are shown to have closed-form solutions requiring only the arithmetic operations of addition, subtraction, multiplication and division: (1) Given a curve or surface defined parametrically in terms of rational polynomials, find an implicit polynomial equation which defines the same curve or surface. (2) Given the Cartesian coordinates of a point on such a curve or surface, find the parameter(s) corresponding to that point. It is shown that a two-dimensional curve defined parametrically in terms of rational degree n polynomials in t can be expressed implicitly as a degree n polynomial in x and y. It is also demonstrated that a "bi-m-ic" parametric surface (where, e.g., m = 3 for bicubic) can be expressed implicitly as a polynomial in x, y, z of degree 2m². The degree of a rational bi-m-ic surface is also shown to be 2m². The application of these results to finding curve and surface intersections is discussed.

323 citations
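
The curve case can be checked with a computer algebra system. The sketch below, an illustration rather than the paper's procedure, implicitizes the standard rational parametrization of the unit circle by eliminating t with a resultant; as the theorem predicts, a degree 2 parametrization yields a degree 2 implicit polynomial.

```python
from sympy import symbols, resultant, factor

t, x, y = symbols('t x y')

# Rational parametrization of the unit circle:
#   x(t) = (1 - t^2) / (1 + t^2),  y(t) = 2t / (1 + t^2).
# Clear denominators to get two polynomial relations in t, x, y.
f = x * (1 + t**2) - (1 - t**2)
g = y * (1 + t**2) - 2 * t

# Eliminating t via the resultant gives the implicit equation:
# a constant multiple of x**2 + y**2 - 1, as expected for n = 2.
print(factor(resultant(f, g, t)))
```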


Journal ArticleDOI
TL;DR: In this article, a scan-along algorithm for polygonal approximation is presented, where the approximation depends on the area deviation for each line segment and the algorithm outputs a new line segment when the area deviation per length unit of the current segment exceeds a prespecified value.
Abstract: A new and very fast algorithm for polygonal approximation is presented. It uses a scan-along technique where the approximation depends on the area deviation for each line segment. The algorithm outputs a new line segment when the area deviation per length unit of the current segment exceeds a prespecified value. Pictures are included, showing the application of the algorithm to contour coding of binary objects. Some useful simplifications of the algorithms are suggested.

303 citations
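
A minimal sketch of the scan-along rule, assuming a polyline input: accumulate the signed area between the curve and the current chord, and emit a vertex when the area deviation per unit chord length exceeds a tolerance. The restart handling is simplified relative to the paper.

```python
import math

def scan_along(points, tol):
    """Polygonal approximation: emit a new vertex when the accumulated
    area deviation per unit chord length exceeds `tol`."""
    verts = [points[0]]
    ax, ay = points[0]          # current anchor (last emitted vertex)
    px, py = points[0]          # previous curve point
    area = 0.0                  # signed area between curve and chord
    for x, y in points[1:]:
        # add the signed triangle (anchor, prev, current)
        area += 0.5 * ((px - ax) * (y - ay) - (x - ax) * (py - ay))
        chord = math.hypot(x - ax, y - ay)
        if chord > 0 and abs(area) / chord > tol:
            verts.append((px, py))      # close the segment at the previous point
            ax, ay = px, py
            area = 0.0                  # restart deviation from the new anchor
        px, py = x, y
    verts.append(points[-1])
    return verts

curve = [(i, (i % 4) * 0.3) for i in range(20)]   # a gently wiggly polyline
print(scan_along(curve, tol=0.25))
```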


Journal ArticleDOI
TL;DR: It is shown that for Euclidean distance, the minimal metric bases for the digital plane are just the sets of three noncollinear points; but for city block or chessboard distance,The digital plane has no finite metric basis.
Abstract: Let S be a metric space under the distance function d. A metric basis is a subset B ⊆ S such that d(b, x) = d(b, y) for all b ∈ B implies x = y. It is shown that for Euclidean distance, the minimal metric bases for the digital plane are just the sets of three noncollinear points; but for city block or chessboard distance, the digital plane has no finite metric basis. The sizes of minimal metric bases for upright digital rectangles are also derived, and it is shown that there exist rectangles having minimal metric bases of any size ≥ 3.

290 citations
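
The definitions invite a small experiment. The brute-force sketch below searches for a smallest metric basis of a small digital rectangle under squared Euclidean and chessboard distances; it merely reports what it finds, since the paper's digital-plane theorems concern the infinite plane and cannot be settled by a finite search.

```python
from itertools import combinations, product

def is_metric_basis(S, B, d):
    """B is a metric basis for S under d iff the distance signatures
    (d(b, x) for b in B) are pairwise distinct over x in S."""
    sigs = {tuple(d(b, x) for b in B) for x in S}
    return len(sigs) == len(S)

def smallest_basis(S, d, kmax=4):
    for k in range(1, kmax + 1):
        for B in combinations(S, k):
            if is_metric_basis(S, B, d):
                return k, B
    return None

euclid2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2  # squared suffices
chess = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))

S = list(product(range(5), range(5)))      # a 5x5 upright digital rectangle
print("Euclidean :", smallest_basis(S, euclid2))
print("chessboard:", smallest_basis(S, chess))
```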


Journal ArticleDOI
TL;DR: Results encouraged investigations into modeling the picture as a mosaic of patches where the gray-value function within each patch is described as a second-order bivariate polynomial of the pixel coordinates, facilitating the determination of threshold values related to a priori confidence limits.
Abstract: Modeling the image as a piecewise linear gray-value function of the pixel coordinates considerably improved a change detection test based previously on a piecewise constant gray-value function. These results encouraged investigations into modeling the picture as a mosaic of patches where the gray-value function within each patch is described as a second-order bivariate polynomial of the pixel coordinates. Such a more appropriate model allowed the assumption to be made that the remaining gray-value variation within each patch can be attributed to noise related to the sensing and digitizing devices, independent of the individual image frames in a sequence. This assumption made it possible to relate the likelihood test for change detection to well-known statistical tests (t test, F test), facilitating the determination of threshold values related to a priori confidence limits.

213 citations
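
The reduction to an F test can be sketched directly, assuming the second-order patch model. Below, a quadratic bivariate polynomial is fitted to a patch in each frame separately and to both jointly; the standard F statistic compares the two fits, with large values indicating change. Patch formation and noise estimation from the paper are omitted.

```python
import numpy as np

def quad_design(h, w):
    """Design matrix for a second-order bivariate polynomial
    g(x, y) = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2."""
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x, y = x.ravel(), y.ravel()
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def sse(A, g):
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    r = g - A @ coef
    return float(r @ r)

def change_F(patch1, patch2):
    """F statistic for 'one polynomial fits both frames' (no change)
    against 'each frame has its own polynomial' (change)."""
    A = quad_design(*patch1.shape)
    g1, g2 = patch1.ravel().astype(float), patch2.ravel().astype(float)
    s_sep = sse(A, g1) + sse(A, g2)                      # separate fits
    s_joint = sse(np.vstack([A, A]), np.concatenate([g1, g2]))
    n, p = 2 * len(g1), A.shape[1]
    return ((s_joint - s_sep) / p) / (s_sep / (n - 2 * p))

rng = np.random.default_rng(0)
base = quad_design(8, 8) @ np.array([10, 1, -1, 0.2, 0.1, 0.3])
p1 = (base + rng.normal(0, 1, 64)).reshape(8, 8)
p2 = (base + rng.normal(0, 1, 64)).reshape(8, 8)
print("no change:", change_F(p1, p2))                    # near 1 under H0
print("changed:  ", change_F(p1, p2 + 15))               # large
```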


Journal ArticleDOI
TL;DR: A computationally inexpensive algorithm is described for determining vanishing points once line segments in an image have been found; the approach has no computationally degenerate cases, and the only operations necessary are vector cross products and arc tangents.
Abstract: This paper describes a computationally inexpensive algorithm for the determination of vanishing points once line segments in an image have been determined. The approach is particularly attractive since it has no computationally degenerate cases and the only operations necessary are vector cross products and arc tangents. The need to know the distance to the focal plane is also eliminated thus avoiding tedious calibration procedures.

206 citations
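
The cross-product core is easy to illustrate. In the sketch below, each segment's endpoints are lifted to rays through the focal point, the cross product of the rays gives the segment's interpretation-plane normal, and the cross product of two normals gives the common vanishing direction, expressed with arc tangents; the paper's full accumulation over many segments is not reproduced.

```python
import numpy as np

def vanishing_direction(seg_a, seg_b, f=1.0):
    """Each segment is ((x1, y1), (x2, y2)) in image coordinates.
    Returns (azimuth, elevation) of the common vanishing direction
    on the Gaussian sphere, using only cross products and arctangents."""
    def normal(seg):
        p = np.array([seg[0][0], seg[0][1], f])
        q = np.array([seg[1][0], seg[1][1], f])
        return np.cross(p, q)        # normal of the plane through O, p, q
    v = np.cross(normal(seg_a), normal(seg_b))
    v = v / np.linalg.norm(v)        # direction is defined up to sign
    azimuth = np.arctan2(v[1], v[0])
    elevation = np.arctan2(v[2], np.hypot(v[0], v[1]))
    return azimuth, elevation

# two images of parallel scene lines converging toward (2, 1) on the plane z = f
print(vanishing_direction(((0, 0), (1, 0.5)), ((0, 1), (1, 1))))
```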


Journal ArticleDOI
TL;DR: The sources of error for the edge finding technique proposed by Marr and Hildreth are identified, and the magnitudes of the errors are estimated, based on idealized models of the most common error-producing situations.
Abstract: The sources of error for the edge finding technique proposed by Marr and Hildreth (D. Marr and T. Poggio, Proc. R. Soc. London Ser. B 204, 1979, 301–328; D. Marr and E. Hildreth, Proc. R. Soc. London Ser. B 207, 1980, 187–217) are identified, and the magnitudes of the errors are estimated, based on idealized models of the most common error-producing situations. Errors are shown to be small for linear illuminations, as well as for nonlinear illuminations with a second derivative less than a critical value. Nonlinear illuminations are shown to lead to spurious contours under some conditions, and some fast techniques for discarding such contours are suggested.

195 citations
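
For context, the operator under analysis can be rendered in a few lines: Laplacian-of-Gaussian filtering followed by zero-crossing detection. This is a generic rendition of the Marr-Hildreth detector, not the paper's error analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(img, sigma=2.0):
    """Marr-Hildreth style edges: zero crossings of the Laplacian of a
    Gaussian-smoothed image (sign changes between adjacent pixels)."""
    L = gaussian_laplace(img.astype(float), sigma)
    zc = np.zeros(L.shape, dtype=bool)
    zc[:-1, :] |= (L[:-1, :] * L[1:, :]) < 0   # vertical neighbor sign change
    zc[:, :-1] |= (L[:, :-1] * L[:, 1:]) < 0   # horizontal neighbor sign change
    return zc

img = np.zeros((32, 32))
img[:, 16:] = 100.0                            # an ideal step edge
print(np.flatnonzero(log_zero_crossings(img)[16]))  # flagged columns in row 16
```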


Journal ArticleDOI
TL;DR: In this paper a new class of similarity measure is introduced, derived from the calculation of the number of sign changes in the scanned subtraction image, which leads to registration algorithms that are demonstrated to be far more robust than the methods currently in use.
Abstract: The computer comparison of two images requires a registration step which is usually performed by optimizing a similarity measure with respect to the registration parameters. In this paper a new class of similarity measure is introduced, derived from the calculation of the number of sign changes in the scanned subtraction image. Using these similarity measures for the registration of dissimilar images leads to registration algorithms which are demonstrated to be far more robust than the methods currently in use (maximization of the correlation coefficient or correlation function, minimization of the sum of the absolute values of the differences). Two medical applications of these image processing methods are presented: the registration of gamma ray images for change detection purposes, and the alignment of X ray digitized images (without and with iodine contrast) for improving the quality of the subtraction angiographic images.
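
A minimal rendition of the criterion, assuming a simple integer-shift search: subtract, scan, count sign alternations, and pick the shift that maximizes the count (a registered difference image is noise-like and alternates in sign often).

```python
import numpy as np

def sign_change_count(a, b):
    """Number of sign changes along the raster scan of a - b.  When the
    images are registered the difference is noise-like and alternates
    often; misregistration creates long structured runs of one sign."""
    d = (a.astype(float) - b.astype(float)).ravel()
    s = np.sign(d)
    s = s[s != 0]                        # skip exact zeros in the scan
    return int(np.sum(s[:-1] * s[1:] < 0))

def register_shift(a, b, max_shift=3):
    """Brute-force integer-shift registration maximizing sign changes."""
    best = (None, -1)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            c = sign_change_count(a, shifted)
            if c > best[1]:
                best = ((dy, dx), c)
    return best

rng = np.random.default_rng(1)
scene = rng.normal(0, 1, (32, 32)).cumsum(axis=1)    # smooth-ish test image
moved = np.roll(scene, (2, -1), axis=(0, 1)) + rng.normal(0, 0.05, (32, 32))
print(register_shift(scene, moved))                  # expect shift (-2, 1)
```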

Journal ArticleDOI
TL;DR: It is shown that splines provide an attractive representation technique for generalized cylinders and in fact splines provides a parametric representation for the full generalization of the generalized cylinders concept.
Abstract: Generalized cylinders are an attractive representation for three-dimensional objects. One of their principal features is that the representation contains an intrinsic coordinate frame. The intrinsic (object-centered) frame computed from image data facilitates the matching of an object with a similarly represented stored prototype. The definitions of generalized cylinders are loose and do not specify how a generalized cylinder is to be represented digitally. Previous digital representations have tended to be either nonparametric renditions, which makes them computationally cumbersome, or parametric simplifications, which limit the class of objects that can be represented. It is shown that splines provide an attractive representation technique for generalized cylinders. In fact splines provide a parametric representation for the full generalization of the generalized cylinders concept.

Journal ArticleDOI
TL;DR: The fundamental problems of developing an effective MSIMD system are discussed and a simple SIMD/MIMD computational model for comparison with such systems is proposed.
Abstract: Image processing problems frequently involve large structured arrays of data and a need for very rapid computation. Special parallel processing schemes have evolved over the last 20 years to deal with these problems. In this paper many parallel systems which have been developed for image processing are outlined and the features of their underlying architectures are discussed. Most of these special architectures may be loosely classified as either SIMD or pipeline structures although some MIMD structures have been designed for high level image analysis. In recent years several multiple SIMD (MSIMD) schemes have been proposed as suitable architectures for image processing. The fundamental problems of developing an effective MSIMD system are discussed and a simple SIMD/MIMD computational model for comparison with such systems is proposed.

Journal ArticleDOI
TL;DR: A software system for image processing, HIPS, was developed for use in a UNIX environment and has the useful feature that images are self-documenting to the extent that each image as stored in the system includes a history of the transformations that have been applied to that image.
Abstract: A software system for image processing, HIPS, was developed for use in a UNIX environment. It includes a small set of subroutines which primarily deal with a standardized descriptive image sequence header, and an ever-growing library of image transformation tools in the form of UNIX "filters." Programs have been developed for simple image transformations, filtering, convolution, Fourier and other transform processing, edge detection and line drawing manipulation, simulation of digital compression and transmission, noise generation, and image statistics computation. The system has the useful feature that images are self-documenting, in that each image as stored in the system includes a history of the transformations that have been applied to it. Although it has been used primarily with a Grinnell image processor, the bulk of the system is machine-independent. The system has proven highly flexible, both as an interactive research tool and for more production-oriented tasks. It is both easy to use and quickly adapted and extended to new uses.

Journal ArticleDOI
TL;DR: This method does not rely on the existence of modes on the histogram, and the number of free parameters is reduced, which makes this algorithm essentially automatic and not time consuming.
Abstract: A method for image segmentation and compression based on the intrinsic properties of the distribution function of an image is presented. This method does not rely on the existence of modes on the histogram. The number of free parameters is reduced, which makes this algorithm essentially automatic and not time consuming.

Journal ArticleDOI
TL;DR: A recursive technique for multiple threshold selection on digital images is described; the procedure may be recursively applied, first using only those pixels whose intensities are smaller than the threshold and then using only those pixels whose intensities are larger than the threshold.
Abstract: A recursive technique for multiple threshold selection on digital images is described. Pixels are first classified as edge pixels or nonedge pixels. Edge pixels are then classified, on the basis of their neighborhoods, as being relatively dark or relatively light. A histogram of the graytone intensities is obtained for those pixels which are edge pixels and relatively dark and another histogram is obtained for those pixels which are edge pixels and relatively light. A threshold is selected corresponding to the graytone intensity value corresponding to one of the highest peaks from the two histograms. To get multiple thresholds, the procedure may be recursively applied first using only those pixels whose intensities are smaller than the threshold and then only those pixels whose intensities are larger than the threshold.
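
A sketch of the procedure, with assumed operators: gradient magnitude for the edge test and the sign of a discrete Laplacian for the relatively-dark/relatively-light split; the paper's exact edge and neighborhood operators may differ.

```python
import numpy as np

def thresholds(img, min_pixels=200, depth=2):
    """Recursive threshold selection sketch.  Edge pixels come from the
    gradient magnitude; the Laplacian sign splits them into the dark and
    light sides of an edge; the stronger histogram peak of the two gives
    the threshold, and the procedure recurses on each subrange."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > np.percentile(np.hypot(gx, gy), 90)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    found = []

    def recurse(mask, d):
        if d == 0 or mask.sum() < min_pixels:
            return
        dark = np.bincount(img[mask & edges & (lap > 0)].astype(int),
                           minlength=256)       # pixel darker than neighbors
        light = np.bincount(img[mask & edges & (lap < 0)].astype(int),
                            minlength=256)      # pixel lighter than neighbors
        hist = dark if dark.max() >= light.max() else light
        if hist.max() == 0:
            return
        t = int(hist.argmax())
        found.append(t)
        recurse(mask & (img < t), d - 1)
        recurse(mask & (img >= t), d - 1)

    recurse(np.ones(img.shape, dtype=bool), depth)
    return sorted(found)

rng = np.random.default_rng(2)
img = np.full((64, 64), 50.0)
img[:, 32:] = 200.0                              # two-region test image
img += rng.normal(0, 3, img.shape)
print(thresholds(np.clip(img, 0, 255)))
```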

Journal ArticleDOI
TL;DR: A mathematical formulation is presented for detecting the 3D motion of a planar surface from the motion of its perspective image without knowing correspondence of points.
Abstract: A mathematical formulation is presented for detecting the 3D motion of a planar surface from the motion of its perspective image without knowing correspondence of points. The motion is determined explicitly by numerical computation of certain line or surface integrals on the image. The same principle is also used to determine the position and orientation of a planar surface fixed in space by moving the camera or using several appropriately positioned cameras; no correspondence of points is involved. Some numerical examples are also given.

Journal ArticleDOI
TL;DR: An efficient (linear time), computationally simple algorithm is developed that achieves a high degree of data reduction while producing a representation that is accurate for even the most complex curves.
Abstract: A planar curve may be represented by a sequence of connected line segments. Existent algorithms for reducing the number of line segments used to represent a curve are examined. An efficient (linear time), computationally simple algorithm is developed. This algorithm achieves a high degree of data reduction while producing a representation that is accurate for even the most complex curves.

Journal ArticleDOI
TL;DR: The form of a two-dimensional image is shown to be uniquely related to the intensity of its Fourier transform, which leads directly to a computationally realizable phase-restoration algorithm, which is a distinct theoretical and practical improvement on earlier (somewhat ad hoc) algorithms of the same general type.
Abstract: The form of a two-dimensional image is shown to be uniquely related to the intensity of its Fourier transform. No distinction is made between the form of the image and (the complex conjugate of) its mirror image. The uniqueness is unequivocal for a real, positive image. If the image is complex, or real but not necessarily positive, uniqueness only applies for the most compact image compatible with the given samples of the Fourier intensity. The number and spacing of the samples must be adequate to permit the autocorrelation of the image to be accurately reconstructed. Unlike other demonstrations of this uniqueness, the approach introduced here leads directly to a computationally realizable phase-restoration algorithm, which is a distinct theoretical and practical improvement on earlier (somewhat ad hoc) algorithms of the same general type.
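
The "earlier algorithms of the same general type" are iterative Fourier-magnitude/object-constraint projections; the following generic error-reduction loop (Gerchberg-Saxton/Fienup style) illustrates that family, not the improved algorithm introduced in the paper.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Generic error-reduction phase retrieval: alternately enforce the
    measured Fourier magnitude and the object-domain constraints
    (known support and positivity)."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # keep measured magnitude
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)      # support + positivity
    return g

true = np.zeros((32, 32))
true[12:20, 10:22] = np.arange(8)[:, None] + 1.0     # a small positive object
support = np.zeros((32, 32), dtype=bool)
support[12:20, 10:22] = True
rec = error_reduction(np.abs(np.fft.fft2(true)), support)
# note: the mirror-image twin is an equally valid reconstruction in general
print(f"relative error: {np.linalg.norm(rec - true) / np.linalg.norm(true):.3f}")
```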

Journal ArticleDOI
TL;DR: An efficient way of building a polyhedral approximation of a set of points in 3-D space is described, where the points are the vertices of a planar graph embedded in a surface of genus 0 and are obtained by a laser range finder.
Abstract: An efficient way of building a polyhedral approximation of a set of points in 3-D space is described. The points are the vertices of a planar graph embedded in a surface of genus 0 and are obtained by a laser range finder. The technique presented here is a generalization of an existing algorithm (R. Duda and P. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973) for the polygonal approximation of a simple curve in 2-D space.

Journal ArticleDOI
TL;DR: Eight of the nine varieties of simple surface points are shown to have natural “continuous analogs,” and the one remaining variety is shown to be very different from the other types.
Abstract: Z3. But simple surface points are defined by means of axioms, and the axioms do not reveal what simple surface points “look like.” In this paper eight of the nine varieties of simple surface points are shown to have natural “continuous analogs,” and the one remaining variety is shown to be very different from the other types. This work yields substantial generalizations of the main theorems on simple surface points that were proved by Morgenthaler, Reed, and Rosenfeld. Q

Journal ArticleDOI
TL;DR: It is demonstrated that the partition problem is equivalent to finding the maximum number of independent vertices in a bipartite graph, and the graph's matching properties are used to develop an algorithm that solves the independent vertex problem.
Abstract: An algorithm is presented for partitioning a finite region of the digital plane into a minimum number of rectangular regions. It is demonstrated that the partition problem is equivalent to finding the maximum number of independent vertices in a bipartite graph. The graph's matching properties are used to develop an algorithm that solves the independent vertex problem. The solution of this graph-theoretical problem leads to a solution of the partition problem.
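
The graph-theoretic core can be sketched independently of the geometry. Below, a maximum bipartite matching is found by augmenting paths, and König's theorem converts it to a maximum independent vertex set; the instance is a hypothetical toy, standing in for the chord-intersection graph the paper constructs from the polygon.

```python
def max_matching(adj, n_left, n_right):
    """Maximum bipartite matching by repeated augmenting-path search.
    adj[u] lists the right-vertices adjacent to left-vertex u."""
    match_r = [-1] * n_right            # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n_left))

# Hypothetical toy instance: left vertices = horizontal chords, right
# vertices = vertical chords, edges where chords intersect.
adj = {0: [0], 1: [0, 1], 2: [1]}
n_left, n_right = 3, 2
m = max_matching(adj, n_left, n_right)
# Konig: max independent vertex set = total vertices - max matching
print("max matching:", m, "| max independent set:", n_left + n_right - m)
```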

Journal ArticleDOI
TL;DR: A sequential algorithm is described which extracts a polygonal approximation to contours in a raster-scanned binary image which features a dynamic data structure which keeps track of the boundaries of objects and holes at any line of the frame.
Abstract: A sequential algorithm is described which extracts a polygonal approximation to contours in a raster-scanned binary image. The design features a dynamic data structure which keeps track of the boundaries of objects and holes at any line of the frame. By manipulating the pointers in the structure, high speed execution is attained and is independent of the complexity of the image. A frame may contain any number of objects and holes of any shape and in any configuration. Contours are generated as an ordered list of (x, y) coordinate pairs, with collinear points removed and a standardized starting point. Object contours are linked counterclockwise and hole contours clockwise and the data structure maintains object/hole relationships. Only one line of the image need ever be stored at any one time. The algorithm has been implemented on an 8086-based microcomputer in less than 2 kbytes of memory.

Journal ArticleDOI
TL;DR: It is shown that in theory, shading information from the two views can be used to determine the orientation of the surface normal along the feature-point contours, provided the photometric properties of thesurface material are known.
Abstract: Zero-crossing or feature-point based stereo algorithms can, by definition, determine explicit depth information only at particular points in the image. To compute a complete surface description, this sparse depth map must be interpolated. A computational theory of this interpolation or reconstruction process, based on a surface consistency constraint, has previously been proposed, implemented, and tested. In order to provide stronger boundary conditions for the interpolation process, other visual cues to surface shape are examined in this paper. In particular, it is shown that in theory, shading information from the two views can be used to determine the orientation of the surface normal along the feature-point contours, provided the photometric properties of the surface material are known. This computation can be performed by using a simple modification of existing photometric stereo algorithms. It is further shown that these photometric properties need not be known a priori, but can be computed directly from image irradiance information for a particular class of surface materials. The numerical stability of the resulting equations is also examined.
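
The photometric-stereo computation the paper builds on is compact enough to show, assuming a Lambertian surface and three known, non-coplanar light directions: the intensities give a 3-by-3 linear system whose solution is the albedo-scaled normal. The paper's two-view use along feature-point contours and its albedo estimation are not reproduced.

```python
import numpy as np

# Three known, non-coplanar light directions (rows), normalized.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

def normal_from_intensities(I):
    """Lambertian photometric stereo at one pixel: I = rho * (L @ n),
    so solving L g = I gives g = rho * n; albedo and normal follow."""
    g = np.linalg.solve(L, I)
    rho = np.linalg.norm(g)
    return rho, g / rho

true_n = np.array([0.3, -0.2, 0.933])
true_n /= np.linalg.norm(true_n)
I = 0.8 * L @ true_n                       # synthetic intensities, albedo 0.8
print(normal_from_intensities(I))          # recovers (0.8, true_n)
```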

Journal ArticleDOI
TL;DR: It is shown that the so-called Euler operators completely describe the family of objects bounded by 2-manifold surfaces.
Abstract: Alternative modeling spaces for physical solid objects are discussed and it is shown that the so-called Euler operators completely describe the family of objects bounded by 2-manifold surfaces.

Journal ArticleDOI
TL;DR: A complete mathematical treatment is given for describing the topographic primal sketch of the underlying grey tone intensity surface of a digital image.
Abstract: A complete mathematical treatment is given for describing the topographic primal sketch of the underlying grey tone intensity surface of a digital image. Each picture element is independently classified into a unique descriptive label, invariant under monotonically increasing grey tone transformations, from the set {peak, pit, ridge, ravine, saddle, flat, and hillside}, with hillside having subcategories {inflection point, slope, convex hill, concave hill, and saddle hill}. The topographic classification is based on the first and second directional derivatives of the estimated image intensity surface. Two different sets of basis functions, generalized splines and the discrete cosine basis, are used to estimate the image intensity surface. Zero-crossings of the first directional derivative are identified as locations of interest in the image.
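
A coarse rendition of the labeling rule, using finite differences in place of the paper's fitted-surface directional derivatives: the gradient separates hillsides from critical points, and Hessian eigenvalues distinguish peak, pit, ridge, ravine, saddle, and flat.

```python
import numpy as np

def topographic_labels(img, eps=1e-3):
    """Coarse per-pixel topographic labels from finite-difference
    gradient and Hessian estimates (hillside subcategories omitted)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    gxx = np.gradient(gx, axis=1)
    gyy = np.gradient(gy, axis=0)
    gxy = np.gradient(gx, axis=0)

    def label(r, c):
        if np.hypot(gx[r, c], gy[r, c]) >= eps:
            return "hillside"
        H = [[gxx[r, c], gxy[r, c]], [gxy[r, c], gyy[r, c]]]
        l1, l2 = np.sort(np.linalg.eigvalsh(H))     # l1 <= l2
        if l2 < -eps:                 return "peak"
        if l1 > eps:                  return "pit"
        if l1 < -eps and l2 > eps:    return "saddle"
        if l1 < -eps:                 return "ridge"
        if l2 > eps:                  return "ravine"
        return "flat"

    return np.array([[label(r, c) for c in range(img.shape[1])]
                     for r in range(img.shape[0])])

yy, xx = np.mgrid[-3:4, -3:4].astype(float)
dome = 10.0 - (xx**2 + yy**2)                 # a dome with a central maximum
print(topographic_labels(dome)[3, 3])         # -> "peak"
```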

Journal ArticleDOI
TL;DR: A rectilinear polygon can be viewed as an art gallery room whose walls meet at right angles and an algorithm is presented that stations guards in such a room so that every interior point is visible to some guard.
Abstract: A rectilinear polygon can be viewed as an art gallery room whose walls meet at right angles. An algorithm is presented that stations guards in such a room so that every interior point is visible to some guard. The algorithm partitions the polygon into L-shaped pieces, a subclass of star-shaped pieces, and locates one guard within each kernel. The algorithm runs in O ( n log n ) time in the worst case for a polygon of n vertices.

Journal ArticleDOI
TL;DR: Input of line drawings to computers may be accomplished using digital image processing and pattern recognition methods for automatic digitization and a one-pass algorithm is used for the segmentation of the binary image into primary components.
Abstract: Input of line drawings to computers may be accomplished using digital image processing and pattern recognition methods for automatic digitization. Part of a system for the recognition of electrical schematics is presented. A one-pass algorithm is used for the segmentation of the binary image into primary components. Primary components are simply groups of connected black pixels. The segmentation yields a picture graph representing the binary image. Every node of the graph represents a primary component. Strings of alphanumeric symbols in the drawing are located by computing connected components, i.e., connected subgraphs, and clustering the small connected components. The line elements are computed with two different methods. First, primary components are merged into classified line elements which describe the dominant large lines of the drawing. Second, the details are analyzed within the context of dominant lines using a production system.

Journal ArticleDOI
TL;DR: A time-varying corner detector is described which is based on the AND operation between the cornerness and the temporal derivative, and the corner detectors of Zuniga-Haralick, Kitchen-Rosenfeld, and Dreschler-Nagel are shown to be equivalent.
Abstract: The algorithms for structure from motion require solution of the correspondence problem. By detecting only time-varying tokens, the problem may be significantly simplified. In this paper, a time-varying corner detector is described which is based on the AND operation between the cornerness and the temporal derivative. It is shown that the corner detectors by Zuniga and Haralick (IEEE CVPR Conf. 1983, pp. 30–37), Kitchen and Rosenfeld (Pattern Recognition Lett. 1, 1982, 95–102), and Dreschler and Nagel (Proc. IJCAI, 1981, pp. 692–697) are equivalent. In this time-varying corner detector, the Zuniga and Haralick, loc. cit., corner detector is used for finding the cornerness at a point and the absolute value of the difference in intensity at a point is used to approximate the temporal derivative. The results of the time-varying corner detector for real scenes and for synthetic images with random background and random object are shown.
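
A sketch of the combination, using the Kitchen-Rosenfeld cornerness (one of the three detectors shown equivalent) gated by a thresholded absolute frame difference; the thresholds here are arbitrary assumptions.

```python
import numpy as np

def time_varying_corners(frame1, frame2, k_thresh=0.5, t_thresh=10.0):
    """Time-varying corners: Kitchen-Rosenfeld cornerness on frame2
    ANDed with a thresholded absolute temporal difference."""
    I = frame2.astype(float)
    gy, gx = np.gradient(I)
    gxx = np.gradient(gx, axis=1)
    gyy = np.gradient(gy, axis=0)
    gxy = np.gradient(gx, axis=0)
    denom = gx**2 + gy**2
    denom[denom == 0] = 1.0                       # avoid division by zero
    cornerness = (gxx * gy**2 - 2 * gxy * gx * gy + gyy * gx**2) / denom
    moving = np.abs(frame2.astype(float) - frame1.astype(float)) > t_thresh
    return (np.abs(cornerness) > k_thresh) & moving

f1 = np.zeros((32, 32)); f1[8:16, 8:16] = 100.0    # a bright square ...
f2 = np.zeros((32, 32)); f2[10:18, 10:18] = 100.0  # ... shifted by (2, 2)
print(np.argwhere(time_varying_corners(f1, f2)))
```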

Journal ArticleDOI
TL;DR: A measurement selection algorithm was formulated which shows detection accuracies of better than 90% on a number of experiments; it assumes that measures computed from regions containing the object form a cluster defined by N(μ₀, Σ₀) in measurement space, while measures computed from regions not containing the object lie some distance away from this cluster.
Abstract: Previous investigations indicate that edge information, gray level histogram information, texture information, and shape information are all useful in detecting objects. Gray level cooccurrence matrices contain a form of each of these types of information. Hence applying measures defined on cooccurrence matrices to the object detection problem would seem to be an approach which should be investigated. This paper presents a formulation of such an approach. The decision logic employed assumes that measures computed from regions containing the object form a cluster defined by N(μ₀, Σ₀) in measurement space, while measures computed from regions not containing the object lie some distance away from this cluster. To help assure that this is the case, a measurement selection algorithm was formulated. Studies are reported which show testing detection accuracies of better than 90% on a number of experiments.
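
The decision logic reduces to a Mahalanobis-distance test against the object cluster. The sketch below assumes two illustrative cooccurrence measures as features and a chi-squared acceptance threshold; the paper's measurement selection step is omitted.

```python
import numpy as np

def glcm_features(win, levels=8):
    """Two simple cooccurrence measures (contrast, energy) for a window,
    horizontal displacement of one pixel."""
    q = (win.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    P /= P.sum()
    i, j = np.indices(P.shape)
    return np.array([((i - j) ** 2 * P).sum(), (P ** 2).sum()])

def train(windows):
    """Estimate the object cluster N(mu0, Sigma0) from example windows."""
    F = np.array([glcm_features(w) for w in windows])
    return F.mean(axis=0), np.linalg.inv(np.cov(F, rowvar=False))

def contains_object(win, mu, cov_inv, chi2_thresh=9.21):  # ~chi2(2), p = 0.01
    f = glcm_features(win)
    d2 = (f - mu) @ cov_inv @ (f - mu)      # squared Mahalanobis distance
    return bool(d2 <= chi2_thresh)

rng = np.random.default_rng(3)
obj = [rng.integers(0, 64, (16, 16)) for _ in range(40)]   # "object" texture
bg = rng.integers(0, 256, (16, 16))                        # a different texture
mu, cov_inv = train(obj)
print(contains_object(obj[0], mu, cov_inv),                # object window
      contains_object(bg, mu, cov_inv))                    # background window
```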