
Showing papers by "Azriel Rosenfeld" published in 1989


Journal ArticleDOI
TL;DR: The fundamental concepts of digital topology are reviewed and the major theoretical results in the field are surveyed, with a bibliography of almost 140 references.
Abstract: Digital topology deals with the topological properties of digital images, or, more generally, of discrete arrays in two or more dimensions. It provides the theoretical foundations for important image processing operations such as connected component labeling and counting, border following, contour filling, and thinning—and their generalizations to three- (or higher-) dimensional “images.” This paper reviews the fundamental concepts of digital topology and surveys the major theoretical results in the field. A bibliography of almost 140 references is included.
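For concreteness, here is a minimal sketch (not taken from the survey) of one of the operations it grounds, connected component labeling of a binary image under a chosen adjacency:

```python
from collections import deque

# Minimal connected component labeling of a binary image (list of lists of 0/1)
# under 4- or 8-adjacency; an illustration of the operation, not the survey's text.
def label_components(image, adjacency=8):
    rows, cols = len(image), len(image[0])
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if adjacency == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                current += 1                      # start a new component
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Example: two 8-connected foreground components.
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, count = label_components(img)   # count == 2
```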

1,084 citations


Book
01 Aug 1989
TL;DR: An alternative formulation of the surface slant probability density is developed that takes the discrete nature of digital images into account, and that yields a better estimate of the light source direction.
Abstract: This paper develops improved methods of estimating local surface orientation from shading information with the aid of a coordinate system having one axis in the (assumed) direction of the light source. In this system, the surface tilt is related to the direction of the gray level gradient at the given point. An alternative formulation of the surface slant probability density is developed that takes the discrete nature of digital images into account, and that yields a better estimate of the light source direction.
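The one relationship the abstract states explicitly, that surface tilt follows the direction of the gray-level gradient in the light-source-aligned frame, can be illustrated with a rough sketch; the slant probability density and the light-source estimator themselves are not reproduced here.

```python
import numpy as np

# Rough illustration (not the paper's estimator): the surface tilt at a pixel is taken
# from the direction of the local gray-level gradient; the gradient magnitude is kept
# only as a confidence proxy.
def tilt_from_gradient(image):
    image = np.asarray(image, dtype=float)
    gy, gx = np.gradient(image)      # central-difference gradients along rows, columns
    tilt = np.arctan2(gy, gx)        # per-pixel tilt angle, in radians
    magnitude = np.hypot(gx, gy)
    return tilt, magnitude
```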

250 citations


Journal ArticleDOI
TL;DR: A divide-and-conquer Hough transform technique for detecting a given number of straight edges or lines in an image, requiring only O(log n) computational steps for an image of size n × n.
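The TL;DR gives only the complexity claim. For context, the sketch below is the conventional accumulator-based Hough transform for lines, i.e. the serial baseline, not the paper's divide-and-conquer O(log n) parallel formulation.

```python
import numpy as np

# Plain voting Hough transform using the normal form rho = x*cos(theta) + y*sin(theta).
def hough_lines(edge_points, image_shape, n_theta=180, n_rho=200):
    h, w = image_shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    accumulator = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in edge_points:                           # each edge point votes for every theta
        r = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.digitize(r, rhos) - 1
        accumulator[bins, np.arange(n_theta)] += 1
    return accumulator, rhos, thetas                   # peaks correspond to detected lines
```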

50 citations


Journal ArticleDOI
TL;DR: This paper uses a scheme that represents a quadtree by its leaf nodes; each leaf is represented by the coordinates of the upper left corner of its corresponding block, together with its size and color.
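A minimal sketch of such a pointerless leaf representation; the field names and the decomposition routine are illustrative, not the paper's.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Leaf:
    row: int      # upper-left corner row of the block
    col: int      # upper-left corner column of the block
    size: int     # block side length (a power of two)
    color: int    # 0 = white, 1 = black

def build_leaves(image, row=0, col=0, size=None) -> List[Leaf]:
    """Recursively decompose a 2^k x 2^k binary image into maximal uniform blocks."""
    if size is None:
        size = len(image)
    flat = [image[r][c] for r in range(row, row + size) for c in range(col, col + size)]
    if all(v == flat[0] for v in flat):               # uniform block -> one leaf node
        return [Leaf(row, col, size, flat[0])]
    half = size // 2
    leaves = []
    for dr in (0, half):                              # otherwise recurse into quadrants
        for dc in (0, half):
            leaves.extend(build_leaves(image, row + dr, col + dc, half))
    return leaves
```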

33 citations


Journal ArticleDOI
TL;DR: This divide-and-conquer approach yields an approximate MD subdivision of P in O(log n) computational steps using O(n) processors, where n is the size of P.

32 citations



Journal ArticleDOI
TL;DR: A preprocessing technique that gives each data point a weight related to the density of data points in its vicinity, so that points belonging to clusters get relatively high weights, while background noise points get relatively low weights.
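A toy version of this kind of density weighting; the radius-count rule and its parameter are assumptions for illustration, not the paper's scheme.

```python
import math

# Weight each point by how many other points fall within a given radius, so points
# inside clusters get high weights and isolated noise points get low weights.
def density_weights(points, radius):
    weights = []
    for i, (xi, yi) in enumerate(points):
        count = sum(
            1 for j, (xj, yj) in enumerate(points)
            if i != j and math.hypot(xi - xj, yi - yj) <= radius
        )
        weights.append(count)
    return weights
```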

28 citations


Journal ArticleDOI
TL;DR: Results suggest that the use of 3-dimensional gradient operators greatly reduces search effort, and somewhat improves the accuracy of boundaries for noisy images.
Abstract: A graph search based method for boundary following in a stack of serial section images is presented. The cost function utilizes edge gradient magnitudes, edge circularity, and comparisons with neighboring sections. The performance of the method is compared for 2- and 3-dimensional gradient operators and for cost functions which omit the circularity component and/or comparisons. Results suggest that the use of 3-dimensional gradient operators greatly reduces search effort, and somewhat improves the accuracy of boundaries for noisy images.
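A hedged sketch of a per-node cost combining the three terms the abstract names; the weights and functional forms are illustrative assumptions, not the paper's formulation.

```python
# Cost of accepting a candidate boundary point: strong edges cost less, departures
# from circularity cost more, and so does drift from the neighboring section's boundary.
def boundary_cost(gradient_magnitude, radius, expected_radius,
                  distance_to_neighbor_boundary,
                  w_grad=1.0, w_circ=0.5, w_neighbor=0.5):
    grad_term = 1.0 / (1.0 + gradient_magnitude)                 # edge-strength term
    circ_term = abs(radius - expected_radius) / expected_radius  # circularity term
    neigh_term = distance_to_neighbor_boundary                   # inter-section agreement
    return w_grad * grad_term + w_circ * circ_term + w_neighbor * neigh_term
```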

28 citations


Journal ArticleDOI
TL;DR: The topics covered include architectures; computational techniques; feature detection, segmentation, and image analysis; matching, stereo, and time-varying imagery; shape and geometry; color and texture; and 3-dimensional scene analysis.
Abstract: This paper presents a bibliography of over 1600 references related to computer vision and image analysis, arranged by subject matter. The topics covered include architectures; computational techniques; feature detection, segmentation, and image analysis; matching, stereo, and time-varying imagery; shape and geometry; color and texture; and 3-dimensional scene analysis. A few references are also given on related topics, such as computer graphics, image input/output, image processing, optical processing, neural nets, visual perception, pattern recognition, and artificial intelligence.

26 citations


Journal ArticleDOI
TL;DR: A less rigid concept of linearity based on least squares and the correlation coefficient is presented and a new type of connectedness is discussed; it is intermediate between the usual 4- and 8-connectedness.
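One generic way to relax linearity along these lines is the squared correlation coefficient of a point set; the sketch below uses that standard statistic, not necessarily the paper's exact definition.

```python
# Returns a linearity score in [0, 1]: 1.0 for exactly collinear points,
# smaller as the points spread away from a best-fit (least-squares) line.
def linearity(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    if sxx == 0 or syy == 0:          # vertical or horizontal point set: exactly linear
        return 1.0
    return (sxy * sxy) / (sxx * syy)  # squared correlation coefficient r^2
```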

19 citations


Proceedings ArticleDOI
04 Jun 1989
TL;DR: A scheme is developed to match range images in an environment where distinctive features are scarce, and has been used to map the floor of the ocean, where the range data are obtained by a multibeam echo-sounder system.
Abstract: A scheme is developed to match range images in an environment where distinctive features are scarce. When each image overlaps with several other images, the match must also be performed at the global level. This is particularly challenging because of the possibility of bending and compression in range images. The primitives used for local matching are contours of constant range, which are extracted from the data and represented by means of a modified chain-code method. All best matches of pairs of contours are considered tentative until their geometrical implications are evaluated and a consistent majority has emerged. To perform global matching, a cost function is constructed and minimized. Terms contributing to the cost include violation of local matches as well as compression and bending in range images. The present global scheme is valid for any set of multiple overlapping range images, whether they contain distinctive features or not. This scheme has been used to map the floor of the ocean, where the range data are obtained by a multibeam echo-sounder system.
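The paper's contour representation is a modified chain code; the modification is not reproduced here. As a baseline only, this is the standard 8-direction Freeman chain code of an ordered pixel contour.

```python
# Map the step between successive contour pixels to one of 8 Freeman directions.
FREEMAN_DIRECTIONS = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def chain_code(contour):
    """contour: ordered list of (x, y) pixels, each 8-adjacent to its successor."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(FREEMAN_DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes
```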

Journal ArticleDOI
TL;DR: It is shown that it is computationally more efficient to initially match a subgraph and check the rest of the graph only when this match succeeds, and a probabilistic analysis of the expected cost of this procedure is given with the aim of determining the optimum subgraph size which minimizes this cost.
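A toy expected-cost model of the strategy this TL;DR describes; the cost and success-probability functions below are placeholder assumptions, not the paper's probabilistic analysis.

```python
# Always pay to match the k-node subgraph; pay for the remaining (n - k)-node part
# only when the partial match succeeds, which happens with probability success_prob(k).
def expected_cost(n, k, match_cost, success_prob):
    return match_cost(k) + success_prob(k) * match_cost(n - k)

def best_subgraph_size(n, match_cost, success_prob):
    return min(range(1, n), key=lambda k: expected_cost(n, k, match_cost, success_prob))

# Example with made-up models: exponential matching cost, geometric success probability.
n = 20
cost = lambda m: 2.0 ** m
prob = lambda k: 0.5 ** k
k_opt = best_subgraph_size(n, cost, prob)
```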

Journal ArticleDOI
TL;DR: An algebraic structure on the paths in a graph based on a coloring of the arcs provides a framework for defining parallel contraction operations on a graph, in which many pairs of nodes are simultaneously collapsed into single nodes, but the degree of the graph does not increase.

Journal ArticleDOI
TL;DR: A method of detecting and extracting compact regions having different textures from their backgrounds is presented and requires O(log n) computational steps for an n by n image if implemented on a cellular pyramid.

Journal ArticleDOI
TL;DR: In this article, random processes are defined that generate planar point patterns in which the points have a tendency to cluster, or in which clustering is inhibited; processes are also defined for labeling a given point pattern in such a way that neighboring points tend to have, or not to have, the same labels.
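As a generic illustration of an inhibited point process (not one of the processes defined in the article), here is a dart-throwing sampler that enforces a minimum inter-point distance.

```python
import random

# Candidate points are accepted only if they keep at least min_dist from every
# point accepted so far, producing a pattern in which clustering is inhibited.
def inhibited_points(n_target, min_dist, width=1.0, height=1.0, max_tries=10000, seed=0):
    rng = random.Random(seed)
    points = []
    tries = 0
    while len(points) < n_target and tries < max_tries:
        tries += 1
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2 for px, py in points):
            points.append((x, y))
    return points
```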


Journal ArticleDOI
TL;DR: This paper shows that if the authors require the functions to be shift-invariant and the rules to be of bounded diameter, then such coordinate grammars do have a useful hierarchy of types; in fact, when the sentential forms are required always to remain connected, the grammars turn out to be equivalent to "isometric grammars".
Abstract: In a "coordinate grammar", the rewriting rules replace sets of symbols, each of which has been assigned coordinates, by sets of symbols whose coordinates are given functions of the coordinates of the original symbols. It was shown in 1972 that coordinate grammars are "too powerful"; even if the rules are all of finite-state type and the functions are all computable by finite transducers, the grammar has the power of a Turing machine. This paper shows that if we require the functions to be shift-invariant and the rules to be of bounded diameter, then such grammars do have a useful hierarchy of types; in fact, when we require that their sentential forms always remain connected, they turn out to be equivalent to "isometric grammars".

Journal ArticleDOI
TL;DR: This paper introduces the more general concept of local operations on labelled dot patterns, where the new label of a dot is a function of the old labels of the dot and a set of its neighbors (e.g., its Voronoi neighbors).
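A small sketch of such a local operation, using a majority vote over Voronoi neighbors as the (illustrative) relabeling function; neighbor pairs are read off SciPy's Voronoi diagram.

```python
import numpy as np
from scipy.spatial import Voronoi

# Each dot's new label is a function of its own label and its Voronoi neighbors' labels;
# the majority-vote rule here is an illustrative choice, not the paper's operation.
def relabel_by_voronoi_majority(points, labels):
    vor = Voronoi(np.asarray(points, dtype=float))
    neighbors = {i: set() for i in range(len(points))}
    for a, b in vor.ridge_points:            # dots whose Voronoi cells share a boundary
        neighbors[int(a)].add(int(b))
        neighbors[int(b)].add(int(a))
    new_labels = []
    for i, own in enumerate(labels):
        votes = [own] + [labels[j] for j in neighbors[i]]
        new_labels.append(max(set(votes), key=votes.count))   # majority label (ties arbitrary)
    return new_labels
```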

Journal ArticleDOI
TL;DR: For histograms representing mixtures of two Gaussians, this method was found to work well for n/k as small as 8, and the cost can be reduced by performing bimodality analysis on a ‘reduced-resolution’ histogram having n/k bins.

01 Jan 1989
TL;DR: The bimodality of a population P can be measured by dividing its range into two intervals so as to maximize the Fisher distance between the resulting two subpopulations P1 and P2, and this method was found to work well for n/k as small as 8.
Abstract: The bimodality of a population P can be measured by dividing its range into two intervals so as to maximize the Fisher distance between the resulting two subpopulations P1 and P2. If P is a mixture of two (approximately) Gaussian subpopulations, then P1 and P2 are good approximations to the original Gaussians, if their Fisher distance is great enough. For a histogram having n bins this method of bimodality analysis requires n - 1 Fisher distance computations, since the range can be divided into two intervals in n - 1 ways. The method can also be applied to 'circular' histograms, e.g. of populations of slope or hue values; but for such histograms it is much more computationally costly, since a circular histogram having n bins can be divided into two intervals (arcs) in n(n - 1)/2 ways. The cost can be reduced by performing bimodality analysis on a 'reduced-resolution' histogram having n/k bins; finding the subdivision of this histogram that maximizes the Fisher distance; and then finding a maximum Fisher distance subdivision of the full-resolution histogram in the neighborhood of this subdivision. This reduces the required number of Fisher distance computations to n(n - 1)/2k² + O(k). For histograms representing mixtures of two Gaussians, this method was found to work well for n/k as small as 8.
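A sketch of the linear-histogram case described above, taking the Fisher distance between the two subpopulations to be (μ1 − μ2)² / (σ1² + σ2²), the usual form (the paper's exact definition may differ); the circular case and the reduced-resolution speedup are omitted.

```python
# Try each of the n - 1 split points of a histogram and keep the one that maximizes
# the Fisher distance between the two resulting subpopulations.
def _moments(hist, values):
    total = sum(hist)
    if total == 0:
        return 0.0, 0.0, 0
    mean = sum(h * v for h, v in zip(hist, values)) / total
    var = sum(h * (v - mean) ** 2 for h, v in zip(hist, values)) / total
    return mean, var, total

def best_bimodal_split(hist):
    values = list(range(len(hist)))                    # bin indices stand in for bin centers
    best_split, best_distance = None, -1.0
    for split in range(1, len(hist)):                  # the n - 1 candidate subdivisions
        m1, v1, n1 = _moments(hist[:split], values[:split])
        m2, v2, n2 = _moments(hist[split:], values[split:])
        if n1 == 0 or n2 == 0:
            continue
        distance = (m1 - m2) ** 2 / (v1 + v2 + 1e-12)  # Fisher distance between the halves
        if distance > best_distance:
            best_split, best_distance = split, distance
    return best_split, best_distance
```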

Proceedings ArticleDOI
04 Jun 1989
TL;DR: A method for parallel processing of chain-codable contours is described; the proposed hierarchical structure, the chain pyramid, is similar to a regular nonoverlapping image pyramid and makes fast computation of contours possible.
Abstract: A method for parallel processing of chain-codable contours is described. The proposed hierarchical environment, called the chain pyramid, is similar to a regular nonoverlapping image pyramid structure. The chain pyramid makes possible the fast computation of contours. The artifacts of contour processing on pyramids are eliminated by a probabilistic allocation algorithm. Processing modules are developed for smoothing of curves, gap bridging in fragmented data, and treatment of branch points. Raw edge data are preprocessed before being input into the chain pyramid. Typical results are presented and briefly characterized.