
Showing papers by "Azriel Rosenfeld published in 1990"


Journal ArticleDOI
TL;DR: It is demonstrated that the fundamental reason for this shortcoming is the subsampling introduced at the higher levels of the pyramid, and that multi-resolution algorithms in general have an inherent difficulty in analyzing elongated objects and ensuring connectivity.

139 citations


Journal ArticleDOI
TL;DR: A blind noise variance algorithm that recovers the variance of noise in two steps is proposed and application of the algorithm to differently sized images is also discussed.
Abstract: A blind noise variance algorithm that recovers the variance of noise in two steps is proposed. The sample variances are computed for square cells tessellating the noise image. Several tessellations are applied with the size of the cells increasing fourfold for consecutive tessellations. The four smallest sample variance values are retained for each tessellation and combined through an outlier analysis into one estimate. The different tessellations thus yield a variance estimate sequence. The value of the noise variance is determined from this variance estimate sequence. The blind noise variance algorithm is applied to 500 noisy 256×256 images. In 98% of the cases, the relative estimation error was less than 0.2 with an average error of 0.06. Application of the algorithm to differently sized images is also discussed.

127 citations
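
Below is a minimal Python sketch of the multi-scale procedure summarized in the abstract above: sample variances over square cells, tessellations whose cell area grows fourfold, and the four smallest variances per tessellation combined into one estimate. The median-based outlier screening and the final rule for picking a value from the estimate sequence are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def blind_noise_variance(noise_img, start_cell=8, levels=4):
    """Rough sketch of a multi-scale blind noise variance estimate.

    The image is tessellated into square cells whose area grows fourfold
    between levels (cell side doubles); for each tessellation the four
    smallest cell variances are combined into one estimate.
    """
    h, w = noise_img.shape
    estimates = []
    cell = start_cell
    for _ in range(levels):
        cell_vars = []
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                cell_vars.append(noise_img[i:i + cell, j:j + cell].var())
        smallest = sorted(cell_vars)[:4]
        # Assumed stand-in for the paper's outlier analysis:
        # drop values far above the median of the four, then average.
        med = np.median(smallest)
        kept = [v for v in smallest if v <= 2 * med]
        estimates.append(float(np.mean(kept)))
        cell *= 2  # fourfold increase in cell area
    # Assumed rule: report the median of the estimate sequence.
    return float(np.median(estimates))
```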


Journal ArticleDOI
TL;DR: A new effort to evaluate parallel architectures applied to knowledge-based machine vision is described, which can be used to gain insight into processor strengths and weaknesses and may help to guide the development of the next generation of parallel vision architectures.

82 citations


Journal ArticleDOI
TL;DR: A novel hierarchical approach toward fast parallel processing of chain-codable contours is presented, which makes possible fast, O(log(image size)), computation of contour representation in discrete scale-space.
Abstract: A novel hierarchical approach toward fast parallel processing of chain-codable contours is presented. The environment, called the chain pyramid, is similar to a regular nonoverlapping image pyramid structure. The artifacts of contour processing on pyramids are eliminated by a probabilistic allocation algorithm. Building of the chain pyramid is modular, and for different applications new algorithms can be incorporated. Two applications are described: smoothing of multiscale curves and gap bridging in fragmented data. The latter is also employed for the treatment of branch points in the input contours. A preprocessing module allowing the application of the chain pyramid to raw edge data is also described. The chain pyramid makes possible fast, O(log(image size)), computation of contour representation in discrete scale-space.

64 citations


Book ChapterDOI
01 Apr 1990
TL;DR: An image analysis technique in which a separate hierarchy is built over every compact object of the input, made possible by a stochastic decimation algorithm which adapts the structure of the hierarchy to the analyzed image.
Abstract: In this paper we have presented an image analysis technique in which a separate hierarchy is built over every compact object of the input. The approach is made possible by a stochastic decimation algorithm which adapts the structure of the hierarchy to the analyzed image. For labeled images the final description is unique. For gray level images the classes are defined by converging local processes and slight differences may appear. At the apex every root can recover information about the represented object in a logarithmic number of processing steps, and thus the adjacency graph can become the foundation for a relational model of the scene.

61 citations



Book ChapterDOI
01 Jan 1990
TL;DR: In the context of computer vision, the recognition of three-dimensional objects typically consists of image capture, feature extraction, and object model matching.
Abstract: In the context of computer vision, the recognition of three-dimensional objects typically consists of image capture, feature extraction, and object model matching. During the image capture phase, a camera senses the brightness at regularly spaced points, or pixels, in the image. The brightness at these points is quantized into discrete values; the two-dimensional array of quantized values forms a digital image, the input to the computer vision system. During the feature extraction phase, various algorithms are applied to the digital image to extract salient features such as lines, curves, or regions. The set of these features, represented by a data structure, is then compared to the database of object model data structures in an attempt to identify the object. Clearly, the type of features that need to be extracted from the image depends on the representation of objects in the database.

29 citations
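
The three phases described above can be sketched as a skeleton pipeline. Everything concrete in the sketch (the quantization step, the gradient-orientation histogram used as a feature, and the nearest-model rule) is an illustrative assumption; the chapter does not commit to any particular feature or matching scheme.

```python
import numpy as np

def capture(brightness, levels=256):
    # Quantize sensed brightness values (assumed in [0, 1]) into a digital image.
    return np.clip((brightness * levels).astype(int), 0, levels - 1)

def extract_features(img):
    # Illustrative feature: a normalized histogram of gradient orientations.
    gy, gx = np.gradient(img.astype(float))
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def match(features, model_db):
    # Compare the feature vector against each stored object model (dict of name -> vector).
    return min(model_db, key=lambda name: np.linalg.norm(features - model_db[name]))
```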


Journal ArticleDOI
TL;DR: It is shown that the Euler characteristic of a binary digital image—the number of components minus the number of holes (components of 0's surrounded by 1's)—is locally computable; in fact, it can be computed by counting the numbers of occurrences of various local patterns of 1's in the image.

25 citations
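
The local-pattern computation can be illustrated with the classical "bit-quad" counting scheme: scan all 2×2 windows, count those containing one, three, and two diagonal 1's, and combine the counts. The formula below (Gray's formula for 4-connected 1's) is a standard instance of this idea, given here only as an illustration; it is not claimed to be the exact pattern set used in the paper.

```python
import numpy as np

def euler_number_4conn(img):
    """Euler characteristic (components minus holes) of a binary image,
    computed purely from counts of 2x2 local patterns ("bit-quads").

    Uses E = (Q1 - Q3 + 2*QD) / 4 for 4-connected 1's.
    """
    img = np.pad(np.asarray(img, dtype=int), 1)
    q1 = q3 = qd = 0
    for i in range(img.shape[0] - 1):
        for j in range(img.shape[1] - 1):
            quad = img[i:i + 2, j:j + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1  # the two 1's lie on a diagonal
            # quads with 0, 2 (side-by-side), or 4 ones contribute nothing
    return (q1 - q3 + 2 * qd) // 4
```

For a single 1-pixel the function returns 1; for a 3×3 ring of 1's surrounding a 0 it returns 0 (one component, one hole).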


Journal ArticleDOI
TL;DR: This paper shows that the expected cumulative error when matching an image and a template is maximized by using an ordering technique, and presents experimental results for digital images whose gray level probability densities, or more generally the probability densities of arrays of local property values derived from the images, are known.
Abstract: Matching of two digital images is computationally expensive, because it requires a pixel-by-pixel comparison of the pixels in the image and in the template. If we have probabilistic models for the classes of images being matched, we can reduce the expected computational cost of matching by comparing the pixels in an appropriate order. In this paper we show that the expected cumulative error when matching an image and a template is maximized by using an ordering technique. We also present experimental results for digital images, when we know the probability densities of their gray levels, or more generally, the probability densities of arrays of local property values derived from the images.

22 citations
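
A rough Python sketch of the ordering idea: visit template pixels in an order chosen so that, at a non-matching location, the cumulative error is expected to grow as fast as possible, letting the location be rejected against a threshold after only a few comparisons. The use of expected absolute deviations as the ordering criterion is an assumption made for illustration, not the paper's derivation.

```python
import numpy as np

def order_pixels(template, expected_abs_dev):
    """Order template pixels so the expected cumulative error grows fastest.

    expected_abs_dev[i, j] is the expected |image - template| at (i, j)
    under an assumed probabilistic image model.
    """
    coords = [(i, j) for i in range(template.shape[0])
                     for j in range(template.shape[1])]
    return sorted(coords, key=lambda ij: -expected_abs_dev[ij])

def match_with_rejection(window, template, order, threshold):
    """Accumulate error in the given pixel order; reject early once it
    exceeds the threshold, so mismatching locations are discarded cheaply."""
    err = 0.0
    for (i, j) in order:
        err += abs(float(window[i, j]) - float(template[i, j]))
        if err > threshold:
            return False, err  # rejected before visiting all pixels
    return True, err
```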


Journal ArticleDOI
TL;DR: This paper presents a technique to reduce the computational cost of template matching by using probabilistic knowledge about local features that appear in the image and the template, and shows that the most probable locations for successful matching can be found.
Abstract: Matching of two digital images is computationally expensive, because it requires a pixel-by-pixel comparison of the pixels in the image and in the template for every location in the image. In this paper we present a technique to reduce the computational cost of template matching by using probabilistic knowledge about local features that appear in the image and the template. Using this technique, the most probable locations for successful matching can be found. In the paper we discuss how the size of the features affects the computational cost and the robustness of the technique. We also present results of experiments showing that even simple methods of feature extraction and representation can reduce the computational cost by more than an order of magnitude.

13 citations
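
A small sketch of the feature-based pruning idea: compute a cheap local-feature count for the template and for every image window (via an integral image), and keep only the locations whose counts are closest to the template's as candidates for full matching. The specific feature, an edge-pixel count with a fixed gradient threshold, is an illustrative assumption rather than the features used in the paper.

```python
import numpy as np

def candidate_locations(image, template, top_k=50):
    """Rank candidate locations with a cheap local-feature score before
    running full template matching only at the best-ranked places."""
    def edge_mask(a):
        # Strong horizontal-gradient pixels (threshold assumed for 8-bit data).
        g = np.abs(np.diff(a.astype(float), axis=1, prepend=a.astype(float)[:, :1]))
        return (g > 10).astype(int)

    th, tw = template.shape
    target = edge_mask(template).sum()

    # Integral image of the edge mask gives each window's count in O(1).
    ii = edge_mask(image).cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    scores = []
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            count = ii[i + th, j + tw] - ii[i, j + tw] - ii[i + th, j] + ii[i, j]
            scores.append((abs(count - target), (i, j)))
    scores.sort(key=lambda t: t[0])
    return [loc for _, loc in scores[:top_k]]  # most promising locations first
```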


Journal ArticleDOI
TL;DR: It is shown that “is simply connected” and “ is contractible” are locally computable for connected images in 2D, but not in 3D.
Abstract: It is well known that “is connected” is not locally computable for 2D (or, hence, higher dimensional) images. We show that “is simply connected” and “is contractible” are locally computable for connected images in 2D, but not in 3D. Orientability of a surface is likewise not locally computable.

Journal ArticleDOI
TL;DR: A simple curve extraction process involving only local isotropic parallel operations is described and compared with more abstract, essentially one-dimensional contour summarization processes.
Abstract: The human visual system has the impressive ability to quickly extract simple, global, curvilinear structure from input that may locally not even contain small fragments of this structure. Curves are easy to see globally even when they are locally broken, blurred, or jagged. Because the character of curve input can change with the scale at which it is considered, a hierarchical “pyramid” data structure is suggested. This paper describes a simple curve extraction process involving only local isotropic parallel operations. The noise-cleaned input image is smoothed and subsampled into a pyramid of lower-resolution versions by recursive computation of Gaussian-weighted sums. Curves are localized to thin strings of ridges and peaks at each scale. The method is compared with more abstract, essentially one-dimensional contour summarization processes.
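
A minimal sketch of the smoothing-and-subsampling stage described above: each level of the pyramid is produced by a separable Gaussian-weighted sum followed by subsampling by two. The 5-tap [1 4 6 4 1]/16 weights are the conventional choice and an assumption here; the ridge and peak localization step that follows in the paper is not shown.

```python
import numpy as np

# Conventional 5-tap Gaussian-like weights (an assumed choice).
W = np.array([1, 4, 6, 4, 1], dtype=float) / 16

def reduce_once(img):
    """Smooth with a separable Gaussian-weighted sum, then subsample by 2."""
    smoothed = img.astype(float)
    for axis in (0, 1):
        smoothed = np.apply_along_axis(
            lambda row: np.convolve(row, W, mode="same"), axis, smoothed)
    return smoothed[::2, ::2]

def gaussian_pyramid(img, levels=4):
    """Pyramid of successively lower-resolution versions of the input."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(reduce_once(pyr[-1]))
    return pyr
```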

Journal ArticleDOI
TL;DR: A set of pyramid-based algorithms that can detect and extract various types of global structure in their input is described, including algorithms for inferring three-dimensional information from images and for processing time sequences of images.

Journal ArticleDOI
TL;DR: This paper develops basic properties of this code and shows how to derive various geometric properties of the arc (or the region it bounds, if it is closed) directly from the code.
Abstract: An isothetic polygonal arc is one that has all its sides oriented in two orthogonal directions, so that all its angles are right angles. Such an arc is determined (up to congruence) by specifying a “code” sequence of the form α_1 A_1 α_2 … α_{m−1} A_{m−1} α_m, where the α's are positive real numbers representing side lengths, and the A's are single bits that specify whether the arc turns left or right between one side and the next. In this paper we develop basic properties of this code and show how to derive various geometric properties of the arc (or the region it bounds, if it is closed) directly from the code.
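
As a small illustration of deriving geometry directly from the code, the sketch below reconstructs the vertices of a closed isothetic polygon from its α/A sequence and reads off the perimeter and enclosed area. The conventions that the first side points along +x and that a turn bit of 1 means a left turn are assumptions about the encoding, made only for the example.

```python
def isothetic_properties(alphas, turns):
    """Perimeter and area of a closed isothetic polygon from its code
    alpha_1 A_1 alpha_2 ... (side lengths and left/right turn bits)."""
    x, y = 0.0, 0.0
    dx, dy = 1, 0                     # current direction (assumed start: +x)
    verts = [(x, y)]
    for k, a in enumerate(alphas):
        x, y = x + a * dx, y + a * dy
        verts.append((x, y))
        if k < len(turns):            # 90-degree turn between sides
            dx, dy = (-dy, dx) if turns[k] else (dy, -dx)
    perimeter = sum(alphas)
    # Shoelace formula for the enclosed area.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]):
        area += x0 * y1 - x1 * y0
    return perimeter, abs(area) / 2
```

For the unit-square code α = (1, 1, 1, 1), A = (1, 1, 1), the function returns perimeter 4 and area 1.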

Proceedings ArticleDOI
04 Dec 1990
TL;DR: The integration of object-centered and viewer-centered representations provides the indexing power of 3-D volumetric primitives, while supporting a 2-D matching paradigm for primitive reconstruction.
Abstract: An approach is presented to 3-D primitive reconstruction that is independent of the selection of volumetric primitives used to model objects. The approach first takes an arbitrary set of 3-D volumetric primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy. The integration of object-centered and viewer-centered representations provides the indexing power of 3-D volumetric primitives, while supporting a 2-D matching paradigm for primitive reconstruction. Formulation of the problem based on grouping the image regions according to aspect is presented. No domain dependent heuristics are used; the authors exploit only the probabilities inherent in the aspect hierarchy. For a given selection of primitives, the success of the heuristic depends on the likelihood of the various aspects; best results are achieved when certain aspects are more likely, and fewer primitives project to a given aspect.

Journal ArticleDOI
TL;DR: A bibliography of nearly 1200 references related to computer vision and image analysis, arranged by subject matter, is presented, covering topics including architectures; computational techniques; feature detection, segmentation, and image analysis.
Abstract: This paper presents a bibliography of nearly 1200 references related to computer vision and image analysis, arranged by subject matter. The topics covered include architectures; computational techniques; feature detection, segmentation, and image analysis; matching, stereo, and time-varying imagery; shape and pattern; color and texture; and three-dimensional scene analysis. A few references are also given on related topics, such as computational geometry, computer graphics, image input/output and coding, image processing, optical processing, visual perception, neural nets, pattern recognition, and artificial intelligence.

Journal ArticleDOI
TL;DR: A pyramid Hough transform, based on computing the distances between line or edge segments and enforcing merge and select strategies among them, is implemented using this pyramid programming environment on the Connection Machine.

Journal ArticleDOI
TL;DR: This paper presents an algorithm to reduce the computational cost of template matching by using run-length representations of the image and the template, and presents results in which this method yields a more than 20-fold speedup.

Journal ArticleDOI
TL;DR: The employment of multiple roots defined on the smoothed representation of the input contributes to the robustness of the method at very low signal-to-noise ratios.

Journal ArticleDOI
TL;DR: Algorithms are presented which compute border code representations for regions stored in a mesh-connected computer and which process border codes in a variety of ways.

Book ChapterDOI
05 Mar 1990
TL;DR: This note briefly examines the process of repeatedly selecting maximal sets of nonoverlapping instances of a subgraph and shows that doing so “decimates” the graph in the sense that the number of nodes shrinks exponentially; but that unfortunately, the degree of the graph may grow exponentially.
Abstract: To avoid paradoxes in parallel graph rewriting, it is desirable to forbid overlapping instances of a subgraph to be rewritten simultaneously. The selection of maximal sets of nonoverlapping instances corresponds to the selection of maximal independent sets of nodes in a derived graph. This note briefly examines the process of repeatedly selecting such sets of nodes. It shows that doing so “decimates” the graph in the sense that the number of nodes shrinks exponentially; but that unfortunately, the degree of the graph may grow exponentially.
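
A small Python sketch of the repeated-selection process: pick a maximal independent set of nodes (here by a greedy randomized pass, only a stand-in for whatever selection rule is analyzed), keep the selected nodes, and connect two survivors whenever a removed node was adjacent to both. The contraction rule is an assumed one for illustration; it shows how the node count drops while survivor degrees can grow.

```python
import random

def maximal_independent_set(adj):
    """Greedy maximal independent set of a graph given as {node: set(neighbors)}."""
    nodes = list(adj)
    random.shuffle(nodes)
    chosen, blocked = set(), set()
    for v in nodes:
        if v not in blocked:
            chosen.add(v)
            blocked |= adj[v] | {v}
    return chosen

def decimate(adj):
    """Keep only an MIS; connect two survivors if some removed node was
    adjacent to both (an assumed contraction rule)."""
    mis = maximal_independent_set(adj)
    new_adj = {v: set() for v in mis}
    for u in adj:
        if u in mis:
            continue
        parents = [v for v in adj[u] if v in mis]
        for a in parents:
            for b in parents:
                if a != b:
                    new_adj[a].add(b)
    return new_adj
```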