# Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence" in 1986

••

[...]

TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.

Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
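The simple approximate implementation mentioned in the abstract (edges marked at maxima of gradient magnitude in a Gaussian-smoothed image) can be sketched as follows. The function name, smoothing scale, and threshold are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_edges(image, sigma=2.0, threshold=0.1):
    """Mark edges at local maxima of the gradient magnitude of a
    Gaussian-smoothed image (a rough sketch of the approximate
    implementation described in the abstract)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    # Quantize gradient direction to 0/45/90/135 degrees and keep only
    # pixels that dominate both neighbors along that direction.
    angle = np.mod(np.rad2deg(np.arctan2(gy, gx)), 180.0)
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    edges = np.zeros_like(mag, dtype=bool)
    h, w = mag.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            d = min(offsets,
                    key=lambda a: min(abs(angle[y, x] - a),
                                      180 - abs(angle[y, x] - a)))
            dy, dx = offsets[d]
            if (mag[y, x] >= threshold
                    and mag[y, x] >= mag[y + dy, x + dx]
                    and mag[y, x] >= mag[y - dy, x - dx]):
                edges[y, x] = True
    return edges
```

On a synthetic vertical step, the surviving maxima cluster on the one or two columns straddling the discontinuity.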

26,639 citations

••

[...]

TL;DR: In this paper, the authors consider the edge detection problem as a numerical differentiation problem and show that numerical differentiation of images is an ill-posed problem in the sense of Hadamard.

Abstract: Edge detection is the process that attempts to characterize the intensity changes in the image in terms of the physical processes that have originated them. A critical, intermediate goal of edge detection is the detection and characterization of significant intensity changes. This paper discusses this part of the edge detection problem. To characterize the types of intensity changes derivatives of different types, and possibly different scales, are needed. Thus, we consider this part of edge detection as a problem in numerical differentiation. We show that numerical differentiation of images is an ill-posed problem in the sense of Hadamard. Differentiation needs to be regularized by a regularizing filtering operation before differentiation. This shows that this part of edge detection consists of two steps, a filtering step and a differentiation step. Following this perspective, the paper discusses in detail the following theoretical aspects of edge detection. 1) The properties of different types of filters-with minimal uncertainty, with a bandpass spectrum, and with limited support-are derived. Minimal uncertainty filters optimize a tradeoff between computational efficiency and regularizing properties. 2) Relationships among several 2-D differential operators are established. In particular, we characterize the relation between the Laplacian and the second directional derivative along the gradient. Zero crossings of the Laplacian are not the only features computed in early vision. 3) Geometrical and topological properties of the zero crossings of differential operators are studied in terms of transversality and Morse theory.
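The ill-posedness of numerical differentiation, and its cure by a regularizing filter, can be demonstrated on a 1-D signal: differentiating noisy samples directly amplifies the noise, while convolving with a derivative-of-Gaussian first stabilizes the result. The noise level and filter scale below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 1000)
noisy = np.sin(t) + 0.01 * rng.standard_normal(t.size)
h = t[1] - t[0]

# Naive finite differences: the 1/h factor blows up the noise.
raw = np.gradient(noisy, h)
# Regularized: Gaussian filtering combined with differentiation
# (order=1 convolves with the first derivative of a Gaussian).
reg = gaussian_filter1d(noisy, sigma=10, order=1) / h

true = np.cos(t)
err_raw = np.abs(raw - true).mean()
err_reg = np.abs(reg - true).mean()
```

With these settings the regularized derivative tracks cos(t) closely while the naive derivative is dominated by amplified noise.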

906 citations

••

[...]

TL;DR: The problem of finding a description, at varying levels of detail, for planar curves and matching two such descriptions is posed and solved and the result is the ``generalized scale space'' image of a planar curve which is invariant under rotation, uniform scaling and translation of the curve.

Abstract: The problem of finding a description, at varying levels of detail, for planar curves and matching two such descriptions is posed and solved in this paper. A number of necessary criteria are imposed on any candidate solution method. Path-based Gaussian smoothing techniques are applied to the curve to find zeros of curvature at varying levels of detail. The result is the ``generalized scale space'' image of a planar curve which is invariant under rotation, uniform scaling and translation of the curve. These properties make the scale space image suitable for matching. The matching algorithm is a modification of the uniform cost algorithm and finds the lowest cost match of contours in the scale space images. It is argued that this is preferable to matching in a so-called stable scale of the curve because no such scale may exist for a given curve. This technique is applied to register a Landsat satellite image of the Strait of Georgia, B.C. (manually corrected for skew) to a map containing the shorelines of an overlapping area.

894 citations

••

[...]

TL;DR: This paper proposes a general class of controlled-continuity stabilizers which provide the necessary control over smoothness in visual reconstruction problems that involve both continuous regions and discontinuities, for which global smoothness constraints fail.

Abstract: Inverse problems, such as the reconstruction problems that arise in early vision, tend to be mathematically ill-posed. Through regularization, they may be reformulated as well-posed variational principles whose solutions are computable. Standard regularization theory employs quadratic stabilizing functionals that impose global smoothness constraints on possible solutions. Discontinuities present serious difficulties to standard regularization, however, since their reconstruction requires a precise spatial control over the smoothing properties of stabilizers. This paper proposes a general class of controlled-continuity stabilizers which provide the necessary control over smoothness. These nonquadratic stabilizing functionals comprise multiple generalized spline kernels combined with (noncontinuous) continuity control functions. In the context of computational vision, they may be thought of as controlled-continuity constraints. These generic constraints are applicable to visual reconstruction problems that involve both continuous regions and discontinuities, for which global smoothness constraints fail.
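A minimal 1-D sketch of the controlled-continuity idea: a membrane (first-order) smoothness term whose continuity-control value is set to zero across a marked discontinuity, so the fit smooths noise without blurring the step. This illustrates the principle only, not the paper's general multi-kernel stabilizers; the data and weights are made up:

```python
import numpy as np

def controlled_continuity_smooth(d, c, lam=10.0):
    """Minimize sum((u - d)^2) + lam * sum(c_i * (u[i+1] - u[i])^2).
    Setting the continuity-control value c_i to 0 switches the
    smoothness term off between samples i and i+1, so a discontinuity
    there is preserved instead of smeared."""
    n = len(d)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n difference operator
    A = np.eye(n) + lam * D.T @ (np.diag(c) @ D)
    return np.linalg.solve(A, np.asarray(d, float))

# Noisy step with the discontinuity location flagged (c = 0 there).
rng = np.random.default_rng(1)
d = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.standard_normal(40)
c = np.ones(39)
c[19] = 0.0                                  # break continuity between 19 and 20
u = controlled_continuity_smooth(d, c)
```

With c[19] = 0 the two sides of the step decouple: each half is smoothed toward a near-constant fit while the full jump survives.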

875 citations

••

[...]

TL;DR: It is shown that the Gaussian probability density function is the only kernel in a broad class for which first-order maxima and minima, respectively, increase and decrease when the bandwidth of the filter is increased.

Abstract: Scale-space filtering constructs hierarchic symbolic signal descriptions by transforming the signal into a continuum of versions of the original signal convolved with a kernel containing a scale or bandwidth parameter. It is shown that the Gaussian probability density function is the only kernel in a broad class for which first-order maxima and minima, respectively, increase and decrease when the bandwidth of the filter is increased. The consequences of this result are explored when the signal, or its image under a linear differential operator, is analyzed in terms of zero-crossing contours of the transform in scale-space.
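The claimed behavior, that Gaussian smoothing does not create new extrema as the bandwidth grows, can be checked numerically on a sampled signal; the scales and test signal below are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def count_extrema(f):
    """Number of local maxima and minima of a sampled 1-D signal,
    counted as sign changes in the first difference."""
    s = np.sign(np.diff(f))
    s = s[s != 0]
    return int(np.sum(s[1:] != s[:-1]))

rng = np.random.default_rng(2)
signal = rng.standard_normal(500)
# Extrema counts at increasing Gaussian bandwidths.
counts = [count_extrema(gaussian_filter1d(signal, s)) for s in (1, 2, 4, 8, 16)]
```

For a noise signal the count drops sharply with scale, consistent with the paper's characterization of the Gaussian.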

852 citations

••

[...]

TL;DR: A critical review is given of two kinds of Fourier descriptors and a distance measure is proposed, in terms of FD's, that measures the difference between two boundary curves.

Abstract: Description or discrimination of boundary curves (shapes) is an important problem in picture processing and pattern recognition. Fourier descriptors (FD's) have interesting properties in this respect. First, a critical review is given of two kinds of FD's. Some properties of the FD's are given and a distance measure is proposed, in terms of FD's, that measures the difference between two boundary curves. It is shown how FD's can be used for obtaining skeletons of objects. Finally, experimental results are given in character recognition and machine parts recognition.
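One common normalization of Fourier descriptors (assumed here for illustration; the paper reviews two specific kinds) drops the DC term, divides by the first harmonic's magnitude, and keeps coefficient magnitudes, giving invariance to translation, scale, rotation, and starting point:

```python
import numpy as np

def fourier_descriptors(boundary, k=8):
    """Descriptor from the FFT of a closed boundary traced as complex
    numbers. Dropping F[0] removes translation, dividing by |F[1]|
    removes scale, and taking magnitudes removes rotation and
    start-point phase. A generic sketch, not the paper's exact FD's."""
    z = boundary[:, 0] + 1j * boundary[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0
    mags = np.abs(F) / np.abs(F[1])
    return mags[1:k + 1]

def fd_distance(a, b, k=8):
    """Euclidean distance between two boundaries' descriptors."""
    return np.linalg.norm(fourier_descriptors(a, k) - fourier_descriptors(b, k))

# A boundary and a rotated, scaled, translated, re-started copy of it.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
shape = np.c_[np.cos(t) + 0.3 * np.cos(3 * t), np.sin(t) + 0.3 * np.sin(3 * t)]
R = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
other = 2.5 * shape @ R.T + np.array([5.0, -3.0])
other = np.roll(other, 10, axis=0)   # different starting point
```

The distance between the two copies is numerically zero, while a genuinely different shape yields a clearly nonzero distance.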

807 citations

••

[...]

TL;DR: An implemented algorithm is described that computes the Curvature Primal Sketch by matching the multiscale convolutions of a shape, and its performance on a set of tool shapes is illustrated.

Abstract: In this paper we introduce a novel representation of the significant changes in curvature along the bounding contour of a planar shape. We call the representation the Curvature Primal Sketch because of the close analogy to the primal sketch representation advocated by Marr for describing significant intensity changes. We define a set of primitive parameterized curvature discontinuities, and derive expressions for their convolutions with the first and second derivatives of a Gaussian. We describe an implemented algorithm that computes the Curvature Primal Sketch by matching the multiscale convolutions of a shape, and illustrate its performance on a set of tool shapes. Several applications of the representation are sketched.

795 citations

••

[...]

TL;DR: In this article, the ``oriented smoothness'' constraint is introduced, which restricts variations of the displacement vector field to directions with small or no variation of gray values; this avoids the difficulties that a general smoothness requirement creates at gray value transitions corresponding to occluding contours.

Abstract: A mapping between one frame from an image sequence and the preceding or following frame can be represented as a displacement vector field. In most situations, the mere gray value variations do not provide sufficient information in order to estimate such a displacement vector field. Supplementary constraints are necessary, for example the postulate that a displacement vector field varies smoothly as a function of the image position. Taken as a general requirement, this creates difficulties at gray value transitions which correspond to occluding contours. Nagel therefore introduced the ``oriented smoothness'' requirement which restricts variations of the displacement vector field only in directions with small or no variation of gray values. This contribution reports results of an investigation about how such an ``oriented smoothness'' constraint may be formulated and evaluated.

735 citations

••

[...]

TL;DR: The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.

Abstract: This paper presents a new approach to the extraction of straight lines in intensity images. Pixels are grouped into line-support regions of similar gradient orientation, and then the structure of the associated intensity surface is used to determine the location and properties of the edge. The resulting regions and extracted edge parameters form a low-level representation of the intensity variations in the image that can be used for a variety of purposes. The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation (rather than gradient magnitude) is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.

717 citations

••

[...]

TL;DR: It is proved that in any dimension the only filter that does not create generic zero crossings as the scale increases is the Gaussian and this result can be generalized to apply to level crossings of any linear differential operator.

Abstract: We characterize some properties of the zero crossings of the Laplacian of signals (in particular images) filtered with linear filters, as a function of the scale of the filter (extending recent work by Witkin [16]). We prove that in any dimension the only filter that does not create generic zero crossings as the scale increases is the Gaussian. This result can be generalized to apply to level crossings of any linear differential operator: it applies in particular to ridges and ravines in the image intensity. In the case of the second derivative along the gradient, there is no filter that avoids creation of zero crossings, unless the filtering is performed after the derivative is applied.

697 citations

••

[...]

TL;DR: An approximate fuzzy c-means (AFCM) implementation replaces the necessary ``exact'' variates in the FCM equation with integer-valued or real-valued estimates, enabling AFCM to exploit a lookup table approach for computing Euclidean distances and for exponentiation.

Abstract: This paper reports the results of a numerical comparison of two versions of the fuzzy c-means (FCM) clustering algorithms. In particular, we propose and exemplify an approximate fuzzy c-means (AFCM) implementation based upon replacing the necessary ``exact'' variates in the FCM equation with integer-valued or real-valued estimates. This approximation enables AFCM to exploit a lookup table approach for computing Euclidean distances and for exponentiation. The net effect of the proposed implementation is that CPU time during each iteration is reduced to approximately one sixth of the time required for a literal implementation of the algorithm, while apparently preserving the overall quality of terminal clusters produced. The two implementations are tested numerically on a nine-band digital image, and a pseudocode subroutine is given for the convenience of applications-oriented readers. Our results suggest that AFCM may be used to accelerate FCM processing whenever the feature space is comprised of tuples having a finite number of integer-valued coordinates.
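For reference, a literal (exact) FCM iteration looks like the following sketch; the paper's AFCM variant would replace the distance and exponentiation computations with lookups on quantized values. The parameters and data are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Literal fuzzy c-means: alternate between computing fuzzy
    cluster centers and updating memberships
    u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)   # cluster centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (D ** p * np.sum(D ** (-p), axis=0))
    return U, V

# Two well-separated Gaussian blobs.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
U, V = fuzzy_c_means(X)
```

On this data the two recovered centers land near (0, 0) and (5, 5).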

••

[...]

TL;DR: Experimental results indicate that sum and difference histograms used conjointly are nearly as powerful as co-occurrence matrices for texture discrimination.

Abstract: The sum and difference of two random variables with the same variance are decorrelated and define the principal axes of their associated joint probability function. Therefore, sum and difference histograms are introduced as an alternative to the usual co-occurrence matrices used for texture analysis. Two maximum likelihood texture classifiers are presented depending on the type of object used for texture characterization (sum and difference histograms or some associated global measures). Experimental results indicate that sum and difference histograms used conjointly are nearly as powerful as co-occurrence matrices for texture discrimination. The advantage of the proposed texture analysis method over the conventional spatial gray level dependence method is the decrease in computation time and memory storage.
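Sum and difference histograms for a single displacement can be computed directly; this is a minimal sketch for integer-valued images with non-negative displacements, not the paper's full feature set:

```python
import numpy as np

def sum_diff_histograms(image, dy, dx, levels=256):
    """Histograms of g(y, x) + g(y+dy, x+dx) and g(y, x) - g(y+dy, x+dx)
    over all pixel pairs for one displacement (dy, dx >= 0): the
    sum/difference statistics proposed as a cheaper substitute for the
    co-occurrence matrix. Assumes gray levels in [0, levels)."""
    g = np.asarray(image, dtype=int)         # avoid uint8 overflow
    a = g[:g.shape[0] - dy, :g.shape[1] - dx]
    b = g[dy:, dx:]
    hs = np.bincount((a + b).ravel(), minlength=2 * levels - 1)
    # Shift differences by levels-1 so bin indices are non-negative.
    hd = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1)
    return hs, hd
```

Each histogram has one count per pixel pair, so both sum to the number of pairs for the chosen displacement.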

••

[...]

TL;DR: The method provides strong robustness to partial occlusions and has been integrated within a vision system coupled to an industrial robot arm, providing automatic picking and repositioning of partially overlapping industrial parts.

Abstract: A new method has been designed to identify and locate objects lying on a flat surface. The merit of the approach is to provide strong robustness to partial occlusions (due for instance to uneven lighting conditions, shadows, highlights, touching and overlapping objects) thanks to a local and compact description of the objects' boundaries and to a new fast recognition method involving generation and recursive evaluation of hypotheses named HYPER (HYpotheses Predicted and Evaluated Recursively). The method has been integrated within a vision system coupled to an industrial robot arm, to provide automatic picking and repositioning of partially overlapping industrial parts.

••

[...]

TL;DR: An approach is presented for the estimation of object motion parameters based on a sequence of noisy images that may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images are available.

Abstract: An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.

••

[...]

TL;DR: A system is presented that takes a gray level image as input, locates edges with subpixel accuracy, and links them into lines; the zero-crossings obtained from the full resolution image using a space constant σ for the Gaussian and those obtained from the 1/n resolution image using σ/n are noted to be very similar, although the processing times are very different.

Abstract: We present a system that takes a gray level image as input, locates edges with subpixel accuracy, and links them into lines. Edges are detected by finding zero-crossings in the convolution of the image with Laplacian-of-Gaussian (LoG) masks. The implementation differs markedly from M.I.T.'s as we decompose our masks exactly into a sum of two separable filters instead of the usual approximation by a difference of two Gaussians (DOG). Subpixel accuracy is obtained through the use of the facet model [1]. We also note that the zero-crossings obtained from the full resolution image using a space constant σ for the Gaussian, and those obtained from the 1/n resolution image with 1/n pixel accuracy and a space constant of σ/n for the Gaussian, are very similar, but the processing times are very different. Finally, these edges are grouped into lines using the technique described in [2].
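Pixel-level zero-crossing detection on a LoG-filtered image can be sketched as follows; the described system refines these locations to subpixel accuracy with the facet model, which this sketch omits:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma=2.0):
    """Zero crossings of the Laplacian-of-Gaussian response: a pixel is
    marked where the response changes sign against its right or lower
    neighbor. Pixel-level only; no subpixel refinement."""
    r = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros_like(r, dtype=bool)
    zc[:, :-1] |= (r[:, :-1] * r[:, 1:]) < 0   # horizontal sign changes
    zc[:-1, :] |= (r[:-1, :] * r[1:, :]) < 0   # vertical sign changes
    return zc
```

On a synthetic vertical step the crossings line up on the column straddling the discontinuity.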

••

[...]

TL;DR: In this paper, a series of one-dimensional surfaces is fit to each window, and the surface description that is adequate in the least-squares sense and has the fewest parameters is accepted.

Abstract: An edge in an image corresponds to a discontinuity in the intensity surface of the underlying scene. It can be approximated by a piecewise straight curve composed of edgels, i.e., short, linear edge-elements, each characterized by a direction and a position. The approach to edgel detection here is to fit a series of one-dimensional surfaces to each window (kernel of the operator) and accept the surface description which is adequate in the least-squares sense and has the fewest parameters. (A one-dimensional surface is one which is constant along some direction.) The tanh is an adequate basis for the step-edge, and its combinations are adequate for the roof-edge and the line-edge. The proposed method of step-edgel detection is robust with respect to noise: for (step-size/σ_noise) ≥ 2.5, it has subpixel position localization (σ_position < ½) and an angular localization better than 10°; further, it is designed to be insensitive to smooth shading. These results are demonstrated by some simple analysis, statistical data, and edgel images. Also included is a comparison of performance on a real image with a typical operator (Difference-of-Gaussians). The results indicate that the proposed operator is superior with respect to detection, localization, and resolution.
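In the spirit of the surface-fitting approach, a tanh step model can be fit to a 1-D intensity profile by least squares; this is a scipy sketch, not the paper's windowed operator, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_step(x, a, b, x0, w):
    """Step-edge model: base level a, step height b, subpixel edge
    position x0, and transition width w."""
    return a + b * 0.5 * (1.0 + np.tanh((x - x0) / w))

# Noisy 1-D step profile with a known subpixel edge position.
rng = np.random.default_rng(4)
x = np.arange(0, 20, dtype=float)
y = tanh_step(x, 1.0, 3.0, 9.3, 1.2) + 0.05 * rng.standard_normal(x.size)

# Least-squares fit recovers the edge position between pixels.
popt, _ = curve_fit(tanh_step, x, y, p0=[0.0, 1.0, 10.0, 1.0])
a, b, x0, w = popt
```

At this signal-to-noise ratio the recovered position x0 lands well within a pixel of the true value 9.3.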

••

[...]

TL;DR: This paper develops multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow and examines the efficiency of these algorithms using synthetic image inputs, and describes the multigrid methodology that is broadly applicable in early vision.

Abstract: Image analysis problems, posed mathematically as variational principles or as partial differential equations, are amenable to numerical solution by relaxation algorithms that are local, iterative, and often parallel. Although they are well suited structurally for implementation on massively parallel, locally interconnected computational architectures, such distributed algorithms are seriously handicapped by an inherent inefficiency at propagating constraints between widely separated processing elements. Hence, they converge extremely slowly when confronted by the large representations of early vision. Application of multigrid methods can overcome this drawback, as we showed in previous work on 3-D surface reconstruction. In this paper, we develop multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow, and we examine the efficiency of these algorithms using synthetic image inputs. The multigrid methodology that we describe is broadly applicable in early vision. Notably, it is an appealing strategy to use in conjunction with regularization analysis for the efficient solution of a wide range of ill-posed image analysis problems.

••

[...]

TL;DR: A new model-based approach for texture classification which is rotation invariant, i.e., the recognition accuracy is not affected if the orientation of the test texture is different from the orientation of the training samples.

Abstract: This paper presents a new model-based approach for texture classification which is rotation invariant, i.e., the recognition accuracy is not affected if the orientation of the test texture is different from the orientation of the training samples. The method uses three statistical features, two of which are obtained from a new parametric model of the image called a ``circular symmetric autoregressive model.'' Two of the proposed features have physical interpretation in terms of the roughness and directionality of the texture. The results of several classification experiments on differently oriented samples of natural textures including both microtextures and macrotextures are presented.

••

[...]

TL;DR: It is shown that a quaternion representation of rotation yields constraints which are purely algebraic in a seven-dimensional space, which greatly simplifies computation of collision points, and allows us to derive an efficient exact intersection test for an object which is translating and rotating among obstacles.

Abstract: We consider the collision-detection problem for a three-dimensional solid object moving among polyhedral obstacles. The configuration space for this problem is six-dimensional, and the traditional representation of the space uses three translational parameters and three angles (typically Euler angles). The constraints between the object and obstacles then involve trigonometric functions. We show that a quaternion representation of rotation yields constraints which are purely algebraic in a seven-dimensional space. By simple manipulation, the constraints may be projected down into a six-dimensional space with no increase in complexity. The algebraic form of the constraints greatly simplifies computation of collision points, and allows us to derive an efficient exact intersection test for an object which is translating and rotating among obstacles.
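The algebraic character of quaternion rotation is easy to see in code: once the unit quaternion is formed, rotating a point involves only multiplications and additions, with no trigonometric functions, which is what makes the constraints polynomial. A minimal sketch:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions (w, x, y, z): purely algebraic."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def rotate(point, q):
    """Rotate a 3-D point by unit quaternion q via q * p * conj(q)."""
    p = np.array([0.0, *point])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# A 90-degree rotation about the z axis; trigonometry is needed only
# once, to build the quaternion, not in the rotation itself.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
```

For example, this q maps the x axis onto the y axis, matching the corresponding rotation matrix.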

••

[...]

TL;DR: This work presents an algorithm that finds a piecewise linear curve with the minimal number of segments required to approximate a curve within a uniform error with fixed initial and final points.

Abstract: Two-dimensional digital curves are often uniformly approximated by polygons or piecewise linear curves. Several algorithms have been proposed in the literature to find such curves. We present an algorithm that finds a piecewise linear curve with the minimal number of segments required to approximate a curve within a uniform error with fixed initial and final points. We compare our optimal algorithm to several suboptimal algorithms with respect to the number of linear segments required in the approximation and the execution time of the algorithm.

••

[...]

TL;DR: Test results indicate the ability of the technique developed in this work to recognize partially occluded objects, and processing-speed measurements show that the method is fast in the recognition mode.

Abstract: In this paper, a method of classifying objects is reported that is based on the use of autoregressive (AR) model parameters which represent the shapes of boundaries detected in digitized binary images of the objects. The object identification technique is insensitive to object size and orientation. Three pattern recognition algorithms that assign object names to unlabelled sets of AR model parameters were tested and the results compared. Isolated object tests were performed on five sets of shapes, including eight industrial shapes (mostly taken from the recognition literature), and recognition accuracies of 100 percent were obtained for all pattern sets at some model order in the range 1 to 10. Test results indicate the ability of the technique developed in this work to recognize partially occluded objects. Processing-speed measurements show that the method is fast in the recognition mode. The results of a number of object recognition tests are presented. The recognition technique was realized with Fortran programs, Imaging Technology, Inc. image-processing boards, and a PDP 11/60 computer. The computer algorithms are described.

••

[...]

TL;DR: A new module is described here which unifies stereo and motion analysis in a manner in which each helps to overcome the other's shortcomings; one result points to the importance of the ratio of the rate of change of disparity to disparity, and its possible role in establishing stereo correspondence.

Abstract: The analyses of visual data by stereo and motion modules have typically been treated as separate parallel processes which both feed a common viewer-centered 2.5-D sketch of the scene. When acting separately, stereo and motion analyses are subject to certain inherent difficulties; stereo must resolve a combinatorial correspondence problem and is further complicated by the presence of occluding boundaries, motion analysis involves the solution of nonlinear equations and yields a 3-D interpretation specified up to an undetermined scale factor. A new module is described here which unifies stereo and motion analysis in a manner in which each helps to overcome the other's shortcomings. One important result is a correlation between relative image flow (i.e., binocular difference flow) and stereo disparity; it points to the importance of the ratio δ̇/δ of the rate of change of disparity δ̇ to the disparity δ, and its possible role in establishing stereo correspondence. The importance of such ratios was first pointed out by Richards [19]. Our formulation may reflect the human perception channel probed by Regan and Beverley [18].

••

[...]

TL;DR: This paper presents a powerful image understanding system that utilizes a semantic-syntactic (or attributed-symbolic) representation scheme in the form of attributed relational graphs (ARG's) for comprehending the global information contents of images.

Abstract: This paper presents a powerful image understanding system that utilizes a semantic-syntactic (or attributed-symbolic) representation scheme in the form of attributed relational graphs (ARG's) for comprehending the global information contents of images. Nodes in the ARG represent the global image features, while the relations between those features are represented by attributed branches between their corresponding nodes. The extraction of ARG representation from images is achieved by a multilayer graph transducer scheme. This scheme is basically a rule-based system that uses a combination of model-driven and data-driven concepts in performing a hierarchical symbolic mapping of the image information content from the spatial-domain representation into a global representation. Further analysis and interpretation of the imagery data is performed on the extracted ARG representation. A distance measure between images is defined in terms of the distance between their respective ARG representations. The distance between two ARG's and the inexact matching of their respective components are calculated by an efficient dynamic programming technique. The system handles noise, distortion, and ambiguity in real-world images by two means, namely, through modeling and embedding them into the transducer's mapping rules, as well as through the appropriate cost of error-transformation for the inexact matching of the ARG image representation. Two illustrative experiments are presented to demonstrate some capabilities of the proposed system. Experiment I deals with locating objects in multiobject scenes, while Experiment II is concerned with target detection in SAR images.

••

[...]

TL;DR: The problem of grammatical inference is introduced, its potential engineering applications are demonstrated, and inference algorithms for finite-state and context-free grammars are presented.

Abstract: Inference of high-dimensional grammars is discussed. Specifically, techniques for inferring tree grammars are briefly presented. The problem of inferring a stochastic grammar to model the behavior of an information source is also introduced and techniques for carrying out the inference process are presented for a class of stochastic finite-state and context-free grammars. The possible practical application of these methods is illustrated by examples.

••

[...]

TL;DR: This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing that identifies and isolates the coronary vessels while ignoring any nonvessel structures which may have arisen from noise, variations in background contrast, imperfect subtraction, and irrelevant anatomical detail.

Abstract: This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing. Given a subtracted digital angiogram of the chest, ANGY identifies and isolates the coronary vessels, while ignoring any nonvessel structures which may have arisen from noise, variations in background contrast, imperfect subtraction, and irrelevant anatomical detail. The overall system is modularized into three stages: the preprocessing stage and the two stages embodied in the expert itself. In the preprocessing stage, low-level image processing routines written in C are used to create a segmented representation of the input image. These routines are applied sequentially. The expert system is rule-based and is written in OPS5 and LISP. It is separated into two stages: The low-level image processing stage embodies a domain-independent knowledge of segmentation, grouping, and shape analysis. Working with both edges and regions, it determines such relations as parallel and adjacent and attempts to refine the segmentation begun by the preprocessing. The high-level medical stage embodies a domain-dependent knowledge of cardiac anatomy and physiology. Applying this knowledge to the objects and relations determined in the preceding two stages, it identifies those objects which are vessels and eliminates all others.

••

[...]

TL;DR: The necessary techniques for optimal local parameter estimation and primitive boundary or surface type recognition for each small patch of data are developed, and these inaccurate locally derived parameter estimates are then optimally combined to arrive at a roughly globally optimum object-position estimate.

Abstract: New asymptotic methods are introduced that permit computationally simple Bayesian recognition and parameter estimation for many large data sets described by a combination of algebraic, geometric, and probabilistic models. The techniques introduced permit controlled decomposition of a large problem into small problems for separate parallel processing where maximum likelihood estimation or Bayesian estimation or recognition can be realized locally. These results can be combined to arrive at globally optimum estimation or recognition. The approach is applied to the maximum likelihood estimation of 3-D complex-object position. To this end, the surface of an object is modeled as a collection of patches of primitive quadrics, i.e., planar, cylindrical, and spherical patches, possibly augmented by boundary segments. The primitive surface-patch models are specified by geometric parameters, reflecting location, orientation, and dimension information. The object-position estimation is based on sets of range data points, each set associated with an object primitive. Probability density functions are introduced that model the generation of range measurement points. This entails the formulation of a noise mechanism in three-space accounting for inaccuracies in the 3-D measurements and possibly for inaccuracies in the 3-D modeling. We develop the necessary techniques for optimal local parameter estimation and primitive boundary or surface type recognition for each small patch of data, and then optimal combining of these inaccurate locally derived parameter estimates in order to arrive at roughly globally optimum object-position estimation.
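The local estimation step for a planar primitive can be sketched as a total-least-squares plane fit, which is the maximum-likelihood estimate under isotropic Gaussian measurement noise. This is a minimal sketch with hypothetical synthetic range data, not the paper's full Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical range-data patch: points sampled from the plane z = 2
# with small Gaussian measurement noise, standing in for one
# primitive planar surface patch.
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 2.0 + rng.normal(scale=0.01, size=200)
pts = np.column_stack([xy, z])

# ML plane fit under isotropic Gaussian noise: the plane passes
# through the centroid, and its normal is the direction of least
# variance (smallest right singular vector of the centered data).
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[-1]  # unit normal of the fitted plane
```

In the paper's framework, many such local patch estimates (each with its own uncertainty) are then fused to obtain the global object-position estimate.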

••

[...]

TL;DR: A combined syntactic-semantic approach based on attributed grammars is suggested, intended to be an initial step toward unification of syntactic and statistical approaches to pattern recognition.

Abstract: The problem of pattern recognition is discussed in terms of single-entity representation versus multiple-entity representation. A combined syntactic-semantic approach based on attributed grammars is suggested. Syntax-semantics tradeoff in pattern representation is demonstrated. This approach is intended to be an initial step toward unification of syntactic and statistical approaches to pattern recognition.
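The syntactic-semantic combination can be illustrated with a toy attributed production (not taken from the paper): syntax says two segments may form a line, while attached semantic attributes carry numeric properties and impose a constraint.

```python
def combine_segments(s1, s2, angle_tol=0.1):
    """Toy attributed production  Line -> Segment Segment.

    Syntax: two segments may combine into a line.
    Semantics: a constraint requires near-equal orientations
    (attributes 'angle', in radians), and the synthesized 'length'
    attribute is the sum of the parts. Returns None if the semantic
    constraint fails, i.e. the production does not apply.
    """
    if abs(s1["angle"] - s2["angle"]) > angle_tol:
        return None
    return {"angle": (s1["angle"] + s2["angle"]) / 2,
            "length": s1["length"] + s2["length"]}
```

The attribute values (lengths, angles) are where the statistical/numeric side enters; the production structure is the syntactic side. The thresholded constraint here is a simplification of how such semantic checks are usually expressed.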

••

[...]

TL;DR: This paper describes an approach to implementing a Gaussian Pyramid which requires approximately two addition operations per pixel, per level, per dimension, and examines tradeoffs in choosing an algorithm for Gaussian filtering.

Abstract: Gaussian filtering is an important tool in image processing and computer vision. In this paper we discuss the background of Gaussian filtering and look at some methods for implementing it. Consideration of the central limit theorem suggests using a cascade of "simple" filters as a means of computing Gaussian filters. Among "simple" filters, uniform-coefficient finite-impulse-response digital filters are especially economical to implement. The idea of cascaded uniform filters has been around for a while [13], [16]. We show that this method is economical to implement, has good filtering characteristics, and is appropriate for hardware implementation. We point out an equivalence to one of Burt's methods [1], [3] under certain circumstances. As an extension, we describe an approach to implementing a Gaussian pyramid which requires approximately two addition operations per pixel, per level, per dimension. We examine tradeoffs in choosing an algorithm for Gaussian filtering, and finally discuss an implementation.
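The central-limit-theorem idea is easy to demonstrate: repeatedly applying a uniform (box) filter converges to Gaussian smoothing. A minimal 1-D sketch of the principle (the paper's actual implementation uses running sums for economy, which this sketch does not attempt):

```python
import numpy as np

def box_filter(signal, width):
    # Uniform-coefficient FIR filter: a normalized running mean.
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def cascaded_gaussian(signal, width=5, passes=3):
    # By the central limit theorem, cascading box filters approaches
    # Gaussian smoothing; three passes is already a close fit, with
    # effective variance passes * (width**2 - 1) / 12 samples.
    for _ in range(passes):
        signal = box_filter(signal, width)
    return signal
```

Filtering a unit impulse shows the kernel: one pass gives a flat box, two a triangle, and three a smooth bell close to a Gaussian. A box filter needs only one add and one subtract per sample when implemented as a running sum, which is the source of the paper's two-additions-per-pixel figure.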

••

[...]

TL;DR: A critical evaluation of the partitioning problem is offered, noting the extent to which it has distinct formulations and parameterizations, and it is argued that any effective technique must satisfy two general principles.

Abstract: In this paper we offer a critical evaluation of the partitioning (perceptual organization) problem, noting the extent to which it has distinct formulations and parameterizations. We show that most partitioning techniques can be characterized as variations of four distinct paradigms, and argue that any effective technique must satisfy two general principles. We give concrete substance to our general discussion by introducing new partitioning techniques for planar geometric curves, and present experimental results demonstrating their effectiveness.
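As a concrete instance of partitioning a planar geometric curve, one simple heuristic (illustrative only, not the paper's own technique) is to break a polyline wherever the turning angle between successive segments exceeds a threshold:

```python
import numpy as np

def partition_at_corners(points, angle_thresh=0.5):
    # Break a planar polyline at high-curvature points: report every
    # interior vertex whose turning angle (radians) between the
    # incoming and outgoing segments exceeds angle_thresh.
    pts = np.asarray(points, dtype=float)
    breaks = []
    for i in range(1, len(pts) - 1):
        a = pts[i] - pts[i - 1]
        b = pts[i + 1] - pts[i]
        cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if np.arccos(np.clip(cosang, -1.0, 1.0)) > angle_thresh:
            breaks.append(i)
    return breaks
```

On an L-shaped polyline such as (0,0)-(1,0)-(2,0)-(2,1)-(2,2), the only break falls at the right-angle corner. A fixed threshold is exactly the kind of parameterization the paper's critical evaluation examines.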

••

[...]

TL;DR: The basic concept of learning control is introduced, and the following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models.

Abstract: The basic concept of learning control is introduced. The following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models. Potential applications and problems for further research in learning control are outlined.
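Of the five schemes, stochastic approximation has the most compact illustration: the Robbins-Monro procedure finds the setting x at which an unknown response M(x) equals a target, using only noisy measurements. A minimal sketch with a hypothetical linear response M(x) = 2x:

```python
import random

def robbins_monro(noisy_measure, target, x0=0.0, steps=2000):
    # Robbins-Monro stochastic approximation: drive x toward the
    # root of M(x) = target using noisy observations y = M(x) + noise
    # and decreasing gains a_n = 1/n (which satisfy the classical
    # conditions sum a_n = inf, sum a_n**2 < inf).
    x = x0
    for n in range(1, steps + 1):
        y = noisy_measure(x)
        x = x + (1.0 / n) * (target - y)
    return x

random.seed(0)
# Hypothetical plant response M(x) = 2x with additive Gaussian noise;
# the procedure should converge near x = 2, where M(x) = 4.
root = robbins_monro(lambda x: 2.0 * x + random.gauss(0.0, 0.1), target=4.0)
```

In a learning-control setting, x would be a controller parameter and M(x) a noisy performance measurement, so the same update rule tunes the controller online without a model of the plant.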