# Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence in 1987"

[...]

TL;DR: An algorithm for finding the least-squares solution of R and T, which is based on the singular value decomposition (SVD) of a 3 × 3 matrix, is presented.

Abstract: Two point sets {p_i} and {p'_i}, i = 1, 2, ..., N, are related by p'_i = R p_i + T + N_i, where R is a rotation matrix, T a translation vector, and N_i a noise vector. Given {p_i} and {p'_i}, we present an algorithm for finding the least-squares solution of R and T, which is based on the singular value decomposition (SVD) of a 3 × 3 matrix. This new algorithm is compared to two earlier algorithms with respect to computer time requirements.

3,862 citations
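The SVD approach described above can be sketched in a few lines; the function name and the reflection guard are illustrative choices, not the paper's notation.

```python
import numpy as np

def fit_rigid(p, q):
    """Least-squares R, T with q_i ≈ R p_i + T, via SVD of a 3x3 matrix."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)            # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cq - R @ cp
    return R, T
```

Centering removes T from the problem; the rotation then comes from the SVD of the 3 × 3 cross-covariance alone, which is what makes the method cheap.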

[...]

TL;DR: The tutorial provided in this paper reviews both binary morphology and gray scale morphology, covering the operations of dilation, erosion, opening, and closing and their relations.

Abstract: For the purposes of object or defect identification required in industrial vision applications, the operations of mathematical morphology are more useful than the convolution operations employed in signal processing because the morphological operators relate directly to shape. The tutorial provided in this paper reviews both binary morphology and gray scale morphology, covering the operations of dilation, erosion, opening, and closing and their relations. Examples are given for each morphological concept and explanations are given for many of their interrelationships.

2,461 citations
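The four operations the tutorial covers can be illustrated with a minimal binary sketch; it assumes 0/1 arrays and a symmetric, odd-sized structuring element (so correlation and convolution coincide), which is a simplification of the general theory.

```python
import numpy as np

def dilate(img, se):
    # output is 1 where the structuring element, centered at a pixel,
    # overlaps any foreground pixel
    H, W = img.shape
    h, w = se.shape
    pad = np.zeros((H + h - 1, W + w - 1), dtype=img.dtype)
    pad[h // 2:h // 2 + H, w // 2:w // 2 + W] = img
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.any(pad[i:i + h, j:j + w] & se)
    return out

def erode(img, se):
    # output is 1 where the structuring element fits entirely in the foreground
    H, W = img.shape
    h, w = se.shape
    pad = np.zeros((H + h - 1, W + w - 1), dtype=img.dtype)
    pad[h // 2:h // 2 + H, w // 2:w // 2 + W] = img
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.all(pad[i:i + h, j:j + w][se.astype(bool)])
    return out

def opening(img, se):
    return dilate(erode(img, se), se)   # removes features smaller than se

def closing(img, se):
    return erode(dilate(img, se), se)   # fills gaps smaller than se
```

The opening/closing definitions show the relations the tutorial stresses: each is one operation followed by its dual.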

[...]

TL;DR: This paper presents random field models for noisy and textured image data based upon a hierarchy of Gibbs distributions, and presents dynamic programming based segmentation algorithms for noisy and textured images, considering a statistical maximum a posteriori (MAP) criterion.

Abstract: This paper presents a new approach to the use of Gibbs distributions (GD) for modeling and segmentation of noisy and textured images. Specifically, the paper presents random field models for noisy and textured image data based upon a hierarchy of GD. It then presents dynamic programming based segmentation algorithms for noisy and textured images, considering a statistical maximum a posteriori (MAP) criterion. Due to computational concerns, however, sub-optimal versions of the algorithms are devised through simplifying approximations in the model. Since model parameters are needed for the segmentation algorithms, a new parameter estimation technique is developed for estimating the parameters in a GD. Finally, a number of examples are presented which show the usefulness of the Gibbsian model and the effectiveness of the segmentation algorithms and the parameter estimation procedures.

1,092 citations

[...]

TL;DR: In this paper, the authors examined a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems, which can be used to make reliable depth maps of useful accuracy with relatively minimal computation.

Abstract: This paper examines a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems. Previously, autofocus schemes have used depth of field to measure depth by searching for the lens setting that gives the best focus, repeating this search separately for each image point. This search is unnecessary, for there is a smooth gradient of focus as a function of depth. By measuring the amount of defocus, therefore, we can estimate depth simultaneously at all points, using only one or two images. It is proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation. Experiments with realistic imagery show that measurement of these optical gradients can provide depth information roughly comparable to stereo disparity or motion parallax, while avoiding image-to-image matching problems.

963 citations
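The focal gradient rests on thin-lens geometry: a point away from the focused distance images as a blur circle whose diameter varies smoothly with depth, so measuring blur inverts to depth. The sketch below uses the standard blur-circle relation with generic names (u = object distance, f = focal length, v = sensor distance, A = aperture diameter), not necessarily the paper's notation.

```python
def blur_circle_diameter(u, f, v, A):
    # thin lens: a point at distance u focuses at v_point behind the lens;
    # by similar triangles the blur-circle diameter on the sensor grows
    # with the sensor's offset from v_point
    v_point = 1.0 / (1.0 / f - 1.0 / u)
    return A * abs(v - v_point) / v_point
```

Because the relation is smooth and monotonic on either side of the focal plane, one or two defocus measurements per pixel suffice to estimate depth without a per-pixel focus search.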

[...]

TL;DR: This correspondence discusses an extension of the method to cover both translational and rotational movements, characterized by an outstanding robustness against correlated noise and disturbances, such as those encountered with nonuniform, time varying illumination.

Abstract: A well-known method for image registration is based on a conventional correlation between phase-only, or whitened, versions of the two images to be realigned. The method, covering rigid translational movements, is characterized by an outstanding robustness against correlated noise and disturbances, such as those encountered with nonuniform, time varying illumination. This correspondence discusses an extension of the method to cover both translational and rotational movements.

792 citations
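The translational (phase-only) method being extended can be sketched as follows; the sign convention and the small epsilon guarding division are implementation choices of this sketch.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) shift such that b ≈ a cyclically shifted."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # whiten: keep only spectral phase
    corr = np.fft.ifft2(cross).real         # a sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Whitening is what buys the robustness the abstract mentions: correlated disturbances concentrate in magnitude, which is discarded, while the shift lives entirely in phase.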

[...]

TL;DR: This approach allows an efficient and natural way to construct iconic indexes for pictures; the paper also proves necessary and sufficient conditions that characterize ambiguous pictures for reduced 2-D strings as well as normal 2-D strings.

Abstract: In this paper, we describe a new way of representing a symbolic picture by a two-dimensional string. A picture query can also be specified as a 2-D string. The problem of pictorial information retrieval then becomes a problem of 2-D subsequence matching. We present algorithms for encoding a symbolic picture into its 2-D string representation, reconstructing a picture from its 2-D string representation, and matching a 2-D string with another 2-D string. We also prove the necessary and sufficient conditions to characterize ambiguous pictures for reduced 2-D strings as well as normal 2-D strings. This approach thus allows an efficient and natural way to construct iconic indexes for pictures.

674 citations
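A toy encoder conveys the idea of reading a symbolic picture off along each axis. This simplified sketch uses '=' for symbols sharing a coordinate and '<' for a strict increase, a convention that differs in detail from the paper's exact formalism.

```python
def axis_string(symbols, axis):
    # order the symbols along one axis; ties sorted by name for determinism
    items = sorted(symbols, key=lambda s: (s[axis], s[0]))
    parts, prev = [], None
    for s in items:
        if prev is not None:
            parts.append('<' if s[axis] > prev else '=')
        parts.append(s[0])
        prev = s[axis]
    return ''.join(parts)

def two_d_string(symbols):
    """2-D string (u, v) of a symbolic picture given as (name, x, y) tuples."""
    return axis_string(symbols, 1), axis_string(symbols, 2)
```

With the picture reduced to two 1-D strings, a pictorial query becomes 2-D subsequence matching on (u, v), which is the retrieval mechanism the abstract describes.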

[...]

TL;DR: This correspondence illustrates the ideas of the Adaptive Hough Transform, AHT, by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces and shows that the method is robust to the addition of extraneous noise.

Abstract: We introduce the Adaptive Hough Transform, AHT, as an efficient way of implementing the Hough Transform, HT, method for the detection of 2-D shapes. The AHT uses a small accumulator array and the idea of a flexible iterative "coarse to fine" accumulation and search strategy to identify significant peaks in the Hough parameter spaces. The method is substantially superior to the standard HT implementation in both storage and computational requirements. In this correspondence we illustrate the ideas of the AHT by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces. We show that the method is robust to the addition of extraneous noise and can be used to analyze complex images containing more than one shape.

671 citations
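The accumulation step underlying the AHT can be sketched as a plain fixed-resolution Hough transform for lines; the adaptive part would rerun this on a small accumulator over successively narrowed (theta, rho) ranges around the current peak. Parameter names and resolutions below are illustrative.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=200.0):
    # accumulate votes for lines rho = x*cos(theta) + y*sin(theta)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.nonzero(ok)[0], idx[ok]] += 1    # one vote per theta per point
    return acc, thetas
```

The AHT's saving is that n_theta and n_rho stay small: instead of one huge accumulator, the same loop is iterated coarse-to-fine with the parameter window shrinking around each significant peak.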

[...]

TL;DR: The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on distances between faces, angles between face normals, and angles of vectors between sensed points.

Abstract: This paper discusses how local measurements of positions and surface normals may be used to identify and locate overlapping objects. The objects are modeled as polyhedra (or polygons) having up to six degrees of positional freedom relative to the sensors. The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. The method described here is an extension of a method for recognition and localization of nonoverlapping parts previously described in [18] and [15].

535 citations

[...]

Mie University

TL;DR: Two types of modified quadratic discriminant functions (MQDF1, MQDF2) which are less sensitive to the estimation error of the covariance matrices are proposed.

Abstract: Issues in the quadratic discriminant functions (QDF) are discussed and two types of modified quadratic discriminant functions (MQDF1, MQDF2) which are less sensitive to the estimation error of the covariance matrices are proposed. The MQDF1 is a function which employs a kind of a (pseudo) Bayesian estimate of the covariance matrix instead of the maximum likelihood estimate ordinarily used in the QDF. The MQDF2 is a variation of the MQDF1 to save the required computation time and storage. Two discriminant functions were applied to Chinese character recognition to evaluate their effectiveness, and remarkable improvement was observed in their performance.

529 citations
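A regularized quadratic discriminant conveys the flavor of the modification: stabilize the covariance before inverting it. This sketch uses simple shrinkage toward h²·I; the paper's actual pseudo-Bayesian estimate differs in its exact form.

```python
import numpy as np

def mqdf_like_score(x, mean, cov, h2):
    # shrink the sample covariance toward h2*I so that small, noisy
    # eigenvalues do not dominate the inverse; lower score = better match
    S = cov + h2 * np.eye(len(mean))
    d = x - mean
    _, logdet = np.linalg.slogdet(S)
    return d @ np.linalg.solve(S, d) + logdet
```

With many classes and high-dimensional features (as in Chinese character recognition), this kind of tempering is what keeps the quadratic term from amplifying covariance estimation error.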

[...]

TL;DR: This work formulates the correspondence problem as an optimization problem and proposes an iterative algorithm to find trajectories of points in a monocular image sequence and demonstrates the efficacy of this approach considering synthetic, laboratory, and real scenes.

Abstract: Identifying the same physical point in more than one image, the correspondence problem, is vital in motion analysis. Most research for establishing correspondence uses only two frames of a sequence to solve this problem. By using a sequence of frames, it is possible to exploit the fact that due to inertia the motion of an object cannot change instantaneously. By using smoothness of motion, it is possible to solve the correspondence problem for arbitrary motion of several nonrigid objects in a scene. We formulate the correspondence problem as an optimization problem and propose an iterative algorithm to find trajectories of points in a monocular image sequence. A modified form of this algorithm is useful in case of occlusion also. We demonstrate the efficacy of this approach considering synthetic, laboratory, and real scenes.

496 citations

[...]

TL;DR: It is shown that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction, which is of vital interest in several applications.

Abstract: Edge detection in a gray-scale image at a fine resolution typically yields noise and unnecessary detail, whereas edge detection at a coarse resolution distorts edge contours. We show that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction. This is of vital interest in several applications. Junctions of different kinds are in this way restored with high precision, which is a basic requirement when performing (projective) geometric analysis of an image for the purpose of restoring the three-dimensional scene. Segmentation of a scene using geometric clues like parallelism, etc., is also facilitated by the algorithm, since unnecessary detail has been filtered away. There are indications that an extension of the focusing algorithm can classify edges, to some extent, into the categories diffuse and nondiffuse (for example diffuse illumination edges). The edge focusing algorithm contains two parameters, namely the coarseness of the resolution in the blurred image from where we start the focusing procedure, and a threshold on the gradient magnitude at this coarse level. The latter parameter seems less critical for the behavior of the algorithm and is not present in the focusing part, i.e., at finer resolutions. The step length of the scale parameter in the focusing scheme has been chosen so that edge elements do not move more than one pixel per focusing step.

[...]

TL;DR: A procedure to detect connected planar, convex, and concave surfaces of 3-D objects is presented; it first segments the range image into surface patches by a square error criterion clustering algorithm using surface points and associated surface normals.

Abstract: The recognition of objects in three-dimensional space is a desirable capability of a computer vision system. Range images, which directly measure 3-D surface coordinates of a scene, are well suited for this task. In this paper we report a procedure to detect connected planar, convex, and concave surfaces of 3-D objects. This is accomplished in three stages. The first stage segments the range image into "surface patches" by a square error criterion clustering algorithm using surface points and associated surface normals. The second stage classifies these patches as planar, convex, or concave based on a non-parametric statistical test for trend, curvature values, and eigenvalue analysis. In the final stage, boundaries between adjacent surface patches are classified as crease or noncrease edges, and this information is used to merge compatible patches to produce reasonable faces of the object(s). This procedure has been successfully applied to a large number of real and synthetic images, four of which we present in this paper.

[...]

TL;DR: A novel strategy for rapid acquisition of the range map of a scene employing color-encoded structured light, which for the first time makes it possible to acquire high-resolution range data in real time at modest cost.

Abstract: In this paper, we discuss a novel strategy for rapid acquisition of the range map of a scene employing color-encoded structured light. This technique offers several potential advantages including increased speed and improved accuracy. In this approach we illuminate the scene with a single encoded grid of colored light stripes. The indexing problem, that of matching a detected image plane stripe with its position in the projection grid, is solved from a knowledge of the color grid encoding. In fact, the possibility exists for the first time to acquire high-resolution range data in real time for modest cost, since only a single projection and single color image are required. Grid to grid alignment problems associated with previous multistripe techniques are eliminated, as is the requirement for dark interstices between grid stripes. Scene illumination is more uniform, simplifying the stripe detection problem, and mechanical difficulties associated with the equipment design are significantly reduced.

[...]

TL;DR: Analysis and examples indicate that FIR-median hybrid filters preserve details better and are computationally much more efficient than the conventional median and the K-nearest neighbor averaging filters.

Abstract: A new class of median type filters for image processing is proposed. In the filters, linear FIR substructures are used in conjunction with the median operation. The root signals and noise attenuation properties of the FIR-median hybrid filters are analyzed and compared to representative edge preserving filtering operations. The concept of multilevel median operation is introduced to improve the detail preserving property of conventional median and the FIR-median hybrid filters. In the multilevel filters there exists a tradeoff between noise attenuation and detail preservation. The analysis and examples indicate that FIR-median hybrid filters preserve details better and are computationally much more efficient than the conventional median and the K-nearest neighbor averaging filters.
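The basic structure can be sketched in one dimension: at each sample, take the median of a left FIR average, the sample itself, and a right FIR average. This is a minimal sketch with a single window parameter k of my choosing; the paper's multilevel variants stack several such substructures.

```python
import numpy as np

def fmh_filter(x, k):
    """Basic FIR-median hybrid filter, 1-D sketch."""
    x = np.asarray(x, dtype=float)
    y = x.copy()                              # boundaries passed through
    for i in range(k, len(x) - k):
        left = x[i - k:i].mean()              # FIR average of k left samples
        right = x[i + 1:i + 1 + k].mean()     # FIR average of k right samples
        y[i] = np.median([left, x[i], right])
    return y
```

The median over the two averages and the center sample is what preserves step edges exactly while still averaging noise inside flat regions, and it costs far less than a wide median window.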

[...]

TL;DR: This paper examines the sources of errors for gradient-based techniques that locally solve for optical flow under the assumption that it is constant in a small neighborhood, and the consequences of violating this assumption.

Abstract: Multiple views of a scene can provide important information about the structure and dynamic behavior of three-dimensional objects. Many of the methods that recover this information require the determination of optical flow: the velocity, on the image, of visible points on object surfaces. An important class of techniques for estimating optical flow depend on the relationship between the gradients of image brightness. While gradient-based methods have been widely studied, little attention has been paid to accuracy and reliability of the approach. Gradient-based methods are sensitive to conditions commonly encountered in real imagery. Highly textured surfaces, large areas of constant brightness, motion boundaries, and depth discontinuities can all be troublesome for gradient-based methods. Fortunately, these problematic areas are usually localized and can be identified in the image. In this paper we examine the sources of errors for gradient-based techniques that locally solve for optical flow. These methods assume that optical flow is constant in a small neighborhood. The consequence of violating this assumption is examined. The causes of measurement errors and the determinants of the conditioning of the solution system are also considered. By understanding how errors arise, we are able to define the inherent limitations of the technique, obtain estimates of the accuracy of computed values, enhance the performance of the technique, and demonstrate the informative value of some types of error.
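The local constant-flow assumption amounts to solving a small least-squares system per neighborhood. A sketch, assuming precomputed gradient arrays; the condition number returned is one of the determinants of solution quality the paper analyzes.

```python
import numpy as np

def local_flow(Ix, Iy, It):
    # brightness constancy: Ix*u + Iy*v + It = 0 at every pixel of the patch,
    # with one (u, v) shared by the whole neighborhood
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    # ill-conditioning of A^T A flags the troublesome areas discussed above
    # (e.g., regions of constant brightness or one-directional texture)
    return flow, np.linalg.cond(A.T @ A)
```

When the neighborhood straddles a motion boundary, no single (u, v) satisfies all the constraints, and the residual of this system is one way the violation shows up.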

[...]

TL;DR: The current state of a system that recognizes printed text of various fonts and sizes for the Roman alphabet is described, which combines several techniques in order to improve the overall recognition rate.

Abstract: We describe the current state of a system that recognizes printed text of various fonts and sizes for the Roman alphabet. The system combines several techniques in order to improve the overall recognition rate. Thinning and shape extraction are performed directly on a graph of the run-length encoding of a binary image. The resulting strokes and other shapes are mapped, using a shape-clustering approach, into binary features which are then fed into a statistical Bayesian classifier. Large-scale trials have shown better than 97 percent top choice correct performance on mixtures of six dissimilar fonts, and over 99 percent on most single fonts, over a range of point sizes. Certain remaining confusion classes are disambiguated through contour analysis, and characters suspected of being merged are broken and reclassified. Finally, layout and linguistic context are applied. The results are illustrated by sample pages.

[...]

TL;DR: To compute the flow predicted by the segmentation, a recent method for reconstructing the motion and orientation of planar surface facets is used and the search for the globally optimal segmentation is performed using simulated annealing.

Abstract: This paper presents results from computer experiments with an algorithm to perform scene disposition and motion segmentation from visual motion or optic flow. The maximum a posteriori (MAP) criterion is used to formulate what the best segmentation or interpretation of the scene should be, where the scene is assumed to be made up of some fixed number of moving planar surface patches. The Bayesian approach requires, first, specification of prior expectations for the optic flow field, which here is modeled as spatial and temporal Markov random fields; and, secondly, a way of measuring how well the segmentation predicts the measured flow field. The Markov random fields incorporate the physical constraints that objects and their images are probably spatially continuous, and that their images are likely to move quite smoothly across the image plane. To compute the flow predicted by the segmentation, a recent method for reconstructing the motion and orientation of planar surface facets is used. The search for the globally optimal segmentation is performed using simulated annealing.

[...]

TL;DR: A computer model is described that combines concepts from the fields of acoustics, linear system theory, and digital signal processing to simulate an acoustic sensor navigation system using time-of-flight ranging, including sonar maps produced by transducers having different resonant frequencies and transmitted pulse waveforms.

Abstract: A computer model is described that combines concepts from the fields of acoustics, linear system theory, and digital signal processing to simulate an acoustic sensor navigation system using time-of-flight ranging. By separating the transmitter/receiver into separate components and assuming mirror-like reflectors, closed-form solutions for the reflections from corners, edges, and walls are determined as a function of transducer size, location, and orientation. A floor plan consisting of corners, walls, and edges is efficiently encoded to indicate which of these elements contribute to a particular pulse-echo response. Sonar maps produced by transducers having different resonant frequencies and transmitted pulse waveforms can then be simulated efficiently. Examples of simulated sonar maps of two floor plans illustrate the performance of the model. Actual sonar maps are presented to verify the simulation results.

[...]

TL;DR: A method by which range data from a sonar rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room; the approach is extremely tolerant of noise and clutter.

Abstract: This correspondence describes a method by which range data from a sonar rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The algorithm works by correlating straight segments in the range data against the room model, then eliminating implausible configurations using the sonar barrier test, which exploits physical constraints on sonar data. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using a Polaroid Ultrasonic Rangefinder.

[...]

TL;DR: Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances.

Abstract: I describe a method for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
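The basis-expansion step can be sketched as a linear least-squares projection. This is generic: the polynomial basis below is only a stand-in, not the paper's actual bases for surface reflectance and illuminant spectra.

```python
import numpy as np

def fit_basis(spectrum, basis):
    # least-squares coefficients of a sampled spectrum in the columns of
    # `basis`; smooth, correlated spectra need only a few basis functions
    coeffs, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return coeffs, basis @ coeffs
```

The efficiency claim in the abstract is exactly this: because reflectance and illuminant spectra are highly correlated across wavelengths, a handful of coefficients reconstructs them to good accuracy.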

[...]

TL;DR: This correspondence presents two iterative schemes for solving nine nonlinear equations in terms of the motion and surface parameters that are derived from a least-squares formulation.

Abstract: In this correspondence, we show how to recover the motion of an observer relative to a planar surface from image brightness derivatives. We do not compute the optical flow as an intermediate step, only the spatial and temporal brightness gradients (at a minimum of eight points). We first present two iterative schemes for solving nine nonlinear equations in terms of the motion and surface parameters that are derived from a least-squares formulation. An initial pass over the relevant image region is used to accumulate a number of moments of the image brightness derivatives. All of the quantities used in the iteration are efficiently computed from these totals without the need to refer back to the image. We then show that either of two possible solutions can be obtained in closed form. We first solve a linear matrix equation for the elements of a 3 × 3 matrix. The eigenvalue decomposition of the symmetric part of the matrix is then used to compute the motion parameters and the plane orientation. A new compact notation allows us to show easily that there are at most two planar solutions.

[...]

TL;DR: Two conceptually new algorithms are presented for segmenting textured images into regions in each of which the data are modeled as one of C MRF's, designed to operate in real time when implemented on new parallel computer architectures that can be built with present technology.

Abstract: The modeling and segmentation of images by MRF's (Markov random fields) is treated. These are two-dimensional noncausal Markovian stochastic processes. Two conceptually new algorithms are presented for segmenting textured images into regions in each of which the data are modeled as one of C MRF's. The algorithms are designed to operate in real time when implemented on new parallel computer architectures that can be built with present technology. A doubly stochastic representation is used in image modeling. Here, a Gaussian MRF is used to model textures in visible light and infrared images, and an autobinary (or autoternary, etc.) MRF to model a priori information about the local geometry of textured image regions. For image segmentation, the true texture class regions are treated either as a priori completely unknown or as a realization of a binary (or ternary, etc.) MRF. In the former case, image segmentation is realized as true maximum likelihood estimation. In the latter case, it is realized as true maximum a posteriori likelihood segmentation. In addition to providing a mathematically correct means for introducing geometric structure, the autobinary (or ternary, etc.) MRF can be used in a generative mode to generate image geometries and artificial images, and such simulations constitute a very powerful tool for studying the effects of these models and the appropriate choice of model parameters. The first segmentation algorithm is hierarchical and uses a pyramid-like structure in new ways that exploit the mutual dependencies among disjoint pieces of a textured region.

[...]

TL;DR: It is hoped that the results presented will have an impact upon both sensor design and error modeling of position measuring systems for computer vision and related applications.

Abstract: The relationship between the geometry of a stereo camera setup and the accuracy in obtaining three-dimensional position information is of great practical importance in many imaging applications. Assuming a point in a scene has been correctly identified in each image, its three-dimensional position can be recovered via a simple geometrical method known as triangulation. The probability that position estimates from triangulation are within some specified error tolerance is derived. An ideal pinhole camera model is used and the error is modeled as known spatial image plane quantization. A point's measured position maps to a small volume in 3-D determined by the finite resolution of the stereo setup. With the assumption that the point's actual position is uniformly distributed inside this volume, closed form expressions for the probability distribution of error in position along each coordinate direction (horizontal, vertical, and range) are derived. Following this, the probability that range error dominates over errors in the point's horizontal or vertical position is determined. It is hoped that the results presented will have an impact upon both sensor design and error modeling of position measuring systems for computer vision and related applications.
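Under the ideal pinhole model, the effect of image-plane quantization on range can be sketched directly from the triangulation relation Z = f·b/d. Units below (focal length in pixels, baseline in meters, a half-pixel disparity bound) are assumptions of this sketch, not the paper's setup.

```python
def triangulate_depth(d, f, b):
    # standard stereo triangulation: disparity d pixels, focal length f
    # pixels, baseline b meters -> range in meters
    return f * b / d

def range_interval(d, f, b, q=0.5):
    # range values consistent with a disparity known only to +/- q pixels;
    # the interval widens roughly as Z^2, so range error dominates at distance
    return f * b / (d + q), f * b / (d - q)
```

Differentiating Z = f·b/d gives |dZ| ≈ (Z²/(f·b))·|dd|, which is the quadratic growth of range error that motivates the paper's probability analysis.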

[...]

TL;DR: In this correspondence, a structural recognition method of Arabic cursively handwritten words is proposed, in which words are first segmented into strokes and classified using their geometrical and topological properties.

Abstract: In spite of the progress of machine recognition techniques for Latin, Kana, and Chinese characters over the past two decades, the machine recognition of Arabic characters has remained almost untouched. In this correspondence, a structural recognition method for Arabic cursively handwritten words is proposed. In this method, words are first segmented into strokes. Those strokes are then classified using their geometrical and topological properties. Finally, the relative positions of the classified strokes are examined, and the strokes are combined in several steps into a string of characters that represents the recognized word. Experimental results on texts handwritten by two persons showed high recognition accuracy.

[...]

TL;DR: Results on the application of several bootstrap techniques in estimating the error rate of 1-NN and quadratic classifiers show that, in most cases, the confidence interval of a bootstrap estimator of classification error is smaller than that of the leave-one-out estimator.

Abstract: The design of a pattern recognition system requires careful attention to error estimation. The error rate is the most important descriptor of a classifier's performance. The commonly used estimates of error rate are based on the holdout method, the resubstitution method, and the leave-one-out method. All suffer either from large bias or large variance and their sample distributions are not known. Bootstrapping refers to a class of procedures that resample given data by computer. It permits determining the statistical properties of an estimator when very little is known about the underlying distribution and no additional samples are available. Since its publication in the last decade, the bootstrap technique has been successfully applied to many statistical estimation and inference problems. However, it has not been exploited in the design of pattern recognition systems. We report results on the application of several bootstrap techniques in estimating the error rate of 1-NN and quadratic classifiers. Our experiments show that, in most cases, the confidence interval of a bootstrap estimator of classification error is smaller than that of the leave-one-out estimator. The errors of 1-NN, quadratic, and Fisher classifiers are estimated for several real data sets.
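One bootstrap variant for classifier error can be sketched as follows: train on each resample drawn with replacement and test on the samples it leaves out. This "left-out" scheme and the 1-NN rule are illustrative; the paper compares several bootstrap estimators, not necessarily this exact one.

```python
import numpy as np

def nn1(Xtr, ytr, Xte):
    """1-NN classifier: label of the closest training sample."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return ytr[d2.argmin(axis=1)]

def bootstrap_error(X, y, classify, B=100, seed=0):
    """Average error on samples left out of each bootstrap resample."""
    rng = np.random.default_rng(seed)
    n, errs = len(y), []
    for _ in range(B):
        idx = rng.integers(0, n, n)               # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # samples left out (~37%)
        if oob.size:
            errs.append(np.mean(classify(X[idx], y[idx], X[oob]) != y[oob]))
    return float(np.mean(errs))
```

Because each resample's test set is disjoint from its training set, the estimate avoids the optimistic bias of resubstitution, while averaging over B resamples reduces variance relative to a single holdout split.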

[...]

TL;DR: In this paper, the authors proposed an event-covering approach which covers a subset of statistically relevant outcomes in the outcome space of variable-pairs; once the covered event patterns are acquired, subsequent analysis tasks such as probabilistic inference, cluster analysis, and detection of event patterns for each cluster based on the incomplete probability scheme can be performed.

Abstract: The difficulties in analyzing and clustering (synthesizing) multivariate data of the mixed type (discrete and continuous) are largely due to: 1) nonuniform scaling in different coordinates, 2) the lack of order in nominal data, and 3) the lack of a suitable similarity measure. This paper presents a new approach which bypasses these difficulties and can acquire statistical knowledge from incomplete mixed-mode data. The proposed method adopts an event-covering approach which covers a subset of statistically relevant outcomes in the outcome space of variable-pairs. Once the covered event patterns are acquired, subsequent analysis tasks such as probabilistic inference, cluster analysis, and detection of event patterns for each cluster based on the incomplete probability scheme can be performed. There are four phases in our method: 1) the discretization of the continuous components based on a maximum entropy criterion so that the data can be treated as n-tuples of discrete-valued features; 2) the estimation of the missing values using our newly developed inference procedure; 3) the initial formation of clusters by analyzing the nearest-neighbor distance on subsets of selected samples; and 4) the reclassification of the n-tuples into more reliable clusters based on the detected interdependence relationships. For performance evaluation, experiments have been conducted using both simulated and real life data.

••

[...]

TL;DR: Noise images prefiltered by median filters defined with a variety of windowing geometries are used to support the analysis and it is found that median prefiltering improves the performance of both thresholding and zero-crossing based edge detectors.

Abstract: In this paper we consider the effect of median prefiltering on the subsequent estimation and detection of edges in digital images. Where possible, a quantitative statistical comparison is made for a number of filters defined with two-dimensional geometries; in some cases one-dimensional analyses are required to illustrate certain points. Noise images prefiltered by median filters defined with a variety of windowing geometries are used to support the analysis, and it is found that median prefiltering improves the performance of both thresholding and zero-crossing based edge detectors.
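The key property behind the result, that a median filter removes impulse noise while leaving step edges in place, can be shown with a small NumPy sketch of a square-window median filter. This is a generic illustration, not the paper's specific windowing geometries.

```python
import numpy as np

def median_filter(img, k=3):
    """Median filter with a square k x k window (edge pixels replicated)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A vertical step edge plus one impulse: the median removes the impulse
# but keeps the edge sharp and in place, unlike a mean filter.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
img[2, 1] = 1.0                 # isolated salt impulse on the dark side
filtered = median_filter(img, 3)
```

A 3 x 3 mean filter on the same image would smear both the impulse and the edge over neighboring pixels, which is why median prefiltering helps the edge detectors studied here.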

••

[...]

TL;DR: This paper presents a model based on fractional Brownian motion which allows two characteristics related to the fractal dimension to be recovered from silhouettes, and introduces a new theoretical concept, the average Hölder constant, which is related mathematically to the fractal dimension.

Abstract: Many objects in images of natural scenes are so complex and erratic that describing them by the familiar models of classical geometry is inadequate. In this paper, we exploit the power of fractal geometry to generate global characteristics of natural scenes. In particular, we are concerned with the following two questions: 1) Can we develop a measure which can distinguish between different global backgrounds (e.g., mountains and trees)? and 2) Can we develop a measure that is sensitive to change in distance (or scale)? We present a model based on fractional Brownian motion which will allow us to recover two characteristics related to the fractal dimension from silhouettes. The first characteristic is an estimate of the fractal dimension based on a least squares linear fit. We show that this feature is stable under a variety of real image conditions and use it to distinguish silhouettes of trees from silhouettes of mountains. Next we introduce a new theoretical concept called the average Hölder constant and relate it mathematically to the fractal dimension. It is shown that this measurement is sensitive to scale in a predictable manner, and hence, provides the potential for use as a range indicator. Corroborating experimental results are presented.
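The least-squares estimate can be sketched from the fractional Brownian motion scaling law E|x(t + d) − x(t)| ∝ d^H, where the graph of a 1-D profile has fractal dimension D = 2 − H. The NumPy sketch below fits H on a log-log plot of mean absolute increments against lag, applied to an ordinary random walk (H = 1/2, so D ≈ 1.5). The 1-D signal and the lag set are illustrative assumptions, not the paper's silhouette data.

```python
import numpy as np

def estimate_fractal_dim(signal, lags=(1, 2, 4, 8, 16)):
    """Least-squares estimate of the fractal dimension of a 1-D profile,
    using the fBm scaling law  E|x(t+d) - x(t)| ~ d**H  and  D = 2 - H."""
    lags = np.asarray(lags)
    means = np.array([np.mean(np.abs(signal[d:] - signal[:-d])) for d in lags])
    H, _ = np.polyfit(np.log(lags), np.log(means), 1)   # slope of log-log fit
    return 2.0 - H

# Ordinary Brownian motion has H = 1/2, so its graph has dimension ~1.5.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=100_000))
D = estimate_fractal_dim(walk)
```

Rougher profiles (smaller H) push D toward 2, smoother ones toward 1, which is what lets the estimate separate, say, jagged tree silhouettes from smoother mountain ridges.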

••

[...]

TL;DR: A new application of scale-space filtering to the classical problem of estimating the parameters of a normal mixture distribution is described, relating pairs of zero-crossings to modes in the histogram where each mode or component is modeled by a normal distribution.

Abstract: A new application of scale-space filtering to the classical problem of estimating the parameters of a normal mixture distribution is described. The technique involves generating a multiscale description of a histogram by convolving it with a series of Gaussians of gradually increasing width (standard deviation), and marking the location and direction of the sign change of zero-crossings in the second derivative. The resulting description, or fingerprint, is interpreted by relating pairs of zero-crossings to modes in the histogram where each mode or component is modeled by a normal distribution. Zero-crossings provide information from which estimates of the mixture parameters are computed. These initial estimates are subsequently refined using an iterative maximum likelihood estimation technique. Varying the scale or resolution of the analysis allows the number of components used in approximating the histogram to be controlled.
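The fingerprint idea can be sketched in NumPy: smooth a histogram with Gaussians of increasing width and count the sign changes of its second difference. Each surviving mode contributes one pair of zero-crossings, so the count drops as the scale grows and modes merge. This is an illustrative sketch on an idealized noise-free histogram, not the paper's full fingerprint interpretation or the maximum likelihood refinement.

```python
import numpy as np

def gaussian_smooth(hist, sigma):
    """Convolve a histogram with a normalized Gaussian (edges replicated)."""
    r = int(4 * sigma) + 1
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(hist, r, mode='edge')          # keep output length == input
    return np.convolve(padded, k, mode='valid')

def n_zero_crossings(hist, sigma):
    """Sign changes of the second difference of the smoothed histogram;
    each surviving mode contributes one pair of zero-crossings."""
    d2 = np.diff(gaussian_smooth(hist, sigma), 2)
    s = np.sign(d2)
    s = s[s != 0]
    return int(np.sum(s[1:] != s[:-1]))

# Idealized histogram of a two-component normal mixture (modes at 30 and 70,
# component sigma = 3): a fine scale shows two pairs of zero-crossings,
# a coarse enough scale merges the modes into a single pair.
x = np.arange(100, dtype=float)
hist = np.exp(-(x - 30) ** 2 / 18.0) + np.exp(-(x - 70) ** 2 / 18.0)
fine = n_zero_crossings(hist, 1.0)
coarse = n_zero_crossings(hist, 30.0)
```

Tracking where each pair appears and disappears across scales is what yields the mode locations used to initialize the mixture parameter estimates.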

••

[...]

TL;DR: Experiments with synthetic and real boundaries show that estimates close to the true values of the Fourier descriptors of complete boundaries are obtained, and classification experiments performed using real boundaries indicate that reasonable classification accuracies are obtained even when 20-30 percent of the data is missing.

Abstract: We present a method for the classification of 2-D partial shapes using Fourier descriptors. We formulate the problem as one of estimating the Fourier descriptors of the unknown complete shape from observations derived from an arbitrarily rotated and scaled shape with missing segments. The method used for obtaining the estimates of the Fourier descriptors minimizes a sum of two terms: the first is a least-squares fit to the given data, subject to the condition that the number of missing boundary points is not known; the second is the perimeter²/area of the unknown shape. Experiments with synthetic and real boundaries show that estimates close to the true values of the Fourier descriptors of complete boundaries are obtained. Also, classification experiments performed using real boundaries indicate that reasonable classification accuracies are obtained even when 20-30 percent of the data is missing.
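The descriptors themselves can be sketched in NumPy: treat the boundary points as complex numbers, take the FFT, and normalize the magnitudes so the result is invariant to translation, scale, rotation, and starting point. This shows only the descriptor/invariance idea on complete boundaries; the paper's contribution, estimating the descriptors when segments are missing, is not reproduced here, and the square test shape is invented for the example.

```python
import numpy as np

def fourier_descriptors(boundary, m=8):
    """Magnitude-normalized Fourier descriptors of a closed boundary.
    |Z_k| discards rotation and starting point; dividing by |Z_1| discards
    scale; skipping Z_0 (the centroid term) discards translation."""
    z = boundary[:, 0] + 1j * boundary[:, 1]   # boundary points as complex numbers
    mags = np.abs(np.fft.fft(z))
    return mags[2:2 + m] / mags[1]

def square_boundary(n=16, scale=1.0, angle=0.0):
    """Uniformly sampled unit-square boundary, optionally rotated and scaled."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    zeros, ones = 0.0 * t, 0.0 * t + 1.0
    pts = np.concatenate([np.stack([t, zeros], 1), np.stack([ones, t], 1),
                          np.stack([1.0 - t, ones], 1), np.stack([zeros, 1.0 - t], 1)])
    c, s = np.cos(angle), np.sin(angle)
    return scale * pts @ np.array([[c, s], [-s, c]])   # rotate, then scale

# The same square, rotated and scaled, yields the same descriptors.
fd_ref = fourier_descriptors(square_boundary())
fd_rot = fourier_descriptors(square_boundary(scale=3.0, angle=0.7))
```

Rotating the shape multiplies every complex boundary point by a unit-magnitude phase, so the FFT magnitudes are untouched; uniform scaling multiplies every |Z_k| by the same factor, which the division by |Z_1| cancels.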