# Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence" in 1989

••

TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.

Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination, and fractal analysis is discussed.
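The pyramidal analysis/synthesis step described above can be sketched in a few lines. This is a minimal sketch that assumes the Haar pair, the simplest quadrature mirror filters; Mallat's construction admits longer filters, and nothing here is specific to his implementation:

```python
# One level of pyramid decomposition with the Haar quadrature mirror
# pair (H = lowpass -> approximation, G = highpass -> detail). The detail
# coefficients carry exactly the information lost between the two resolutions.
import math

H = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # lowpass (approximation)
G = [1 / math.sqrt(2), -1 / math.sqrt(2)]  # highpass (detail)

def analyze(signal):
    """Split an even-length signal into half-rate approximation and detail bands."""
    approx = [H[0] * signal[i] + H[1] * signal[i + 1] for i in range(0, len(signal), 2)]
    detail = [G[0] * signal[i] + G[1] * signal[i + 1] for i in range(0, len(signal), 2)]
    return approx, detail

def synthesize(approx, detail):
    """Perfect reconstruction from the two half-rate bands."""
    out = []
    for a, d in zip(approx, detail):
        out.append(H[0] * a + G[0] * d)
        out.append(H[1] * a + G[1] * d)
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = analyze(x)
assert all(abs(u - v) < 1e-9 for u, v in zip(synthesize(a, d), x))
```

Iterating `analyze` on the approximation band yields the multiresolution pyramid; a full wavelet representation would repeat this over scales (and, for images, over rows and columns separately).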

20,028 citations

••

TL;DR: The decomposition of deformations by principal warps is demonstrated and the method is extended to deal with curving edges between landmarks to aid the extraction of features for analysis, comparison, and diagnosis of biological and medical images.

Abstract: The decomposition of deformations by principal warps is demonstrated. The method is extended to deal with curving edges between landmarks. This formulation is related to other applications of splines current in computer vision. How they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images is indicated.

5,065 citations

••

TL;DR: The unsupervised fuzzy partition-optimal number of classes algorithm performs well in situations of large variability of cluster shapes, densities, and number of data points in each cluster.

Abstract: This study reports on a method for carrying out fuzzy classification without a priori assumptions on the number of clusters in the data set. Assessment of cluster validity is based on performance measures using hypervolume and density criteria. An algorithm is derived from a combination of the fuzzy K-means algorithm and fuzzy maximum-likelihood estimation. The unsupervised fuzzy partition-optimal number of classes algorithm performs well in situations of large variability of cluster shapes, densities, and number of data points in each cluster. The algorithm was tested on different classes of simulated data, and on a real data set derived from a sleep EEG signal.
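The paper combines fuzzy K-means with maximum-likelihood estimation and hypervolume validity criteria; the sketch below shows only the plain fuzzy K-means (fuzzy c-means) update loop on 1-D data, with the fuzziness exponent `m = 2` and the initial centers chosen by the caller (both are illustrative assumptions, not the authors' settings):

```python
def fuzzy_kmeans_1d(data, centers, m=2.0, iters=50):
    """Plain fuzzy K-means on 1-D data; `centers` are the initial estimates."""
    k = len(centers)
    U = []
    for _ in range(iters):
        # membership update: u_ij proportional to d(x_i, c_j)^(-2/(m-1))
        U = []
        for x in data:
            inv = [(abs(x - c) + 1e-12) ** (-2.0 / (m - 1)) for c in centers]
            s = sum(inv)
            U.append([v / s for v in inv])
        # center update: mean of the data weighted by membership^m
        centers = [sum(U[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(U[i][j] ** m for i in range(len(data)))
                   for j in range(k)]
    return centers, U
```

The unsupervised algorithm of the paper would wrap a loop like this inside a search over the number of clusters, scoring each partition with the hypervolume and density criteria.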

1,691 citations

••

TL;DR: A parallel algorithm for detecting dominant points on a digital closed curve is presented, which leads to the observation that the performance of dominant point detection depends not only on the accuracy of the measure of significance, but also on the precise determination of the region of support.

Abstract: A parallel algorithm is presented for detecting dominant points on a digital closed curve. The procedure requires no input parameter and remains reliable even when features of multiple sizes are present on the digital curve. The procedure first determines the region of support for each point based on its local properties, then computes measures of relative significance (e.g. curvature) of each point, and finally detects dominant points by a process of nonmaximum suppression. This procedure leads to the observation that the performance of dominant point detection depends not only on the accuracy of the measure of significance, but also on the precise determination of the region of support. This solves the fundamental problem of scale factor selection encountered in various dominant point detection algorithms. The inherent nature of scale-space filtering in the procedure is addressed, and the performance of the procedure is compared to those of several other dominant point detection algorithms, using a number of examples.
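The paper's key contribution is determining the region of support adaptively per point; the sketch below simplifies that away, fixing the support at k samples and using the k-cosine as the significance measure before nonmaximum suppression. These are hypothetical simplifications for illustration, not the authors' exact rules:

```python
import math

def k_cosine(points, i, k):
    """Cosine of the angle between the forward and backward k-vectors at
    point i of a closed contour: about -1 on straight runs, larger at corners."""
    n = len(points)
    ax = points[(i + k) % n][0] - points[i][0]
    ay = points[(i + k) % n][1] - points[i][1]
    bx = points[(i - k) % n][0] - points[i][0]
    by = points[(i - k) % n][1] - points[i][1]
    return (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))

def dominant_points(points, k=2):
    """Keep points whose significance is a strict local maximum over +/-k."""
    n = len(points)
    cosv = [k_cosine(points, i, k) for i in range(n)]
    return [i for i in range(n)
            if cosv[i] > max(cosv[(i + d) % n] for d in range(-k, k + 1) if d != 0)]

# the four corners of a discrete square are its dominant points
square = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2),
          (3, 3), (2, 3), (1, 3), (0, 3), (0, 2), (0, 1)]
assert dominant_points(square, k=2) == [0, 3, 6, 9]
```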

772 citations

••

TL;DR: The results of a study on multiscale shape description, smoothing and representation are reported, showing that the partially reconstructed images from the inverse transform on subsequences of skeleton components are the openings of the image at a scale determined by the number of eliminated components.

Abstract: The results of a study on multiscale shape description, smoothing and representation are reported. Multiscale nonlinear smoothing filters are first developed, using morphological openings and closings. G. Matheron (1975) used openings and closings to obtain probabilistic size distributions of Euclidean-space sets (continuous binary images). These distributions are used to develop a concept of pattern spectrum (a shape-size descriptor). A pattern spectrum is introduced for continuous graytone images and arbitrary multilevel signals, as well as for discrete images, by developing a discrete-size family of patterns. Large jumps in the pattern spectrum at a certain scale indicate the existence of major (protruding or intruding) substructures of the signal at the scale. An entropy-like shape-size complexity measure is also developed based on the pattern spectrum. For shape representation, a reduced morphological skeleton transform is introduced for discrete binary and graytone images. This transform is a sequence of skeleton components (sparse images) which represent the original shape at various scales. It is shown that the partially reconstructed images from the inverse transform on subsequences of skeleton components are the openings of the image at a scale determined by the number of eliminated components; in addition, two-way correspondences are established among the degree of shape smoothing via multiscale openings or closings, the pattern spectrum zero values, and the elimination or nonexistence of skeleton components at certain scales.
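The pattern spectrum idea can be illustrated on a 1-D binary signal (the paper covers graytone and 2-D signals as well): open the signal with flat windows of growing length and record the area removed at each scale. A jump in the spectrum at scale n reveals a substructure of size n, as the abstract states.

```python
def opening(sig, n):
    """Morphological opening of a 0/1 sequence by a flat window of length n."""
    L = len(sig)
    eroded = [1 if i + n <= L and all(sig[i:i + n]) else 0 for i in range(L)]
    out = [0] * L
    for i, e in enumerate(eroded):
        if e:
            for j in range(i, i + n):  # dilate surviving positions back
                out[j] = 1
    return out

def pattern_spectrum(sig, max_n):
    """PS(n) = area removed when the opening scale grows from n to n+1."""
    areas = [sum(opening(sig, n)) for n in range(1, max_n + 2)]
    return [areas[i] - areas[i + 1] for i in range(max_n)]

# a run of length 2 and a run of length 5 produce jumps at scales 2 and 5
sig = [1, 1, 0, 1, 1, 1, 1, 1, 0, 0]
assert pattern_spectrum(sig, 5) == [0, 2, 0, 0, 5]
```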

707 citations

••

TL;DR: It is shown that recovery of the trace of a curve requires estimating local models for the curve at the same time, and that tangent and curvature information are sufficient, which makes it possible to specify powerful constraints between estimated tangents to a curve.

Abstract: An approach is described for curve inference that is based on curvature information. The inference procedure is divided into two stages: a trace inference stage, which is the subject of the present work, and a curve synthesis stage. It is shown that recovery of the trace of a curve requires estimating local models for the curve at the same time, and that tangent and curvature information are sufficient. These make it possible to specify powerful constraints between estimated tangents to a curve, in terms of a neighborhood relationship called cocircularity, and between curvature estimates, in terms of a curvature consistency relation. Because all curve information is quantized, special care must be taken to obtain accurate estimates of trace points, tangents, and curvatures. This issue is addressed through the introduction of a smoothness constraint and a maximum curvature constraint. The procedure is applied to two types of images: artificial images designed to evaluate curvature and noise sensitivity, and natural images.

550 citations

••

TL;DR: In this paper, a word image is transformed through a hierarchy of representation levels: points, contours, features, letters, and words, and a unique feature representation is generated bottom-up from the image using statistical dependences between letters and features.

Abstract: Cursive script word recognition is the problem of transforming a word from the iconic form of cursive writing to its symbolic form. Several component processes of a recognition system for isolated offline cursive script words are described. A word image is transformed through a hierarchy of representation levels: points, contours, features, letters, and words. A unique feature representation is generated bottom-up from the image using statistical dependences between letters and features. Ratings for partially formed words are computed using a stack algorithm and a lexicon represented as a trie. Several novel techniques for low- and intermediate-level processing for cursive script are described, including heuristics for reference line finding, letter segmentation based on detecting local minima along the lower contour and areas with low vertical profiles, simultaneous encoding of contours and their topological relationships, extracting features, and finding shape-oriented events. Experiments demonstrating the performance of the system are also described.

502 citations

••

TL;DR: The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or pseudoinverse and shows, among other things, that the errors are very sensitive to the translation direction and the field of view.

Abstract: Deals with estimating motion parameters and the structure of the scene from point (or feature) correspondences between two perspective views. An algorithm is presented that gives a closed-form solution for motion parameters and the structure of the scene. The algorithm utilizes redundancy in the data to obtain more reliable estimates in the presence of noise. An approach is introduced to estimating the errors in the motion parameters computed by the algorithm. Specifically, the standard deviation of the error is estimated in terms of the variance of the errors in the image coordinates of the corresponding points. The estimated errors indicate the reliability of the solution as well as any degeneracy or near degeneracy that causes the failure of the motion estimation algorithm. The presented approach to error estimation applies to a wide variety of problems that involve least-squares optimization or pseudoinverse. Finally, the relationships between errors and the parameters of motion and imaging system are analyzed. The results of the analysis show, among other things, that the errors are very sensitive to the translation direction and the field of view. Simulations are conducted to demonstrate the performance of the algorithms and error estimation as well as the relationships between the errors and the parameters of motion and imaging systems. The algorithms are tested on images of real-world scenes with point correspondences computed automatically.

495 citations

••

TL;DR: It is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects because few admissible hypotheses are retained from the interpretation of the three line segments.

Abstract: A method for finding analytical solutions to the problem of determining the attitude of a 3D object in space from a single perspective image is presented. Its principle is based on the interpretation of a triplet of any image lines as the perspective projection of a triplet of linear ridges of the object model, and on the search for the model attitude consistent with these projections. The geometrical transformations to be applied to the model to bring it into the corresponding location are obtained by the resolution of an eighth-degree equation in the general case. Using simple logical rules, it is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects because few admissible hypotheses are retained from the interpretation of the three line segments. Line matching by the prediction-verification procedure is thus less complex.

493 citations

••

TL;DR: It is proven that the parameter estimates and the segmentations converge in distribution to the ML estimate of the parameters and the MAP segmentation with those parameter estimates, respectively.

Abstract: An adaptive segmentation algorithm is developed which simultaneously estimates the parameters of the underlying Gibbs random field (GRF) and segments the noisy image corrupted by additive independent Gaussian noise. The algorithm, which aims at obtaining the maximum a posteriori (MAP) segmentation, is a simulated annealing algorithm that is interrupted at regular intervals for estimating the GRF parameters. Maximum-likelihood (ML) estimates of the parameters based on the current segmentation are used to obtain the next segmentation. It is proven that the parameter estimates and the segmentations converge in distribution to the ML estimate of the parameters and the MAP segmentation with those parameter estimates, respectively. Due to computational difficulties, however, only an approximate version of the algorithm is implemented. The approximate algorithm is applied to several two- and four-region images with different noise levels and with first-order and second-order neighborhoods.

400 citations

••

TL;DR: The authors propose a method for solving the stereo correspondence problem by extracting local image structures and matching similar such structures between two images using a benefit function.

Abstract: The authors propose a method for solving the stereo correspondence problem. The method consists of extracting local image structures and matching similar such structures between two images. Linear edge segments are extracted from both the left and right images. Each segment is characterized by its position and orientation in the image as well as its relationships with the nearby segments. A relational graph is thus built from each image. For each segment in one image, a set of potential assignments is represented as a set of nodes in a correspondence graph. Arcs in the graph represent compatible assignments established on the basis of segment relationships. Stereo matching becomes equivalent to searching for sets of mutually compatible nodes in this graph. Sets are found by looking for maximal cliques. The maximal clique best suited to represent a stereo correspondence is selected using a benefit function. Numerous results obtained with this method are shown.
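Building the compatibility graph from segment relationships is the hard part of the method; the clique-search core, however, is classical and can be sketched with Bron-Kerbosch enumeration. The `adj` and `benefit` values below are hypothetical stand-ins for the correspondence graph and benefit function of the paper:

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques.
    `adj` maps each node to the set of its neighbours."""
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(frozenset(r))   # nothing can extend r: maximal
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    expand(set(), set(adj), set())
    return cliques

# toy correspondence graph: nodes are candidate assignments,
# arcs join mutually compatible ones
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
benefit = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.5}  # hypothetical per-assignment scores
best = max(maximal_cliques(adj), key=lambda c: sum(benefit[v] for v in c))
assert best == frozenset({1, 2, 3})
```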

••

TL;DR: An approach is described that integrates the processes of feature matching, contour detection, and surface interpolation to determine the three-dimensional distance, or depth, of objects from a stereo pair of images.

Abstract: An approach is described that integrates the processes of feature matching, contour detection, and surface interpolation to determine the three-dimensional distance, or depth, of objects from a stereo pair of images. Integration is necessary to ensure that the detected surfaces are smooth. Surface interpolation takes into account detected occluding and ridge contours in the scene; interpolation is performed within regions enclosed by these contours. Planar and quadratic patches are used as local models of the surface. Occluded regions in the image are identified, and are not used for matching and interpolation. A coarse-to-fine algorithm is presented that generates a multiresolution hierarchy of surface maps, one at each level of resolution. Experimental results are given for a variety of stereo images.

••

TL;DR: An approach is described for unsupervised segmentation of textured images that appears to be an improvement on the commonly used Karhunen-Loeve transform and allows efficient texture segmentation based on simple thresholding.

Abstract: An approach is described for unsupervised segmentation of textured images. Local texture properties are extracted using local linear transforms that have been optimized for maximal texture discrimination. Local statistics (texture energy measures) are estimated at the output of an equivalent filter bank by means of a nonlinear transformation (absolute value) followed by an iterative Gaussian smoothing algorithm. This procedure generates a multiresolution sequence of feature planes with a half-octave scale progression. A feature reduction technique is then applied to the data and is determined by simultaneously diagonalizing scatter matrices evaluated at two different spatial resolutions. This approach provides a good approximation of R.A. Fisher's (1950) multiple linear discriminants and has the advantage of requiring no a priori knowledge. This feature reduction method appears to be an improvement on the commonly used Karhunen-Loeve transform and allows efficient texture segmentation based on simple thresholding.

••

TL;DR: The effect of finite sample-size on parameter estimates and their subsequent use in a family of functions are discussed, and an empirical approach is presented to enable asymptotic performance to be accurately estimated using a very small number of samples.

Abstract: The effect of finite sample-size on parameter estimates and their subsequent use in a family of functions are discussed. General and parameter-specific expressions for the expected bias and variance of the functions are derived. These expressions are then applied to the Bhattacharyya distance and the analysis of the linear and quadratic classifiers, providing insight into the relationship between the number of features and the number of training samples. Because of the functional form of the expressions, an empirical approach is presented to enable asymptotic performance to be accurately estimated using a very small number of samples. Results were experimentally verified using artificial data in controlled cases and using real, high-dimensional data.

••

TL;DR: In this paper, an edge operator based on two-dimensional spatial moments is presented, which can be implemented for virtually any size of window and has been shown to locate edges in digitized images to a twentieth of a pixel.

Abstract: Recent results in precision measurements using computer vision are presented. An edge operator based on two-dimensional spatial moments is given. The operator can be implemented for virtually any size of window and has been shown to locate edges in digitized images to a twentieth of a pixel. This accuracy is unaffected by additive or multiplicative changes to the data values. The precision is achieved by correcting for many of the deterministic errors caused by nonideal edge profiles using a lookup table to correct the original estimates of edge orientation and location. This table is generated using a synthesized edge which is located at various subpixel locations and various orientations. The operator is extended to accommodate nonideal edge profiles and rectangularly sampled pixels. The technique is applied to the measurement of imaged machined metal parts. Theoretical and experimental noise analyses show that the operator has relatively small bias in the presence of noise.
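The flavor of moment-based subpixel localization can be shown with a much simpler 1-D first-moment estimator (the gradient centroid); this is not the authors' 2-D moment operator, but it shares the key property claimed in the abstract, namely invariance to additive and multiplicative intensity changes:

```python
def subpixel_edge(profile):
    """Estimate a step edge's location to subpixel accuracy as the centroid
    of the discrete gradient magnitude along a 1-D intensity profile."""
    grads = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    total = sum(grads)
    # the difference between samples i and i+1 is centred at i + 0.5
    return sum((i + 0.5) * g for i, g in enumerate(grads)) / total

profile = [0, 0, 0, 2, 8, 10, 10, 10]       # blurred step around x = 3.5
assert abs(subpixel_edge(profile) - 3.5) < 1e-9
```

Scaling the profile scales every gradient uniformly, and an offset cancels in the differences, so the centroid (and hence the estimated edge location) is unchanged.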

••

TL;DR: In this article, it was shown that the optical flow and the motion field can be interpreted as vector fields tangent to flows of planar dynamical systems, which can be used to reconstruct the 3D structure of a moving scene.

Abstract: It is shown that the motion field, the 2-D vector field which is the perspective projection on the image plane of the 3-D velocity field of a moving scene, and the optical flow, defined as the estimate of the motion field which can be derived from the first-order variation of the image brightness pattern, are in general different, unless special conditions are satisfied. Therefore, dense optical flow is often ill-suited for computing structure from motion and for reconstructing the 3-D velocity field by algorithms which require a locally accurate estimate of the motion field. A different use of the optical flow is suggested. It is shown that the (smoothed) optical flow and the motion field can be interpreted as vector fields tangent to flows of planar dynamical systems. Stable qualitative properties of the motion field, which give useful information about the 3-D velocity field and the 3-D structure of the scene, usually can be obtained from the optical flow. The idea is supported by results from the theory of structural stability of dynamical systems.

••

TL;DR: A segmentation algorithm based on sequential optimization which produces a hierarchical decomposition of the picture that can be viewed as a tree, where the nodes correspond to picture segments and where links between nodes indicate set inclusions.

Abstract: A segmentation algorithm based on sequential optimization which produces a hierarchical decomposition of the picture is presented. The decomposition is data driven with no restriction on segment shapes. It can be viewed as a tree, where the nodes correspond to picture segments and where links between nodes indicate set inclusions. Picture segmentation is first regarded as a problem of piecewise picture approximation, which consists of finding the partition with the minimum approximation error. Then, picture segmentation is presented as a hypothesis-testing process which merges only segments that belong to the same region. A hierarchical decomposition constraint is used in both cases, which results in the same stepwise optimization algorithm. At each iteration, the two most similar segments are merged by optimizing a stepwise criterion. The algorithm is used to segment a remote-sensing picture and illustrates the hierarchical structure of the picture.
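The stepwise-merge mechanic can be sketched on a 1-D signal under a piecewise-constant model: repeatedly merge the adjacent pair of segments whose merge raises the squared approximation error the least (the Ward formula). This is a minimal sketch of the optimization step only; the paper's hypothesis-testing criterion and the recording of the merge tree (the hierarchical decomposition itself) are omitted:

```python
def merge_cost(a, b):
    """Increase in squared approximation error when merging two
    piecewise-constant segments, each stored as (sum, count)."""
    (sa, na), (sb, nb) = a, b
    return na * nb / (na + nb) * (sa / na - sb / nb) ** 2

def hierarchical_segmentation(values, n_segments):
    """Stepwise optimization: start with one segment per sample and merge
    the cheapest adjacent pair until n_segments remain."""
    segs = [([i], (float(v), 1)) for i, v in enumerate(values)]
    while len(segs) > n_segments:
        j = min(range(len(segs) - 1),
                key=lambda i: merge_cost(segs[i][1], segs[i + 1][1]))
        (ia, (sa, na)), (ib, (sb, nb)) = segs[j], segs[j + 1]
        segs[j:j + 2] = [(ia + ib, (sa + sb, na + nb))]
    return [idx for idx, _ in segs]

assert hierarchical_segmentation([1, 1, 1, 9, 9, 9], 2) == [[0, 1, 2], [3, 4, 5]]
```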

••

TL;DR: The authors argue that representations of structural relationships in the arrangements of primitive image features, as detected by the perceptual organization process, are essential for analyzing complex imagery.

Abstract: The authors describe an approach to perceptual grouping for detecting and describing 3-D objects in complex images and apply it to the task of detecting and describing complex buildings in aerial images. They argue that representations of structural relationships in the arrangements of primitive image features, as detected by the perceptual organization process, are essential for analyzing complex imagery. They term these representations collated features. The choice of collated features is determined by the generic shape of the desired objects in the scene. The detection process for collated features is more robust than the local operations for region segmentation and contour tracing. The important structural information encoded in collated features aids various visual tasks such as object segmentation, correspondence processes, and shape description. The proposed method initially detects all reasonable feature groupings. A constraint satisfaction network is then used to model the complex interactions between the collations and select the promising ones. Stereo matching is performed on the collations to obtain height information. This aids in further reasoning on the collated features and results in the 3-D description of the desired objects.

••

TL;DR: Two problems which may arise due to the presence of noise in the flow field are examined and constraints and parameters which can be recovered even in ambiguous situations are presented.

Abstract: One of the major areas in research on dynamic scene analysis is recovering 3-D motion and structure from optical flow information. Two problems which may arise due to the presence of noise in the flow field are examined. First, motion parameters of the sensor or a rigidly moving object may be extremely difficult to estimate because there may exist a large set of significantly incorrect solutions which induce flow fields similar to the correct one. The second problem is in the decomposition of the environment into independently moving objects. Two such objects may induce optical flows which are compatible with the same motion parameters, and hence, there is no way to refute the hypothesis that these flows are generated by one rigid object. These ambiguities are inherent in the sense that they are algorithm-independent. Using a mathematical analysis, situations where these problems are likely to arise are characterized. A few examples demonstrate the conclusions. Constraints and parameters which can be recovered even in ambiguous situations are presented.

••

TL;DR: It is shown that the second-order moment invariants can be used to predict whether the estimation using noisy data is reliable or not and the new derivation of vector forms also facilitates the calculation of motion estimation in a tensor approach.

Abstract: The 3-D moment method is applied to object identification and positioning. A general theory of deriving 3-D moment invariants is proposed. The notion of complex moments is introduced. Complex moments are defined as linear combinations of moments with complex coefficients and are collected into multiplets such that each multiplet transforms irreducibly under 3-D rotations. The application of the 3-D moment method to motion estimation is also discussed. Using group-theoretic techniques, various invariant scalars are extracted from compounds of complex moments via Clebsch-Gordan expansion. Twelve moment invariants consisting of the second-order and third-order moments are explicitly derived. Based on a perturbation formula, it is shown that the second-order moment invariants can be used to predict whether the estimation using noisy data is reliable or not. The new derivation of vector forms also facilitates the calculation of motion estimation in a tensor approach. Vectors consisting of the third-order moments can be derived in a similar manner.
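The paper derives twelve invariants via complex moments and Clebsch-Gordan expansion; the simplest one can be checked numerically. The trace of the second-order central moment matrix of a 3-D point set is unchanged by rotation, which is a sketch of a single second-order invariant, not the authors' full construction:

```python
import math

def second_order_moments(points):
    """3x3 matrix of second-order central moments of a 3-D point set."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]
    return [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) / n
             for j in range(3)] for i in range(3)]

def rotate_z(points, t):
    """Rotate a point set about the z axis by angle t."""
    co, si = math.cos(t), math.sin(t)
    return [(co * x - si * y, si * x + co * y, z) for x, y, z in points]

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0),
       (0.0, 0.0, 3.0), (1.0, 1.0, 1.0)]
trace_before = sum(second_order_moments(pts)[i][i] for i in range(3))
trace_after = sum(second_order_moments(rotate_z(pts, 0.7))[i][i] for i in range(3))
assert abs(trace_before - trace_after) < 1e-9
```

Individual matrix entries do change under rotation; it is only special scalar combinations (trace, determinant, and the higher compounds the paper constructs) that stay fixed, which is what makes them usable for identification from arbitrary viewpoints.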

••

TL;DR: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations.

Abstract: The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations. The range image of 3-D objects is divided into surface primitives which are homogeneous in their intrinsic differential geometric properties and do not contain discontinuities in either depth or surface orientation. The method is based on the computation of partial derivatives, obtained by a selective local biquadratic surface fit. Then, by computing the Gaussian and mean curvatures, an initial region-based segmentation is obtained in the form of a curvature sign map. Two additional initial edge-based segmentations are also computed from the partial derivatives and depth values, namely, jump and roof-edge maps. The three image maps are then combined to produce the final segmentation. Experimental results obtained for both synthetic and real range data of polyhedral and curved objects are given.
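The curvature-sign-map step reduces to two standard formulas once the partial derivatives of the surface fit are in hand. The sketch below takes the partials as given (the paper estimates them with a selective local biquadratic fit) and labels a pixel by the sign pair of Gaussian and mean curvature:

```python
def curvatures(zx, zy, zxx, zxy, zyy):
    """Gaussian (K) and mean (H) curvature of a graph surface z(x, y)
    from its first and second partial derivatives."""
    g = 1.0 + zx * zx + zy * zy
    K = (zxx * zyy - zxy * zxy) / (g * g)
    H = ((1 + zy * zy) * zxx - 2 * zx * zy * zxy + (1 + zx * zx) * zyy) / (2 * g ** 1.5)
    return K, H

def surface_label(K, H, eps=1e-9):
    """One entry of the curvature sign map (K > 0, H = 0 cannot occur)."""
    ks = 0 if abs(K) < eps else (1 if K > 0 else -1)
    hs = 0 if abs(H) < eps else (1 if H > 0 else -1)
    return {(1, -1): 'peak', (1, 1): 'pit', (-1, -1): 'saddle ridge',
            (-1, 1): 'saddle valley', (-1, 0): 'minimal', (0, -1): 'ridge',
            (0, 1): 'valley', (0, 0): 'flat'}[(ks, hs)]

# cap of z = -(x^2 + y^2)/2 at the origin: K > 0, H < 0 -> a peak
K, H = curvatures(0.0, 0.0, -1.0, 0.0, -1.0)
assert surface_label(K, H) == 'peak'
```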

••

TL;DR: It is shown that a quantity termed the directional divergence of the 2-D motion field can be used as a reliable indicator of the presence of obstacles in the visual field of an observer undergoing generalized rotational and translational motion.

Abstract: The use of certain measures of flow field divergence is investigated as a qualitative cue for obstacle avoidance during visual navigation. It is shown that a quantity termed the directional divergence of the 2-D motion field can be used as a reliable indicator of the presence of obstacles in the visual field of an observer undergoing generalized rotational and translational motion. The necessary measurements can be robustly obtained from real image sequences. Experimental results are presented showing that the system responds as expected to divergence in real-world image sequences, and the use of the system to navigate between obstacles is demonstrated.
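The underlying quantity is easy to compute from a sampled flow field. The sketch below shows the plain divergence by central differences (the paper's "directional divergence" is a particular directional variant of this measure); an approaching obstacle produces an expanding flow with positive divergence, while a pure rotation produces none:

```python
def divergence(U, V, i, j):
    """Central-difference divergence dU/dx + dV/dy of a sampled flow field.
    U, V are row-major grids; j indexes x (columns), i indexes y (rows)."""
    dudx = (U[i][j + 1] - U[i][j - 1]) / 2.0
    dvdy = (V[i + 1][j] - V[i - 1][j]) / 2.0
    return dudx + dvdy

# expansion field (looming obstacle): U = x, V = y  ->  divergence 2 everywhere
N = 5
U = [[float(j) for j in range(N)] for i in range(N)]
V = [[float(i) for j in range(N)] for i in range(N)]
assert divergence(U, V, 2, 2) == 2.0
```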

••

TL;DR: Results indicate that the history heuristic combined with transposition tables significantly outperforms other alpha-beta enhancements in application-generated game trees.

Abstract: Many enhancements to the alpha-beta algorithm have been proposed to help reduce the size of minimax trees. A recent enhancement, the history heuristic, which improves the order in which branches are considered at interior nodes is described. A comprehensive set of experiments is reported which tries all combinations of enhancements to determine which one yields the best performance. In contrast, previous work on assessing their performance has concentrated on the benefits of individual enhancements or a few combinations. The aim is to find the combination that provides the greatest reduction in tree size. Results indicate that the history heuristic combined with transposition tables significantly outperforms other alpha-beta enhancements in application-generated game trees. For trees up to depth 8, this combination accounts for 99% of the possible reductions in tree size, with the other enhancements yielding insignificant gains.
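The mechanism can be sketched over a toy nested-list game tree: whenever a move causes a cutoff, it is credited in a history table, and moves with higher credit are searched first at later nodes. Here "moves" are simply child indices, a hypothetical simplification; in a real engine the table is indexed by the move itself so the statistics transfer across positions:

```python
NEG_INF, POS_INF = float('-inf'), float('inf')

def alphabeta(node, alpha, beta, maximizing, history):
    """Alpha-beta over a nested-list game tree with the history heuristic:
    child indices that caused cutoffs before are tried first."""
    if not isinstance(node, list):          # leaf: the node is its value
        return node
    order = sorted(range(len(node)), key=lambda m: -history.get(m, 0))
    best = NEG_INF if maximizing else POS_INF
    for m in order:
        val = alphabeta(node[m], alpha, beta, not maximizing, history)
        if maximizing:
            best, alpha = max(best, val), max(alpha, val)
        else:
            best, beta = min(best, val), min(beta, val)
        if alpha >= beta:                   # cutoff: credit this move
            history[m] = history.get(m, 0) + 1
            break
    return best

tree = [[3, 5], [2, 9], [1, 4]]             # depth-2 tree, maximizer to move
assert alphabeta(tree, NEG_INF, POS_INF, True, {}) == 3
```

Move ordering never changes the minimax value; it only changes how many branches must be visited before cutoffs occur, which is exactly what the paper's experiments measure.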

••

TL;DR: The authors provide a complete method for describing and recognizing 3-D objects, using surface information, by taking as input dense range data and automatically producing a symbolic description of the objects in the scene in terms of their visible surface patches.

Abstract: The authors provide a complete method for describing and recognizing 3-D objects, using surface information. Their system takes as input dense range data and automatically produces a symbolic description of the objects in the scene in terms of their visible surface patches. This segmented representation may be viewed as a graph whose nodes capture information about the individual surface patches and whose links represent the relationships between them, such as occlusion and connectivity. On the basis of these relations, a graph for a given scene is decomposed into subgraphs corresponding to different objects. A model is represented by a set of such descriptions from multiple viewing angles, typically four to six. Models can therefore be acquired and represented automatically. Matching between the objects in a scene and the models is performed by three modules: the screener, in which the most likely candidate views for each object are found; the graph matcher, which compares the potential matching graphs and computes the 3-D transformation between them; and the analyzer, which takes a critical look at the results and proposes to split and merge object graphs.

••

Brown University

TL;DR: The restoration algorithm is a global-optimization algorithm applicable to other optimization problems, and generates iteratively a multilevel cascade of restored images corresponding to different levels of resolution, or scale.

Abstract: A method for studying problems in digital image processing, based on a combination of renormalization group ideas, the Markov random-field modeling of images, and Metropolis-type Monte Carlo algorithms, is presented. The method is efficiently implementable on parallel architectures, and provides a unifying procedure for performing a hierarchical, multiscale, coarse-to-fine analysis of image-processing tasks such as restoration, texture analysis, coding, motion analysis, etc. The method is formulated and applied to the restoration of degraded images. The restoration algorithm is a global-optimization algorithm applicable to other optimization problems. It generates iteratively a multilevel cascade of restored images corresponding to different levels of resolution, or scale. In the lower levels of the cascade appear the large-scale features of the image, and in the higher levels, the microscopic features of the image.

••

TL;DR: It is shown that zero-crossing edge detection algorithms can produce edges that do not correspond to significant image intensity changes, and it is seen that authentic edges are denser and stronger, on the average, than phantom edges.

Abstract: It is shown that zero-crossing edge detection algorithms can produce edges that do not correspond to significant image intensity changes. Such edges are called phantom or spurious. A method for classifying zero crossings as corresponding to authentic or phantom edges is presented. The contrast of an authentic edge is shown to increase and the contrast of phantom edges to decrease with a decrease in the filter scale. Thus, a phantom edge is truly a phantom in that the closer one examines it, the weaker it becomes. The results of applying the classification schemes described to synthetic and authentic signals in one and two dimensions are given. The significance of the phantom edges is examined with respect to their frequency and strength relative to the authentic edges, and it is seen that authentic edges are denser and stronger, on the average, than phantom edges. >
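The classification idea can be sketched in one dimension: find zero crossings of the second difference of a Gaussian-smoothed signal at a coarse and a fine scale, then compare edge contrast across scales. All function names, parameters, and the contrast measure below are illustrative, not the authors' implementation.

```python
import math

def gaussian_smooth(signal, sigma):
    """Convolve with a truncated Gaussian kernel (borders clamped)."""
    r = int(3 * sigma)
    kernel = [math.exp(-(k * k) / (2.0 * sigma * sigma)) for k in range(-r, r + 1)]
    norm = sum(kernel)
    n = len(signal)
    return [sum(kernel[k + r] * signal[min(max(i + k, 0), n - 1)]
                for k in range(-r, r + 1)) / norm
            for i in range(n)]

def edge_contrasts(signal, sigma):
    """Zero crossings of the second difference, mapped to local contrast."""
    g = gaussian_smooth(signal, sigma)
    d2 = [g[i + 1] - 2 * g[i] + g[i - 1] for i in range(1, len(g) - 1)]
    edges = {}
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:                     # sign change: zero crossing
            pos = i + 1                               # index into the smoothed signal
            edges[pos] = abs(g[pos + 1] - g[pos - 1])  # local edge contrast
    return edges

def is_authentic(signal, pos, fine=1.0, coarse=3.0, tol=2):
    """Authentic if contrast grows as the filter scale shrinks;
    phantom edges instead fade away under closer examination."""
    def near(edges):
        return max((c for p, c in edges.items() if abs(p - pos) <= tol), default=0.0)
    return near(edge_contrasts(signal, fine)) > near(edge_contrasts(signal, coarse))
```

A clean step edge illustrates the authentic case: its contrast increases as sigma decreases.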

••

TL;DR: Although the developed theory is algebraic, its prototype operations are well suited for shape analysis; hence, the results also apply to systems that extract information about the geometrical structure of signals.

Abstract: A unifying theory for many concepts and operations encountered in or related to morphological image and signal analysis is presented. The unification requires a set-theoretic methodology, where signals are modeled as sets, systems (signal transformations) are viewed as set mappings, and translation-invariant systems are uniquely characterized by special collections of input signals. This approach leads to a general representation theory, in which any translation-invariant, increasing, upper semicontinuous system can be represented exactly as a minimal nonlinear superposition of morphological erosions or dilations. The theory is used to analyze some special cases of image/signal analysis systems, such as morphological filters, median and order-statistic filters, linear filters, and shape recognition transforms. Although the developed theory is algebraic, its prototype operations are well suited for shape analysis; hence, the results also apply to systems that extract information about the geometrical structure of signals. >
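As a concrete instance of the prototype operations, flat erosion and dilation of a 1-D signal are just moving minima and maxima over a structuring element, and composing them gives an opening. This is a minimal sketch; `offsets` encodes a hypothetical flat structuring element as index offsets.

```python
def erode(f, offsets):
    """Flat erosion: pointwise minimum over the structuring-element window."""
    n = len(f)
    return [min(f[i + k] for k in offsets if 0 <= i + k < n) for i in range(n)]

def dilate(f, offsets):
    """Flat dilation: pointwise maximum over the reflected window."""
    n = len(f)
    return [max(f[i - k] for k in offsets if 0 <= i - k < n) for i in range(n)]

def opening(f, offsets):
    """Opening = erosion followed by dilation; an increasing,
    translation-invariant operation of the kind the theory characterizes."""
    return dilate(erode(f, offsets), offsets)
```

With a three-sample window, `opening([0, 0, 5, 0, 0, 3, 3, 3, 0], (-1, 0, 1))` suppresses the isolated spike but preserves the three-sample plateau, illustrating how these operations extract geometrical structure.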

••

TL;DR: A simple algorithm is presented for finding all the axes of symmetry of symmetric and almost symmetric planar images having nonuniform gray levels (intensity images) and is especially suited for application in conjunction with digitized figures.

Abstract: A simple algorithm is presented for finding all the axes of symmetry of symmetric and almost symmetric planar images having nonuniform gray levels (intensity images). The technique, which can also be used in conjunction with planar curves, is based on the identification of the centroids of the given image and other related sets of points, followed by a maximization of a specially defined coefficient of symmetry. Owing to the nature of the required procedures, which are strictly based on the computation of mean values, the method is not particularly affected by the presence of noise and is especially suited for application in conjunction with digitized figures. >
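The centroid-and-coefficient idea can be sketched as follows. The reflection-correlation coefficient below is a crude illustrative stand-in for the paper's specially defined coefficient of symmetry, using nearest-neighbor resampling.

```python
import math

def centroid(img):
    """Intensity-weighted centroid (x, y) of a gray-level image."""
    m = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m += v
            sx += v * x
            sy += v * y
    return sx / m, sy / m

def symmetry_coefficient(img, theta):
    """Correlate the image with its reflection about the axis through the
    centroid at angle theta; 1.0 means the discrete image is exactly
    symmetric about that axis (an illustrative stand-in coefficient)."""
    cx, cy = centroid(img)
    c, s = math.cos(theta), math.sin(theta)
    h, w = len(img), len(img[0])
    num = den = 0.0
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            along = dx * c + dy * s           # component along the axis
            perp = -dx * s + dy * c           # component across the axis
            rx = cx + along * c + perp * s    # reflected point
            ry = cy + along * s - perp * c
            ix, iy = int(round(rx)), int(round(ry))
            den += img[y][x] ** 2
            if 0 <= ix < w and 0 <= iy < h:
                num += img[y][x] * img[iy][ix]
    return num / den
```

An axis of symmetry is then found by maximizing the coefficient over a sampled set of angles; a symmetric cross scores 1.0 about both its vertical and its diagonal axes through the centroid.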

••

TL;DR: The authors show that a necessary and sufficient condition for a 3×3 matrix to be so decomposable is that one of its singular values is zero and the other two are equal.

Abstract: In the eight-point linear algorithm for determining 3-D motion/structure from two perspective views using point correspondences, the E matrix plays a central role. The E matrix is defined as a skew-symmetric matrix (containing the translation components) postmultiplied by a rotation matrix. The authors show that a necessary and sufficient condition for a 3×3 matrix to be so decomposable is that one of its singular values is zero and the other two are equal. Several other forms of this property are presented. Some applications are briefly described. >
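The condition is easy to check numerically: build E = [t]×R and inspect its singular values. A sketch using NumPy; the function names and tolerance are illustrative.

```python
import numpy as np

def essential_matrix(t, R):
    """E = [t]_x R: skew-symmetric cross-product matrix of the translation,
    postmultiplied by a rotation matrix."""
    tx, ty, tz = t
    t_cross = np.array([[0.0, -tz,  ty],
                        [ tz, 0.0, -tx],
                        [-ty,  tx, 0.0]])
    return t_cross @ R

def is_decomposable(M, tol=1e-9):
    """The paper's condition: the singular values of M are (s, s, 0)."""
    s = np.linalg.svd(M, compute_uv=False)   # sorted in decreasing order
    scale = max(s[0], 1.0)
    return abs(s[0] - s[1]) <= tol * scale and s[2] <= tol * scale
```

Any matrix of the form [t]×R passes (its nonzero singular values both equal the norm of t), while a generic matrix such as diag(3, 2, 1) fails.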

••

TL;DR: It is concluded that the deterministic algorithm (graduated nonconvexity) outstrips stochastic (simulated annealing) algorithms both in computational efficiency and in problem-solving power.

Abstract: Piecewise continuous reconstruction of real-valued data can be formulated in terms of nonconvex optimization problems. Both stochastic and deterministic algorithms have been devised to solve them. The simplest such reconstruction process is the weak string. Exact solutions can be obtained for it and are used to determine the success or failure of the algorithms under precisely controlled conditions. It is concluded that the deterministic algorithm (graduated nonconvexity) outstrips stochastic (simulated annealing) algorithms both in computational efficiency and in problem-solving power. >
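The weak-string model and the notion of an exact solution can be illustrated on a quantized grid: the energy is a data term plus a truncated-quadratic neighbor term that lets the string "break" at a discontinuity, and dynamic programming gives the exact minimizer over the chosen level set. This is a small-scale sketch, not the paper's graduated nonconvexity algorithm; `lam2` and `alpha` are illustrative parameters.

```python
def weak_string_dp(d, levels, lam2=1.0, alpha=0.4):
    """Exact minimizer, over the quantized level set `levels`, of
        E(u) = sum_i (u_i - d_i)^2 + sum_i min(lam2 * (u_{i+1} - u_i)^2, alpha),
    where min(..., alpha) is the truncated quadratic: paying alpha
    'breaks' the string and preserves a discontinuity."""
    n = len(d)
    cost = [(v - d[0]) ** 2 for v in levels]   # best cost ending at each level
    back = []
    for i in range(1, n):
        new_cost, arg = [], []
        for v in levels:
            best, best_j = float('inf'), 0
            for j, w in enumerate(levels):
                c = cost[j] + min(lam2 * (v - w) ** 2, alpha)
                if c < best:
                    best, best_j = c, j
            new_cost.append(best + (v - d[i]) ** 2)
            arg.append(best_j)
        cost = new_cost
        back.append(arg)
    # Backtrack the optimal reconstruction.
    k = min(range(len(levels)), key=cost.__getitem__)
    u = [levels[k]]
    for arg in reversed(back):
        k = arg[k]
        u.append(levels[k])
    return list(reversed(u))
```

On a clean step, with a small enough break penalty, the exact minimizer keeps the discontinuity rather than smoothing it; such exact solutions are what the comparison against stochastic and deterministic algorithms is measured against.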