
Showing papers on "Affine transformation published in 2008"


Proceedings ArticleDOI
07 Jun 2008
TL;DR: An automatic polyhedral source-to-source transformation framework that can optimize regular programs for parallelism and locality simultaneously and is implemented into a tool to automatically generate OpenMP parallel code from C program sections.
Abstract: We present the design and implementation of an automatic polyhedral source-to-source transformation framework that can optimize regular programs (sequences of possibly imperfectly nested loops) for parallelism and locality simultaneously. Through this work, we show the practicality of analytical model-driven automatic transformation in the polyhedral model -- far beyond what is possible by current production compilers. Unlike previous works, our approach is an end-to-end fully automatic one driven by an integer linear optimization framework that takes an explicit view of finding good ways of tiling for parallelism and locality using affine transformations. The framework has been implemented into a tool to automatically generate OpenMP parallel code from C program sections. Experimental results from the tool show very high speedups for local and parallel execution on multi-cores over state-of-the-art compiler frameworks from the research community as well as the best native production compilers. The system also enables the easy use of powerful empirical/iterative optimization for general arbitrarily nested loop sequences.
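
To make the flavor of such affine loop transformations concrete, here is a minimal, hand-written Python sketch of rectangular tiling applied to a small dependence-carrying loop nest; the tile size, array size, and stencil-style update are illustrative assumptions, not taken from the paper or its tool, which derives the transformation (and the OpenMP parallelization) automatically from the program's affine dependences.

```python
import numpy as np

def stencil_untiled(A):
    # Original loop nest: each point depends on its left and upper neighbors.
    N = A.shape[0]
    for i in range(1, N):
        for j in range(1, N):
            A[i, j] = 0.5 * (A[i - 1, j] + A[i, j - 1])
    return A

def stencil_tiled(A, T=32):
    # The same computation after rectangular tiling of the (i, j) iteration space.
    # A polyhedral framework derives legal tile shapes (possibly skewed) from the
    # affine dependences; here the dependence vectors (1,0) and (0,1) make plain
    # rectangles legal, so the two versions compute identical results.
    N = A.shape[0]
    for ii in range(1, N, T):
        for jj in range(1, N, T):
            for i in range(ii, min(ii + T, N)):
                for j in range(jj, min(jj + T, N)):
                    A[i, j] = 0.5 * (A[i - 1, j] + A[i, j - 1])
    return A

A = np.random.rand(128, 128)
print(np.allclose(stencil_untiled(A.copy()), stencil_tiled(A.copy())))  # True
```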

930 citations


Journal ArticleDOI
01 Aug 2008
TL;DR: 4PCS, a fast and robust alignment scheme for 3D point sets that uses wide bases, which are known to be resilient to noise and outliers, is introduced and an extension to handle similarity and affine transforms is proposed.
Abstract: We introduce 4PCS, a fast and robust alignment scheme for 3D point sets that uses wide bases, which are known to be resilient to noise and outliers. The algorithm allows registering raw noisy data, possibly contaminated with outliers, without pre-filtering or denoising the data. Further, the method significantly reduces the number of trials required to establish a reliable registration between the underlying surfaces in the presence of noise, without any assumptions about starting alignment. Our method is based on a novel technique to extract all coplanar 4-point sets from a 3D point set that are approximately congruent, under rigid transformation, to a given coplanar 4-point set. This extraction procedure runs in roughly O(n² + k) time, where n is the number of candidate points and k is the number of reported 4-point sets. In practice, when the noise level is low and there is sufficient overlap, using local descriptors the time complexity reduces to O(n + k). We also propose an extension to handle similarity and affine transforms. Our technique achieves an order of magnitude asymptotic acceleration compared to common randomized alignment techniques. We demonstrate the robustness of our algorithm on several sets of multiple range scans with varying degrees of noise, outliers, and extent of overlap.
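
The invariant underlying 4PCS is that, for a coplanar 4-point base whose two connecting segments intersect, the two intersection ratios are preserved by affine (and hence rigid) maps, so congruent bases can be matched by comparing ratios. Below is a minimal sketch of that computation only, with synthetic points and hypothetical names; it is an illustration of the invariant, not the authors' algorithm.

```python
import numpy as np

def intersection_ratios(a, b, c, d):
    """For a coplanar 4-point base whose segments (a,b) and (c,d) intersect at e,
    return r1 = |a-e|/|a-b| and r2 = |c-e|/|c-d|. Solves a + r1*(b-a) = c + r2*(d-c)
    in the least-squares sense (exact when the base is truly coplanar)."""
    M = np.column_stack([b - a, -(d - c)])          # 3 x 2 system
    r, *_ = np.linalg.lstsq(M, c - a, rcond=None)
    return r[0], r[1]

# Build a coplanar base whose two segments cross at a common point e ...
rng = np.random.default_rng(0)
e = rng.normal(size=3)
u, v = rng.normal(size=3), rng.normal(size=3)
a, b = e - 0.7 * u, e + 1.3 * u
c, d = e - 0.4 * v, e + 0.9 * v

# ... and check that an arbitrary (invertible) affine map leaves the ratios unchanged.
A, t = rng.normal(size=(3, 3)), rng.normal(size=3)
pts = np.stack([a, b, c, d]) @ A.T + t
print(np.allclose(intersection_ratios(a, b, c, d), intersection_ratios(*pts)))  # True
```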

581 citations


Journal ArticleDOI
TL;DR: The model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images shows better interphase and interpatient shape variability characterization than commonly used principal component analysis.
Abstract: Automatic image processing methods are a pre-requisite to efficiently analyze the large amount of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees-of-freedom of the allowed deformations. This improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image making use of a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation shows better interphase and interpatient shape variability characterization than commonly used principal component analysis.

435 citations


Journal ArticleDOI
TL;DR: In this article, the authors employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform, which is defined by SH_ψ f(a, s, t) = ⟨f, ψ_ast⟩, where the analyzing elements ψ_ast are dilated and translated copies of a single generating function ψ.
Abstract: It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a, s, t) = ⟨f, ψ_ast⟩, where the analyzing elements ψ_ast are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements {ψ_ast} form a system of smooth functions at continuous scales a > 0, locations t ∈ ℝ², and oriented along lines of slope s ∈ ℝ in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f.
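
For reference, the analyzing elements can be written explicitly in the standard parabolic-scaling/shear form; normalization conventions differ slightly between papers, so this is indicative rather than a quotation of the paper's formulas:

$$
\psi_{ast}(x) = a^{-3/4}\,\psi\!\left(M_{as}^{-1}(x - t)\right),
\qquad
M_{as} = B_s A_a =
\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},
$$

so that

$$
\mathcal{SH}_\psi f(a, s, t) = \langle f, \psi_{ast} \rangle,
\qquad a > 0,\; s \in \mathbb{R},\; t \in \mathbb{R}^2 .
$$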

271 citations


Journal ArticleDOI
TL;DR: In this article, a theory of affine flag varieties and Schubert varieties for reductive groups over a Laurent power series local field k((t)) with k a perfect field was developed.

271 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A robust subspace separation scheme that can deal with all of these practical issues in a unified framework and draw strong connections between lossy compression, rank minimization, and sparse representation is developed.
Abstract: We examine the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this motion segmentation problem can be cast as the problem of segmenting samples drawn from a union of linear subspaces. Due to limitations of the tracker, occlusions and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features or missing entries, or may not correspond to any valid motion model. In this paper, we develop a robust subspace separation scheme that can deal with all of these practical issues in a unified framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively and compare their performance to several extant methods with experiments on the Hopkins 155 database. Our results are on par with state-of-the-art results, and in many cases exceed them. All MATLAB code and segmentation results are publicly available for peer evaluation at http://perception.csl.uiuc.edu/coding/motion/.

259 citations


Journal ArticleDOI
TL;DR: This algorithm involves projecting all point trajectories onto a 5-dimensional subspace using the SVD, the PowerFactorization method, or RANSAC, and fitting multiple linear subspaces representing different rigid-body motions to the points in ℝ5 using GPCA.
Abstract: We consider the problem of segmenting multiple rigid-body motions from point correspondences in multiple affine views. We cast this problem as a subspace clustering problem in which point trajectories associated with each motion live in a linear subspace of dimension two, three or four. Our algorithm involves projecting all point trajectories onto a 5-dimensional subspace using the SVD, the PowerFactorization method, or RANSAC, and fitting multiple linear subspaces representing different rigid-body motions to the points in ℝ5 using GPCA. Unlike previous work, our approach does not restrict the motion subspaces to be four-dimensional and independent. Instead, it deals gracefully with the full spectrum of possible affine motions: from two-dimensional and partially dependent to four-dimensional and fully independent. Our algorithm can handle the case of missing data, meaning that point tracks do not have to be visible in all images, by using the PowerFactorization method to project the data. In addition, our method can handle outlying trajectories by using RANSAC to perform the projection. We compare our approach to other methods on a database of 167 motion sequences with full motions, independent motions, degenerate motions, partially dependent motions, missing data, outliers, etc. On motion sequences with complete data our method achieves a misclassification error of less than 5% for two motions and 29% for three motions.
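
As a rough illustration of the projection step alone (trajectories mapped to a 5-dimensional space via the SVD), the sketch below uses a synthetic trajectory matrix; the 2F x P layout and the helper name are assumptions for illustration, and the subsequent GPCA clustering into motions is not shown.

```python
import numpy as np

def project_trajectories(W, d=5):
    """Project point trajectories onto a d-dimensional subspace via the SVD.
    W is the 2F x P trajectory matrix (F frames, P tracked points); each column
    stacks the image coordinates of one point over all frames. The returned
    d x P matrix is what would subsequently be clustered into motions."""
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:d, :]

# Synthetic example: 10 frames (20 rows), 40 points, low-rank motion plus noise.
rng = np.random.default_rng(1)
W = rng.normal(size=(20, 4)) @ rng.normal(size=(4, 40)) + 0.01 * rng.normal(size=(20, 40))
print(project_trajectories(W).shape)   # (5, 40)
```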

253 citations


Book ChapterDOI
29 Mar 2008
TL;DR: This work proposes an automatic transformation framework to optimize arbitrarily-nested loop sequences with affine dependences for parallelism and locality simultaneously and finds good tiling hyperplanes by embedding a powerful and versatile cost function into an Integer Linear Programming formulation.
Abstract: The polyhedral model provides powerful abstractions to optimize loop nests with regular accesses. Affine transformations in this model capture a complex sequence of execution-reordering loop transformations that can improve performance by parallelization as well as locality enhancement. Although a significant body of research has addressed affine scheduling and partitioning, the problem of automatically finding good affine transforms for communication-optimized coarse-grained parallelization together with locality optimization for the general case of arbitrarily-nested loop sequences remains a challenging problem. We propose an automatic transformation framework to optimize arbitrarily-nested loop sequences with affine dependences for parallelism and locality simultaneously. The approach finds good tiling hyperplanes by embedding a powerful and versatile cost function into an Integer Linear Programming formulation. These tiling hyperplanes are used for communication-minimized coarse-grained parallelization as well as for locality optimization. The approach enables the minimization of inter-tile communication volume in the processor space, and minimization of reuse distances for local execution at each node. Programs requiring one-dimensional versus multi-dimensional time schedules (with scheduling-based approaches) are all handled with the same algorithm. Synchronization-free parallelism, permutable loops or pipelined parallelism at various levels can be detected. Preliminary studies of the framework show promising results.
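
Schematically, and in the notation common to this line of work, each statement S gets an affine hyperplane φ_S, legality requires non-negative dependence components along it, and the ILP minimizes an affine bound on the dependence distances (n is the vector of program parameters). This is a paraphrase of the formulation, not a quotation of the paper:

$$
\phi_{S_j}(\vec{t}\,) - \phi_{S_i}(\vec{s}\,) \;\ge\; 0
\qquad \text{for every dependence edge } \vec{s} \to \vec{t},
$$

$$
\delta_e = \phi_{S_j}(\vec{t}\,) - \phi_{S_i}(\vec{s}\,) \;\le\; \vec{u}\cdot\vec{n} + w,
\qquad \text{with } (\vec{u}, w) \text{ minimized lexicographically.}
$$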

231 citations


Journal ArticleDOI
TL;DR: It is shown that the MDs based on classical estimates are invariant to the family of logratio transformations, and that the MDs based on affine equivariant estimators of location and covariance are the same for the additive and isometric logratio transformations.
Abstract: Outlier detection based on the Mahalanobis distance (MD) requires an appropriate transformation in the case of compositional data. For the family of logratio transformations (additive, centered and isometric logratio transformation) it is shown that the MDs based on classical estimates are invariant to these transformations, and that the MDs based on affine equivariant estimators of location and covariance are the same for additive and isometric logratio transformation. Moreover, for 3-dimensional compositions the data structure can be visualized by contour lines. In higher dimensions the MDs of closed and opened data give an impression of the multivariate data behavior.
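
A hedged sketch of the kind of computation involved: compositions are mapped with the isometric logratio (ilr) transformation and screened with robust Mahalanobis distances. The use of scikit-learn's MinCovDet as the affine equivariant robust estimator and the synthetic Dirichlet sample are assumptions made for this illustration, not choices from the paper.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def ilr(X):
    """Isometric logratio transform of compositions (rows of X, all parts positive).
    Uses the standard balance basis ilr_i = sqrt(i/(i+1)) * log(gmean(x_1..x_i) / x_{i+1})."""
    X = np.asarray(X, dtype=float)
    n, D = X.shape
    L = np.log(X)
    Z = np.empty((n, D - 1))
    for i in range(1, D):
        gm = L[:, :i].mean(axis=1)                 # log geometric mean of the first i parts
        Z[:, i - 1] = np.sqrt(i / (i + 1.0)) * (gm - L[:, i])
    return Z

# Robust Mahalanobis distances in ilr coordinates; large values flag outliers.
rng = np.random.default_rng(0)
comp = rng.dirichlet([5, 3, 2, 1], size=200)       # synthetic 4-part compositions
Z = ilr(comp)
mcd = MinCovDet(random_state=0).fit(Z)
md2 = mcd.mahalanobis(Z)                           # squared robust MDs
outliers = md2 > chi2.ppf(0.975, df=Z.shape[1])
print(outliers.sum(), "potential outliers flagged")
```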

188 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A major advantage of this approach is that the segmentation energy is minimized directly without having to compute its gradient, which can be a cumbersome task and often relies on approximations.
Abstract: We present a new shape prior segmentation method using graph cuts capable of segmenting multiple objects. The shape prior energy is based on a shape distance popular with level set approaches. We also present a multiphase graph cut framework to simultaneously segment multiple, possibly overlapping objects. The multiphase formulation differs from multiway cuts in that the former can account for object overlaps by allowing a pixel to have multiple labels. We then extend the shape prior energy to encompass multiple shape priors. Unlike variational methods, a major advantage of our approach is that the segmentation energy is minimized directly without having to compute its gradient, which can be a cumbersome task and often relies on approximations. Experiments demonstrate that our algorithm can cope with image noise and clutter, as well as partial occlusions and affine transformations of the shape.

181 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the Lp affine surface area of a convex body K equals the affine surface area of the polar body K* for all p ∈ [−∞, 1].

Journal ArticleDOI
TL;DR: KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise and boosting performance, and it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least-squares, and regularization networks.
Abstract: The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least-squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computational complexity and performance. Several simulations illustrate its wide applicability.

Journal ArticleDOI
TL;DR: A reconstruction strategy is proposed for physiological motion correction, which overcomes many limitations of existing techniques and is suitable for cardiac or abdominal imaging, in the context of multiple coil, arbitrarily sampled acquisition.
Abstract: A reconstruction strategy is proposed for physiological motion correction, which overcomes many limitations of existing techniques. The method is based on a general framework allowing correction for arbitrary motion (nonrigid or affine), making it suitable for cardiac or abdominal imaging, in the context of multiple coil, arbitrarily sampled acquisition. A model is required to predict motion in the field of view at each sample time point, based on prior knowledge provided by external sensors. A theoretical study is carried out to analyze the influence of motion prediction errors. Small errors are shown to propagate linearly in that reconstruction algorithm, and thus induce a reconstruction residue that is bounded (stability). Furthermore, optimization of the motion model is proposed in order to minimize this residue. This leads to reformulating reconstruction as two inverse problems which are coupled: motion-compensated reconstruction (known motion) and model optimization (known image). A fixed-point multiresolution scheme is described for inverting these two coupled systems. This framework is shown to allow fully autocalibrated reconstructions, as coil sensitivities and motion model coefficients are determined directly from the corrupted raw data. The theory is validated with real cardiac and abdominal data from healthy volunteers, acquired in free-breathing. Magn Reson Med 60:146–157, 2008. © 2008 Wiley-Liss, Inc.

Journal ArticleDOI
TL;DR: This paper studies the statistical behavior of an affine combination of the outputs of two least-mean-square adaptive filters that simultaneously adapt using the same white Gaussian inputs to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD).
Abstract: This paper studies the statistical behavior of an affine combination of the outputs of two least mean-square (LMS) adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor λ(n) is restricted to the interval (0,1). The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the mean-square error (MSE). First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
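
A hedged sketch of the basic structure: two LMS filters with different step sizes run on the same data, and their outputs are combined affinely with a coefficient λ(n) adapted by a plain stochastic-gradient rule. The step sizes, filter length, and the simple λ update below are illustrative choices, not the practical schemes analyzed in the paper.

```python
import numpy as np

def affine_combined_lms(x, d, L=16, mu_fast=0.05, mu_slow=0.005, mu_lam=0.005):
    """Two LMS filters adapt on the same data; their outputs are combined as
    y = lam*y1 + (1-lam)*y2, with lam adapted by a plain gradient step on the
    combined error. Unlike a convex combination, lam is not confined to (0,1)."""
    w1, w2, lam = np.zeros(L), np.zeros(L), 0.5
    for n in range(L - 1, len(d)):
        u = x[n - L + 1:n + 1][::-1]          # regressor: x[n], x[n-1], ..., x[n-L+1]
        y1, y2 = w1 @ u, w2 @ u
        w1 += mu_fast * (d[n] - y1) * u       # fast filter: quick convergence
        w2 += mu_slow * (d[n] - y2) * u       # slow filter: low steady-state MSD
        e = d[n] - (lam * y1 + (1 - lam) * y2)
        lam += mu_lam * e * (y1 - y2)         # gradient step on the combination error
    return w1, w2, lam

# Identify an unknown FIR channel from noisy observations.
rng = np.random.default_rng(0)
h = rng.normal(size=16)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w1, w2, lam = affine_combined_lms(x, d)
print("final lambda:", round(lam, 3))
```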

Journal ArticleDOI
TL;DR: This paper presents an image watermarking scheme by the use of two statistical features (the histogram shape and the mean) in the Gaussian filtered low-frequency component of images that is mathematically invariant to scaling the size of images and robust to interpolation errors during geometric transformations, and common image processing operations.
Abstract: Watermark resistance to geometric attacks is an important issue in the image watermarking community. Most countermeasures proposed in the literature usually focus on the problem of global affine transforms such as rotation, scaling and translation (RST), but few are resistant to challenging cropping and random bending attacks (RBAs). The main reason is that in the existing watermarking algorithms, those exploited robust features are more or less related to the pixel position. In this paper, we present an image watermarking scheme by the use of two statistical features (the histogram shape and the mean) in the Gaussian filtered low-frequency component of images. The two features are: 1) mathematically invariant to scaling the size of images; 2) independent of the pixel position in the image plane; 3) statistically resistant to cropping; and 4) robust to interpolation errors during geometric transformations, and common image processing operations. As a result, the watermarking system provides a satisfactory performance for those content-preserving geometric deformations and image processing operations, including JPEG compression, lowpass filtering, cropping and RBAs.

Journal ArticleDOI
TL;DR: In this paper, the authors present an automatic polyhedral source-to-source transformation framework that can optimize regular programs (sequences of possibly imperfectly nested loops) for parallelism and locality.
Abstract: We present the design and implementation of an automatic polyhedral source-to-source transformation framework that can optimize regular programs (sequences of possibly imperfectly nested loops) for...

Journal ArticleDOI
TL;DR: This paper proposes a VSS-APA derived in the context of AEC that aims to recover the near-end signal within the error signal of the adaptive filter and is robust against near-end signal variations (including double-talk).
Abstract: The adaptive algorithms used for acoustic echo cancellation (AEC) have to provide (1) high convergence rates and good tracking capabilities, since the acoustic environments imply very long and time-variant echo paths, and (2) low misadjustment and robustness against background noise variations and double-talk. In this context, the affine projection algorithm (APA) and different versions of it are very attractive choices for AEC. However, an APA with a constant step-size parameter has to compromise between the performance criteria (1) and (2). Therefore, a variable step-size APA (VSS-APA) represents a more reliable solution. In this paper, we propose a VSS-APA derived in the context of AEC. Most of the APAs aim to cancel p (i.e., projection order) previous a posteriori errors at every step of the algorithm. The proposed VSS-APA aims to recover the near-end signal within the error signal of the adaptive filter. Consequently, it is robust against near-end signal variations (including double-talk). This algorithm does not require any a priori information about the acoustic environment, so that it is easy to control in practice. The simulation results indicate the good performance of the proposed algorithm as compared to other members of the APA family.
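
For orientation, here is a hedged sketch of the underlying (fixed step-size) affine projection update on an echo-cancellation-style identification task; the variable step-size rule proposed in the paper is not reproduced, and the signal model, filter length, and projection order are illustrative assumptions.

```python
import numpy as np

def apa_identify(x, d, L=64, p=4, mu=0.5, delta=1e-3):
    """Basic affine projection algorithm (APA) for system identification:
    w += mu * X (X^T X + delta*I)^{-1} e, where the columns of X are the p most
    recent regressors and e holds the corresponding a priori errors."""
    w = np.zeros(L)
    for n in range(L + p - 1, len(d)):
        X = np.column_stack([x[n - k - L + 1:n - k + 1][::-1] for k in range(p)])
        e = d[n - np.arange(p)] - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(p), e)
    return w

# Identify an echo-path-like FIR response from a correlated (AR(1)) input,
# the regime where APA's reuse of past regressors pays off over plain NLMS.
rng = np.random.default_rng(0)
h = rng.normal(size=64) * np.exp(-np.arange(64) / 16.0)
x = np.zeros(20000)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.normal()
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.normal(size=len(x))
w = apa_identify(x, d)
print("relative misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))
```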

Journal ArticleDOI
TL;DR: The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model.

Journal ArticleDOI
01 Aug 2008-Tellus A
TL;DR: In this article, an affine kernel dressing (AKD) approach is proposed that considers an ensemble as merely a source of information rather than a set of possible scenarios of reality; AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole, with a weight assigned to the unconditioned (climatological) distribution.
Abstract: The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real world ensembles. An alternative is to consider an ensemble as merely a source of information rather than the possible scenarios of reality. This approach, which looks for maps between ensembles and probabilistic distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.


Book ChapterDOI
20 Oct 2008
TL;DR: It is shown that, by exploiting algebraic dependencies among the entries of the projection matrices, one can upgrade the projective reconstruction to determine the affine configuration of the points in ℝ3, and the motion of the camera relative to their centroid.
Abstract: We present a closed form solution to the nonrigid shape and motion (NRSM) problem from point correspondences in multiple perspective uncalibrated views. Under the assumption that the nonrigid object deforms as a linear combination of K rigid shapes, we show that the NRSM problem can be viewed as a reconstruction problem from multiple projections from ℙ3K to ℙ2. Therefore, one can linearly solve for the projection matrices by factorizing a multifocal tensor. However, this projective reconstruction in ℙ3K does not satisfy the constraints of the NRSM problem, because it is computed only up to a projective transformation in ℙ3K. Our key contribution is to show that, by exploiting algebraic dependencies among the entries of the projection matrices, one can upgrade the projective reconstruction to determine the affine configuration of the points in ℝ3, and the motion of the camera relative to their centroid. Moreover, if K ≥ 2, then either by using calibrated cameras, or by assuming a camera with fixed internal parameters, it is possible to compute the Euclidean structure by a closed form method.

Journal ArticleDOI
TL;DR: The authors proposed a new representation of affine models in which the state vector comprises infinitesimal maturity yields and their quadratic covariations, which can be used for the estimation and interpretation of multifactor models.
Abstract: Building on Duffie and Kan (1996), we propose a new representation of affine models in which the state vector comprises infinitesimal maturity yields and their quadratic covariations. Because these variables possess unambiguous economic interpretations, they generate a representation that is globally identifiable. Further, this representation has more identifiable parameters than the "maximal" model of Dai and Singleton (2000). We implement this new representation for select three-factor models and find that model-independent estimates for the state vector can be obtained directly from yield curve data, which presents advantages for the estimation and interpretation of multifactor models.

Dissertation
01 Jan 2008
TL;DR: In this article, a method for rapid evaluation of flux-type outputs of interest from solutions to partial differential equations (PDEs) is presented within the reduced basis framework for linear, elliptic PDEs.
Abstract: A method for rapid evaluation of flux-type outputs of interest from solutions to partial differential equations (PDEs) is presented within the reduced basis framework for linear, elliptic PDEs. The central point is a Neumann-Dirichlet equivalence that allows for evaluation of the output through the bilinear form of the weak formulation of the PDE. Through a comprehensive example related to electrostatics, we consider multiple outputs, a posteriori error estimators and empirical interpolation treatment of the non-affine terms in the bilinear form. Together with the considered Neumann-Dirichlet equivalence, these methods allow for efficient and accurate numerical evaluation of a relationship μ → s(μ), where μ is a parameter vector that determines the geometry of the physical domain and s(μ) is the corresponding flux-type output of interest. As a practical application, we lastly employ the rapid evaluation of μ → s(μ) in solving an inverse (parameter-estimation) problem.

Journal ArticleDOI
TL;DR: This work proposes a distributed algorithm that is able to reconstruct reliably and efficiently the configurations of large protein molecules from a limited number of pairwise distances corrupted by noise, without incorporating domain knowledge such as the minimum separation distance constraints derived from van der Waals interactions.
Abstract: We propose a distributed algorithm for solving Euclidean metric realization problems arising from large 3-D graphs, using only noisy distance information and without any prior knowledge of the positions of any of the vertices. In our distributed algorithm, the graph is first subdivided into smaller subgraphs using intelligent clustering methods. Then a semidefinite programming relaxation and gradient search method are used to localize each subgraph. Finally, a stitching algorithm is used to find affine maps between adjacent clusters, and the positions of all points in a global coordinate system are then derived. In particular, we apply our method to the problem of finding the 3-D molecular configurations of proteins based on a limited number of given pairwise distances between atoms. The protein molecules, all with known molecular configurations, are taken from the Protein Data Bank. Our algorithm is able to reconstruct reliably and efficiently the configurations of large protein molecules from a limited number of pairwise distances corrupted by noise, without incorporating domain knowledge such as the minimum separation distance constraints derived from van der Waals interactions.
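
The final stitching step boils down to estimating an affine map between the local frames of adjacent clusters from the points they share; a minimal least-squares version of that sub-step (with synthetic data and hypothetical names, not the authors' code) is sketched below.

```python
import numpy as np

def fit_affine(P, Q):
    """Least-squares affine map (A, b) with Q ≈ P @ A.T + b, where the rows of
    P and Q are the coordinates of the same points in two local frames.
    Needs at least 4 non-degenerate shared points in 3-D."""
    Ph = np.hstack([P, np.ones((len(P), 1))])       # homogeneous coordinates
    M, *_ = np.linalg.lstsq(Ph, Q, rcond=None)      # 4 x 3 solution
    return M[:3].T, M[3]                            # A (3x3), b (3,)

# Two clusters localized in different local frames, sharing a few anchor atoms.
rng = np.random.default_rng(0)
shared = rng.normal(size=(6, 3))
A_true, b_true = rng.normal(size=(3, 3)), rng.normal(size=3)
P = shared                                           # coordinates in cluster 1's frame
Q = shared @ A_true.T + b_true + 1e-3 * rng.normal(size=(6, 3))   # cluster 2's frame
A, b = fit_affine(P, Q)
print(np.allclose(A, A_true, atol=1e-2), np.allclose(b, b_true, atol=1e-2))
```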

Proceedings ArticleDOI
Kenneth L. Clarkson1
09 Jun 2008
TL;DR: Here the case of random projection of smooth manifolds is considered, and a previous analysis is sharpened, reducing the dependence on such properties as the manifold's maximum curvature.
Abstract: The Johnson-Lindenstrauss random projection lemma gives a simple way to reduce the dimensionality of a set of points while approximately preserving their pairwise distances. The most direct application of the lemma applies to a finite set of points, but recent work has extended the technique to affine subspaces, curves, and general smooth manifolds. Here the case of random projection of smooth manifolds is considered, and a previous analysis is sharpened, reducing the dependence on such properties as the manifold's maximum curvature.
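
For the finite-point-set case that the lemma most directly covers, the mechanics are very simple; the sketch below uses a Gaussian projection matrix and an arbitrary target dimension, which is a plausible illustration rather than the lemma's precise k = O(log n / ε²) bound.

```python
import numpy as np

def random_projection(X, k, rng):
    """Map rows of X from d to k dimensions with a Gaussian random matrix,
    scaled so that squared Euclidean norms are preserved in expectation."""
    d = X.shape[1]
    R = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10000))      # 200 points in a high-dimensional space
Y = random_projection(X, k=1000, rng=rng)

# A pairwise distance before and after projection: the ratio is close to 1.
i, j = 3, 17
print(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))
```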

Journal ArticleDOI
TL;DR: In this paper, the authors prove global second derivative estimates for solutions of the Dirichlet problem for the Monge-Ampère equation when the inhomogeneous term is only assumed to be Hölder continuous, and establish the existence and uniqueness of globally smooth solutions to the second boundary value problem for the affine maximal surface equation and the affine mean curvature equation.
Abstract: In this paper, we prove global second derivative estimates for solutions of the Dirichlet problem for the Monge-Ampère equation when the inhomogeneous term is only assumed to be Hölder continuous. As a consequence of our approach, we also establish the existence and uniqueness of globally smooth solutions to the second boundary value problem for the affine maximal surface equation and affine mean curvature equation.

Journal ArticleDOI
TL;DR: In this paper, the authors derived an iterative algorithm to solve the multivariate total least squares (MTLS) problem by using the nonlinear Euler-Lagrange conditions.
Abstract: The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y−E Y = (X−E X ) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E Y and E X . Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
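
A compact sketch of the SVD-based "closed form" multivariate TLS solution, applied to a synthetic 2-D affine transformation estimation problem. Note that this naive setup also perturbs the constant column of ones, which is exactly one of the technical issues (fixing columns of the coefficient matrix) that the paper treats more carefully; the data and parameter values below are illustrative, not the Israeli cadastral case study.

```python
import numpy as np

def mtls(X, Y):
    """Closed-form multivariate total least squares via the SVD:
    estimate Xi in Y - E_Y = (X - E_X) @ Xi, perturbing both X and Y."""
    m = X.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12, V22 = V[:m, m:], V[m:, m:]
    return -V12 @ np.linalg.inv(V22)

# Estimate 2-D affine transformation parameters between two noisy coordinate sets.
rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, size=(50, 2))
Xi_true = np.array([[1.001, 0.002], [-0.002, 0.999], [12.0, -7.0]])   # [A; b] stacked
X = np.hstack([src + 0.01 * rng.normal(size=(50, 2)), np.ones((50, 1))])
Y = np.hstack([src, np.ones((50, 1))]) @ Xi_true + 0.01 * rng.normal(size=(50, 2))
print(mtls(X, Y).round(3))    # close to Xi_true
```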

Journal ArticleDOI
01 Sep 2008
TL;DR: A software system grounded on Differential Evolution to automatically register multiview and multitemporal images is designed, implemented, and tested on a set of 2D satellite images for two problems, i.e., mosaicking and detection of changes over time.
Abstract: A software system grounded on Differential Evolution to automatically register multiview and multitemporal images is designed, implemented and tested through a set of 2D satellite images on two problems, i.e., mosaicking and detection of changes over time. Registration is effected by looking for the best affine transformation in terms of maximization of the mutual information between the first image and the transformed second one, and no control points are needed in this approach. This method is compared against five widely available tools, and its effectiveness is shown.
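
A hedged sketch of the overall loop this describes (a global optimizer over affine parameters scored by mutual information), using SciPy's differential_evolution and a simplified rotation/scale/translation parameterization; the synthetic image, parameter bounds, and optimizer settings are illustrative assumptions, not the system from the paper.

```python
import numpy as np
from scipy.ndimage import affine_transform, rotate
from scipy.optimize import differential_evolution

def mutual_information(a, b, bins=32):
    # Mutual information of two equally sized images from their joint histogram.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def register(fixed, moving):
    # Search a simplified affine transform (rotation, isotropic scale, translation)
    # that maximizes mutual information between `fixed` and the warped `moving`.
    def cost(params):
        theta, s, tx, ty = params
        c, si = np.cos(theta), np.sin(theta)
        A = s * np.array([[c, -si], [si, c]])
        warped = affine_transform(moving, A, offset=(tx, ty), order=1)
        return -mutual_information(fixed, warped)
    bounds = [(-0.5, 0.5), (0.8, 1.2), (-10.0, 10.0), (-10.0, 10.0)]
    return differential_evolution(cost, bounds, seed=0, maxiter=30, polish=False)

# Toy example: the moving image is a rotated copy of a smooth synthetic image.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((xx - 40.0) ** 2 + (yy - 24.0) ** 2) / 200.0) \
      + 0.5 * np.exp(-((xx - 20.0) ** 2 + (yy - 44.0) ** 2) / 100.0)
moving = rotate(fixed, 8, reshape=False, order=1)
res = register(fixed, moving)
print(res.x, -res.fun)
```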

Journal ArticleDOI
TL;DR: Experiments with Gaussian noise demonstrate that the proposed algorithm robustly detects the target object under changes of translation, orientation, and scale, and is suitable for on-line template matching under scene translation, rotation, and scaling.

Journal ArticleDOI
TL;DR: An enhanced image-based fingerprint verification algorithm that reduces multi-spectral noise by enhancing a fingerprint image to accurately and reliably determine a reference point, and then aligns the image according to the position and orientation of the reference point to avoid time-consuming alignment.