
Showing papers on "Affine transformation published in 2004"


Journal ArticleDOI
TL;DR: A comparative evaluation of different detectors is presented and it is shown that the proposed approach for detecting interest points invariant to scale and affine transformations provides better results than existing methods.
Abstract: In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points.
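The scale-selection step described above lends itself to a short illustration: at a candidate Harris point, pick the scale at which the scale-normalized Laplacian is extremal. The sketch below is a minimal NumPy/SciPy approximation of that idea, not the authors' implementation; the function names, the sigma ladder and the Harris constant k are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def harris_response(image, sigma_d=1.0, sigma_i=2.0, k=0.04):
    """Harris measure built from the second moment matrix at a given
    differentiation/integration scale pair."""
    img = gaussian_filter(image.astype(float), sigma_d)
    Iy, Ix = np.gradient(img)
    # Components of the second moment matrix, smoothed at the integration scale.
    Sxx = gaussian_filter(Ix * Ix, sigma_i)
    Syy = gaussian_filter(Iy * Iy, sigma_i)
    Sxy = gaussian_filter(Ix * Iy, sigma_i)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def characteristic_scale(image, y, x, sigmas=(1.2, 1.7, 2.4, 3.4, 4.8)):
    """Sigma at which the scale-normalized Laplacian |sigma^2 * LoG| is
    largest at pixel (y, x) -- the selection rule sketched in the abstract,
    not the full iterative affine adaptation."""
    responses = [abs(s ** 2 * gaussian_laplace(image.astype(float), s)[y, x])
                 for s in sigmas]
    return sigmas[int(np.argmax(responses))]
```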

4,107 citations


Book ChapterDOI
25 Mar 2004
TL;DR: A Multi-Parametric Toolbox (MPT) for computing optimal or suboptimal feedback controllers for constrained linear and piecewise affine systems is under development at ETH.
Abstract: A Multi-Parametric Toolbox (MPT) for computing optimal or suboptimal feedback controllers for constrained linear and piecewise affine systems is under development at ETH. The toolbox offers a broad spectrum of algorithms compiled in a user friendly and accessible format: starting from different performance objectives (linear, quadratic, minimum time) to the handling of systems with persistent additive disturbances and polytopic uncertainties. The algorithms included in the toolbox are a collection of results from recent publications in the field of constrained optimal control of linear and piecewise affine systems [10,13,4,9,16,17,15,14,7].

999 citations


Journal ArticleDOI
TL;DR: In this article, a micro-mechanically based network model for the description of the elastic response of rubbery polymers at large strains and details of its numerical implementation are presented.
Abstract: The contribution presents a new micro-mechanically based network model for the description of the elastic response of rubbery polymers at large strains and considers details of its numerical implementation. The approach models a rubber-like material based on a micro-structure that can be symbolized by a micro-sphere where the surface represents a continuous distribution of chain orientations in space. Core of the model is a new two-dimensional constitutive setting of the micro-mechanical response of a single polymer chain in a constrained environment defined by two micro-kinematic variables: the stretch of the chain and the contraction of the cross section of a micro-tube that contains the chain. The second key feature is a new non-affine micro-to-macro transition that defines the three-dimensional overall response of the polymer network based on a characteristic homogenization procedure of micro-variables defined on the micro-sphere of space orientations. It determines a stretch fluctuation field on the micro-sphere by a principle of minimum averaged free energy and links the two micro-kinematic variables in a non-affine format to the line-stretch and the area-stretch of the macro-continuum. Hence, the new model describes two superimposed contributions resulting from free chain motions and their topological constraints in an attractive dual geometric structure on both the micro- and the macro-level. Averaging operations on the micro-sphere are directly evaluated by an efficient numerical integration scheme. The overall model contains five effective material parameters obtained from the single chain statistics and properties of the network with clearly identifiable relationships to characteristic phenomena observed in stress–strain experiments. The approach advances features of the affine full network and the eight chain models by a substantial improvement of their modeling capacity. The excellent predictive performance is illustrated by comparative studies with previously developed network models and by fitting of various available experimental data of homogeneous and non-homogeneous tests.
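The homogenization over the micro-sphere can be illustrated compactly. The sketch below is a heavily simplified stand-in, not the five-parameter model of the paper: it only averages line stretches over a discretized unit sphere with a p-root average, the kind of non-affine micro-to-macro transition the approach builds on (the tube constraint and the fluctuation field are omitted, and the p value is an illustrative knob).

```python
import numpy as np

def fibonacci_sphere(n=200):
    """Roughly uniform unit directions on the sphere (integration points)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)        # polar angle
    theta = np.pi * (1.0 + 5 ** 0.5) * i      # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def macro_stretch(F, p=2.0, n=200):
    """p-root average of line stretches |F r| over the micro-sphere.
    p = 2 approximately reproduces the affine eight-chain-type average;
    other p values mimic a non-affine weighting (illustrative only)."""
    r = fibonacci_sphere(n)
    lam = np.linalg.norm(r @ F.T, axis=1)     # micro-stretch per direction
    return np.mean(lam ** p) ** (1.0 / p)

# Usage example: incompressible uniaxial stretch of 2.0.
lam1 = 2.0
F = np.diag([lam1, lam1 ** -0.5, lam1 ** -0.5])
print(macro_stretch(F, p=2.0))
```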

464 citations


Journal ArticleDOI
TL;DR: This paper provides algorithms, based on mixed-integer linear or quadratic programming, which are guaranteed to converge to a global optimum for hybrid dynamical systems, and suggests a way of trading off between optimality and complexity by using a change detection approach.

384 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a supply function equilibrium (SFE) model of interaction in an electricity market and consider a competitive fringe and several strategic players having capacity limits and affine marginal costs.
Abstract: We consider a supply function equilibrium (SFE) model of interaction in an electricity market. We assume a linear demand function and consider a competitive fringe and several strategic players having capacity limits and affine marginal costs. The choice of SFE over Cournot equilibrium and other models and the choice of affine marginal costs is reviewed in the context of the existing literature. We assume that bid rules allow affine or piecewise affine non-decreasing supply functions by firms and extend results of Green and Rudkevitch concerning the linear SFE solution. An incentive compatibility result is proved. We also find that a piecewise affine SFE can be found easily for the case where there are non-negativity limits on generation. Upper capacity limits, however, pose problems and we propose an ad hoc approach. We apply the analysis to the England and Wales electricity market, considering the 1996 and 1999 divestitures. The piecewise affine SFE solutions generally provide better matches to the empirical data than previous analysis.

344 citations


Journal ArticleDOI
TL;DR: This paper proposes a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection.
Abstract: In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.
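The recognition pipeline in the last sentences (local intensity normalization, Gabor features, LDA) can be sketched with standard tools. The snippet below is an illustrative approximation using scikit-image and scikit-learn, not the authors' code; the filter-bank parameters and the pooled statistics are assumptions, and the `patches`/`labels` names in the usage comment are hypothetical.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def local_intensity_normalize(patch, eps=1e-6):
    """Zero-mean, unit-variance normalization of a character patch; a simple
    stand-in for the paper's local lighting normalization."""
    patch = patch.astype(float)
    return (patch - patch.mean()) / (patch.std() + eps)

def gabor_features(patch, frequencies=(0.2, 0.4), n_orient=4):
    """Concatenate mean response magnitudes over a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.hypot(real, imag).mean())
    return np.array(feats)

# Hypothetical usage: `patches` are normalized character images and `labels`
# their character classes; LDA then picks a discriminative low-dimensional
# feature space, as in the recognition stage described above.
# X = np.vstack([gabor_features(local_intensity_normalize(p)) for p in patches])
# lda = LinearDiscriminantAnalysis().fit(X, labels)
```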

305 citations


Proceedings ArticleDOI
07 Sep 2004
TL;DR: This paper proposes a new approach for finding expressive and geometrically invariant parts for modeling 3D objects that remain approximately affinely rigid across a range of views of an object, and across multiple instances of the same object class.
Abstract: This paper proposes a new approach for finding expressive and geometrically invariant parts for modeling 3D objects. The approach relies on identifying groups of local affine regions (image features having a characteristic appearance and elliptical shape) that remain approximately affinely rigid across a range of views of an object, and across multiple instances of the same object class. These groups, termed semi-local affine parts, are learned using correspondence search between pairs of unsegmented and cluttered input images, followed by validation against additional training images. The proposed approach is applied to the recognition of butterflies in natural imagery.

287 citations


Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work considers the problem of segmenting multiple rigid motions from point correspondences in multiple affine views as a subspace clustering problem, and involves projecting the point trajectories of all the points into 5-dimensional space, using the PowerFactorization method to fill in missing data.
Abstract: We consider the problem of segmenting multiple rigid motions from point correspondences in multiple affine views. We cast this problem as a subspace clustering problem in which the motion of each object lives in a subspace of dimension two, three or four. Unlike previous work, we do not restrict the motion subspaces to be four-dimensional or linearly independent. Instead, our approach deals gracefully with the full spectrum of possible affine motions: from two-dimensional and partially dependent to four-dimensional and fully independent. In addition, our method handles the case of missing data, meaning that point tracks do not have to be visible in all images. Our approach involves projecting the point trajectories of all the points into a 5-dimensional space, using the PowerFactorization method to fill in missing data. Then multiple linear subspaces representing independent motions are fitted to the points in R5 using GPCA. We test our algorithm on various real sequences with degenerate and nondegenerate motions, missing data, perspective effects, transparent motions, etc. Our algorithm achieves a misclassification error of less than 5% for sequences with up to 30% of missing data points.
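A minimal sketch of the projection step: with complete data, a truncated SVD of the 2F x P trajectory matrix replaces PowerFactorization, and a k-means step stands in for GPCA. Everything below is a simplified placeholder rather than the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def project_trajectories(W, d=5):
    """W is the 2F x P measurement matrix of P point tracks over F frames.
    With complete data a truncated SVD performs the projection; the paper
    uses PowerFactorization so that missing entries can be handled too."""
    Wc = W - W.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(Wc, full_matrices=False)
    return Vt[:d].T                      # one d-dimensional point per trajectory

def segment_motions(W, n_motions, d=5):
    """Cluster the projected trajectories; k-means is only a placeholder for
    the GPCA subspace-clustering step used in the paper."""
    X = project_trajectories(W, d)
    return KMeans(n_clusters=n_motions, n_init=10).fit_predict(X)
```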

219 citations


Book ChapterDOI
11 May 2004
TL;DR: In this article, a generative model for shape matching and recognition based on a model for how one shape can be generated by the other is presented. And the matching process is formulated in the EM algorithm to have a fast algorithm and avoid local minima.
Abstract: We present an algorithm for shape matching and recognition based on a generative model for how one shape can be generated by the other. This generative model allows for a class of transformations, such as affine and non-rigid transformations, and induces a similarity measure between shapes. The matching process is formulated in the EM algorithm. To have a fast algorithm and avoid local minima, we show how the EM algorithm can be approximated by using informative features, which have two key properties: they are invariant and representative. They are also similar to the proposal probabilities used in DDMCMC [13]. The formulation allows us to know when and why approximations can be made and justifies the use of bottom-up features, which are used in a wide range of vision problems. This integrates generative models and feature-based approaches within the EM framework and helps clarify the relationships between different algorithms for this problem, such as shape contexts [3] and softassign [5]. We test the algorithm on a variety of data sets including MPEG7 CE-Shape-1, Kimia silhouettes, and real images of street scenes. We demonstrate very effective performance and compare our results with existing algorithms. Finally, we briefly illustrate how our approach can be generalized to a wider range of problems including object detection.
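The EM matching loop can be caricatured in a few lines: soft correspondences in the E-step, a closed-form affine update in the M-step. This is only a bare-bones affine variant under an isotropic Gaussian noise assumption, without the informative-feature approximation the chapter develops; the fixed variance and all names are illustrative.

```python
import numpy as np

def em_affine_match(X, Y, n_iter=50, sigma2=0.1):
    """Align model points X (n x 2) to data points Y (m x 2) with an affine
    map y ~ A x + t, using soft EM correspondences."""
    n = X.shape[0]
    A, t = np.eye(2), np.zeros(2)
    Xh = np.hstack([X, np.ones((n, 1))])          # homogeneous model points
    for _ in range(n_iter):
        TX = X @ A.T + t                           # transformed model points
        d2 = ((TX[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma2))
        P /= P.sum(axis=1, keepdims=True) + 1e-12  # E-step: soft assignments
        Yb = P @ Y                                 # expected target per model point
        # M-step: least-squares update of the affine parameters.
        M, _, _, _ = np.linalg.lstsq(Xh, Yb, rcond=None)
        A, t = M[:2].T, M[2]
    return A, t
```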

216 citations


Journal ArticleDOI
TL;DR: Given an affine system on a full-dimensional polytope, the problem of reaching a particular facet of the polytope using continuous piecewise-affine state feedback is studied, and a constructive procedure yields an affine feedback control law that solves the reachability problem under consideration.

202 citations


Book ChapterDOI
11 May 2004
TL;DR: This work develops the theory and an algorithm for a generic calibration concept that allows, in principle, cameras of any of the types contained in the general imaging model to be calibrated using one and the same algorithm.
Abstract: We present a theory and algorithms for a generic calibration concept that is based on the following recently introduced general imaging model. An image is considered as a collection of pixels, and each pixel measures the light travelling along a (half-) ray in 3-space associated with that pixel. Calibration is the determination, in some common coordinate system, of the coordinates of all pixels' rays. This model encompasses most projection models used in computer vision or photogrammetry, including perspective and affine models, optical distortion models, stereo systems, or catadioptric systems - central (single viewpoint) as well as non-central ones. We propose a concept for calibrating this general imaging model, based on several views of objects with known structure, but which are acquired from unknown viewpoints. It allows, in principle, cameras of any of the types contained in the general imaging model to be calibrated using one and the same algorithm. We first develop the theory and an algorithm for the most general case: a non-central camera that observes 3D calibration objects. This is then specialized to the case of central cameras and to the use of planar calibration objects. The validity of the concept is shown by experiments with synthetic and real data.

Journal ArticleDOI
TL;DR: In this paper, Dai and Singleton show that there is a tension in affine term structure models between matching the mean and the volatility of interest rates and examine whether this tension can be solved by an alternative parametrization of the price of risk.
Abstract: Dai and Singleton (2002) and Duffee (2002) show that there is a tension in affine term structure models between matching the mean and the volatility of interest rates. This article examines whether this tension can be solved by an alternative parametrization of the price of risk. The empirical evidence suggests that, first, the examined parametrization is not sufficient to solve the mean-volatility tension. Second, the usual result in the estimation of affine models, indicating that some of the state variables are extremely persistent, may have been caused by the lack of flexibility in the parametrization of the price of risk. Term structure models have several uses, including pricing fixed-income derivatives, managing the risk of fixed-income portfolios, and detecting relationships between the term structure of interest rates and macrovariables such as inflation and consumption. To perform well in these tasks, term structure models must be numerically and econometrically tractable while matching the empirical properties of the term structure movements. At least two empirical properties of the term structure of interest rates have been well established by financial economists over the years [see Dai and Singleton (2003) for a survey]. First, the term premium, or the expected excess return of Treasury bonds, has a high time variability. Second, the volatility of interest rates is time varying. These two properties are so prominent in the data that they will be referred to as stylized facts. While these two stylized facts are very well established in the empirical literature, affine term structure models are thoroughly discussed in the theoretical literature. Affine models are those in which the yields of zero-coupon bonds are affine functions of the model state variables. Classic
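The defining property of the model class, yields that are affine functions of the state, is easiest to see in the one-factor Gaussian case. A minimal sketch using the textbook Vasicek solution (parameters are illustrative, and nothing here reflects the article's price-of-risk parametrization):

```python
import numpy as np

def vasicek_yield(r, tau, kappa=0.2, theta=0.05, sigma=0.01):
    """Zero-coupon yield in the one-factor Vasicek model, the simplest affine
    term structure model: y(tau) = (B(tau) * r - A(tau)) / tau, i.e. yields
    are affine functions of the short rate r. Parameters are illustrative."""
    B = (1.0 - np.exp(-kappa * tau)) / kappa
    A = (theta - sigma ** 2 / (2 * kappa ** 2)) * (B - tau) \
        - sigma ** 2 * B ** 2 / (4 * kappa)
    return (B * r - A) / tau

# Usage example: the whole curve follows from one state variable.
taus = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(vasicek_yield(r=0.03, tau=taus))
```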

Proceedings ArticleDOI
01 Jan 2004
TL;DR: In this paper, the authors apply linear algebra techniques to precise interprocedural dataflow analysis, and describe analyses that determine for each program point identities that are valid among the program variables whenever control reaches that program point.
Abstract: We apply linear algebra techniques to precise interprocedural dataflow analysis. Specifically, we describe analyses that determine for each program point identities that are valid among the program variables whenever control reaches that program point. Our analyses fully interpret assignment statements with affine expressions on the right hand side while considering other assignments as non-deterministic and ignoring conditions at branches. Under this abstraction, the analysis computes the set of all affine relations and, more generally, all polynomial relations of bounded degree precisely. The running time of our algorithms is linear in the program size and polynomial in the number of occurring variables. We also show how to deal with affine preconditions and local variables and indicate how to handle parameters and return values of procedures.
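The underlying abstraction, an affine assignment viewed as a linear map on an extended variable vector (x_0, ..., x_{n-1}, 1), can be made concrete in a toy example. This is only an illustration of how affine assignments compose along a program path, not the authors' algorithm, which computes the set of all valid affine (and polynomial) relations per program point.

```python
import numpy as np

def affine_assignment(n_vars, target, coeffs, const):
    """Matrix of the assignment x[target] := sum coeffs[i]*x[i] + const,
    acting on the extended vector (x_0, ..., x_{n-1}, 1)."""
    M = np.eye(n_vars + 1)
    M[target, :] = 0.0
    for i, c in coeffs.items():
        M[target, i] = c
    M[target, n_vars] = const
    return M

# Effect of running  x1 := 2*x0 + 1;  x0 := x0 + x1  on (x0, x1, 1):
s1 = affine_assignment(2, target=1, coeffs={0: 2}, const=1)
s2 = affine_assignment(2, target=0, coeffs={0: 1, 1: 1}, const=0)
path = s2 @ s1                              # composition along the path
print(path @ np.array([3.0, 0.0, 1.0]))     # -> [10., 7., 1.]
```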

Journal ArticleDOI
TL;DR: A careful analysis of elliptic curve point multiplication methods that use the point halving technique of Knudsen and Schroeppel is presented and an algorithm of Knuth is adapted to allow efficient use of projective coordinates with halving-based windowing methods for point multiplication.
Abstract: We present a careful analysis of elliptic curve point multiplication methods that use the point halving technique of Knudsen and Schroeppel and compare these methods to traditional algorithms that use point doubling. The performance advantage of halving methods is clearest in the case of point multiplication kP, where P is not known in advance and smaller field inversion to multiplication ratios generally favor halving. Although halving essentially operates on affine coordinate representations, we adapt an algorithm of Knuth to allow efficient use of projective coordinates with halving-based windowing methods for point multiplication.

Journal ArticleDOI
TL;DR: A new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations, and addresses other issues such as pairwise independence, database search efficiency and key dependence of the proposed method.
Abstract: With the ever-increasing use of multimedia contents through electronic commerce and on-line services, the problems associated with the protection of intellectual property, management of large database and indexation of content are becoming more prominent. Watermarking has been considered as efficient means to these problems. Although watermarking is a powerful tool, there are some issues with the use of it, such as the modification of the content and its security. With respect to this, identifying content itself based on its own features rather than watermarking can be an alternative solution to these problems. The aim of fingerprinting is to provide fast and reliable methods for content identification. In this paper, we present a new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations. Since it is quite easy with modern computers to apply affine transformations to audio, image and video, there is an obvious necessity for affine transformation resilient fingerprinting. Experimental results show that the proposed fingerprints are highly robust against most signal processing transformations. Besides robustness, we also address other issues such as pairwise independence, database search efficiency and key dependence of the proposed method.
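A toy version of a Radon-based fingerprint helps fix ideas: project the image at a set of angles, reduce each projection to a couple of normalized statistics, and binarize. The feature choice and the thresholding below are assumptions for illustration, not the paper's construction, and scikit-image's radon stands in for whatever transform implementation is used.

```python
import numpy as np
from skimage.transform import radon

def radon_fingerprint(image, n_angles=32):
    """Coarse, illustrative fingerprint built on the Radon transform.
    Each projection is energy-normalized and summarized by two statistics;
    the resulting feature vector is binarized against its median."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(image.astype(float), theta=angles, circle=False)  # columns = projections
    feats = []
    for col in sino.T:
        col = col / (np.abs(col).sum() + 1e-12)   # normalization tempers scaling effects
        feats.extend([col.max(), col.std()])
    f = np.array(feats)
    return (f > np.median(f)).astype(np.uint8)    # crude binary fingerprint
```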

Proceedings ArticleDOI
19 Jul 2004
TL;DR: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space, and argues that a variant of it, the uniform L/sup 2/-reconstruction error norm, is the right one for tracking.
Abstract: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space. The tracker adaptively updates this subspace while tracking by finding a linear subspace that best approximates the observations made in the previous frames. Instead of the traditional L/sup 2/-reconstruction error norm which leads to subspace estimation using PCA or SVD, we argue that a variant of it, the uniform L/sup 2/-reconstruction error norm, is the right one for tracking. Under this framework we provide a simple and a computationally inexpensive algorithm for finding a subspace whose uniform L/sup 2/-reconstruction error norm for a given collection of data samples is below some threshold, and a simple tracking algorithm is an immediate consequence. We show experimental results on a variety of image sequences of people and man-made objects moving under challenging imaging conditions, which include drastic illumination variation, partial occlusion and extreme pose variation.
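The distinction between the usual mean reconstruction error (which PCA minimizes) and the uniform, worst-case error argued for above can be spelled out directly. The sketch only compares the two criteria for a fixed PCA subspace; it does not implement the paper's subspace search or the tracking loop.

```python
import numpy as np

def reconstruction_errors(X, d):
    """X: n_samples x n_pixels observations; d: subspace dimension.
    Returns (mean, max) of the L2 reconstruction error over the samples for
    the d-dimensional PCA subspace. PCA minimizes the mean; the tracker in
    the paper instead asks for a subspace that controls the max."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:d]                                   # principal directions
    resid = Xc - (Xc @ basis.T) @ basis              # residual outside the subspace
    err = np.linalg.norm(resid, axis=1)
    return err.mean(), err.max()
```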

Journal ArticleDOI
TL;DR: A new non-linear registration model based on a curvature type smoother is introduced, within the variational framework, and it is shown that affine linear transformations belong to the kernel of this regularizer.
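The kernel property mentioned in the TL;DR is easy to check numerically: a curvature-type regularizer penalizes second derivatives of the displacement, and an affine displacement field has none. A quick finite-difference sketch (the grid, coefficients and discretization are illustrative, not the paper's):

```python
import numpy as np
from scipy.ndimage import laplace

# Affine displacement field u(x, y) = A @ (x, y) + b sampled on a regular grid.
ny, nx = 64, 64
y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
A = np.array([[0.02, -0.01], [0.03, 0.015]])
b = np.array([1.5, -0.7])
u1 = A[0, 0] * x + A[0, 1] * y + b[0]
u2 = A[1, 0] * x + A[1, 1] * y + b[1]

# A curvature-type regularizer integrates squared second derivatives of the
# displacement; for an affine field the Laplacian vanishes at interior points
# (up to floating point), so the field carries zero regularization energy.
print(np.abs(laplace(u1))[1:-1, 1:-1].max(),
      np.abs(laplace(u2))[1:-1, 1:-1].max())
```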

Journal ArticleDOI
TL;DR: The surface body is a generalization of the floating body and its relation to the p-affine surface area is studied in this article, where it is shown that the surface body can be decomposed into two parts.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A novel approach to point matching under large viewpoint and illumination changes is proposed that is suitable for accurate object pose estimation at a much lower computational cost than state-of-the-art methods, and is shown to be both reliable and suitable for initializing real-time applications.
Abstract: We propose a novel approach to point matching under large viewpoint and illumination changes that are suitable for accurate object pose estimation at a much lower computational cost than state-of-the-art methods. Most of these methods rely either on using ad hoc local descriptors or on estimating local affine deformations. By contrast, we treat wide baseline matching of key points as a classification problem, in which each class corresponds to the set of all possible views of such a point. Given one or more images of a target object, we train the system by synthesizing a large number of views of individual key points and by using statistical classification tools to produce a compact description of this view set. At run-time, we rely on this description to decide to which class, if any, an observed feature belongs. This formulation allows us to use a classification method to reduce matching error rates, and to move some of the computational burden from matching to training, which can be performed beforehand. In the context of pose estimation, we present experimental results for both planar and non-planar objects in the presence of occlusions, illumination changes, and cluttered backgrounds. We show that the method is both reliable and suitable for initializing real-time applications.
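The training idea, synthesizing many affine views of each keypoint patch and learning a classifier over the resulting view set, is easy to sketch. The warp ranges and the noise model below are arbitrary illustrative choices, and SciPy's resampling stands in for the authors' view-synthesis pipeline.

```python
import numpy as np
from scipy.ndimage import affine_transform

def random_affine_views(patch, n_views=100, rng=None):
    """Generate synthetic views of a keypoint patch under random affine warps
    (rotation, anisotropic scale, shear) plus noise, as training samples for
    a per-keypoint classifier."""
    rng = np.random.default_rng(rng)
    h, w = patch.shape
    center = np.array([h / 2.0, w / 2.0])
    views = []
    for _ in range(n_views):
        angle = rng.uniform(-np.pi, np.pi)
        scale = rng.uniform(0.7, 1.3, size=2)
        shear = rng.uniform(-0.2, 0.2)
        R = np.array([[np.cos(angle), -np.sin(angle)],
                      [np.sin(angle),  np.cos(angle)]])
        A = R @ np.diag(scale) @ np.array([[1.0, shear], [0.0, 1.0]])
        # affine_transform maps output coordinates to input coordinates,
        # so warp about the patch center via the offset term.
        offset = center - A @ center
        warped = affine_transform(patch.astype(float), A, offset=offset, order=1)
        warped += rng.normal(0.0, 2.0, size=patch.shape)   # sensor-noise stand-in
        views.append(warped)
    return np.stack(views)
```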

Book
01 Oct 2004
TL;DR: In this article, the authors combine an orientation to credit risk modeling with an introduction to affine Markov processes, which are particularly useful for financial modeling, and emphasize corporate credit risk and the pricing of credit derivatives.
Abstract: This article combines an orientation to credit risk modeling with an introduction to affine Markov processes, which are particularly useful for financial modeling. We emphasize corporate credit risk and the pricing of credit derivatives. Applications of affine processes that are mentioned include survival analysis, dynamic term-structure models, and option pricing with stochastic volatility and jumps. The default-risk applications include default correlation, particularly in first-to-default settings. The reader is assumed to have some background in financial modeling and stochastic calculus.

01 Dec 2004
TL;DR: Since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems.
Abstract: We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. Our GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D affine subspaces are a precise definition of a projected image of a 3D scene. The GLC model also provides an intuitive physical interpretation, which can be used to characterize real imaging systems. Finally, since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems.
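The core construction, rays obtained as affine combinations of three generator rays in a two-plane parametrization, fits in a few lines. The (u, v, s, t) convention below is the usual light-field one and is meant as an illustrative reading of the model, not a reproduction of the paper's full camera taxonomy.

```python
import numpy as np

def glc_ray(generators, alpha, beta):
    """generators: three rays, each given as (u, v, s, t) - the intersections
    with the planes z = 0 and z = 1. Any GLC ray is the affine combination
    alpha*r1 + beta*r2 + (1 - alpha - beta)*r3 of the three generators."""
    r1, r2, r3 = (np.asarray(g, dtype=float) for g in generators)
    return alpha * r1 + beta * r2 + (1.0 - alpha - beta) * r3

# Three generator rays; sweeping (alpha, beta) enumerates the 2D affine
# subspace of rays that defines one particular linear camera.
gens = [(0, 0, 0.1, 0.0), (1, 0, 1.0, 0.2), (0, 1, -0.1, 1.1)]
print(glc_ray(gens, alpha=0.3, beta=0.5))
```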

Journal ArticleDOI
TL;DR: In this paper, the freeness of a hyperplane arrangement in dimension four or higher is shown to be characterized by properties around a fixed hyperplane; as an application, the freeness of cones over certain truncated affine Weyl arrangements, which was conjectured by Edelman and Reiner, is proved.
Abstract: We consider a hyperplane arrangement in a vector space of dimension four or higher. In this case, the freeness of the arrangement is characterized by properties around a fixed hyperplane. As an application, we prove the freeness of cones over certain truncated affine Weyl arrangements which was conjectured by Edelman and Reiner.

Journal ArticleDOI
TL;DR: A method for automatically estimating the number of objects and extracting independently moving video objects using motion vectors is presented here and a strategy for edge refinement is proposed to extract the precise object boundaries.
Abstract: This paper addresses the problem of extracting video objects from MPEG compressed video. The only cues used for object segmentation are the motion vectors which are sparse in MPEG. A method for automatically estimating the number of objects and extracting independently moving video objects using motion vectors is presented here. First, the motion vectors are accumulated over a few frames to enhance the motion information, which are further spatially interpolated to get dense motion vectors. The final segmentation, using the dense motion vectors, is obtained by applying the expectation maximization (EM) algorithm. A block-based affine clustering method is proposed for determining the number of appropriate motion models to be used for the EM step and the segmented objects are temporally tracked to obtain the video objects. Finally, a strategy for edge refinement is proposed to extract the precise object boundaries. Illustrative examples are provided to demonstrate the efficacy of the approach. A prominent application of the proposed method is that of object-based coding, which is part of the MPEG-4 standard.
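The block-based affine clustering rests on fitting a six-parameter affine motion model to motion vectors by least squares; below is a minimal sketch of that fit and of the residuals such a clustering or EM step could use (the segmentation, tracking and edge-refinement stages are not shown).

```python
import numpy as np

def fit_affine_motion(positions, motion_vectors):
    """positions: N x 2 block centers (x, y); motion_vectors: N x 2 (dx, dy).
    Least-squares fit of dx = a1*x + a2*y + a3 and dy = a4*x + a5*y + a6,
    returning the six affine motion parameters as a 3 x 2 matrix."""
    X = np.hstack([positions, np.ones((len(positions), 1))])
    params, _, _, _ = np.linalg.lstsq(X, motion_vectors, rcond=None)
    return params

def motion_residuals(positions, motion_vectors, params):
    """Per-block residuals, usable as the distance in an EM/clustering step."""
    X = np.hstack([positions, np.ones((len(positions), 1))])
    return np.linalg.norm(X @ params - motion_vectors, axis=1)
```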

Proceedings ArticleDOI
01 Jan 2004
TL;DR: This work deals with the problem of quadratic stabilization of switched affine systems, where the state of the switched system has to be driven to a point ("switched equilibrium") which is not in the set of subsystems equilibria.
Abstract: This work deals with the problem of quadratic stabilization of switched affine systems, where the state of the switched system has to be driven to a point ("switched equilibrium") which is not in the set of subsystem equilibria. Quadratic stability of the switched equilibrium is assessed using a continuous Lyapunov function with piecewise continuous derivative. A necessary and sufficient condition is given for the case of two subsystems and a sufficient condition is given in the general case. Two switching rules are presented: a state feedback, in which sliding modes may occur, and a hybrid feedback, in which sliding modes can be avoided. Two examples illustrate our results.

Book ChapterDOI
11 May 2004
TL;DR: The General Linear Camera (GLC) model as discussed by the authors unifies many previous camera models into a single representation and is capable of describing all perspective (pinhole), orthographic and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images.
Abstract: We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. Our GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D affine subspaces are a precise definition of a projected image of a 3D scene. The GLC model also provides an intuitive physical interpretation, which can be used to characterize real imaging systems. Finally, since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems.

Journal ArticleDOI
01 Jan 2004
TL;DR: An algorithm is presented to compute a set that contains the parameters consistent with the measured output and the given bound of the noise; this set is represented by a zonotope, that is, an affine map of a unitary hypercube.
Abstract: This paper presents a new approach to guaranteed system identification for time-varying parameterized discrete-time systems. A bounded description of noise in the measurement is considered. The main result is an algorithm to compute a set that contains the parameters consistent with the measured output and the given bound of the noise. This set is represented by a zonotope, that is, an affine map of a unitary hypercube. A recursive procedure minimizes the size of the zonotope with each noise-corrupted measurement. The zonotope allows us to take into account the time-varying nature of the parameters in a non-conservative way. An example is provided to clarify the algorithm.
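The zonotope representation itself is compact to write down: a center plus a generator matrix mapping the unit hypercube. The sketch shows the representation, its interval hull and a sampling check; the paper's actual contribution, the recursive measurement update that keeps the zonotope small, is not reproduced here.

```python
import numpy as np

class Zonotope:
    """Z = {c + G @ xi : xi in [-1, 1]^m} - an affine map of a unit hypercube."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)   # n x m generator matrix

    def interval_hull(self):
        """Componentwise bounds: c_i +/- sum_j |G_ij|."""
        radius = np.abs(self.G).sum(axis=1)
        return self.c - radius, self.c + radius

    def sample(self, n=1000, rng=None):
        """Random parameter vectors consistent with the set (for checks/plots)."""
        rng = np.random.default_rng(rng)
        xi = rng.uniform(-1.0, 1.0, size=(n, self.G.shape[1]))
        return self.c + xi @ self.G.T

# Two-parameter example with three generators.
Z = Zonotope([1.0, 0.5], [[0.2, 0.1, 0.0], [0.0, 0.1, 0.3]])
print(Z.interval_hull())
```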

Journal ArticleDOI
TL;DR: In this article, it was shown that the product of the Renyi entropies of two independent random vectors provides a sharp lower bound for the expected value of the moments of the inner product.
Abstract: It is shown that the product of the Renyi entropies of two independent random vectors provides a sharp lower bound for the expected value of the moments of the inner product of the random vectors. This new inequality implies important geometric inequalities (such as extensions of one of the fundamental affine isoperimetric inequalities, the Blaschke–Santaló inequality).

Proceedings ArticleDOI
D. Serby, E.K. Meier, L. Van Gool
23 Aug 2004
TL;DR: A generic tracker is presented which can handle a variety of different objects by integrating groups of low-level features like interest points, edges, homogeneous and textured regions into a particle filter framework, as this has proven very successful for non-linear and non-Gaussian estimation problems.
Abstract: We present a generic tracker which can handle a variety of different objects. For this purpose, groups of low-level features like interest points, edges, homogeneous and textured regions, are combined on a flexible and opportunistic basis. They sufficiently characterize an object and allow robust tracking as they are complementary sources of information which describe both the shape and the appearance of an object. These low-level features are integrated into a particle filter framework as this has proven very successful for non-linear and non-Gaussian estimation problems. We concentrate on rigid objects under affine transformations. Results on real-world scenes demonstrate the performance of the proposed tracker.

Journal ArticleDOI
TL;DR: It is possible to determine whether (or how well) two relative positions are actually related through an affine transformation, and the affinity that best approximates the unknown transformation can be retrieved and the quality of the approximation assessed.
Abstract: Affine invariant descriptors have been widely used for recognition of objects regardless of their position, size, and orientation in space. Examples of color, texture, and shape descriptors abound in the literature. However, many tasks in computer vision require looking not only at single objects or regions in images but also at their spatial relationships. In an earlier work, we showed that the relative position of two objects can be quantitatively described by a histogram of forces. Here, we study how affine transformations affect this descriptor. The position of an object with respect to another changes when the objects are affine transformed. We analyze the link between: 1) the applied affinity, 2) the relative position before transformation (described through a force histogram), and 3) the relative position after transformation. We show that any two of these elements allow the third one to be recovered. Moreover, it is possible to determine whether (or how well) two relative positions are actually related through an affine transformation. If they are not, the affinity that best approximates the unknown transformation can be retrieved, and the quality of the approximation assessed.

Journal ArticleDOI
TL;DR: Transformations derived from imaging physics and a three-dimensional affine transformation as well as mutual information (MI) and local correlation (LC) similarity are compared to each other by means of consistency testing to evaluate retrospective correction of eddy current-induced image distortion in diffusion tensor imaging of the brain.
Abstract: A statistical method for the evaluation of image registration for a series of images based on the assessment of consistency properties of the registration results is proposed. Consistency is defined as the residual error of the composition of cyclic registrations. By combining the transformations of different algorithms the consistency error allows a quantitative comparison without the use of ground truth, specifically, it allows a determination as to whether the algorithms are compatible and hence provide comparable registrations. Consistency testing is applied to evaluate retrospective correction of eddy current-induced image distortion in diffusion tensor imaging of the brain. In the literature several image transformations and similarity measures have been proposed, generally showing a significant reduction of distortion in side-by-side comparison of parametric maps before and after registration. Transformations derived from imaging physics and a three-dimensional affine transformation as well as mutual information (MI) and local correlation (LC) similarity are compared to each other by means of consistency testing. The dedicated transformations could not demonstrate a significant difference for more than half of the series considered. LC similarity is well-suited for distortion correction providing more consistent registrations which are comparable to MI.
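Consistency as defined above, the residual error of a cycle of registrations, reduces for affine transformations to composing matrices and measuring how far the loop is from the identity. A schematic version with homogeneous 4x4 affine matrices (the volume size and the two error measures are illustrative; the paper also evaluates transformations derived from imaging physics):

```python
import numpy as np

def cycle_consistency_error(T_ab, T_bc, T_ca):
    """Residual of the registration cycle A -> B -> C -> A. Each T_xy is a
    4 x 4 homogeneous affine matrix mapping x onto y; a perfectly consistent
    cycle composes to the identity."""
    loop = T_ca @ T_bc @ T_ab
    return np.linalg.norm(loop - np.eye(4))

def mean_corner_displacement(T_ab, T_bc, T_ca, shape=(64, 64, 40)):
    """Alternative error: average displacement of the volume corners under the
    composed cycle, in voxel units (shape is an illustrative volume size)."""
    zs, ys, xs = shape
    corners = np.array([[x, y, z, 1.0] for x in (0, xs - 1)
                                        for y in (0, ys - 1)
                                        for z in (0, zs - 1)])
    loop = T_ca @ T_bc @ T_ab
    moved = corners @ loop.T
    return np.linalg.norm(moved[:, :3] - corners[:, :3], axis=1).mean()
```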