scispace - formally typeset

Mahalanobis distance

About: Mahalanobis distance is a research topic. Over its lifetime, 4,616 publications have been published within this topic, receiving 95,294 citations.
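For reference, the Mahalanobis distance of a point x from a distribution with mean μ and covariance Σ is sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)). A minimal NumPy sketch (the function name is illustrative, not from any of the papers below):

```python
import numpy as np

def mahalanobis_dist(x, mu, cov):
    """sqrt((x - mu)^T  Sigma^{-1}  (x - mu))."""
    diff = np.asarray(x, float) - np.asarray(mu, float)
    # Solve Sigma z = diff rather than forming the explicit inverse.
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# With an identity covariance it reduces to the Euclidean distance.
d = mahalanobis_dist([3.0, 4.0], [0.0, 0.0], np.eye(2))  # → 5.0
```

With a non-identity covariance the metric down-weights directions of high variance, which is what makes it attractive throughout the papers listed below.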


Papers
Journal ArticleDOI
TL;DR: In this paper, five classification methods were examined to determine the most suitable classification algorithm for the identification of no-till (NT) and traditional tillage (TT) cropping methods: minimum distance (MD), Mahalanobis distance, maximum likelihood (ML), spectral angle mapping (SAM), and the cosine of the angle concept (CAC).

108 citations

Journal ArticleDOI
TL;DR: In this article, two possible methods of dealing with the correlation between the variables are considered: performing a principal components analysis before calculating Euclidean distances, and calculating Mahalanobis distances using the raw data.
Abstract: Cluster analysis is a technique frequently used in climatology for grouping cases to define classes (synoptic types or climate regimes, for example), or for grouping stations or grid points to define regions. Cluster analysis is based on some form of distance matrix, and the most commonly used metric in the climatological field has been Euclidean distances. Arguments for the use of Euclidean distances are in some ways similar to arguments for using a covariance matrix in principal components analysis: the use of the metric is valid if all data are measured on the same scale. When using Euclidean distances for cluster analysis, however, the additional assumption is made that all the variables are uncorrelated, and this assumption is frequently ignored. Two possible methods of dealing with the correlation between the variables are considered: performing a principal components analysis before calculating Euclidean distances, and calculating Mahalanobis distances using the raw data. Under certain con...

108 citations
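The two remedies discussed in the abstract above coincide when the principal-components route keeps all components and rescales them to unit variance (full whitening). A small NumPy check under that assumption, with synthetic correlated data standing in for station records:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(3, 3))            # mixing matrix induces correlation
X = rng.normal(size=(200, 3)) @ A      # correlated toy data

mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)

# Route 1: Mahalanobis distance on the raw data.
diff = X[0] - X[1]
d_maha = np.sqrt(diff @ np.linalg.solve(cov, diff))

# Route 2: full PCA whitening (rotate to principal axes, scale each
# component to unit variance), then an ordinary Euclidean distance.
w, V = np.linalg.eigh(cov)
Z = (X - mu) @ V / np.sqrt(w)
d_eucl = np.linalg.norm(Z[0] - Z[1])
```

Dropping low-variance components before the Euclidean step breaks this exact equality, which is one reason the two approaches can yield different clusterings in practice.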

Journal ArticleDOI
TL;DR: In this paper, a generalization of the k-median problem with respect to an arbitrary dissimilarity measure D was studied, and a linear time (1+ϵ)-approximation algorithm was given for the problem in an arbitrary metric space with bounded doubling dimension.
Abstract: We study a generalization of the k-median problem with respect to an arbitrary dissimilarity measure D. Given a finite set P of size n, our goal is to find a set C of size k such that the sum of errors D(P,C) = ∑_{p∈P} min_{c∈C} D(p,c) is minimized. The main result in this article can be stated as follows: there exists a (1+ϵ)-approximation algorithm for the k-median problem with respect to D, if the 1-median problem can be approximated within a factor of (1+ϵ) by taking a random sample of constant size and solving the 1-median problem on the sample exactly. This algorithm requires time n·2^O(mk log(mk/ϵ)), where m is a constant that depends only on ϵ and D. Using this characterization, we obtain the first linear time (1+ϵ)-approximation algorithms for the k-median problem in an arbitrary metric space with bounded doubling dimension, for the Kullback-Leibler divergence (relative entropy), for the Itakura-Saito divergence, for Mahalanobis distances, and for some special cases of Bregman divergences. Moreover, we obtain previously known results for the Euclidean k-median problem and the Euclidean k-means problem in a simplified manner. Our results are based on a new analysis of an algorithm of Kumar et al. [2004].

107 citations
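The objective D(P,C) from the abstract above is easy to state in code; a small sketch with a squared-Mahalanobis dissimilarity as one admissible choice of D (function names are illustrative, and the sampling-based (1+ϵ)-approximation algorithm itself is not shown):

```python
import numpy as np

def kmedian_cost(P, C, D):
    """Sum over p in P of min over c in C of D(p, c)."""
    return sum(min(D(p, c) for c in C) for p in P)

def mahalanobis_sq(VI):
    """Squared Mahalanobis dissimilarity with fixed inverse covariance VI."""
    def D(p, c):
        d = np.asarray(p, float) - np.asarray(c, float)
        return float(d @ VI @ d)
    return D

P = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
C = [np.array([0.0, 0.0])]
cost = kmedian_cost(P, C, mahalanobis_sq(np.eye(2)))  # 0 + 100 = 100
```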

Book ChapterDOI
26 Oct 2005
TL;DR: The elastic energy is interpreted as the distance of the Green-St Venant strain tensor to the identity, which reflects the deviation of the local deformation from a rigid transformation, giving a new regularization criterion that is able to handle anisotropic deformations and is inverse-consistent.
Abstract: In inter-subject registration, one often lacks a good model of the transformation variability to choose the optimal regularization. Some works attempt to model the variability in a statistical way, but reintroducing it into a registration algorithm is not easy. In this paper, we interpret the elastic energy as the distance of the Green-St Venant strain tensor to the identity, which reflects the deviation of the local deformation from a rigid transformation. By changing the Euclidean metric for a more suitable Riemannian one, we define a consistent statistical framework to quantify the amount of deformation. In particular, the mean and the covariance matrix of the strain tensor can be consistently and efficiently computed from a population of non-linear transformations. These statistics are then used as parameters in a Mahalanobis distance to measure the statistical deviation from the observed variability, giving a new regularization criterion that we called the statistical Riemannian elasticity. This new criterion is able to handle anisotropic deformations and is inverse-consistent. Preliminary results show that it can be quite easily implemented in a non-rigid registration algorithm.

107 citations
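The statistical step in the abstract above can be sketched for 2×2 symmetric tensors: vectorize the three independent components, estimate the population mean and covariance, and measure deviation with a Mahalanobis distance. This vectorization and the names are illustrative, not the paper's implementation (which works on the Riemannian metric):

```python
import numpy as np

def tensor_mahalanobis(E, population):
    """Mahalanobis distance of a 2x2 symmetric tensor E to the empirical
    mean/covariance of a population of such tensors."""
    vec = lambda T: np.array([T[0, 0], T[1, 1], T[0, 1]])  # 3 free components
    V = np.array([vec(T) for T in population])
    mu = V.mean(axis=0)
    cov = np.cov(V, rowvar=False)
    d = vec(E) - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Synthetic population of strain tensors near the identity.
rng = np.random.default_rng(1)
pop = []
for _ in range(50):
    a, b, c = 0.1 * rng.normal(size=3)
    pop.append(np.array([[1 + a, c], [c, 1 + b]]))
```

A tensor close to the population mean then scores near zero, while a strongly anisotropic deformation scores high, which is what lets the distance act as a data-driven regularizer.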

Book ChapterDOI
26 Jun 2000
TL;DR: This paper describes a new approach to covariance-weighted factorization that can factor noisy feature correspondences with a high degree of directional uncertainty into structure and motion, and shows that the method does not degrade as the uncertainty becomes more directional, even in the extreme case when only normal-flow data is available.
Abstract: Factorization using Singular Value Decomposition (SVD) is often used for recovering 3D shape and motion from feature correspondences across multiple views. SVD is powerful at finding the global solution to the associated least-square-error minimization problem. However, this is the correct error to minimize only when the x and y positional errors in the features are uncorrelated and identically distributed. But this is rarely the case in real data. Uncertainty in feature position depends on the underlying spatial intensity structure in the image, which has strong directionality to it. Hence, the proper measure to minimize is the covariance-weighted squared error (the Mahalanobis distance). In this paper, we describe a new approach to covariance-weighted factorization, which can factor noisy feature correspondences with a high degree of directional uncertainty into structure and motion. Our approach is based on transforming the raw data into a covariance-weighted data space, where the components of noise in the different directions are uncorrelated and identically distributed. Applying SVD to the transformed data now minimizes a meaningful objective function. We empirically show that our new algorithm gives good results for varying degrees of directional uncertainty. In particular, we show that unlike other SVD-based factorization algorithms, our method does not degrade with increase in directionality of uncertainty, even in the extreme when only normal-flow data is available. It thus provides a unified approach for treating corner-like points together with points along linear structures in the image.

107 citations
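The covariance-weighting idea from the abstract above can be illustrated for a single 2-D feature point: mapping the measurement through Σ^(−1/2), the inverse square root of its positional noise covariance, makes the noise isotropic, so an ordinary least-squares (SVD) fit in the transformed space minimizes the Mahalanobis error in the original one. A hypothetical helper, not the paper's algorithm:

```python
import numpy as np

def whiten_point(p, S):
    """Map a 2-D point through S^{-1/2} so that its anisotropic noise
    covariance S becomes the identity in the transformed space."""
    w, V = np.linalg.eigh(S)                          # S = V diag(w) V^T
    S_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T  # symmetric inverse sqrt
    return S_inv_half @ np.asarray(p, float)

# A point with positional variance 4 along x and 1 along y
# (uncertainty much larger in one direction than the other).
q = whiten_point([2.0, 3.0], np.diag([4.0, 1.0]))  # → [1.0, 3.0]
```

Euclidean distances between whitened points equal Mahalanobis distances between the originals, which is why standard SVD factorization applied after such a transform minimizes the covariance-weighted error the abstract describes.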


Network Information
Related Topics (5)
Cluster analysis
146.5K papers, 2.9M citations
79% related
Artificial neural network
207K papers, 4.5M citations
79% related
Feature extraction
111.8K papers, 2.1M citations
77% related
Convolutional neural network
74.7K papers, 2M citations
77% related
Image processing
229.9K papers, 3.5M citations
76% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    208
2022    452
2021    232
2020    239
2019    249