Topic
Mahalanobis distance
About: Mahalanobis distance is a research topic. Over the lifetime, 4616 publications have been published within this topic receiving 95294 citations.
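The Mahalanobis distance measures how far a point lies from a distribution, weighting each direction by the inverse covariance. A minimal NumPy sketch (the dataset and transformation matrix here are illustrative assumptions, not from any paper on this page):

```python
import numpy as np

# Illustrative correlated 2-D sample (an assumption for demonstration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

mu = X.mean(axis=0)                               # sample mean
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

def mahalanobis(x, mu, cov_inv):
    """sqrt((x - mu)^T Sigma^{-1} (x - mu)): distance of x from the distribution."""
    d = np.asarray(x) - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

Unlike Euclidean distance, directions of high variance are down-weighted, so a point is judged unusual relative to the data's actual spread rather than by raw coordinate differences.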
Papers published on a yearly basis
Papers
TL;DR: The new linear pixel-swapping method increased the accuracy of mapping fine linear features by approximately 5% compared with the conventional pixel-swapping method.
81 citations
TL;DR: This paper investigates how index computation and neighbour selection influence calibration results when local PLSR models are applied to a large soil spectral database; an index based on the correlation coefficient with FFT compression, used for neighbourhood selection, gave the best prediction results for the four soil constituents considered.
81 citations
TL;DR: A novel Real-time Payload-based Intrusion Detection System (RePIDS) integrating a 3-Tier IFSEng with the MDM approach is proposed; it achieves better performance and lower computational complexity than two state-of-the-art payload-based intrusion detection systems.
81 citations
TL;DR: A novel unsupervised, nonparametric genetic algorithm for decision boundary analysis (GADBA) is proposed to support structural damage detection, even in the presence of linear and nonlinear effects caused by operational and environmental variability.
81 citations
24 Aug 2008. TL;DR: This paper presents metric learning algorithms that scale linearly with dimensionality, permitting efficient optimization, storage, and evaluation of the learned metric, and shows that the learned metric achieves excellent quality with respect to various criteria.
Abstract: The success of popular algorithms such as k-means clustering or nearest neighbor searches depends on the assumption that the underlying distance functions reflect domain-specific notions of similarity for the problem at hand. The distance metric learning problem seeks to optimize a distance function subject to constraints that arise from fully supervised or semi-supervised information. Several recent algorithms have been proposed to learn such distance functions in low-dimensional settings. One major shortcoming of these methods is their failure to scale to high-dimensional problems that are becoming increasingly ubiquitous in modern data mining applications. In this paper, we present metric learning algorithms that scale linearly with dimensionality, permitting efficient optimization, storage, and evaluation of the learned metric. This is achieved through our main technical contribution: a framework based on the log-determinant matrix divergence that enables efficient optimization of structured, low-parameter Mahalanobis distances. Experimentally, we evaluate our methods across a variety of high-dimensional domains, including text, statistical software analysis, and collaborative filtering, showing that our methods scale to data sets with tens of thousands or more features. We show that our learned metric can achieve excellent quality with respect to various criteria. For example, in the context of metric learning for nearest neighbor classification, we show that our methods achieve 24% higher accuracy than the baseline distance. Additionally, our methods yield very good precision while providing recall measures up to 20% higher than other baseline methods such as latent semantic analysis.
81 citations
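The Mahalanobis distances such methods learn are parameterized by a positive semidefinite matrix M; writing M = LᵀL makes the metric a Euclidean distance in a linearly transformed space. A minimal sketch of that parameterization (the factor `L` below is a stand-in for a learned matrix, not the output of the paper's LogDet-divergence optimization):

```python
import numpy as np

def mahalanobis_metric(x, y, L):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L, which is always PSD."""
    diff = L @ (np.asarray(x) - np.asarray(y))  # map the difference into the learned space
    return float(np.sqrt(diff @ diff))

# With L = I the metric reduces to plain Euclidean distance.
d = mahalanobis_metric([1.0, 2.0], [4.0, 6.0], np.eye(2))  # → 5.0
```

Structuring L (e.g. diagonal or low-rank) is what keeps the parameter count, and hence optimization and storage, linear in the dimensionality.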