
Showing papers on "Metric (mathematics)" published in 2008


Journal ArticleDOI
TL;DR: It is argued that a topological interaction is indispensable to maintain a flock's cohesion against the large density changes caused by external perturbations, typically predation; this hypothesis is supported by numerical simulations.
Abstract: Numerical models indicate that collective animal behavior may emerge from simple local rules of interaction among the individuals. However, very little is known about the nature of such interaction, so that models and theories mostly rely on aprioristic assumptions. By reconstructing the three-dimensional positions of individual birds in airborne flocks of a few thousand members, we show that the interaction does not depend on the metric distance, as most current models and theories assume, but rather on the topological distance. In fact, we discovered that each bird interacts on average with a fixed number of neighbors (six to seven), rather than with all neighbors within a fixed metric distance. We argue that a topological interaction is indispensable to maintain a flock's cohesion against the large density changes caused by external perturbations, typically predation. We support this hypothesis by numerical simulations, showing that a topological interaction grants significantly higher cohesion of the aggregation compared with a standard metric one.

1,814 citations
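As a rough illustration of the two interaction rules contrasted above, the following sketch selects a bird's flockmates under a metric rule (fixed radius) and under a topological rule (fixed number of neighbors). The positions, radius, and k = 7 are illustrative assumptions, not values from the study.

```python
# A minimal sketch of metric vs. topological neighbor selection.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(50, 3))  # 50 birds in 3-D space (toy data)

def metric_neighbors(positions, i, r):
    """All flockmates within a fixed metric distance r of bird i."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return np.flatnonzero((d > 0) & (d <= r))

def topological_neighbors(positions, i, k=7):
    """The k nearest flockmates of bird i, regardless of distance."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return np.argsort(d)[1:k + 1]  # skip bird i itself at distance 0

# Under a metric rule the neighbor count collapses when the flock dilates;
# under a topological rule it stays fixed at k (six to seven in the study).
print(len(metric_neighbors(positions, 0, r=10.0)))
print(len(topological_neighbors(positions, 0)))
```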


Journal ArticleDOI
TL;DR: This paper outlines the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation, and proposes a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics.
Abstract: The concept of a miss-distance, or error, between a reference quantity and its estimated/controlled value, plays a fundamental role in any filtering/control problem. Yet there is no satisfactory notion of a miss-distance in the well-established field of multi-object filtering. In this paper, we outline the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation. We then propose a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics.

1,765 citations


Journal ArticleDOI
TL;DR: In Part I of this series, the authors showed that left convergence is equivalent to convergence in metric, both for simple graphs and for graphs with nodeweights and edgeweights.

702 citations


Journal ArticleDOI
TL;DR: In this article, the authors present some fixed point results for monotone operators in a metric space endowed with a partial order, using a weak generalized contraction-type assumption.
Abstract: We present some fixed point results for monotone operators in a metric space endowed with a partial order using a weak generalized contraction-type assumption.

568 citations


Journal ArticleDOI
TL;DR: This paper considers a general problem of learning from pairwise constraints in the form of must-links and cannot-links, and aims to learn a Mahalanobis distance metric.

541 citations


Journal ArticleDOI
TL;DR: It is demonstrated that OLD20 provides significant advantages over ON in predicting both lexical decision and pronunciation performance in three large data sets, and interacts more strongly with word frequency and shows stronger effects of neighborhood frequency than does ON.
Abstract: Visual word recognition studies commonly measure the orthographic similarity of words using Coltheart’s orthographic neighborhood size metric (ON). Although ON reliably predicts behavioral variability in many lexical tasks, its utility is inherently limited by its relatively restrictive definition. In the present article, we introduce a new measure of orthographic similarity generated using a standard computer science metric of string similarity (Levenshtein distance). Unlike ON, the new measure—named orthographic Levenshtein distance 20 (OLD20)—incorporates comparisons between all pairs of words in the lexicon, including words of different lengths. We demonstrate that OLD20 provides significant advantages over ON in predicting both lexical decision and pronunciation performance in three large data sets. Moreover, OLD20 interacts more strongly with word frequency and shows stronger effects of neighborhood frequency than does ON. The discussion section focuses on the implications of these results for models of visual word recognition.

521 citations
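The definition above is concrete enough to sketch: OLD20 is the mean Levenshtein (edit) distance from a word to its 20 closest words in the lexicon. Below is a minimal Python version; the toy lexicon and the OLD3 call are illustrative assumptions, not the study's corpus.

```python
# Levenshtein distance plus an OLDn helper, per the description above.
from heapq import nsmallest

def levenshtein(a, b):
    """Standard dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def old_n(word, lexicon, n=20):
    """Mean edit distance from `word` to its n closest lexicon entries."""
    dists = (levenshtein(word, w) for w in lexicon if w != word)
    return sum(nsmallest(n, dists)) / n

lexicon = ["cat", "cap", "cut", "coat", "chat", "cast", "cart", "care"]
print(old_n("cat", lexicon, n=3))  # OLD3 on the toy lexicon
```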


Journal ArticleDOI
TL;DR: In this article, the authors give a physical explanation of the Kontsevich-Soibelman wall-crossing formula for the BPS spectrum in Seiberg-Witten theories.
Abstract: We give a physical explanation of the Kontsevich-Soibelman wall-crossing formula for the BPS spectrum in Seiberg-Witten theories. In the process we give an exact description of the BPS instanton corrections to the hyperkahler metric of the moduli space of the theory on R^3 x S^1. The wall-crossing formula reduces to the statement that this metric is continuous. Our construction of the metric uses a four-dimensional analogue of the two-dimensional tt* equations.

483 citations


Journal ArticleDOI
TL;DR: This paper establishes the existence of common fixed points for mappings satisfying certain contractive conditions in a cone metric space, without appealing to continuity, generalizing several well-known comparable results in the literature.

472 citations


Book ChapterDOI
01 May 2008
TL;DR: A metric structure is a many-sorted structure in which each sort is a metric space, assumed for convenience to have finite diameter, together with functions (of several variables) between sorts that are assumed to be uniformly continuous.
Abstract: A metric structure is a many-sorted structure with each sort a metric space, which for convenience is assumed to have finite diameter. Additionally there are functions (of several variables) between sorts, assumed to be uniformly continuous. Examples include metric spaces themselves, measure algebras (with the metric d(A,B) = μ(A∆B), where ∆ is symmetric difference), and structures based on Banach spaces (where one interprets the sorts as balls), including Banach lattices, C*-algebras, etc.

433 citations
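The measure-algebra example in the abstract is easy to make concrete: on a finite measure space, d(A, B) = μ(A∆B) is a metric on events. A tiny sketch, with an assumed toy measure:

```python
# The symmetric-difference metric d(A, B) = mu(A Δ B) on a toy measure space.
mu = {"x": 0.2, "y": 0.5, "z": 0.3}  # an assumed probability measure

def d(A, B):
    """Distance between events = measure of their symmetric difference."""
    return sum(mu[p] for p in set(A) ^ set(B))

print(d({"x", "y"}, {"y", "z"}))  # mu({x, z}) = 0.5
```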


Proceedings Article
26 Sep 2008
TL;DR: This paper outlines the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation, and proposes a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics.
Abstract: The concept of a miss-distance, or error, between a reference quantity and its estimated/controlled value, plays a fundamental role in any filtering/control problem. Yet there is no satisfactory notion of a miss-distance in the well-established field of multi-object filtering. In this paper, we outline the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation. We then propose a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics.

426 citations


Book ChapterDOI
13 Jul 2008
TL;DR: This paper proposes an attack graph-based probabilistic metric for network security, studies its efficient computation, and proposes heuristics to improve the efficiency of that computation.
Abstract: To protect critical resources in today's networked environments, it is desirable to quantify the likelihood of potential multi-step attacks that combine multiple vulnerabilities. This now becomes feasible due to a model of causal relationships between vulnerabilities, namely, attack graph. This paper proposes an attack graph-based probabilistic metric for network security and studies its efficient computation. We first define the basic metric and provide an intuitive and meaningful interpretation to the metric. We then study the definition in more complex attack graphs with cycles and extend the definition accordingly. We show that computing the metric directly from its definition is not efficient in many cases and propose heuristics to improve the efficiency of such computation.
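As a hedged sketch of the kind of computation the paper studies, the following propagates exploit probabilities through a small acyclic attack graph, combining preconditions conjunctively. The graph, scores, and independence assumption are illustrative simplifications; the paper's definitions (including its handling of cycles and shared dependencies) are more careful.

```python
# Toy attack graph: exploit -> (individual success probability, preconditions).
from functools import cache

graph = {
    "e1": (0.8, []),            # initial foothold
    "e2": (0.6, ["e1"]),
    "e3": (0.5, ["e1"]),
    "goal": (0.9, ["e2", "e3"]),
}

@cache
def cumulative(node):
    """Likelihood of reaching `node`: AND over preconditions, branches
    treated as independent (a simplification; the shared dependency on e1
    is double-counted here, which the paper's metric avoids)."""
    p, preconds = graph[node]
    for pre in preconds:
        p *= cumulative(pre)
    return p

print(cumulative("goal"))  # likelihood of the multi-step attack succeeding
```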

Book ChapterDOI
20 Oct 2008
TL;DR: This paper proposes a metric learning based approach for human activity recognition with two main objectives: (1) reject unfamiliar activities and (2) learn with few examples; the approach outperforms all state-of-the-art methods on numerous standard datasets for the traditional action classification problem.
Abstract: This paper proposes a metric learning based approach for human activity recognition with two main objectives: (1) reject unfamiliar activities and (2) learn with few examples. We show that our approach outperforms all state-of-the-art methods on numerous standard datasets for the traditional action classification problem. Furthermore, we demonstrate that our method not only can accurately label activities but also can reject unseen activities and can learn from few examples with high accuracy. We finally show that our approach works well on noisy YouTube videos.

Journal ArticleDOI
TL;DR: The experiments show that the proposed measure is consistent with human visual evaluations and can be applied to evaluate image fusion schemes that are not performed at the same level.

Book ChapterDOI
12 Oct 2008
TL;DR: This paper presents a new metric between histograms such as SIFT descriptors, together with a linear time algorithm for its computation; extensive experimental results show that the method outperforms state-of-the-art distances.
Abstract: We present a new metric between histograms such as SIFT descriptors and a linear time algorithm for its computation. It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that SIFT bins are aligned, an assumption which is often not correct due to quantization, distortion, occlusion, etc. In this paper we present a new Earth Mover's Distance (EMD) variant. We show that it is a metric (unlike the original EMD [1], which is a metric only for normalized histograms). Moreover, it is a natural extension of the L1 metric. Second, we propose a linear time algorithm for the computation of the EMD variant, with a robust ground distance for oriented gradients. Finally, extensive experimental results on the Mikolajczyk and Schmid dataset [2] show that our method outperforms state of the art distances.
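For intuition about the Earth Mover's Distance underlying the paper's variant: in the special case of one-dimensional histograms of equal total mass with unit ground distance between adjacent bins, EMD reduces to the L1 distance between cumulative histograms. A minimal sketch of that special case (not the paper's algorithm, which handles unnormalized histograms and a robust ground distance):

```python
# 1-D EMD for equal-mass histograms = L1 distance between cumulative sums.
from itertools import accumulate

def emd_1d(h1, h2):
    assert sum(h1) == sum(h2), "this sketch covers the equal-mass case only"
    return sum(abs(a - b) for a, b in zip(accumulate(h1), accumulate(h2)))

print(emd_1d([0, 3, 1, 0], [1, 1, 1, 1]))  # minimal total "earth" moved: 3
```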

Proceedings ArticleDOI
17 May 2008
TL;DR: This work defines an isometry invariant, MaxMinCOV(X), which bounds from below the performance of Lipschitz MAB algorithms for X, and presents an algorithm which comes arbitrarily close to meeting this bound.
Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of n trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant MaxMinCOV(X) which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
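For context, the classical baseline against which such results are measured discretizes the metric space uniformly and runs a standard finite-armed algorithm such as UCB1 on the grid. The sketch below is that baseline, not the paper's algorithm; the payoff function, grid, and horizon are assumptions for illustration.

```python
# Uniform discretization of [0, 1] plus UCB1 -- a classical Lipschitz-bandit baseline.
import math, random

def payoff(x):
    """Bernoulli reward from an unknown Lipschitz mean function on [0, 1]."""
    return random.random() < 0.9 - abs(x - 0.3)

arms = [i / 10 for i in range(11)]          # uniform grid of arms
counts, sums = [0] * len(arms), [0.0] * len(arms)

for t in range(1, 5001):
    # UCB1 index: empirical mean plus a confidence radius; unpulled arms first.
    ucb = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           if counts[i] else float("inf") for i in range(len(arms))]
    i = ucb.index(max(ucb))
    counts[i] += 1
    sums[i] += payoff(arms[i])

print(arms[counts.index(max(counts))])      # play concentrates near x = 0.3
```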

Posted Content
TL;DR: In this paper, the authors studied a general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric.
Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: This paper introduces a method that enables scalable image search for learned metrics, along with an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality makes it infeasible to learn an explicit weighting over the feature dimensions.
Abstract: We introduce a method that enables scalable image search for learned metrics. Given pairwise similarity and dissimilarity constraints between some images, we learn a Mahalanobis distance function that captures the images' underlying relationships well. To allow sub-linear time similarity search under the learned metric, we show how to encode the learned metric parameterization into randomized locality-sensitive hash functions. We further formulate an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality makes it infeasible to learn an explicit weighting over the feature dimensions. We demonstrate the approach applied to a variety of image datasets. Our learned metrics improve accuracy relative to commonly-used metric baselines, while our hashing construction enables efficient indexing with learned distances and very large databases.
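A hedged sketch of the central trick: if the learned metric is d_A(x, y) = (x - y)^T A (x - y) with A = G^T G, then applying standard random-hyperplane LSH to the transformed points Gx respects the learned similarity. The data, dimensions, and the made-up matrix A below are illustrative assumptions.

```python
# Random-hyperplane LSH in the space transformed by a learned Mahalanobis metric.
import numpy as np

rng = np.random.default_rng(1)
d, bits = 5, 16
L = rng.normal(size=(d, d))
A = L @ L.T                        # a made-up learned PSD metric matrix

G = np.linalg.cholesky(A).T        # factor A = G^T G
R = rng.normal(size=(bits, d))     # random hyperplanes

def hash_code(x):
    """Sign bits of r^T G x: hashing under the learned metric."""
    return tuple((R @ (G @ x)) > 0)

x = rng.normal(size=d)
y = x + 0.01 * rng.normal(size=d)  # a near-duplicate of x
print(sum(a == b for a, b in zip(hash_code(x), hash_code(y))), "of", bits)
```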

Journal ArticleDOI
01 Oct 2008
TL;DR: The new algorithm based on L-optimality is developed, and simulation and comparative results indicate that well-distributed L-optimal solutions can be obtained by utilizing the MDMOEA but cannot be achieved by applying L-optimality to make an a posteriori selection within the huge set of Pareto nondominated solutions.
Abstract: In this paper, we focus on the study of evolutionary algorithms for solving multiobjective optimization problems with a large number of objectives. First, a comparative study of a newly developed dynamical multiobjective evolutionary algorithm (DMOEA) and some modern algorithms, such as the indicator-based evolutionary algorithm, multiple single objective Pareto sampling, and nondominated sorting genetic algorithm II, is presented by employing the convergence metric and relative hypervolume metric. For three scalable test problems (namely, DTLZ1, DTLZ2, and DTLZ6), which represent some of the most difficult problems studied in the literature, the DMOEA shows good performance in both converging to the true Pareto-optimal front and maintaining a widely distributed set of solutions. Second, a new definition of optimality (namely, L-optimality) is proposed in this paper, which not only takes into account the number of improved objective values but also considers the values of improved objective functions if all objectives have the same importance. We prove that L-optimal solutions are subsets of Pareto-optimal solutions. Finally, the new algorithm based on L-optimality (namely, MDMOEA) is developed, and simulation and comparative results indicate that well-distributed L-optimal solutions can be obtained by utilizing the MDMOEA but cannot be achieved by applying L-optimality to make an a posteriori selection within the huge set of Pareto nondominated solutions. We can conclude that our new algorithm is suitable for tackling many-objective problems.

Proceedings ArticleDOI
05 Jul 2008
TL;DR: A highly efficient solver for the particular instance of semidefinite programming that arises in LMNN classification is described; this solver can handle problems with billions of large margin constraints in a few hours.
Abstract: In this paper we study how to improve nearest neighbor classification by learning a Mahalanobis distance metric. We build on a recently proposed framework for distance metric learning known as large margin nearest neighbor (LMNN) classification. Our paper makes three contributions. First, we describe a highly efficient solver for the particular instance of semidefinite programming that arises in LMNN classification; our solver can handle problems with billions of large margin constraints in a few hours. Second, we show how to reduce both training and testing times using metric ball trees; the speedups from ball trees are further magnified by learning low dimensional representations of the input space. Third, we show how to learn different Mahalanobis distance metrics in different parts of the input space. For large data sets, the use of locally adaptive distance metrics leads to even lower error rates.
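For reference, a compact sketch of the LMNN loss the paper optimizes: pull each point toward its same-class target neighbors, and penalize differently labeled "impostors" that intrude within a unit margin of those neighbors. Only the objective is shown (the paper's contribution is the fast solver and its extensions); the toy data and target-neighbor assignments are assumptions.

```python
# The LMNN objective: a pull term plus a hinge-loss push term on impostors.
import numpy as np

def lmnn_loss(M, X, y, targets, mu=0.5):
    """M: PSD matrix; targets[i]: indices of same-class target neighbors of i."""
    def d(i, j):
        v = X[i] - X[j]
        return v @ M @ v
    pull = sum(d(i, j) for i in range(len(X)) for j in targets[i])
    push = sum(max(0.0, 1.0 + d(i, j) - d(i, l))   # margin violation hinge
               for i in range(len(X)) for j in targets[i]
               for l in range(len(X)) if y[l] != y[i])
    return (1 - mu) * pull + mu * push

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
y = [0, 0, 1]
print(lmnn_loss(np.eye(2), X, y, targets={0: [1], 1: [0], 2: []}))
```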

Book
30 Aug 2008
TL;DR: This book develops the geometry and analysis of CR manifolds, covering the Fefferman metric, the CR Yamabe problem, pseudoharmonic maps, pseudo-Einsteinian manifolds, pseudo-Hermitian immersions, quasiconformal mappings, Yang-Mills fields on CR manifolds, and spectral geometry.
Abstract: Contents: CR Manifolds; The Fefferman Metric; The CR Yamabe Problem; Pseudoharmonic Maps; Pseudo-Einsteinian Manifolds; Pseudo-Hermitian Immersions; Quasiconformal Mappings; Yang-Mills Fields on CR Manifolds; Spectral Geometry.

Journal ArticleDOI
TL;DR: This paper presents a novel multidimensional projection technique based on least square approximations that is faster and more accurate than other existing high-quality methods, particularly for the application where it was mostly tested, that is, mapping text sets.
Abstract: The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least square approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique least square projections (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly where it was mostly tested, that is, for mapping text sets.
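A hedged sketch of the least-squares idea: pin a few control points in 2-D and solve a Laplacian-style linear system so every remaining point lands near the average of its neighbors under the original metric. The neighborhood size, uniform weights, and random data below are illustrative; the paper's construction of the system is more refined.

```python
# Least-squares projection: neighbor-average rows plus pinned control points.
import numpy as np

n, k = 100, 6
rng = np.random.default_rng(2)
data = rng.normal(size=(n, 10))                      # points in mD (m = 10)

# k-nearest-neighbor graph under the original metric
D = np.linalg.norm(data[:, None] - data[None, :], axis=2)
nbrs = np.argsort(D, axis=1)[:, 1:k + 1]

ctrl = rng.choice(n, size=10, replace=False)         # control point indices
ctrl_pos = rng.normal(size=(10, 2))                  # their assumed 2-D placement

# Rows enforce x_i - mean(neighbors of i) = 0; extra rows pin the controls.
A = np.zeros((n + len(ctrl), n))
b = np.zeros((n + len(ctrl), 2))
for i in range(n):
    A[i, i] = 1.0
    A[i, nbrs[i]] = -1.0 / k
for r, c in enumerate(ctrl):
    A[n + r, c] = 1.0
    b[n + r] = ctrl_pos[r]

proj, *_ = np.linalg.lstsq(A, b, rcond=None)         # 2-D coordinates
print(proj.shape)                                    # (100, 2)
```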

Journal ArticleDOI
TL;DR: A general validation metric that can be used to characterize the disagreement between the quantitative predictions from a model and relevant empirical data when either or both predictions and data are expressed as probability distributions is introduced.

Journal ArticleDOI
TL;DR: This study shows that most tests can become liberal when the randomization algorithm breaks down a structure in the original data set that is unrelated to the null hypothesis being tested; when overall species abundances are distributed non-randomly across the phylogeny, or when local abundances are spatially autocorrelated, better statistical performance was achieved by randomization algorithms preserving these structural features.
Abstract: 1. Analyzing the phylogenetic structure of natural communities may illuminate the processes governing the assembly and coexistence of species. For instance, an association between species co-occurrence in local communities and their phylogenetic proximity may reveal the action of habitat filtering, niche conservatism and/or competitive exclusion. 2. Different methods were recently proposed to test such community-wide phylogenetic patterns, based on the phylogenetic clustering or overdispersion of the species in a local community. This provides a much needed framework for addressing long standing questions in community ecology as well as the recent debate on community neutrality. The testing procedures are based on (i) a metric measuring the association between phylogenetic distance and species co-occurrence, and (ii) a data set randomization algorithm providing the distribution of the metric under a given "null model". However, the statistical properties of these approaches are not well-established and their reliability must be tested against simulated data sets. 3. This paper reviews metrics and null models used in previous studies. A "locally neutral" subdivided community model is simulated to produce data sets devoid of phylogenetic structure in the spatial distribution of species. Using these data sets, the consistency of Type I error rates of tests based on 10 metrics combined with 9 null models is examined. 4. This study shows that most tests can become liberal (i.e., tests that too often reject the null hypothesis that only neutral processes spatially structured the local community) when the randomization algorithm breaks down a structure in the original data set that is unrelated to the null hypothesis being tested. Hence, when overall species abundances are distributed non-randomly across the phylogeny or when local abundances are spatially autocorrelated, better statistical performance was achieved by randomization algorithms preserving these structural features. The most reliable randomization algorithm consists of permuting species with similar abundances among the tips of the phylogenetic tree. One metric, RPD-DO, also proved to be robust under most simulated conditions using a variety of null models. 5. Synthesis. Given the suboptimal performances of several tests, attention must be paid to the testing procedures used in future studies. Guidelines are provided to help in choosing an adequate test.
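The testing procedure the paper reviews pairs a metric with a randomization algorithm. The sketch below shows the bare skeleton of such a test, using an assumed correlation-style metric and a simple permutation of species among tips; it is a stand-in for illustration, not the paper's RPD-DO metric or its abundance-matched permutation algorithm.

```python
# Skeleton of a phylogenetic-structure permutation test on toy data.
import numpy as np

rng = np.random.default_rng(3)
n_sp = 20
phylo = rng.uniform(1, 10, size=(n_sp, n_sp))
phylo = (phylo + phylo.T) / 2                      # toy phylogenetic distances
cooccur = rng.integers(0, 2, size=(n_sp, n_sp))    # toy co-occurrence matrix
cooccur = (cooccur + cooccur.T) // 2

iu = np.triu_indices(n_sp, k=1)                    # unique species pairs

def metric(perm):
    """Correlation between phylogenetic distance and co-occurrence."""
    return np.corrcoef(phylo[perm][:, perm][iu], cooccur[iu])[0, 1]

obs = metric(np.arange(n_sp))
# Null model: permute species among the tips of the phylogeny.
null = [metric(rng.permutation(n_sp)) for _ in range(999)]
p = (1 + sum(abs(v) >= abs(obs) for v in null)) / 1000
print(f"observed = {obs:.3f}, p = {p:.3f}")
```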

Journal ArticleDOI
TL;DR: In this article, the authors generalize and unify fixed point theorems of Das and Naik, Ciric, Jungck, and Huang and Zhang on complete cone metric spaces.

Book ChapterDOI
15 Sep 2008
TL;DR: This paper compares the performance of several popular decision tree splitting criteria, identifies Hellinger distance as a new skew-insensitive measure, proposes its application in forming decision trees, and performs a comprehensive comparative analysis of the resulting decision tree construction methods.
Abstract: Learning from unbalanced datasets presents a convoluted problem in which traditional learning algorithms may perform poorly. The objective functions used for learning the classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria --- information gain, Gini measure, and DKM --- and identifies a new skew-insensitive measure in Hellinger distance. We outline the strengths of Hellinger distance in class imbalance, propose its application in forming decision trees, and perform a comprehensive comparative analysis between each decision tree construction method. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction of the splitting metric and sampling. We evaluate over a wide range of datasets and determine which criteria operate best under class imbalance.
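The proposed criterion is concrete enough to sketch: for a binary class problem, score a candidate split by the Hellinger distance between the class-conditional distributions it induces over the branches. The counts below are illustrative assumptions; they show why the value depends on per-branch rates rather than on the overall class ratio.

```python
# Hellinger distance as a decision-tree splitting criterion (binary classes).
from math import sqrt

def hellinger_split(branch_counts):
    """branch_counts: list of (n_pos, n_neg) pairs, one per branch."""
    tot_pos = sum(p for p, _ in branch_counts)
    tot_neg = sum(n for _, n in branch_counts)
    return sqrt(sum((sqrt(p / tot_pos) - sqrt(n / tot_neg)) ** 2
                    for p, n in branch_counts))

# Same class-conditional rates in each branch => same score, regardless of
# how skewed the overall class ratio is -- the source of skew insensitivity.
print(hellinger_split([(9, 100), (1, 900)]))    # 10 : 1000 imbalance
print(hellinger_split([(90, 100), (10, 900)]))  # 100 : 1000
```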

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A novel semi-supervised distance metric learning technique, called "Laplacian Regularized Metric Learning" (LRML), is proposed for learning robust distance metrics for CIR, showing that reliable metrics can be learned from real log data even when the data are noisy and limited at the beginning stage of a CIR system.
Abstract: Typical content-based image retrieval (CBIR) solutions with a regular Euclidean metric usually cannot achieve satisfactory performance due to the semantic gap challenge. Hence, relevance feedback has been adopted as a promising approach to improve the search performance. In this paper, we propose a novel idea of learning with historical relevance feedback log data, and adopt a new paradigm called "Collaborative Image Retrieval" (CIR). To effectively explore the log data, we propose a novel semi-supervised distance metric learning technique, called "Laplacian Regularized Metric Learning" (LRML), for learning robust distance metrics for CIR. Different from previous methods, the proposed LRML method integrates both log data and unlabeled data information through an effective graph regularization framework. We show that reliable metrics can be learned from real log data even when they may be noisy and limited at the beginning stage of a CIR system. We conducted an extensive evaluation to compare the proposed method with a large number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information.

Proceedings Article
08 Dec 2008
TL;DR: This work presents a new online metric learning algorithm that updates a learned Mahalanobis metric based on LogDet regularization and gradient descent and develops an online locality-sensitive hashing scheme which leads to efficient updates to data structures used for fast approximate similarity search.
Abstract: Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once. However, in many real applications, constraints are only available incrementally, thus necessitating methods that can perform online updates to the learned metric. Existing online algorithms offer bounds on worst-case performance, but typically do not perform well in practice as compared to their offline counterparts. We present a new online metric learning algorithm that updates a learned Mahalanobis metric based on LogDet regularization and gradient descent. We prove theoretical worst-case performance bounds, and empirically compare the proposed method against existing online metric learning algorithms. To further boost the practicality of our approach, we develop an online locality-sensitive hashing scheme which leads to efficient updates to data structures used for fast approximate similarity search. We demonstrate our algorithm on multiple datasets and show that it outperforms relevant baselines.
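A hedged sketch of the general shape of online Mahalanobis metric learning: for each incoming pair, nudge the metric so the pair's distance moves toward its target, then project back onto the PSD cone. This is plain projected gradient descent for illustration only, not the paper's closed-form LogDet update; the data stream and step size are assumptions.

```python
# Generic online Mahalanobis metric update with a PSD projection.
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0, None)) @ V.T

def online_update(A, x1, x2, target, eta=0.1):
    z = x1 - x2
    dist = z @ A @ z
    # Squared-loss gradient with respect to A is (dist - target) * z z^T.
    A = A - eta * (dist - target) * np.outer(z, z)
    return project_psd(A)

rng = np.random.default_rng(4)
A = np.eye(3)
for _ in range(200):                     # a stream of "similar" pairs
    x1 = rng.normal(size=3)
    x2 = x1 + rng.normal(scale=0.1, size=3)
    A = online_update(A, x1, x2, target=0.0)
print(np.round(A, 2))                    # the metric contracts along noise directions
```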

Journal ArticleDOI
TL;DR: In this article, the authors studied the construction of maximally supersymmetric field theory Lagrangians in three spacetime dimensions that are based on algebras with a triple product and proved that the only non-trivial examples are either the well known case based on a four dimensional algebra or direct sums thereof.
Abstract: We study the recent construction of maximally supersymmetric field theory Lagrangians in three spacetime dimensions that are based on algebras with a triple product. Assuming that the algebra has a positive definite metric compatible with the triple product, we prove that the only non-trivial examples are either the well known case based on a four dimensional algebra or direct sums thereof.

Journal ArticleDOI
TL;DR: In this article, a concept of g-monotone mapping is introduced, and some fixed and common fixed point theorems for g-non-decreasing generalized nonlinear contractions in partially ordered complete metric spaces are proved.
Abstract: A concept of g-monotone mapping is introduced, and some fixed and common fixed point theorems for g-non-decreasing generalized nonlinear contractions in partially ordered complete metric spaces are proved. The presented theorems generalize very recent fixed point theorems due to Agarwal et al. (2008).

Journal ArticleDOI
01 Aug 2008
TL;DR: This article proposes a new metric for image quality assessment that employs a model of the human visual system and defines visible distortion through the detection and classification of visible changes in image structure.
Abstract: The diversity of display technologies and introduction of high dynamic range imagery introduces the necessity of comparing images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by a majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, result in meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators as well as the analysis of the image appearance on displays with various characteristics.