Topic

Metric (mathematics)

About: Metric (mathematics) is a research topic. Over its lifetime, 42,617 publications have been published on this topic, receiving 836,571 citations. The topic is also known as: distance function & metric.


Papers
Patent
28 Feb 2005
TL;DR: In this article, the authors proposed a wireless node location mechanism that uses a signal strength weighting metric to improve the accuracy of estimating the location of a wireless node based on signals detected among a plurality of radio transceivers.
Abstract: Methods, apparatuses, and systems directed to a wireless node location mechanism that uses a signal strength weighting metric to improve the accuracy of estimating the location of a wireless node based on signals detected among a plurality of radio transceivers. In certain implementations, the wireless node location mechanism further incorporates a differential signal strength metric to reduce the errors caused by variations in wireless node transmit power, errors in signal strength detection, and/or direction-dependent path loss. As opposed to using the absolute signal strength or power of an RF signal transmitted by a wireless node, implementations of the present invention compare the differences between signal strength values detected at various pairs of radio receivers to the corresponding differences characterized in a model of the RF environment. One implementation of the invention searches the model, for each pair of radio receivers, for the locations where the signal strengths differ by the observed amount.

170 citations
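
As a concrete illustration of the differential approach, the sketch below matches pairwise RSSI differences against a simple grid model of predicted signal strengths via a least-squares search. The grid model, function names, and numbers are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the differential signal-strength idea: rather
# than matching absolute RSSI values, match the *differences* in RSSI
# between receiver pairs against a precomputed model of the RF
# environment. The unknown transmit power cancels out of each difference.
from itertools import combinations
import numpy as np

def locate_node(model_rssi, observed_rssi):
    """model_rssi: (n_receivers, n_grid_points) predicted RSSI per location.
    observed_rssi: (n_receivers,) RSSI measured for the node.
    Returns the index of the grid point whose pairwise RSSI differences
    best match the observed pairwise differences (least squares)."""
    pairs = list(combinations(range(model_rssi.shape[0]), 2))
    obs_diff = np.array([observed_rssi[i] - observed_rssi[j] for i, j in pairs])
    mod_diff = np.array([model_rssi[i] - model_rssi[j] for i, j in pairs])
    errors = ((mod_diff - obs_diff[:, None]) ** 2).sum(axis=0)
    return int(np.argmin(errors))

# Toy usage: 3 receivers, 4 candidate grid points.
model = np.array([[-40., -55., -70., -60.],
                  [-60., -50., -45., -65.],
                  [-70., -65., -50., -40.]])
print(locate_node(model, np.array([-52., -58., -66.])))  # prints 1
```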

Journal ArticleDOI
TL;DR: A metric transfer learning framework (MTLF) is proposed to encode metric learning in transfer learning, making knowledge transfer across domains more effective, and general solutions to both classification and regression problems are developed on top of MTLF.
Abstract: Transfer learning has been proven to be effective for problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source domain and the target domain, many previous studies have focused on designing and optimizing objective functions with the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications, the Euclidean distance may be inappropriate for capturing the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper, we propose a metric transfer learning framework (MTLF) to encode metric learning in transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while a Mahalanobis distance is learned simultaneously to minimize the intra-class distances and maximize the inter-class distances for the target domain. Unlike previous work where instance weights and the Mahalanobis distance are trained in a pipelined framework that potentially leads to error propagation across components, MTLF learns instance weights and the Mahalanobis distance in a parallel framework to make knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF. We conduct extensive experiments on several real-world datasets for object recognition, handwriting recognition, and WiFi localization to verify the effectiveness of MTLF compared with a number of state-of-the-art methods.

170 citations
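
To make the two coupled quantities concrete, here is a minimal sketch of the parameterizations MTLF works with: a Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L positive semidefinite, and instance weights entering a same-class distance term. The joint optimization in the paper is more involved; the function names and the toy loss below are illustrative only:

```python
# Sketch of the two ingredients MTLF couples: per-instance weights that
# re-balance source data toward the target distribution, and a learnable
# Mahalanobis metric parameterized by a matrix L with M = L^T L.
import numpy as np

def mahalanobis(x, y, L):
    """Distance under M = L^T L; L is a learnable (d, d) matrix."""
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

def weighted_intra_class_loss(X, labels, weights, L):
    """Sum of instance-weighted squared distances between same-class
    pairs -- the kind of term a metric-learning objective minimizes."""
    loss = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                loss += weights[i] * weights[j] * mahalanobis(X[i], X[j], L) ** 2
    return loss
```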

Journal ArticleDOI
TL;DR: In this article, the problem of error correction in coherent and non-coherent network coding is considered under an adversarial model, and it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded.
Abstract: The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error-correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different from, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors than a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding.

170 citations
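
The injection metric itself is easy to compute from ranks. For subspaces U and V given by basis matrices, d_I(U, V) = max(dim U, dim V) - dim(U ∩ V), versus the Kötter-Kschischang subspace metric d_S(U, V) = dim U + dim V - 2 dim(U ∩ V); the intersection dimension follows from dim(U + V), the rank of the stacked bases. A small sketch over GF(2) follows (helper names are ours, not from the paper):

```python
# Injection metric on subspace codes over GF(2). Subspaces are given as
# lists of 0/1 basis rows; dim(U ∩ V) = dim U + dim V - dim(U + V).
import numpy as np

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    m = [int(''.join(map(str, r)), 2) for r in rows]  # rows as bit masks
    rank = 0
    for bit in reversed(range(len(rows[0]))):
        pivot = next((r for r in m if (r >> bit) & 1), None)
        if pivot is None:
            continue
        m.remove(pivot)
        m = [r ^ pivot if (r >> bit) & 1 else r for r in m]
        rank += 1
    return rank

def injection_distance(U, V):
    dim_u, dim_v = rank_gf2(U), rank_gf2(V)
    dim_sum = rank_gf2(np.vstack([U, V]).tolist())  # dim(U + V)
    dim_int = dim_u + dim_v - dim_sum               # dim(U ∩ V)
    return max(dim_u, dim_v) - dim_int

U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(injection_distance(U, V))  # 1: the subspaces share a 1-dim line
```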

Proceedings Article
01 Aug 2013
TL;DR: This paper gives the task definition, presents the data sets, describes the evaluation metric and scorer used in the shared task, gives an overview of the various approaches adopted by the participating teams, and presents the evaluation results.
Abstract: The CoNLL-2013 shared task was devoted to grammatical error correction. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results.

169 citations
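
The scorer in question is the M2 scorer, which reports precision, recall, and F1 over system edits matched against gold-standard edit annotations. The matching itself (done via an edit lattice in the real scorer) is omitted below; this sketch only shows the final score computation, with edits represented as illustrative (start, end, replacement) spans:

```python
# Precision/recall/F1 over matched edits, as reported by an M2-style
# grammatical-error-correction scorer. Edit matching is assumed done.
def prf(system_edits, gold_edits):
    matched = len(system_edits & gold_edits)
    precision = matched / len(system_edits) if system_edits else 1.0
    recall = matched / len(gold_edits) if gold_edits else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy edits as (start, end, replacement) spans -- a simplification.
system = {(3, 4, "a"), (7, 8, "were")}
gold = {(3, 4, "a"), (10, 11, "the")}
print(prf(system, gold))  # (0.5, 0.5, 0.5)
```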

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper introduces a probabilistic variant of the K-nearest neighbor method for classification that can be seamlessly used for active learning in multi-class scenarios and uses this measure of uncertainty to actively sample training examples that maximize discriminating capabilities of the model.
Abstract: The scarcity and infeasibility of human supervision for large-scale multi-class classification problems necessitate active learning. Unfortunately, existing active learning methods for multi-class problems are inherently binary methods and do not scale up to a large number of classes. In this paper, we introduce a probabilistic variant of the K-nearest neighbor method for classification that can be seamlessly used for active learning in multi-class scenarios. Given some labeled training data, our method learns an accurate metric/kernel function over the input space that can be used for classification and similarity search. Unlike existing metric/kernel learning methods, our scheme is highly scalable for classification problems and provides a natural notion of uncertainty over class labels. Further, we use this measure of uncertainty to actively sample training examples that maximize the discriminating capabilities of the model. Experiments on benchmark datasets show that the proposed method learns appropriate distance metrics that lead to state-of-the-art performance on object categorization problems. Furthermore, our active learning method effectively samples training examples, resulting in significant accuracy gains over random sampling for multi-class problems involving a large number of classes.

169 citations
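
A rough sketch of the two ingredients described above: a probabilistic K-nearest-neighbor classifier whose class probabilities come from distance-weighted votes, and active selection of the unlabeled point with the highest predictive entropy. For brevity, plain Euclidean distance stands in for the learned metric/kernel, and all function names are ours:

```python
# Probabilistic kNN + entropy-based active sampling (Euclidean stand-in
# for the learned metric; a sketch, not the paper's exact model).
import numpy as np

def knn_class_probs(x, X_train, y_train, k=5, n_classes=3):
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] ** 2)              # soft, distance-based votes
    probs = np.zeros(n_classes)
    for idx, weight in zip(nn, w):
        probs[y_train[idx]] += weight
    return probs / (probs.sum() + 1e-12)

def most_uncertain(X_pool, X_train, y_train, **kw):
    """Index of the pool point with maximum predictive entropy."""
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    scores = [entropy(knn_class_probs(x, X_train, y_train, **kw)) for x in X_pool]
    return int(np.argmax(scores))

# Toy usage: the point midway between the classes is most uncertain.
X_train = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.]])
y_train = np.array([0, 0, 1, 1])
X_pool = np.array([[2.5, 2.5], [0.05, 0.05]])
print(most_uncertain(X_pool, X_train, y_train, k=3, n_classes=2))  # 0
```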


Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 83% related
Optimization problem: 96.4K papers, 2.1M citations, 83% related
Fuzzy logic: 151.2K papers, 2.3M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 83% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    53
2021    3,191
2020    3,141
2019    2,843
2018    2,731
2017    2,341