Topic

Metric (mathematics)

About: Metric (mathematics) is a research topic. Over the lifetime, 42,617 publications have been published within this topic, receiving 836,571 citations. The topic is also known as: distance function & metric.


Papers
Posted Content
TL;DR: This paper considers a generalization of stochastic bandits in which the set of arms may be a generic measurable space and the mean-payoff function is locally Lipschitz with respect to a dissimilarity function known to the decision maker.
Abstract: We consider a generalization of stochastic bandits where the set of arms, $\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function is "locally Lipschitz" with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements over previous approaches.
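For intuition, here is a minimal sketch of the hierarchical optimistic optimization idea on the unit interval, written in Python. The binary splitting, the confidence constants v1 and rho, and the payoff function are illustrative assumptions, not the paper's exact formulation; B-values are recomputed naively, in the spirit of the quadratic-time basic strategy.

```python
import math
import random

class Node:
    """One cell of the binary partition tree over the arm space [lo, hi)."""
    def __init__(self, depth, lo, hi):
        self.depth, self.lo, self.hi = depth, lo, hi
        self.count = 0      # number of times this cell was traversed
        self.mean = 0.0     # empirical mean reward of those plays
        self.children = None

def b_value(node, t, v1, rho):
    """B-value in the HOO style: min(own optimistic bound, best child B-value)."""
    if node.count == 0:
        return float("inf")
    u = node.mean + math.sqrt(2 * math.log(t) / node.count) + v1 * rho ** node.depth
    if node.children is None:
        return u
    return min(u, max(b_value(c, t, v1, rho) for c in node.children))

def hoo(payoff, n_rounds, v1=1.0, rho=0.5):
    """Run simplified HOO; payoff(x) returns a stochastic reward for arm x."""
    root = Node(0, 0.0, 1.0)
    for t in range(1, n_rounds + 1):
        # Descend the tree, always following the child with the larger B-value.
        path, node = [root], root
        while node.children is not None:
            node = max(node.children, key=lambda c: b_value(c, t, v1, rho))
            path.append(node)
        # Expand the selected leaf and play an arbitrary arm inside its cell.
        mid = (node.lo + node.hi) / 2
        node.children = [Node(node.depth + 1, node.lo, mid),
                         Node(node.depth + 1, mid, node.hi)]
        reward = payoff(random.uniform(node.lo, node.hi))
        # Update the statistics of every node on the traversed path.
        for nd in path:
            nd.count += 1
            nd.mean += (reward - nd.mean) / nd.count
    return root
```

As a usage example, `payoff` could be `lambda x: -abs(x - 0.3) + random.gauss(0, 0.1)`: the tree then concentrates its splits around the maximizer near 0.3.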

244 citations

Proceedings ArticleDOI
01 Sep 2016
TL;DR: A deep neural network learns a non-linear function approximation of the lookup table, reducing the required storage space by a factor of 1000 while surpassing the original table on the performance metrics and encounter sets evaluated here.
Abstract: One approach to designing the decision making logic for an aircraft collision avoidance system is to frame the problem as a Markov decision process and optimize the system using dynamic programming. The resulting strategy can be represented as a numeric table. This methodology has been used in the development of the ACAS X family of collision avoidance systems for manned and unmanned aircraft. However, due to the high dimensionality of the state space, discretizing the state variables can lead to very large tables. To improve storage efficiency, we propose two approaches for compressing the lookup table. The first approach exploits redundancy in the table. The table is decomposed into a set of lower-dimensional tables, some of which can be represented by single tables in areas where the lower-dimensional tables are identical or nearly identical with respect to a similarity metric. The second approach uses a deep neural network to learn a complex non-linear function approximation of the table. With the use of an asymmetric loss function and a gradient descent algorithm, the parameters for this network can be trained to provide very accurate estimates of values while preserving the relative preferences of the possible advisories for each state. As a result, the table can be approximately represented by only the parameters of the network, which reduces the required storage space by a factor of 1000. Simulation studies show that system performance is very similar using either compressed table representation in place of the original table. Even though the neural network was trained directly on the original table, the network surpasses the original table on the performance metrics and encounter sets evaluated here.
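The asymmetric-loss idea can be sketched in a few lines of Python (PyTorch). The weighting factor, layer sizes, and input/output dimensions below are placeholder assumptions, not the ACAS X configuration.

```python
import torch
import torch.nn as nn

def asymmetric_mse(pred, target, under_weight=5.0):
    """Penalize under-estimation more than over-estimation (weight is illustrative)."""
    err = target - pred
    weight = torch.where(err > 0, torch.full_like(err, under_weight),
                         torch.ones_like(err))
    return (weight * err ** 2).mean()

# A small MLP standing in for the compressed representation of the score table;
# 7 state variables in and 9 advisory values out are made-up dimensions.
model = nn.Sequential(nn.Linear(7, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 9))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(states, table_values):
    """One gradient-descent step toward the lookup-table targets."""
    opt.zero_grad()
    loss = asymmetric_mse(model(states), table_values)
    loss.backward()
    opt.step()
    return loss.item()
```

Here under-estimation is made five times as expensive as over-estimation, which biases the fit toward preserving high-value table entries; the paper's exact loss construction may differ in detail.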

244 citations

Posted Content
TL;DR: The authors find flaws in the experimental methodology of numerous metric learning papers and show that the actual improvements over time have been marginal at best.
Abstract: Deep metric learning papers from the past four years have consistently claimed great advances in accuracy, often more than doubling the performance of decade-old methods. In this paper, we take a closer look at the field to see if this is actually true. We find flaws in the experimental methodology of numerous metric learning papers, and show that the actual improvements over time have been marginal at best.
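For context, the kind of method under evaluation can be written down in a few lines; below is a plain triplet loss, one of the decade-old baselines such papers compare against, with an illustrative margin.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Classic triplet loss: pull same-class pairs together, push others apart."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```

Much of the paper's critique concerns how such losses are evaluated (datasets, splits, and hyperparameter tuning) rather than the loss functions themselves.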

244 citations

Journal ArticleDOI
TL;DR: This article derives the connection and curvature forms of the Fefferman metric explicitly, in terms of tautologous differential forms on a natural circle bundle and of Webster's pseudohermitian invariants.
Abstract: C. Fefferman has shown that a real strictly pseudoconvex hypersurface in complex n-space carries a natural conformal Lorentz metric on a circle bundle over the manifold. This paper presents two intrinsic constructions of the metric, valid on an abstract CR manifold. One is in terms of tautologous differential forms on a natural circle bundle; the other is in terms of Webster's pseudohermitian invariants. These results are applied to compute the connection and curvature forms of the Fefferman metric explicitly.
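Schematically, and only up to normalization conventions that vary by author, the metric in question has the following shape on the circle bundle, where π is the bundle projection, h the Levi form of a pseudohermitian contact form θ, ⊙ the symmetric product, and σ a connection 1-form built from Webster's pseudohermitian invariants.

```latex
% Schematic form of the Fefferman metric (normalizations vary by author):
%   \pi    : circle bundle projection
%   h      : Levi form of the contact form \theta
%   \sigma : connection 1-form determined by Webster's invariants
g_\theta \;=\; \pi^{*}h \;+\; 2\,(\pi^{*}\theta) \odot \sigma
```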

243 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The proposed network automatically learns the quality of each sample during training, although such information is not explicitly provided; it has two branches, one extracting an appearance feature embedding and the other predicting a quality score for each sample.
Abstract: This paper targets the problem of set-to-set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples of poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem: the quality of each sample is learned automatically, although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts an appearance feature embedding for each sample and the other branch predicts a quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis of the gradient flow of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show the advantages of the proposed QAN. The source code and network structure can be downloaded at GitHub.
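A minimal sketch of the quality-weighted aggregation described above, assuming per-image features have already been extracted; the layer choices and dimensions are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class QualityAwareAggregation(nn.Module):
    """Two branches: per-sample embedding and per-sample quality score."""
    def __init__(self, in_dim=512, feat_dim=256):
        super().__init__()
        self.embed = nn.Linear(in_dim, feat_dim)   # appearance branch
        self.quality = nn.Linear(in_dim, 1)        # quality-score branch

    def forward(self, feats):
        # feats: (set_size, in_dim), features of all images in one set
        emb = self.embed(feats)                        # per-sample embeddings
        w = torch.softmax(self.quality(feats), dim=0)  # weights sum to 1 over the set
        return (w * emb).sum(dim=0)                    # set-level embedding
```

Training this end-to-end with only set-level identity labels, as the abstract describes, would add a classification or metric loss on the aggregated embedding.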

243 citations


Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 83% related
Optimization problem: 96.4K papers, 2.1M citations, 83% related
Fuzzy logic: 151.2K papers, 2.3M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 83% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    53
2021    3,191
2020    3,141
2019    2,843
2018    2,731
2017    2,341