
Metric (mathematics)

About: Metric (mathematics) is a research topic. Over the lifetime, 42,617 publications have been published within this topic, receiving 836,571 citations. The topic is also known as: distance function & metric.


Papers
Proceedings ArticleDOI
01 Jun 2019
TL;DR: In this paper, a generalized IoU (GIoU) metric is proposed that remains informative even for non-overlapping bounding boxes and can be used directly as a regression loss.
Abstract: Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that IoU can be directly used as a regression loss. However, IoU has a plateau that makes it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address this weakness by introducing a generalized version of IoU as both a new loss and a new metric. By incorporating this generalized IoU (GIoU) as a loss into state-of-the-art object detection frameworks, we show a consistent improvement in their performance using both the standard IoU-based and the new GIoU-based performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
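The idea in the abstract can be illustrated with a minimal sketch of the enclosing-box formulation GIoU = IoU - |C \ (A ∪ B)| / |C|, where C is the smallest axis-aligned box containing both inputs. The function name and the (x1, y1, x2, y2) box format are assumptions for illustration, not the paper's code:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2).

    A sketch of GIoU as described in the abstract: IoU minus the fraction
    of the smallest enclosing box not covered by the union. Unlike plain
    IoU, the result stays informative (and negative) when boxes do not
    overlap, so it can serve as a regression loss.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area (zero when the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C of the two inputs.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area
```

For identical boxes GIoU equals 1, like IoU; for disjoint boxes IoU is flat at 0 while GIoU decreases as the boxes move apart, which is exactly the plateau the paper removes.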

1,527 citations

Proceedings Article
05 Dec 2016
TL;DR: This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among more than one negative examples and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
Abstract: Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example and do not interact with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function first generalizes triplet loss by allowing joint comparison among multiple negative examples (more specifically, N-1 negative examples) and second reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1) x N. We demonstrate the superiority of our proposed loss over the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmarks, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.
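The per-anchor form of the objective described above can be sketched in a few lines. This is a plain-Python illustration of the widely stated formulation log(1 + Σᵢ exp(f·fᵢ⁻ − f·f⁺)); the function name and the use of bare lists instead of tensors are assumptions, not the paper's implementation:

```python
import math

def n_pair_loss(anchor, positive, negatives):
    """Multi-class N-pair loss for a single anchor (illustrative sketch).

    anchor, positive: embedding vectors as lists of floats;
    negatives: the N-1 negative embeddings compared jointly.
    Returns log(1 + sum_i exp(anchor.neg_i - anchor.pos)), which
    penalizes any negative scoring close to, or above, the positive.
    """
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    pos_sim = dot(anchor, positive)
    return math.log1p(sum(math.exp(dot(anchor, n) - pos_sim)
                          for n in negatives))
```

With a single negative this reduces to a softmax-style triplet comparison; the batch construction trick in the abstract is about reusing each pair's positive as the other N-1 anchors' negatives, which this per-anchor sketch does not show.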

1,454 citations

Journal Article
TL;DR: The aim of this paper is to apply the concept of fuzziness to the classical notions of metric and metric space, and to compare the resulting notions with those obtained from other, namely probabilistic (statistical), generalizations of metric spaces.
Abstract: The adjective "fuzzy" seems to be a very popular and very frequent one in contemporary studies concerning the logical and set-theoretical foundations of mathematics. The main reason for this quick development is, in our opinion, easy to understand. The world surrounding us is full of uncertainty; the information we obtain from the environment, the notions we use, and the data resulting from our observation or measurement are, in general, vague and imprecise. So every formal description of the real world, or of some of its aspects, is in every case only an approximation and an idealization of the actual state. Notions like fuzzy sets, fuzzy orderings, fuzzy languages, etc. enable us to handle and study the degree of uncertainty mentioned above in a purely mathematical and formal way. A very brief survey of the most interesting results and applications concerning the notion of fuzzy set and related ones can be found in [1]. The aim of this paper is to apply the concept of fuzziness to the classical notions of metric and metric space, and to compare the resulting notions with those obtained from other, namely probabilistic (statistical), generalizations of metric spaces. Our aim is to write this paper at a quite self-explanatory level, the references being necessary only for the reader wanting to study these matters in more detail.
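One common way to fuzzify an ordinary metric, sketched below, is the standard induced fuzzy metric M(x, y, t) = t / (t + d(x, y)), which reads "the degree to which x and y are close at scale t". This particular construction is a standard textbook one and is offered only as an illustration of the general idea; it is not necessarily the construction used in this paper:

```python
def standard_fuzzy_metric(d, x, y, t):
    """Fuzzy metric induced by an ordinary metric d.

    M(x, y, t) = t / (t + d(x, y)) maps a crisp distance into a
    degree of nearness in [0, 1]: M(x, x, t) = 1 for every t > 0,
    and M increases toward 1 as the scale parameter t grows.
    """
    assert t > 0, "the scale parameter t must be positive"
    return t / (t + d(x, y))
```

For example, with d(a, b) = |a - b| on the real line, points at distance 1 are "near to degree 0.5" at scale t = 1, while identical points are near to degree 1 at every scale.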

1,438 citations

Journal ArticleDOI
TL;DR: This paper develops ROCK, a robust hierarchical clustering algorithm that employs links rather than distances when merging clusters; experiments indicate that ROCK not only generates better-quality clusters than traditional algorithms but also exhibits good scalability properties.
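The "links" idea in the TL;DR can be sketched briefly: two points are neighbors if they are sufficiently similar, and the link count of a pair is its number of common neighbors, so merging decisions reflect shared structure rather than raw pairwise distance. The function name and the boolean `similar` predicate are assumptions for illustration, not the paper's interface:

```python
def links(points, similar):
    """Link counts in the style of ROCK clustering (a sketch).

    Two points are neighbors when similar(p, q) is True; the link
    count of a pair is the number of neighbors they share. Pairs
    joined by many links are good merge candidates even if their
    direct distance is not the smallest.
    """
    neighbors = {p: {q for q in points if q != p and similar(p, q)}
                 for p in points}
    return {(p, q): len(neighbors[p] & neighbors[q])
            for i, p in enumerate(points) for q in points[i + 1:]}
```

In a toy example with threshold-based similarity, an outlier shares no neighbors with anyone and so has zero links to every other point, which is what makes link-based merging robust.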

1,383 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work presents a deep convolutional architecture with layers specially designed to address the problem of re-identification, and significantly outperforms the state of the art on both a large data set and a medium-sized data set, and is resistant to over-fitting.
Abstract: In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).
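The cross-input neighborhood difference layer described above can be sketched for a single channel: each feature in one map is compared against a small neighborhood of the other map centred at the same location. The 5x5 neighborhood, zero padding, and function name are assumptions drawn from the abstract's high-level description, not the paper's exact architecture:

```python
import numpy as np

def cross_input_neighborhood_diff(fa, fb, k=5):
    """Cross-input neighborhood differences (simplified, one channel).

    fa, fb: (H, W) feature maps from the two input images. For each
    location (i, j), the output stores fa[i, j] minus the k x k
    neighborhood of fb centred at (i, j), capturing local displacement
    between mid-level features of the two inputs.
    """
    h, w = fa.shape
    pad = k // 2
    fb_pad = np.pad(fb, pad, mode="constant")  # zero-pad the borders
    out = np.empty((h, w, k, k))
    for i in range(h):
        for j in range(w):
            out[i, j] = fa[i, j] - fb_pad[i:i + k, j:j + k]
    return out
```

When the two maps are identical, the centre entry of every neighborhood difference is zero, while the off-centre entries record how the surrounding responses differ, which is the signal the subsequent patch-summary layers aggregate.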

1,371 citations


Network Information
Related Topics (5)
Cluster analysis
146.5K papers, 2.9M citations
83% related
Optimization problem
96.4K papers, 2.1M citations
83% related
Fuzzy logic
151.2K papers, 2.3M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
83% related
Support vector machine
73.6K papers, 1.7M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2022    53
2021    3,191
2020    3,141
2019    2,843
2018    2,731
2017    2,341