
Metric (mathematics)

About: Metric (mathematics) is a research topic. Over the lifetime, 42,617 publications have been published within this topic, receiving 836,571 citations. The topic is also known as: distance function & metric.


Papers
Posted Content
TL;DR: The core idea is to use feature-wise transformation layers that augment image features with affine transforms, simulating various feature distributions under different domains during training; a learning-to-learn approach is then applied to search for the hyper-parameters of the feature-wise transformation layers.
Abstract: Few-shot classification aims to recognize novel categories with only few labeled images in each class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those from a few labeled images (support examples) using a learned metric function. While promising performance has been demonstrated, these methods often fail to generalize to unseen domains due to large discrepancy of the feature distribution across domains. In this work, we address the problem of few-shot classification under domain shifts for metric-based methods. Our core idea is to use feature-wise transformation layers for augmenting the image features using affine transforms to simulate various feature distributions under different domains in the training stage. To capture variations of the feature distributions under different domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements on the few-shot classification performance under domain shift.
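The augmentation described above can be sketched in a few lines: sample a per-channel scale and bias and apply them to a batch of features. This is a minimal illustration, not the paper's implementation; `gamma_std` and `beta_std` are assumed hyper-parameters (the paper searches for them with a learning-to-learn procedure).

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_wise_transform(features, gamma_std=0.3, beta_std=0.5):
    """Apply a random per-channel affine transform to a batch of features.

    Samples a scale gamma and a bias beta for each channel and computes
    gamma * f + beta, perturbing the training-time feature distribution
    to mimic features from other domains. gamma_std and beta_std are
    illustrative settings, not values from the paper.
    """
    _, c = features.shape
    gamma = 1.0 + rng.normal(0.0, gamma_std, size=(1, c))
    beta = rng.normal(0.0, beta_std, size=(1, c))
    return gamma * features + beta

feats = rng.normal(size=(4, 8))            # a mini-batch of image features
augmented = feature_wise_transform(feats)  # same shape, shifted distribution
```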

181 citations

Posted Content
Qi Qian, Lei Shang, Baigui Sun, Juhua Hu, Hao Li, Rong Jin 
TL;DR: The SoftTriple loss is proposed to extend the SoftMax loss with multiple centers for each class, based on the analysis that the SoftMax loss is equivalent to a smoothed triplet loss in which each class has a single center.
Abstract: Distance metric learning (DML) aims to learn embeddings in which examples from the same class are closer than examples from different classes. It can be cast as an optimization problem with triplet constraints. Due to the vast number of triplet constraints, a sampling strategy is essential for DML. With the tremendous success of deep learning in classification, it has been applied to DML. When learning embeddings with deep neural networks (DNNs), only a mini-batch of data is available at each iteration, so the triplet constraints have to be sampled within the mini-batch. Since a mini-batch cannot capture the neighbors in the original set well, the learned embeddings are sub-optimal. In contrast, optimizing the SoftMax loss, which is a classification loss, with a DNN shows superior performance in certain DML tasks. This inspires us to investigate the formulation of SoftMax. Our analysis shows that the SoftMax loss is equivalent to a smoothed triplet loss where each class has a single center. In real-world data, one class can contain several local clusters rather than a single one, e.g., birds of different poses. Therefore, we propose the SoftTriple loss to extend the SoftMax loss with multiple centers for each class. Compared with conventional deep metric learning algorithms, optimizing the SoftTriple loss can learn the embeddings without the sampling phase by mildly increasing the size of the last fully connected layer. Experiments on benchmark fine-grained data sets demonstrate the effectiveness of the proposed loss function. Code is available at this https URL
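The multiple-centers idea can be sketched as follows: each class holds K centers, and similarities to a class's centers are combined with a softmax weighting, so one class can cover several local clusters. This is a hedged sketch, not the paper's exact loss; `gamma` is an assumed temperature, and with K = 1 the logits reduce to ordinary SoftMax-style dot products.

```python
import numpy as np

def soft_triple_logits(x, centers, gamma=0.1):
    """Class logits with multiple centers per class (SoftTriple-style sketch).

    x: (d,) embedding; centers: (num_classes, K, d). Similarities to each
    class's K centers are combined via a softmax weighting (temperature
    gamma), so a class with several local clusters, e.g. birds in
    different poses, is covered by several centers.
    """
    sims = centers @ x                  # (num_classes, K) similarities
    w = np.exp(sims / gamma)
    w /= w.sum(axis=1, keepdims=True)   # soft assignment over centers
    return (w * sims).sum(axis=1)       # one logit per class

x = np.array([1.0, 0.0])
one_center = np.array([[[1.0, 0.0]], [[0.0, 1.0]]])  # 2 classes, K = 1
logits = soft_triple_logits(x, one_center)           # reduces to dot products
```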

181 citations

Proceedings ArticleDOI
29 Jul 2009
TL;DR: It is shown that the proposed metric results in a very good correlation with subjective scores especially for images with varying foreground and background perceived blur qualities, and with a significantly lower computational complexity as compared to existing methods that take into account the visual attention information.
Abstract: In this paper, a no-reference objective sharpness metric based on a cumulative probability of blur detection is proposed. The metric is evaluated by taking into account the Human Visual System (HVS) response to blur distortions. The perceptual significance of the metric is validated through subjective experiments. It is shown that the proposed metric results in a very good correlation with subjective scores especially for images with varying foreground and background perceived blur qualities. This is accomplished with a significantly lower computational complexity as compared to existing methods that take into account the visual attention information.
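The per-edge building block of a cumulative-probability-of-blur metric can be sketched with a psychometric function. Everything below is an assumption for illustration, not the paper's implementation: `beta = 3.6` is a commonly reported fit, and the full metric pools these per-edge probabilities over all detected edges.

```python
import math

def blur_detection_probability(edge_width, jnb_width, beta=3.6):
    """Probability of detecting blur at a single edge (sketch).

    Uses the psychometric form P = 1 - exp(-(w / w_jnb)**beta), where w is
    the measured edge width and w_jnb the just-noticeable-blur width for
    the local contrast. At w == w_jnb this gives P = 1 - e**-1 (~0.63),
    the conventional detection threshold.
    """
    return 1.0 - math.exp(-((edge_width / jnb_width) ** beta))

p_at_threshold = blur_detection_probability(1.0, 1.0)
p_blurrier = blur_detection_probability(2.0, 1.0)  # wider edge, more detectable blur
```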

181 citations

Journal ArticleDOI
TL;DR: It is demonstrated that the cross-spectral metric results in a low-dimensional detector which provides nearly optimal performance when the noise covariance is known, closely approximating the performance of the matched filter.
Abstract: This work extends the recently introduced cross-spectral metric for subspace selection and dimensionality reduction to partially adaptive space-time sensor array processing. A general methodology is developed for the analysis of reduced-dimension detection tests with known and unknown covariance. It is demonstrated that the cross-spectral metric results in a low-dimensional detector which provides nearly optimal performance when the noise covariance is known. It is also shown that this metric allows the dimensionality of the detector to be reduced below the dimension of the noise subspace eigenstructure without significant loss. This attribute provides robustness in the subspace selection process to achieve reduced-dimensional target detection. Finally, it is demonstrated that the cross-spectral subspace reduced-dimension detector can outperform the full-dimension detector when the noise covariance is unknown, closely approximating the performance of the matched filter.
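The subspace-selection step described above can be sketched as ranking covariance eigenvectors by a cross-spectral score. This is a minimal sketch under the known-covariance assumption; the score `|e_i^H s|^2 / lam_i` for steering vector `s` follows the usual cross-spectral formulation, and any constants here are illustrative.

```python
import numpy as np

def cross_spectral_select(R, s, m):
    """Select m eigenvectors of covariance R by the cross-spectral metric.

    For each eigenpair (lam_i, e_i) of R, the score |e_i^H s|^2 / lam_i
    ranks how much that direction contributes to detection performance
    for steering vector s; the top-m eigenvectors span the
    reduced-dimension subspace.
    """
    lam, E = np.linalg.eigh(R)                 # ascending eigenvalues
    score = np.abs(E.conj().T @ s) ** 2 / lam
    order = np.argsort(score)[::-1]            # best-scoring directions first
    return E[:, order[:m]]

R = np.eye(3)                     # toy noise covariance
s = np.array([1.0, 0.0, 0.0])     # toy steering vector
U = cross_spectral_select(R, s, 1)
```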

181 citations

Journal ArticleDOI
TL;DR: The sum of the distances in a minimum-cost pairing of "unfolded" histograms is used as the "match distance" between the histograms; the problem of finding a translation that minimizes the distance between point patterns is also discussed.
Abstract: A metric is defined on the space of multidimensional histograms. Such histograms store in the xth location the number of events with feature vector x; examples are gray level histograms and co-occurrence matrices of digital images. Given two multidimensional histograms, each is "unfolded" and a minimum distance pairing is performed using a distance metric on the feature vectors x. The sum of the distances in the minimal pairing is used as the "match distance" between the histograms. This distance is shown to be a metric, and in the one-dimensional case is equal to the absolute difference of the two cumulative distribution functions. Among other applications, it facilitates direct computation of the distance between co-occurrence matrices or between point patterns. The problem of finding a translation to minimize the distance between point patterns is also discussed.
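The one-dimensional result quoted above makes the metric trivial to compute: the minimum-cost pairing equals the L1 distance between the two cumulative distributions, so no explicit matching is needed. A minimal sketch, assuming equal-mass histograms:

```python
import numpy as np

def match_distance_1d(h1, h2):
    """Match distance between two 1-D histograms of equal total mass.

    By the paper's one-dimensional result, the minimum-cost pairing of
    unfolded histogram events equals the absolute difference of the two
    cumulative distribution functions, summed over the bins.
    """
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    assert h1.sum() == h2.sum(), "histograms must hold the same mass"
    return float(np.abs(np.cumsum(h1) - np.cumsum(h2)).sum())

# Moving one event across two bins costs 2, matching intuition.
d = match_distance_1d([1, 0, 0], [0, 0, 1])
```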

180 citations


Network Information
Related Topics (5)
Cluster analysis
146.5K papers, 2.9M citations
83% related
Optimization problem
96.4K papers, 2.1M citations
83% related
Fuzzy logic
151.2K papers, 2.3M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
83% related
Support vector machine
73.6K papers, 1.7M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    53
2021    3,191
2020    3,141
2019    2,843
2018    2,731
2017    2,341