A Metric Learning Reality Check
Kevin Musgrave, Serge Belongie, Ser-Nam Lim
pp. 681-699
TLDR
In this article, the authors take a closer look at the field of deep metric learning, find flaws in the experimental methodology of numerous metric learning papers, and show that the actual improvements over time have been marginal at best.
Abstract
Deep metric learning papers from the past four years have consistently claimed great advances in accuracy, often more than doubling the performance of decade-old methods. In this paper, we take a closer look at the field to see if this is actually true. We find flaws in the experimental methodology of numerous metric learning papers, and show that the actual improvements over time have been marginal at best. Code is available at github.com/KevinMusgrave/powerful-benchmarker.
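The abstract concerns how metric learning accuracy is measured. As a hedged illustration only (this is not the paper's exact evaluation protocol, and the function name and toy data are invented here), the standard retrieval metric for embedding models, Recall@1, can be sketched as:

```python
import numpy as np

def recall_at_1(embeddings, labels):
    """Fraction of queries whose nearest neighbor (excluding itself)
    shares the query's class label -- a common metric learning
    retrieval metric, often reported as Precision@1 or Recall@1."""
    # Pairwise Euclidean distances between all embeddings.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    nearest = dists.argmin(axis=1)   # index of each query's nearest neighbor
    return (labels[nearest] == labels).mean()

# Toy check: two well-separated clusters, one per class.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
print(recall_at_1(emb, lab))  # -> 1.0
```

Differences in how such metrics are computed (train/test splits, hyperparameter tuning, embedding size) are exactly the methodological flaws the paper examines.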
Citations
Journal Article
e-Tourism beyond COVID-19: a call for transformative research
Ulrike Gretzel, Matthias Fuchs, Rodolfo Baggio, Wolfram Hoepken, Rob Law, Julia Neidhardt, Juho Pesonen, Markus Zanker, Zheng Xiang
TL;DR: This viewpoint article argues that the impacts of the novel coronavirus COVID-19 call for transformative e-Tourism research, and presents six pillars to guide scholars in their efforts to transform e-Tourism through their research: historicity, reflexivity, equity, transparency, plurality, and creativity.
Proceedings Article
Ranked List Loss for Deep Metric Learning
TL;DR: This work identifies two limitations of existing ranking-motivated structured losses, proposes a novel ranked list loss that addresses both, and further proposes learning a hypersphere for each class in order to preserve the intra-class similarity structure.
Posted Content
Are we done with ImageNet?
TL;DR: A significantly more robust procedure for collecting human annotations of the ImageNet validation set is developed, which finds the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end.
Proceedings Article
DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems
TL;DR: This work proposes an improved framework DCN-V2, which is simple, can be easily adopted as building blocks, and has delivered significant offline accuracy and online business metrics gains across many web-scale learning to rank systems at Google.
Proceedings Article
Probabilistic Embeddings for Cross-Modal Retrieval
TL;DR: This paper proposes Probabilistic Cross-Modal Embedding (PCME), which represents items as probabilistic distributions in the common embedding space for cross-modal retrieval; PCME not only improves retrieval performance over its deterministic counterpart but also provides uncertainty estimates that render the embeddings more interpretable.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place on the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
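The blurb above describes Batch Normalization's effect but not its mechanism. As a minimal sketch only (training-mode forward pass, without the learned running statistics used at inference; the function name and toy data are invented here), the core computation normalizes each feature over the batch and then applies a learned scale and shift:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Minimal batch normalization forward pass (training mode):
    normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)               # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy check: a batch of 64 samples, 8 features, shifted and scaled input.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
out = batch_norm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(), out.std())  # near 0 and near 1
```

With gamma=1 and beta=0 the output has (approximately) zero mean and unit variance per feature, which is what reduces the internal covariate shift the paper's title refers to.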
Journal Article
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection covering hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.