Proceedings ArticleDOI

Ranking with ordered weighted pairwise classification

Nicolas Usunier, +2 more
pp. 1057-1064
TLDR
This work proposes to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses, and shows that OWA aggregates of margin-based classification losses have good generalization properties.
Abstract
In ranking with the pairwise classification approach, the loss associated with a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval, where we prefer ranked lists with high precision at the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top-ranked elements since they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach.
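To make the construction concrete, here is a minimal NumPy sketch (the scores, margin, and decaying weight vector are invented for illustration and are not taken from the paper). It computes the pairwise hinge losses of one query and aggregates them with OWA weights applied to the losses sorted in decreasing order: uniform weights recover the usual mean pairwise loss, the weight vector (1, 0, ..., 0) recovers the max, and any non-increasing weights in between put more mass on the largest losses, which correspond to errors near the top of the ranked list.

import numpy as np

def pairwise_hinge_losses(scores_relevant, scores_irrelevant, margin=1.0):
    # Hinge loss for every (relevant, irrelevant) document pair of one query.
    diffs = scores_relevant[:, None] - scores_irrelevant[None, :]
    return np.maximum(0.0, margin - diffs).ravel()

def owa_aggregate(losses, weights):
    # Ordered weighted average: weights are paired with the losses sorted in
    # decreasing order, so non-increasing weights emphasise the largest losses.
    return float(np.dot(weights, np.sort(losses)[::-1]))

# Toy query with 2 relevant and 3 irrelevant documents (invented scores).
rel = np.array([2.0, 0.4])
irr = np.array([1.5, 0.1, 0.3])
losses = pairwise_hinge_losses(rel, irr)
n = losses.size

mean_w = np.full(n, 1.0 / n)         # uniform weights -> mean (standard pairwise loss)
max_w = np.zeros(n); max_w[0] = 1.0  # (1, 0, ..., 0)  -> max of the losses
decay_w = 0.5 ** np.arange(n)
decay_w /= decay_w.sum()             # non-increasing  -> focus on the top of the list

for name, w in [("mean", mean_w), ("max", max_w), ("decaying", decay_w)]:
    print(name, owa_aggregate(losses, w))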



Citations
Book

Machine Learning: A Probabilistic Perspective

TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Book

Learning to Rank for Information Retrieval

TL;DR: Three major approaches to learning to rank are introduced, i.e., the pointwise, pairwise, and listwise approaches, the relationship between the loss functions used in these approaches and the widely-used IR evaluation measures are analyzed, and the performance of these approaches on the LETOR benchmark datasets is evaluated.
Proceedings ArticleDOI

WSABIE: scaling up to large vocabulary image annotation

TL;DR: This work proposes a strongly performing method that scales to image annotation datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations.
Posted Content

Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly

TL;DR: A new zero-shot learning dataset is proposed, the Animals with Attributes 2 (AWA2) dataset which is made publicly available both in terms of image features and the images themselves and compares and analyzes a significant number of the state-of-the-art methods in depth.
Journal ArticleDOI

Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly

TL;DR: The Animals with Attributes 2 (AWA2) dataset is introduced for zero-shot learning and is made publicly available both in terms of image features and the images themselves.
References
Book

Introduction to Information Retrieval

TL;DR: In this article, the authors present an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections.
Journal ArticleDOI

On ordered weighted averaging aggregation operators in multicriteria decisionmaking

TL;DR: A type of aggregation operator called an ordered weighted averaging (OWA) operator is introduced, and its performance is found to lie between those obtained using the AND operator and the OR operator.
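To make the AND/OR remark concrete: applied to a vector of criterion satisfaction degrees, an OWA operator that puts all its weight on the smallest ordered value reproduces AND (the min), all weight on the largest reproduces OR (the max), and uniform weights give the plain mean; any other non-negative weight vector summing to one lands in between. A small sketch with invented values:

import numpy as np

def owa(values, weights):
    # Weights are paired with the values sorted in decreasing order.
    return float(np.dot(weights, np.sort(values)[::-1]))

degrees = np.array([0.9, 0.6, 0.3])             # invented satisfaction degrees
print(owa(degrees, np.array([0.0, 0.0, 1.0])))  # AND-like: min  = 0.3
print(owa(degrees, np.array([1.0, 0.0, 0.0])))  # OR-like:  max  = 0.9
print(owa(degrees, np.full(3, 1.0 / 3.0)))      # mean           = 0.6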
Proceedings ArticleDOI

Optimizing search engines using clickthrough data

TL;DR: The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking.
Proceedings Article

Boosting the margin: A new explanation for the effectiveness of voting methods

TL;DR: In this paper, the authors show that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero.