Proceedings ArticleDOI

Optimizing search engines using clickthrough data

Abstract
This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.
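The Ranking SVM at the heart of the paper turns clicked rankings into pairwise preferences (a clicked document should rank above the skipped documents presented above it) and learns a linear utility over query-document features that satisfies as many preferences as possible. A minimal sketch of that pairwise reduction, using random stand-in features and scikit-learn's `LinearSVC` in place of the paper's specialized solver (the features, pairs, and hyperparameters here are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical query-document feature vectors, plus preference pairs
# (i, j) meaning "document i should rank above document j" for a query.
X = rng.normal(size=(6, 4))          # 6 documents, 4 features
pairs = [(0, 1), (0, 2), (3, 4), (3, 5)]

# Reduce ranking to binary classification on difference vectors; the
# mirrored pairs give the SVM both classes.
diffs = np.array([X[i] - X[j] for i, j in pairs])
X_train = np.vstack([diffs, -diffs])
y_train = np.array([1] * len(pairs) + [-1] * len(pairs))

svm = LinearSVC(C=1.0, fit_intercept=False)  # no intercept: score is w·x
svm.fit(X_train, y_train)

# Rank documents by the learned linear utility (higher score = better).
scores = X @ svm.coef_.ravel()
ranking = np.argsort(-scores)
```

The same reduction scales to real click logs by extracting one preference pair per clicked/skipped document combination within each logged query.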


Citations
Proceedings ArticleDOI

Characterizing search intent diversity into click models

TL;DR: A new intent hypothesis is proposed as a complement to the examination hypothesis and is used to characterize the bias between the user's search intent and the submitted query in each search session.
Posted Content

A General Framework for Counterfactual Learning-to-Rank

TL;DR: This paper provides a general and theoretically rigorous framework for counterfactual learning-to-rank that enables unbiased training for a broad class of additive ranking metrics as well as a broad class of models (e.g., deep networks).
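The central device in counterfactual learning-to-rank is inverse propensity scoring (IPS): each logged click is reweighted by the inverse of the probability that its position was examined, which debiases additive ranking metrics estimated from click logs. A minimal sketch with hypothetical clicks and examination propensities (the numbers are illustrative, not from the cited paper):

```python
import numpy as np

def ips_dcg_estimate(clicked_ranks, propensities):
    """IPS estimate of a DCG-style metric: each click contributes its
    rank-discounted gain divided by the probability that its position
    was examined, correcting for position bias in the click log."""
    clicked_ranks = np.asarray(clicked_ranks, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    gains = 1.0 / np.log2(clicked_ranks + 1.0)  # standard DCG discount
    return float(np.sum(gains / propensities))

# Hypothetical session: clicks at ranks 1 and 3, with examination
# propensities assumed to decay with rank.
est = ips_dcg_estimate([1, 3], [1.0, 0.5])  # → 1.0/1.0 + 0.5/0.5 = 2.0
```

Averaging such per-session estimates over the log gives an unbiased training objective, provided the propensities themselves are estimated accurately.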
Proceedings ArticleDOI

Addressing Trust Bias for Unbiased Learning-to-Rank

TL;DR: This paper models the noise as position-dependent trust bias, proposes a noise-aware Position-Based Model named TrustPBM to better capture user click behavior, and shows that the proposed model can significantly outperform existing unbiased learning-to-rank methods.
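The trust-bias idea can be illustrated with a toy click model: under a position-based model the user examines rank k with some probability, and, once examining, clicks with a probability that depends both on the document's relevance and on position-dependent trust (users click near-blindly on top results, even irrelevant ones). A sketch with hypothetical parameters, not the paper's fitted values:

```python
def click_probability(rank, relevant, exam, trust_pos, trust_neg):
    """Position-based model with position-dependent trust bias:
    the user examines position `rank` with probability exam[rank], then
    clicks a relevant result with probability trust_pos[rank] and an
    irrelevant one with probability trust_neg[rank]."""
    theta = exam[rank]
    cond = trust_pos[rank] if relevant else trust_neg[rank]
    return theta * cond

# Hypothetical parameters for a 3-position ranking.
exam = [0.9, 0.6, 0.3]        # examination probability per position
trust_pos = [0.95, 0.8, 0.7]  # click prob. given examined + relevant
trust_neg = [0.3, 0.1, 0.05]  # click prob. given examined + irrelevant

p_top = click_probability(0, True, exam, trust_pos, trust_neg)
```

Because `trust_neg` is largest at the top, clicks on position 1 are noisier evidence of relevance than clicks further down, which is exactly the noise a plain position-based model ignores.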
Proceedings ArticleDOI

Mining preferences from superior and inferior examples

TL;DR: This paper proposes a greedy method for mining user preferences in a multidimensional space where the user preferences on some categorical attributes are unknown and shows that the method is practical using real data sets and synthetic data sets.
Journal ArticleDOI

Preference Learning for Cognitive Modeling: A Case Study on Entertainment Preferences

TL;DR: A comparative study of four alternative instance preference learning algorithms for the investigated complex case study of cognitive modeling in physical games indicates the benefit of the use of neuroevolution and sequential forward selection.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and the question of what is important in learning theory.
Journal ArticleDOI

Support-Vector Networks

TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of learning processes, the author covers function estimation from small data pools, applying these estimates to real-life problems, and much more.
Proceedings ArticleDOI

A training algorithm for optimal margin classifiers

TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Book

Modern Information Retrieval

TL;DR: In this article, the authors present a rigorous and complete textbook for a first course on information retrieval from the computer science (as opposed to a user-centred) perspective, providing an up-to-date, student-oriented treatment of the subject.