Ben Carterette
Researcher at University UCINF
Publications - 141
Citations - 4935
Ben Carterette is an academic researcher from University UCINF. The author has contributed to research in topics: Relevance (information retrieval) & Ranking (information retrieval). The author has an h-index of 34 and has co-authored 138 publications receiving 4353 citations. Previous affiliations of Ben Carterette include the University of Massachusetts Amherst and the University of Delaware.
Papers
Proceedings ArticleDOI
A comparison of statistical significance tests for information retrieval evaluation
TL;DR: It is discovered that there is little practical difference between the randomization, bootstrap, and t tests, while the Wilcoxon and sign tests can disagree with them, and the use of the latter two should be discontinued for measuring the significance of a difference between means.
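The tests compared are paired tests over per-topic effectiveness scores. As a minimal sketch of two of them, assuming hypothetical per-topic average precision scores for two runs (all numbers made up for illustration), a paired t-test and a sign-flipping randomization test can be computed side by side:

```python
import random
from scipy import stats

# Hypothetical per-topic average precision scores for two retrieval runs.
run_a = [0.42, 0.31, 0.58, 0.27, 0.49, 0.36, 0.61, 0.44, 0.30, 0.55]
run_b = [0.38, 0.35, 0.50, 0.29, 0.41, 0.33, 0.57, 0.40, 0.28, 0.52]

# Paired t-test on the per-topic score differences.
t_stat, t_p = stats.ttest_rel(run_a, run_b)

# Paired randomization test: under the null hypothesis, the sign of each
# per-topic difference is arbitrary, so flip signs at random and count how
# often the permuted mean difference is at least as extreme as observed.
diffs = [a - b for a, b in zip(run_a, run_b)]
observed = abs(sum(diffs)) / len(diffs)
trials = 10000
extreme = 0
for _ in range(trials):
    permuted = [d if random.random() < 0.5 else -d for d in diffs]
    if abs(sum(permuted)) / len(permuted) >= observed:
        extreme += 1
rand_p = extreme / trials

print(f"t-test p = {t_p:.4f}, randomization p = {rand_p:.4f}")
```

On typical per-topic score distributions the two p-values come out close to one another, which is the kind of agreement the paper quantifies.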
Proceedings ArticleDOI
Minimal test collections for retrieval evaluation
TL;DR: This work links evaluation with test collection construction to gain an understanding of the minimal judging effort that must be done to have high confidence in the outcome of an evaluation.
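The premise is that only the documents most likely to change the outcome of a comparison need to be judged. A toy sketch of that greedy idea, assuming hypothetical rankings and judgments and a simple 1/rank weight rather than the paper's exact weighting:

```python
# Two hypothetical ranked lists for one topic, plus simulated assessor
# judgments (all made up for illustration).
run_a = ["d1", "d2", "d3", "d4", "d5"]
run_b = ["d3", "d1", "d5", "d2", "d6"]
assessor = {"d1": 1, "d2": 0, "d3": 1, "d4": 0, "d5": 1, "d6": 0}

def weight(doc):
    # Documents ranked highly by either run can move the comparison most;
    # 1/rank is a stand-in for the paper's exact weighting.
    return sum(1.0 / (run.index(doc) + 1)
               for run in (run_a, run_b) if doc in run)

judged = {}
for doc in sorted(set(run_a) | set(run_b), key=weight, reverse=True):
    judged[doc] = assessor[doc]  # simulate asking an assessor
    print(f"judge {doc} (weight {weight(doc):.2f}) -> {judged[doc]}")
    # In the real method, judging stops as soon as the sign of the
    # difference between the systems is known with high confidence.
```

The point of the ordering is that confidence in which system is better rises fastest when the heaviest documents are judged first.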
Here or there: preference judgments for relevance
TL;DR: This work hypothesizes that preference judgments of the form "document A is more relevant than document B" are easier for assessors to make than absolute judgments, and investigates methods to evaluate search engines using preference judgments.
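Scoring against preference judgments reduces to counting how many stated pairs a ranking orders correctly. A minimal sketch in that spirit (akin to preference-based precision measures), with hypothetical preferences and a hypothetical ranking:

```python
# Hypothetical judgments: (a, b) means "document a is more relevant than b".
preferences = [("d2", "d1"), ("d2", "d3"), ("d1", "d3"), ("d4", "d3")]

# A system's ranked list for the same query (also hypothetical).
ranking = ["d2", "d1", "d4", "d3"]

def preferences_satisfied(ranking, preferences):
    """Fraction of judged preferences the ranking orders correctly."""
    pos = {doc: i for i, doc in enumerate(ranking)}
    judged = [(a, b) for a, b in preferences if a in pos and b in pos]
    if not judged:
        return 0.0
    correct = sum(1 for a, b in judged if pos[a] < pos[b])
    return correct / len(judged)

print(preferences_satisfied(ranking, preferences))  # 1.0: all four satisfied
```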
Proceedings Article
Million Query Track 2008 Overview
TL;DR: The 2008 TREC Million Query Track (1MQ), in its second running, was designed to serve two purposes: first, it is an exploration of ad-hoc retrieval over a large set of queries and a large collection of documents; second, it investigates questions of system evaluation.
Proceedings Article
Evaluating Search Engines by Modeling the Relationship Between Relevance and Clicks
Ben Carterette, Rosie Jones +1 more
TL;DR: A model that leverages the millions of clicks received by web search engines to predict document relevance can predict the relevance score of documents that have not been judged and is general enough to be applicable to algorithmic web search results.
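The broad mechanism is to turn aggregated click behavior into a relevance estimate. As a deliberately simple sketch (not the paper's model, which also accounts for effects such as result position), a smoothed click-through rate can serve as a crude relevance score:

```python
from collections import defaultdict

# Hypothetical click log: (query, document, clicked?) triples.
log = [
    ("q1", "d1", True), ("q1", "d1", False), ("q1", "d1", True),
    ("q1", "d2", False), ("q1", "d2", False), ("q1", "d2", True),
]

clicks = defaultdict(int)
impressions = defaultdict(int)
for query, doc, clicked in log:
    impressions[(query, doc)] += 1
    clicks[(query, doc)] += int(clicked)

def relevance_estimate(query, doc, alpha=1.0, beta=1.0):
    """Beta-smoothed click-through rate as a crude relevance score.

    Smoothing pulls estimates for rarely shown (query, doc) pairs back
    toward a prior instead of trusting a handful of impressions.
    """
    c, n = clicks[(query, doc)], impressions[(query, doc)]
    return (c + alpha) / (n + alpha + beta)

for doc in ("d1", "d2"):
    print(doc, round(relevance_estimate("q1", doc), 3))  # d1: 0.6, d2: 0.4
```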