Topic
Ranking (information retrieval)
About: Ranking (information retrieval) is a research topic. Over its lifetime, 21,109 publications have been published within this topic, receiving 435,130 citations.
Papers published on a yearly basis
Papers
01 Nov 2019
TL;DR: An in-depth analysis of the largest publicly available dataset of naturally occurring factual claims, collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists, is presented.
Abstract: We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction.
167 citations
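The Macro F1 reported above averages per-class F1 scores, so every veracity label counts equally regardless of how rare it is. A minimal sketch of that metric, with illustrative labels that are not from the dataset:

```python
def macro_f1(y_true, y_pred):
    """Average the per-class F1 over all classes seen in either list."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative veracity labels (not taken from the paper's data):
y_true = ["true", "false", "false", "mixture"]
y_pred = ["true", "false", "true", "mixture"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.778
```

Because each class contributes equally to the average, a model that ignores rare veracity labels is penalized, which is why Macro F1 is a common choice for skewed label distributions like fact-checking verdicts.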
01 Jun 2012
TL;DR: A method that receives a query from a first user regarding a proposed transaction, determines affinities between the first user and other users from their completed transactions, ranks a plurality of potential entities for the transaction by those affinities, and sends the selected entities to the first user in response to the query.
Abstract: A method includes: receiving information regarding a plurality of completed transactions from a plurality of users; receiving a query from a first user regarding a proposed transaction; determining at least one affinity between the first user and the plurality of users based on the information; determining a ranking or expectation of success for each of a plurality of potential entities for the proposed transaction based on the at least one affinity; selecting a plurality of selected entities based on the ranking or expectation of success for each of the potential entities; and sending, in response to the query, the plurality of selected entities to the first user.
167 citations
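The claim above describes a pipeline: compute an affinity between the querying user and other users from completed-transaction histories, then score candidate entities by affinity-weighted usage. A hedged sketch of one way that could look, using Jaccard overlap as the affinity (the patent does not specify a particular affinity measure, and all names and data below are illustrative):

```python
def affinity(history_a, history_b):
    """Jaccard overlap of two users' completed-transaction entity sets."""
    a, b = set(history_a), set(history_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_entities(first_user, histories, candidates):
    """Score each candidate entity by affinity-weighted use among other users."""
    scores = {c: 0.0 for c in candidates}
    for other, history in histories.items():
        if other == first_user:
            continue
        w = affinity(histories[first_user], history)
        for entity in history:
            if entity in scores:
                scores[entity] += w
    return sorted(candidates, key=lambda c: scores[c], reverse=True)

# Illustrative transaction histories:
histories = {
    "alice": ["plumber_a", "electrician_b"],
    "bob":   ["plumber_a", "plumber_c"],
    "carol": ["plumber_c", "electrician_b"],
}
print(rank_entities("alice", histories, ["plumber_a", "plumber_c"]))
# → ['plumber_c', 'plumber_a']
```

Here "plumber_c" outranks "plumber_a" because two users with nonzero affinity to "alice" used it, illustrating the affinity-weighted ranking step the claim enumerates.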
26 Feb 2012
TL;DR: A comprehensive overview of the mathematical algorithms and methods used to rate and rank sports teams, political candidates, products, Web pages, and more, presented in Who's #1? by Langville and Meyer.
Abstract: A website's ranking on Google can spell the difference between success and failure for a new business. NCAA football ratings determine which schools get to play for the big money in postseason bowl games. Product ratings influence everything from the clothes we wear to the movies we select on Netflix. Ratings and rankings are everywhere, but how exactly do they work? Who's #1? offers an engaging and accessible account of how scientific rating and ranking methods are created and applied to a variety of uses. Amy Langville and Carl Meyer provide the first comprehensive overview of the mathematical algorithms and methods used to rate and rank sports teams, political candidates, products, Web pages, and more. In a series of interesting asides, Langville and Meyer provide fascinating insights into the ingenious contributions of many of the field's pioneers. They survey and compare the different methods employed today, showing why their strengths and weaknesses depend on the underlying goal, and explaining why and when a given method should be considered. Langville and Meyer also describe what can and can't be expected from the most widely used systems. The science of rating and ranking touches virtually every facet of our lives, and now you don't need to be an expert to understand how it really works. Who's #1? is the definitive introduction to the subject. It features easy-to-understand examples and interesting trivia and historical facts, and much of the required mathematics is included.
167 citations
TL;DR: The challenges and opportunities encountered in adapting ranking-and-selection techniques to stochastic simulation problems are described, along with key theorems, results and analysis tools that have proven useful in extending them to this setting.
Abstract: We describe the basic principles of ranking and selection, a collection of experiment-design techniques for comparing “populations” with the goal of finding the best among them. We then describe the challenges and opportunities encountered in adapting ranking-and-selection techniques to stochastic simulation problems, along with key theorems, results and analysis tools that have proven useful in extending them to this setting. Some specific procedures are presented along with a numerical illustration.
166 citations
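At its simplest, ranking and selection simulates each "population" repeatedly and picks the one with the best sample mean; real procedures add the statistical guarantees (indifference zones, screening) the paper surveys. A minimal sketch of just the selection step, with illustrative distributions:

```python
import random

def select_best(simulators, n=1000, seed=0):
    """Run n replications of each simulator and return the name with the
    highest sample mean, plus all sample means. No statistical guarantee
    is computed here, unlike proper ranking-and-selection procedures."""
    rng = random.Random(seed)
    means = {name: sum(sim(rng) for _ in range(n)) / n
             for name, sim in simulators.items()}
    return max(means, key=means.get), means

# Three hypothetical stochastic systems; "B" has the truly best mean.
systems = {
    "A": lambda rng: rng.gauss(1.0, 1.0),
    "B": lambda rng: rng.gauss(1.3, 1.0),
    "C": lambda rng: rng.gauss(0.8, 1.0),
}
best, means = select_best(systems)
print(best)
```

With 1000 replications the sampling error of each mean is far smaller than the gaps between the true means, so the best system is identified with near certainty; shrinking n or the gaps is exactly the regime where the paper's procedures and their guarantees become necessary.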
IBM
TL;DR: A reusable object-oriented (OO) framework that provides an information retrieval (IR) shell: the framework user defines an index class containing word index objects, and the resulting extensible IR system evaluates a user query by comparing it against the information those word index objects hold about stored documents.
Abstract: A framework for use with object-oriented programming systems provides a reusable object-oriented (OO) information retrieval (IR) shell that permits a framework user to define an index class that includes word index objects, yielding an extensible information retrieval system that evaluates a user query by comparing information contained in the query with information contained in the word index objects that relates to stored documents. The information in the word index objects is produced by preprocessing operations on documents, so that documents relevant to the user query can be identified, thereby providing a query result. The information retrieval system user can load documents into computer system storage, index documents so their information can be subject to a query search, and request query evaluation to identify and retrieve the documents most closely related to the subject matter of a user query.
166 citations
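The indexing-and-query pattern the patent abstract describes, preprocess documents into word indexes, then match query terms against them, is essentially an inverted index. A hedged sketch of that general pattern (a plain inverted index, not IBM's framework; documents and ids are illustrative):

```python
from collections import defaultdict

def build_index(docs):
    """Preprocess documents into an inverted index: word -> set of doc ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def query(index, terms):
    """Rank documents by how many query terms they contain."""
    scores = defaultdict(int)
    for term in terms:
        for doc_id in index.get(term.lower(), ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative document collection:
docs = {1: "ranking models for retrieval", 2: "retrieval of stored documents"}
idx = build_index(docs)
print(query(idx, ["retrieval", "documents"]))  # → [2, 1]
```

Document 2 ranks first because it matches both query terms, mirroring the abstract's "documents most closely related to the subject matter of a user query."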