Topic

Ranking (information retrieval)

About: Ranking (information retrieval) is a research topic. Over its lifetime, 21,109 publications have been published within this topic, receiving 435,130 citations.


Papers
Journal ArticleDOI
TL;DR: This article reports the methodological approach of the second edition of VHB-JOURQUAL, together with the main results and additional analyses of the rating's validity and of the respondents' underlying decision processes.
Abstract: VHB-JOURQUAL represents the official journal ranking of the German Academic Association for Business Research. Since its introduction in 2003, the ranking has become the most influential journal evaluation approach in German-speaking countries, impacting several key managerial decisions of German, Austrian, and Swiss business schools. This article reports the methodological approach of the ranking’s second edition. It also presents the main results and additional analyses on the validity of the rating and the underlying decision processes of the respondents. Selected implications for researchers and higher-education institutions are discussed.

130 citations

Journal ArticleDOI
TL;DR: Presents variance reduction analysis, an extension of kriging for optimal data collection in random fields; the selected sequence of sampling sites shows a high degree of stability with respect to noisy inputs.
Abstract: This paper presents an algorithm for optimal data collection in random fields, the so-called variance reduction analysis, which is an extension of kriging. The basis of variance reduction analysis is an information response function (i.e., the amount of information gain at an arbitrary point due to a measurement at another site). The ranking of potential sites is conducted using an information ranking function. The optimal number of new points is then identified by an economic gain function. The selected sequence of sites for further sampling shows a high degree of stability with respect to noisy inputs.

130 citations
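The abstract above describes a greedy loop: rank candidate measurement sites by how much variance reduction each would buy at the points of interest, and stop when an economic gain function says the next sample is no longer worth its cost. Below is a minimal sketch of that idea, assuming a squared-exponential covariance, a fixed noise level, and made-up value/cost constants; none of these specifics come from the paper.

```python
# Greedy measurement-site selection by variance reduction, in the spirit of the
# "variance reduction analysis" described above. The squared-exponential kernel,
# the noise level, and the value/cost constants are illustrative assumptions.
import numpy as np

def sq_exp_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def posterior_variance(targets, measured, noise=0.1):
    """Kriging-style posterior variance at the target points given measured sites."""
    prior = np.diag(sq_exp_kernel(targets, targets))
    if measured.size == 0:
        return prior
    k_ts = sq_exp_kernel(targets, measured)
    k_ss = sq_exp_kernel(measured, measured) + noise * np.eye(measured.size)
    reduction = np.einsum("ij,ij->i", k_ts, np.linalg.solve(k_ss, k_ts.T).T)
    return prior - reduction

def select_sites(candidates, targets, value_per_unit_var=10.0, cost_per_sample=1.0):
    """Greedily add sites while the economic gain of the next sample stays positive."""
    chosen = []
    current = posterior_variance(targets, np.array(chosen)).sum()
    while True:
        best_gain, best_site, best_var = -np.inf, None, current
        for s in candidates:
            if any(np.isclose(s, c) for c in chosen):
                continue
            new = posterior_variance(targets, np.array(chosen + [s])).sum()
            gain = value_per_unit_var * (current - new) - cost_per_sample
            if gain > best_gain:
                best_gain, best_site, best_var = gain, s, new
        if best_site is None or best_gain <= 0.0:   # economic stopping rule
            return chosen
        chosen.append(best_site)
        current = best_var

picked = select_sites(candidates=np.linspace(0.0, 10.0, 21), targets=np.linspace(0.0, 10.0, 50))
print([round(float(s), 2) for s in picked])
```

The greedy re-scoring of every remaining candidate at each step is the simplest possible ranking of potential sites; the paper's information response and economic gain functions would take the place of the toy kernel and constants used here.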

Patent
07 Jun 2004
TL;DR: In this article, a method and system for information retrieval are provided whereby at least one search criterion is received from a user, a query is created and executed to generate results corresponding to data entities that satisfy the at least one search criterion, and the results are ordered in part by a characteristic of each entity and by a previous user act with respect to it.
Abstract: A method and system for information retrieval are provided whereby at least one search criterion is received from a user; a query is created based on the at least one search criterion; the query is executed to generate results, each of the results corresponding to a respective data entity which satisfies the at least one search criterion; the results are arranged into an order, the order being determined at least in part by a characteristic of the data entity corresponding to each result and a previous act by a user with respect to the data entity corresponding to each result; and the results are displayed to the user.

130 citations
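The ordering step in the claim above combines a characteristic of each matching data entity with a previous act of the user on that entity. A minimal sketch of such an ordering, where the popularity field, the per-user open counts, and the weights are hypothetical stand-ins rather than anything specified in the patent:

```python
# Sketch of the claimed ordering step: results that satisfy the search criterion
# are ranked by combining a characteristic of the data entity (here, a made-up
# "popularity" count) with a previous user act (here, how often this user has
# opened the entity before). Field names and weights are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DataEntity:
    name: str
    popularity: int                                     # characteristic of the entity
    opens_by_user: dict = field(default_factory=dict)   # user_id -> prior opens by that user

def search(entities, criterion, user_id, w_popularity=1.0, w_history=20.0):
    """Filter by the search criterion, then order as described in the claim."""
    hits = [e for e in entities if criterion.lower() in e.name.lower()]
    def score(e):
        return w_popularity * e.popularity + w_history * e.opens_by_user.get(user_id, 0)
    return sorted(hits, key=score, reverse=True)

catalog = [
    DataEntity("ranking survey.pdf", popularity=80, opens_by_user={"u1": 4}),
    DataEntity("ranking notes.txt", popularity=120),
]
for entity in search(catalog, criterion="ranking", user_id="u1"):
    print(entity.name)
```

For user u1 the previously opened survey outranks the more popular notes file, which is exactly the effect of folding a previous user act into the ordering.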

Proceedings ArticleDOI
24 Oct 2011
TL;DR: This paper derives an unbiased estimator of comparison outcomes and shows how marginalizing over possible comparison outcomes given the observed click data can make this estimator even more effective.
Abstract: Evaluating rankers using implicit feedback, such as clicks on documents in a result list, is an increasingly popular alternative to traditional evaluation methods based on explicit relevance judgments. Previous work has shown that so-called interleaved comparison methods can utilize click data to detect small differences between rankers and can be applied to learn ranking functions online. In this paper, we analyze three existing interleaved comparison methods and find that they are all either biased or insensitive to some differences between rankers. To address these problems, we present a new method based on a probabilistic interleaving process. We derive an unbiased estimator of comparison outcomes and show how marginalizing over possible comparison outcomes given the observed click data can make this estimator even more effective. We validate our approach using a recently developed simulation framework based on a learning to rank dataset and a model of click behavior. Our experiments confirm the results of our analysis and show that our method is both more accurate and more robust to noise than existing methods.

130 citations
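A simplified sketch of the probabilistic interleaving idea described above: each displayed document is drawn from a softmax over one ranker's ranks, the contributing ranker is remembered, and clicks are credited accordingly. The temperature and the plain per-click credit are illustrative choices; the paper's actual estimator additionally marginalizes over the possible assignments given the observed clicks.

```python
# Simplified probabilistic interleaving: documents are sampled from a softmax
# over each ranker's ranks rather than taken deterministically, the ranker that
# contributed each slot is recorded, and clicks are credited to that ranker.
# The sign of the credit difference decides the winner of one impression.
import math
import random

def rank_distribution(ranking, shown, tau=3.0):
    """Softmax over (negative) rank position for documents not yet shown."""
    docs = [d for d in ranking if d not in shown]
    weights = [math.exp(-ranking.index(d) / tau) for d in docs]
    total = sum(weights)
    return docs, [w / total for w in weights]

def interleave(ranking_a, ranking_b, length, rng=random):
    """Build an interleaved list and remember which ranker contributed each slot."""
    shown, assignment = [], []
    while len(shown) < length:
        ranker = rng.randrange(2)                        # 0 -> ranker A, 1 -> ranker B
        ranking = ranking_a if ranker == 0 else ranking_b
        docs, probs = rank_distribution(ranking, shown)
        if not docs:
            break
        shown.append(rng.choices(docs, probs)[0])
        assignment.append(ranker)
    return shown, assignment

def compare(shown, assignment, clicked_docs):
    """Credit clicks to the contributing ranker; >0 favours A, <0 favours B."""
    credit = [0, 0]
    for doc, ranker in zip(shown, assignment):
        if doc in clicked_docs:
            credit[ranker] += 1
    return credit[0] - credit[1]

shown, assignment = interleave(["d1", "d2", "d3", "d4"], ["d3", "d1", "d5", "d2"], length=4)
print(shown, compare(shown, assignment, clicked_docs={"d3"}))
```

Averaged over many simulated query impressions, the sign of the credit difference estimates which ranker wins the comparison; the paper's contribution is an unbiased, assignment-marginalizing version of this estimate.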

Proceedings Article
01 Nov 2009
TL;DR: The goal of the entity track is to perform entity-oriented search tasks on the World Wide Web, since many user information needs would be better answered by specific entities than by documents of any type.
Abstract: The goal of the entity track is to perform entity-oriented search tasks on the World Wide Web. Many user information needs would be better answered by specific entities instead of just any type of documents. The track defines entities as "typed search results," "things," represented by their homepages on the web. Searching for entities thus corresponds to ranking these homepages. The track thereby investigates a problem quite similar to the QA list task. In this pilot year, we limited the track's scope to searches for instances of the organizations, people, and product entity types.

129 citations
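The track's framing of entities as typed search results represented by homepages maps naturally onto a small record type plus a type-filtered ranking. A toy sketch of that framing, with the term-overlap score and the example catalog purely invented:

```python
# Sketch of the "typed search result" idea: an entity is a (name, type, homepage)
# record, and answering an entity query means filtering by the requested type
# (organization, person, or product in the pilot year) and ranking the homepages.
# The term-overlap score and the example records are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    etype: str       # "organization", "person", or "product"
    homepage: str

def rank_entities(entities, query, wanted_type):
    """Keep entities of the requested type, rank by term overlap with the query."""
    terms = set(query.lower().split())
    hits = [e for e in entities if e.etype == wanted_type]
    return sorted(hits, key=lambda e: len(terms & set(e.name.lower().split())), reverse=True)

catalog = [
    Entity("Example University", "organization", "https://example.edu"),
    Entity("Example Phone X", "product", "https://example.com/phone-x"),
]
for e in rank_entities(catalog, query="example university", wanted_type="organization"):
    print(e.name, e.homepage)
```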


Network Information
Related Topics (5)
Web page: 50.3K papers, 975.1K citations, 83% related
Ontology (information science): 57K papers, 869.1K citations, 82% related
Graph (abstract data type): 69.9K papers, 1.2M citations, 82% related
Feature learning: 15.5K papers, 684.7K citations, 81% related
Supervised learning: 20.8K papers, 710.5K citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    3,112
2022    6,541
2021    1,105
2020    1,082
2019    1,168