SciSpace (Formally Typeset)
Topic

Ranking (information retrieval)

About: Ranking (information retrieval) is a research topic. Over its lifetime, 21,109 publications on this topic have received 435,130 citations.


Papers
Journal ArticleDOI
TL;DR: A flexible ranking approach identifies interesting and relevant relationships in the Semantic Web; the authors demonstrate the scheme's effectiveness through an empirical evaluation on a real-world data set.
Abstract: Industry and academia are both focusing their attention on information retrieval over semantic metadata extracted from the Web, and it is increasingly possible to analyze such metadata to discover interesting relationships. However, just as document ranking is a critical component in today's search engines, the ranking of complex relationships would be an important component in tomorrow's semantic Web engines. This article presents a flexible ranking approach to identify interesting and relevant relationships in the semantic Web. The authors demonstrate the scheme's effectiveness through an empirical evaluation over a real-world data set.

169 citations

Journal ArticleDOI
TL;DR: The Shanghai ranking, despite the media coverage it receives, does not qualify as a useful and pertinent tool for discussing the “quality” of academic institutions, let alone for guiding the choices of students and families or promoting reforms of higher education systems.
Abstract: This paper proposes a critical analysis of the “Academic Ranking of World Universities”, published every year by the Institute of Higher Education of the Jiao Tong University in Shanghai and more commonly known as the Shanghai ranking. After having recalled how the ranking is built, we first discuss the relevance of the criteria and then analyze the proposed aggregation method. Our analysis uses tools and concepts from Multiple Criteria Decision Making (MCDM). Our main conclusions are that the criteria that are used are not relevant, that the aggregation methodology is plagued by a number of major problems, and that the whole exercise suffers from insufficient attention paid to fundamental structuring issues. Hence, our view is that the Shanghai ranking, in spite of the media coverage it receives, does not qualify as a useful and pertinent tool to discuss the “quality” of academic institutions, let alone to guide the choices of students and families or to promote reforms of higher education systems. We outline the type of work that should be undertaken to offer sound alternatives to the Shanghai ranking.

169 citations

Patent
02 Jun 2011
TL;DR: In this paper, a computer-implemented method is described that generates a local result set and one or more non-local result sets for a search query, and determines a display location for the local result set relative to the non-local result sets based on the position of the search query in a local relevance indicium.
Abstract: A computer-implemented method is disclosed. The method includes receiving a search query from a remote device, generating a local result set and one or more non-local result sets for the search query, and determining a display location for the local result set relative to the non-local result sets based on a position of the search query in a local relevance indicium.

169 citations
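The patent abstract above describes ordering a local result set against non-local result sets using a local-relevance signal for the query. A minimal sketch of that idea follows; the function name, the threshold, and the scalar score are illustrative assumptions, not details taken from the patent.

```python
def assemble_results(local_results, non_local_results, local_relevance):
    """Place the local result set above or below the non-local results,
    depending on a local-relevance score in [0, 1] for the query.

    The 0.5 cutoff is an assumed threshold for illustration only.
    """
    if local_relevance >= 0.5:
        return local_results + non_local_results
    return non_local_results + local_results

# A query with strong local intent surfaces local results first.
print(assemble_results(["cafe on Main St"], ["history of coffee"], 0.9))
print(assemble_results(["cafe on Main St"], ["history of coffee"], 0.1))
```

A real system would likely compute the relevance indicium from query features rather than take it as an argument; the sketch only shows the display-ordering decision.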

Journal ArticleDOI
TL;DR: The proposed ranking index is intelligible and intrinsically superior to the original ranking index in seeking compromise solutions; if the number of alternatives exceeds two, or if the relative importance of the two separations must be considered, the proposed ranking index is the better choice.

168 citations

Proceedings Article
Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, Hang Li
07 Dec 2009
TL;DR: In this article, the authors reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE, and show that the loss functions of these methods are upper bounds of the measure-based ranking errors.
Abstract: Learning to rank has become an important research topic in machine learning. While most learning-to-rank methods learn the ranking functions by minimizing loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking functions. In this work, we reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We show that the loss functions of these methods are upper bounds of the measure-based ranking errors. As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss for ranking as the weighted sum of the classification errors of individual tasks in the sequence. We have proved that the essential loss is both an upper bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to modify existing loss functions to make them tighter bounds of the measure-based ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performances, demonstrating the correctness of our theoretical analysis.

168 citations
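The abstract above contrasts surrogate loss functions with the ranking measures (NDCG, MAP) used for evaluation. As a concrete anchor, here is a minimal NDCG computation using the standard exponential-gain, log-discount formula; this is textbook code, not code from the paper.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance (2^rel - 1),
    discounted by log2 of the 1-based rank plus one."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """NDCG: DCG normalized by the DCG of the ideal (descending) order."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Placing the most relevant document first is rewarded:
print(ndcg([2, 1, 0]))  # ideal order, NDCG = 1.0
print(ndcg([0, 1, 2]))  # worst order, NDCG < 1.0
```

The paper's observation is that surrogate losses such as those of Ranking SVM or ListMLE upper-bound the error `1 - NDCG` induced by this kind of measure, so driving the loss down pushes the measure up.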


Network Information
Related Topics (5)
Web page
50.3K papers, 975.1K citations
83% related
Ontology (information science)
57K papers, 869.1K citations
82% related
Graph (abstract data type)
69.9K papers, 1.2M citations
82% related
Feature learning
15.5K papers, 684.7K citations
81% related
Supervised learning
20.8K papers, 710.5K citations
81% related
Performance
Metrics

No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    3,112
2022    6,541
2021    1,105
2020    1,082
2019    1,168