Topic

Ranking (information retrieval)

About: Ranking (information retrieval) is a research topic. Over its lifetime, 21,109 publications have been published within this topic, receiving 435,130 citations.


Papers
Patent
24 Apr 2001
TL;DR: In this article, the authors propose new approaches to fulfilling an information need, in particular to finding a result for a query based on a large body of information such as a collection of documents.
Abstract: The invention offers new approaches to fulfilling an information need, in particular to finding a result for a query based on a large body of information such as a collection of documents. The invention accepts a query containing an unspecified portion that expresses the information need. The invention locates matches for the query within a body of information and returns the matches or portions thereof in addition to or instead of identifiers for documents in which the matches are found. The invention ranks the matches in order to provide the most relevant information. One preferred method of ranking considers the number of instances of a match among a plurality of documents. The invention further defines a new type of index that includes contexts in which terms occur and provides methods of searching such indices to fulfill an information need.

101 citations
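The patent above states its ranking criterion only at a high level (the number of instances of a match across documents). A minimal Python sketch of one plausible reading, with hypothetical function and variable names, counts how many documents contain each candidate match and ranks by that count:

```python
from collections import Counter

def rank_matches(matches_per_document):
    """Rank candidate matches by the number of documents they occur in.

    matches_per_document: one set of matched strings per document.
    Returns matches ordered from most to least widely occurring.
    """
    counts = Counter()
    for doc_matches in matches_per_document:
        counts.update(doc_matches)  # each containing document adds one to the match's count
    return [match for match, _ in counts.most_common()]

# Example: matches extracted from three documents for a query with an
# unspecified portion such as "the capital of France is ...".
docs = [{"Paris", "the capital"},
        {"Paris"},
        {"Paris", "Marseille"}]
print(rank_matches(docs))  # 'Paris' ranks first; the remaining matches are tied
```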

Patent
04 Feb 2008
TL;DR: In this article, the authors proposed a method to determine a new collaborative ranking of a set of digital content items, which comprises receiving a plurality of ranking votes for each digital content item in the set, tallying the received user ranking votes, calculating a ranking score by applying an algorithm that incorporates the number of user ranking votes for each item, and updating the ranking for each item.

Abstract: Methods and systems for collaboratively ranking a set of digital content items are disclosed. The invention utilizes users of a website who wish to participate in ranking a set of digital content items. In one embodiment, the method to determine a new collaborative ranking of a set of digital content items comprises receiving a plurality of ranking votes for each digital content item in the set, tallying the received user ranking votes for each digital content item in the set, calculating a ranking score by applying an algorithm that incorporates the number of user ranking votes for each digital content item, and updating the ranking for each digital content item in the set.

101 citations
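The collaborative-ranking method above is stated in general terms; the sketch below assumes the simplest possible scoring rule (a plain vote total per item), which the patent leaves open, and all names are illustrative:

```python
from collections import defaultdict

def collaborative_ranking(votes):
    """Re-rank digital content items from user ranking votes.

    votes: iterable of (item_id, vote_value) pairs, one per user vote.
    The ranking score here is simply the vote total per item; the patent
    leaves the exact scoring algorithm open, so this is one possible choice.
    """
    tally = defaultdict(int)
    for item_id, vote_value in votes:
        tally[item_id] += vote_value
    # Higher score first; ties broken by item id for a deterministic result.
    return sorted(tally, key=lambda item: (-tally[item], item))

votes = [("clip-a", 1), ("clip-b", 1), ("clip-a", 1), ("clip-c", 1)]
print(collaborative_ranking(votes))  # ['clip-a', 'clip-b', 'clip-c']
```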

Book Chapter DOI
28 Mar 2010
TL;DR: This paper introduces xQuAD, a novel framework for search result diversification that builds such a diversified ranking by explicitly accounting for the relationship between documents retrieved for the original query and the possible aspects underlying this query, in the form of sub-queries.
Abstract: Queries submitted to a retrieval system are often ambiguous. In such a situation, a sensible strategy is to diversify the ranking of results to be retrieved, in the hope that users will find at least one of these results to be relevant to their information need. In this paper, we introduce xQuAD, a novel framework for search result diversification that builds such a diversified ranking by explicitly accounting for the relationship between documents retrieved for the original query and the possible aspects underlying this query, in the form of sub-queries. We evaluate the effectiveness of xQuAD using a standard TREC collection. The results show that our framework markedly outperforms state-of-the-art diversification approaches under a simulated best-case scenario. Moreover, we show that its effectiveness can be further improved by estimating the relative importance of each identified sub-query. Finally, we show that our framework can still outperform the simulated best-case scenario of the state-of-the-art diversification approaches using sub-queries automatically derived from the baseline document ranking itself.

101 citations
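Because the xQuAD objective is published, a compact greedy re-ranking sketch can illustrate it. The probability estimates P(d|q), P(d|q_i) and P(q_i|q) are assumed to be supplied by the caller, and the function and parameter names are illustrative:

```python
def xquad_rerank(candidates, rel_q, rel_sub, sub_weights, k, lam=0.5):
    """Greedy xQuAD-style diversification.

    candidates : document ids retrieved for the original query
    rel_q      : dict doc -> P(d | q), relevance to the original query
    rel_sub    : dict (doc, sub_query) -> P(d | q_i), relevance to a sub-query
    sub_weights: dict sub_query -> P(q_i | q), importance of each sub-query
    k          : length of the diversified ranking to build
    lam        : trade-off between relevance and diversity
    """
    selected = []
    # coverage[q_i] = product over selected docs of (1 - P(d | q_i));
    # it decays as a sub-query becomes covered by already-selected documents.
    coverage = {qi: 1.0 for qi in sub_weights}
    remaining = list(candidates)

    def score(d):
        diversity = sum(sub_weights[qi] * rel_sub.get((d, qi), 0.0) * coverage[qi]
                        for qi in sub_weights)
        return (1 - lam) * rel_q.get(d, 0.0) + lam * diversity

    while remaining and len(selected) < k:
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
        for qi in sub_weights:
            coverage[qi] *= 1.0 - rel_sub.get((best, qi), 0.0)

    return selected
```

The coverage term shrinks a sub-query's contribution once documents covering it have been selected, which is what steers later picks toward aspects of the query not yet represented.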

Proceedings Article DOI
01 May 1990
TL;DR: Experimental results show that when there is a single query object, searching in parameter space can be faster than searching in native space, if the data and query objects are large enough, and if sufficient redundancy is used for the query representation.
Abstract: Spatial queries can be evaluated in native space or in a parameter space. In the latter case, data objects are transformed into points and query objects are transformed into search regions. The requirement for different data and query representations may prevent the use of parameter-space searching in some applications. Native-space and parameter-space searching are compared in the context of a Z-order-based spatial access method. Experimental results show that when there is a single query object, searching in parameter space can be faster than searching in native space, if the data and query objects are large enough, and if sufficient redundancy is used for the query representation. The result is, however, less accurate than the native space result. When there are multiple query objects, native-space searching is better initially, but as the number of query objects increases, parameter space searching with low redundancy is superior. Native-space searching is much more accurate for multiple-object queries.

101 citations
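The spatial access method compared above is Z-order based. The sketch below shows only the bit-interleaving step that underlies such a method, with hypothetical names; the native-space versus parameter-space comparison itself is not reproduced here:

```python
def morton_code(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a Z-order (Morton) code.

    Nearby points in 2-D tend to receive nearby codes, which lets a
    spatial access method store them in an ordinary ordered index.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # even bit positions take x
        code |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions take y
    return code

# A range scan over codes then approximates a spatial window query.
for point in [(2, 3), (3, 3), (10, 12)]:
    print(point, morton_code(*point))
```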

Journal Article DOI
TL;DR: An expert ranking of forestry journals was compared with Journal Impact Factors and h-indices computed from the ISI Web of Science and internet-based data, finding that the h-index exhibited a high correlation with the Journal Impact Factor.

100 citations
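The h-index used in the comparison above has a standard definition: the largest h such that h publications have at least h citations each. A small sketch:

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3
```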


Network Information
Related Topics (5)
Web page
50.3K papers, 975.1K citations
83% related
Ontology (information science)
57K papers, 869.1K citations
82% related
Graph (abstract data type)
69.9K papers, 1.2M citations
82% related
Feature learning
15.5K papers, 684.7K citations
81% related
Supervised learning
20.8K papers, 710.5K citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    3,112
2022    6,541
2021    1,105
2020    1,082
2019    1,168