Topic

Ranking (information retrieval)

About: Ranking (information retrieval) is a research topic. Over the lifetime, 21,109 publications have been published within this topic, receiving 435,130 citations.


Papers
Book
27 Feb 2015
TL;DR: A formal definition of the search result diversification problem is provided and the most successful approaches in the literature for producing and evaluating diversity in multiple search domains are described.
Abstract: Ranking in information retrieval has been traditionally approached as a pursuit of relevant information, under the assumption that the users' information needs are unambiguously conveyed by their submitted queries. Nevertheless, as an inherently limited representation of a more complex information need, every query can arguably be considered ambiguous to some extent. In order to tackle query ambiguity, search result diversification approaches have recently been proposed to produce rankings aimed to satisfy the multiple possible information needs underlying a query. In this survey, we review the published literature on search result diversification. In particular, we discuss the motivations for diversifying the search results for an ambiguous query and provide a formal definition of the search result diversification problem. In addition, we describe the most successful approaches in the literature for producing and evaluating diversity in multiple search domains. Finally, we also discuss recent advances as well as open research directions in the field of search result diversification.
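
Most diversification approaches of the kind surveyed above follow a greedy re-ranking scheme that trades relevance off against redundancy with the documents already selected. The following Python sketch illustrates that general idea in the style of maximal marginal relevance; the relevance scores, the similarity function, and the trade-off parameter lam are assumed inputs for illustration and are not taken from the survey.

def diversify(candidates, relevance, similarity, k, lam=0.5):
    """Greedy MMR-style re-ranking: at each step pick the candidate that
    best trades off relevance against redundancy with what is selected."""
    selected = []
    remaining = set(candidates)
    while remaining and len(selected) < k:
        def marginal_gain(d):
            # Redundancy = similarity to the closest already-selected document.
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(remaining, key=marginal_gain)
        selected.append(best)
        remaining.remove(best)
    return selected

Setting lam close to 1 recovers a pure relevance ranking, while smaller values favour covering more of the query's possible interpretations.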

110 citations

Proceedings Article
Fabio Aiolli
12 Oct 2013
TL;DR: A simple and scalable algorithm for top-N recommendation able to deal with very large datasets and (binary rated) implicit feedback, focusing on memory-based collaborative filtering algorithms similar to the well-known neighbourhood-based technique for explicit feedback.
Abstract: We present a simple and scalable algorithm for top-N recommendation able to deal with very large datasets and (binary rated) implicit feedback. We focus on memory-based collaborative filtering algorithms similar to the well-known neighbourhood-based technique for explicit feedback. The major difference, which makes the algorithm particularly scalable, is that it uses positive feedback only and no explicit computation of the complete (user-by-user or item-by-item) similarity matrix needs to be performed. The study of the proposed algorithm has been conducted on data from the Million Songs Dataset (MSD) challenge, whose task was to suggest a set of songs (out of more than 380k available songs) to more than 100k users, given half of each user's listening history and the complete listening history of another 1 million people. In particular, we investigate the entire recommendation pipeline, starting from the definition of suitable similarity and scoring functions and moving to suggestions on how to aggregate multiple ranking strategies to define the overall recommendation. The technique we are proposing extends and improves the one that already won the MSD challenge last year.
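
A minimal sketch of the kind of memory-based, positive-feedback-only scoring the abstract describes is given below; the asymmetric cosine similarity and the candidate-generation shortcut are illustrative assumptions, not the exact similarity and aggregation functions used in the paper.

from collections import defaultdict

def recommend_top_n(user_items, target_user, n=10, alpha=0.5):
    # user_items: dict mapping each user to the set of items with positive
    # (binary, implicit) feedback.
    item_users = defaultdict(set)          # item -> users who liked it
    for u, items in user_items.items():
        for i in items:
            item_users[i].add(u)

    history = user_items[target_user]

    # Candidate items: only items liked by users who share at least one item
    # with the target user, so the full item catalogue (and the full
    # item-by-item similarity matrix) is never touched.
    candidates = set()
    for i in history:
        for u in item_users[i]:
            candidates |= user_items[u]
    candidates -= history

    scores = {}
    for j in candidates:
        score = 0.0
        for i in history:
            overlap = len(item_users[i] & item_users[j])
            if overlap:
                # Asymmetric cosine-style item similarity; alpha balances the
                # popularities of the two items (an illustrative choice).
                den = (len(item_users[i]) ** alpha) * (len(item_users[j]) ** (1 - alpha))
                score += overlap / den
        scores[j] = score

    # Rank candidates by aggregated similarity to the user's listening history.
    return sorted(scores, key=scores.get, reverse=True)[:n]

Because candidates are drawn only from the histories of co-occurring users, the scoring stays sparse and scales with the data rather than with the size of the catalogue.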

110 citations

Proceedings Article
24 Oct 2011
TL;DR: This paper analyzes some of the challenges in performing automatic annotation and ranking of music audio, and proposes a few improvements, including the use of principal component analysis on the mel-scaled spectrum and the idea of multiscale learning.
Abstract: This paper analyzes some of the challenges in performing automatic annotation and ranking of music audio, and proposes a few improvements. First, we motivate the use of principal component analysis on the mel-scaled spectrum. Secondly, we present an analysis of the impact of the selection of pooling functions for summarization of the features over time. We show that combining several pooling functions improves the performance of the system. Finally, we introduce the idea of multiscale learning. By incorporating these ideas in our model, we obtained state-of-the-art performance on the Magnatagatune dataset.
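
The pooling step described above can be illustrated with a short sketch that concatenates several pooling functions over the time axis of a frame-level feature matrix; the feature dimensions and the particular set of pooling functions are illustrative assumptions, not the paper's configuration.

import numpy as np

def pool_features(frames, pools=("mean", "max", "var")):
    # frames: (time, features) matrix of frame-level descriptors, e.g.
    # PCA-compressed mel spectra. Returns one clip-level vector built by
    # concatenating several pooling functions applied over the time axis.
    pooled = []
    for p in pools:
        if p == "mean":
            pooled.append(frames.mean(axis=0))
        elif p == "max":
            pooled.append(frames.max(axis=0))
        elif p == "var":
            pooled.append(frames.var(axis=0))
        else:
            raise ValueError("unknown pooling function: %s" % p)
    return np.concatenate(pooled)

# Example: 500 frames of 40-dimensional features -> a 120-dimensional clip vector.
clip_vector = pool_features(np.random.rand(500, 40))

Combining complementary pooling functions in this way is what lets the clip-level representation capture both the average and the extreme behaviour of the features over time.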

110 citations

Proceedings Article
Tao Qin, Xudong Zhang, De-Sheng Wang, Tie-Yan Liu, Wei Lai, Hang Li
23 Jul 2007
TL;DR: This paper looks at an alternative approach to Ranking SVM, called "Multiple Hyperplane Ranker" (MHR), which takes a divide-and-conquer strategy, and makes comparisons between the two approaches.
Abstract: The central problem for many applications in Information Retrieval is ranking, and learning to rank is considered a promising approach for addressing the issue. Ranking SVM, for example, is a state-of-the-art method for learning to rank and has been empirically demonstrated to be effective. In this paper, we study the issue of learning to rank, particularly the approach of using SVM techniques to perform the task. We point out that although Ranking SVM is advantageous, it still has shortcomings. Ranking SVM employs a single hyperplane in the feature space as the model for ranking, which is too simple to tackle complex ranking problems. Furthermore, the training of Ranking SVM is also computationally costly. In this paper, we look at an alternative approach to Ranking SVM, which we call "Multiple Hyperplane Ranker" (MHR), and make comparisons between the two approaches. MHR takes a divide-and-conquer strategy. It employs multiple hyperplanes to rank instances and finally aggregates the ranking results given by the hyperplanes. MHR contains Ranking SVM as a special case, and MHR can overcome the shortcomings that Ranking SVM suffers from. Experimental results on two information retrieval datasets show that MHR can outperform Ranking SVM in ranking.
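
As a rough illustration of the contrast the paper draws, the sketch below reduces Ranking SVM to binary classification over pairwise feature differences and then, in the spirit of MHR, trains one hyperplane per rank boundary and aggregates their scores; the binarization of grades and the averaging rule are assumptions for illustration, not the paper's exact formulation.

import numpy as np
from sklearn.svm import LinearSVC

def fit_pairwise_ranker(X, y):
    # Classic Ranking SVM reduction: train a binary classifier on pairwise
    # feature differences and keep only the learned direction w.
    # X: (n, d) NumPy array of feature vectors, y: integer relevance grades.
    diffs, labels = [], []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                diffs.append(X[i] - X[j])
                labels.append(1)
                diffs.append(X[j] - X[i])
                labels.append(-1)
    clf = LinearSVC().fit(np.array(diffs), np.array(labels))
    return clf.coef_.ravel()

def mhr_scores(X, y, boundaries):
    # Illustrative MHR-style variant: one hyperplane per rank boundary,
    # each trained on the grades binarized at that boundary, with the
    # resulting scoring directions averaged (the aggregation is an assumption).
    ws = [fit_pairwise_ranker(X, (y >= b).astype(int)) for b in boundaries]
    return X @ np.mean(ws, axis=0)

The single-hyperplane case corresponds to calling fit_pairwise_ranker on the full grade set, which is why a multiple-hyperplane ranker can contain it as a special case.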

110 citations

Proceedings Article
01 Aug 1998
TL;DR: The effects of query structures and query expansion (QE) on retrieval performance were tested with a best-match retrieval system; with weak structures and Boolean structured queries, QE was not very effective.
Abstract: The effects of query structures and query expansion (QE) on retrieval performance were tested with a best match retrieval system (INQUERY). Query structure means the use of operators to express the relations between search keys. Eight different structures were tested, representing weak structures (averages and weighted averages of the weights of the keys) and strong structures (e.g., queries with more elaborated search key relations). QE was based on concepts, which were first selected from a conceptual model and then expanded by semantic relationships given in the model. The expansion levels were (a) no expansion, (b) a synonym expansion, (c) a narrower concept expansion, (d) an associative concept expansion, and (e) a cumulative expansion of all other expansions. With weak structures and Boolean structured queries, QE was not very effective. The best performance was achieved with one of the strong structures at the largest expansion level.
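
The expansion levels and query structures in the abstract can be illustrated with a toy concept model and a query builder; the example concept, the expansion relations, and the operator names are illustrative assumptions rather than the study's actual conceptual model or INQUERY syntax.

# A toy concept model: each concept has synonyms, narrower concepts, and
# associative (related) concepts, mirroring the expansion levels in the study.
CONCEPT_MODEL = {
    "vessel": {
        "synonyms": ["ship", "boat"],
        "narrower": ["tanker", "ferry"],
        "associative": ["harbour", "cargo"],
    },
}

def expand(concept, level):
    # Return the search keys for one concept at a given expansion level:
    # "none", "synonym", "narrower", "associative", or "cumulative".
    model = CONCEPT_MODEL[concept]
    keys = [concept]
    if level in ("synonym", "cumulative"):
        keys += model["synonyms"]
    if level in ("narrower", "cumulative"):
        keys += model["narrower"]
    if level in ("associative", "cumulative"):
        keys += model["associative"]
    return keys

def structured_query(concepts, level, strong=True):
    # Strong structure groups each concept's keys under a synonym-style
    # operator; weak structure flattens everything into one bag of keys.
    # Operator names (#and, #syn, #sum) are illustrative, not INQUERY's own.
    groups = [expand(c, level) for c in concepts]
    if strong:
        return "#and(" + " ".join("#syn(" + " ".join(g) + ")" for g in groups) + ")"
    return "#sum(" + " ".join(k for g in groups for k in g) + ")"

print(structured_query(["vessel"], "cumulative", strong=True))

In this framing, a strong structure keeps each concept's expanded keys grouped together, which is why expansion helps it more than it helps a flat, weakly structured query.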

110 citations


Network Information
Related Topics (5)
Web page
50.3K papers, 975.1K citations
83% related
Ontology (information science)
57K papers, 869.1K citations
82% related
Graph (abstract data type)
69.9K papers, 1.2M citations
82% related
Feature learning
15.5K papers, 684.7K citations
81% related
Supervised learning
20.8K papers, 710.5K citations
81% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2024    1
2023    3,112
2022    6,541
2021    1,105
2020    1,082
2019    1,168