Topic
Ranking (information retrieval)
About: Ranking (information retrieval) is a research topic. Over its lifetime, 21,109 publications have been published within this topic, receiving 435,130 citations.
Papers
TL;DR: In this article, a two-branch neural network is proposed to learn the similarity between images and sentences, which achieves high accuracies for phrase localization on the Flickr30K Entities dataset and for bi-directional image-sentence retrieval on the Flickr30K and MSCOCO datasets.
Abstract: Image-language matching tasks have recently attracted a lot of attention in the computer vision field. These tasks include image-sentence matching, i.e., given an image query, retrieving relevant sentences and vice versa, and region-phrase matching or visual grounding, i.e., matching a phrase to relevant regions. This paper investigates two-branch neural networks for learning the similarity between these two data modalities. We propose two network structures that produce different output representations. The first one, referred to as an embedding network, learns an explicit shared latent embedding space with a maximum-margin ranking loss and novel neighborhood constraints. Compared to standard triplet sampling, we perform improved neighborhood sampling that takes neighborhood information into consideration while constructing mini-batches. The second network structure, referred to as a similarity network, fuses the two branches via element-wise product and is trained with regression loss to directly predict a similarity score. Extensive experiments show that our networks achieve high accuracies for phrase localization on the Flickr30K Entities dataset and for bi-directional image-sentence retrieval on Flickr30K and MSCOCO datasets.
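The embedding network's maximum-margin ranking objective can be sketched as a standard triplet loss on cosine similarities; this is a generic formulation (the paper's neighborhood constraints and sampling scheme are omitted), and the toy vectors below are illustrative:

```python
import numpy as np

def margin_ranking_loss(img, pos_sent, neg_sent, margin=0.2):
    """Max-margin triplet loss on one (image, matching sentence,
    non-matching sentence) triple: the positive pair should score
    higher than the negative pair by at least `margin`."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(0.0, margin - cos(img, pos_sent) + cos(img, neg_sent))

# Toy embeddings: the matching sentence lies close to the image,
# so this triple already satisfies the margin and incurs zero loss.
img = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])
neg = np.array([0.0, 1.0])
loss = margin_ranking_loss(img, pos, neg)
```

Swapping the positive and negative sentences would violate the margin and produce a positive loss, which is what drives the gradient updates during training.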
391 citations
02 Feb 2017
TL;DR: A counterfactual inference framework is presented that provides the theoretical basis for unbiased LTR via Empirical Risk Minimization despite biased data, and a Propensity-Weighted Ranking SVM is derived for discriminative learning from implicit feedback, where click models take the role of the propensity estimator.
Abstract: Implicit feedback (e.g., clicks, dwell times, etc.) is an abundant source of data in human-interactive systems. While implicit feedback has many advantages (e.g., it is inexpensive to collect, user-centric, and timely), its inherent biases are a key obstacle to its effective use. For example, position bias in search rankings strongly influences how many clicks a result receives, so that directly using click data as a training signal in Learning-to-Rank (LTR) methods yields sub-optimal results. To overcome this bias problem, we present a counterfactual inference framework that provides the theoretical basis for unbiased LTR via Empirical Risk Minimization despite biased data. Using this framework, we derive a Propensity-Weighted Ranking SVM for discriminative learning from implicit feedback, where click models take the role of the propensity estimator. In contrast to most conventional approaches to de-biasing the data using click models, this allows training of ranking functions even in settings where queries do not repeat. Beyond the theoretical support, we show empirically that the proposed learning method is highly effective in dealing with biases, that it is robust to noise and propensity model misspecification, and that it scales efficiently. We also demonstrate the real-world applicability of our approach on an operational search engine, where it substantially improves retrieval performance.
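The core of propensity weighting can be illustrated with a simple position-based examination model: each click is up-weighted by the inverse of the probability that its rank position was examined at all. The `1/rank**eta` form and the `eta` parameter here are common illustrative assumptions, not the paper's estimator:

```python
def propensity(rank, eta=1.0):
    """Position-bias model: assumed probability that a result shown
    at `rank` (1-based) is examined by the user. eta controls how
    steeply examination drops off with position."""
    return (1.0 / rank) ** eta

def ips_weight(rank, eta=1.0):
    """Inverse-propensity weight for a click observed at `rank`:
    clicks at low positions count more, correcting position bias."""
    return 1.0 / propensity(rank, eta)

# With eta=1, a click at rank 5 counts five times as much as a click
# at rank 1, because it was five times less likely to be examined.
w1, w5 = ips_weight(1), ips_weight(5)
```

In the Propensity-Weighted Ranking SVM, these weights multiply each clicked document's contribution to the empirical risk, which is what makes the training objective unbiased despite the biased click log.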
388 citations
10 May 1996
TL;DR: In this paper, a system, method, and various software products provide improved information retrieval performance across multiple document databases by retrieving, in response to a user query, a set of documents that globally satisfies the query, even though each database maintains independent document indices, term-frequency information, and scoring functions.
Abstract: A system, method, and various software products provide improved information retrieval performance from multiple document databases by retrieving from the multiple document databases, in response to a user query, a set of documents that globally satisfies the query, even though each database maintains independent document indices, term frequency information, and scoring functions. The global search result approximates, to any desired degree of error, the search results that would have been obtained had the multiple document databases been globally indexed. This is done by sharing, at the time the query is executed, a small subset of information about the local relative significance of terms related to the user's query, and from this information, determining a global relative significance of such terms. From the global relative significance, the individual document databases determine their query results, which are then merged into a global set of documents satisfying the query. The shared local relative significance information may be the inverse document frequency of each of a number of terms related to the query, or it may be the total frequency of each of those terms. The global relative significance may correspondingly be a global inverse document frequency, or a global term frequency from which the global inverse document frequency is calculated.
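The merging of local statistics into a global inverse document frequency can be sketched as below. Only two small counts per database need to be shared at query time; the logarithmic form used here is the textbook IDF definition, and the patent's exact scoring functions may differ:

```python
import math

def global_idf(local_stats):
    """Combine per-database statistics for one query term into a
    global IDF. `local_stats` is a list of (num_docs, doc_freq)
    pairs, one per database, where doc_freq is the number of local
    documents containing the term."""
    total_docs = sum(n for n, _ in local_stats)
    total_df = sum(df for _, df in local_stats)
    return math.log(total_docs / total_df)

# Three databases with independent indices share only their counts:
stats = [(1000, 10), (5000, 40), (4000, 50)]
idf = global_idf(stats)  # log(10000 / 100), as if globally indexed
```

Each database can then rescore its local results with this shared global IDF before the per-database result lists are merged, approximating a single global index.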
385 citations
20 Jun 2011
TL;DR: This work proposes a principled approach to multi-attribute retrieval that explicitly models the correlations present between attributes in the vocabulary, and integrates ranking and retrieval within the same formulation.
Abstract: We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking/retrieval. Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild (LFW), FaceTracer and PASCAL VOC datasets shows that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.
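One way to read "explicitly modeling attribute correlations" is a score that combines per-attribute classifier outputs (unary terms) with pairwise correlation terms, instead of heuristically summing independent classifiers. The sketch below is illustrative only: the function, the multiplicative pairwise form, and the toy numbers are assumptions, not the paper's structured-prediction formulation:

```python
import numpy as np

def joint_attribute_score(attr_scores, query_attrs, corr):
    """Score an image for a multi-attribute query.
    Unary terms: the image's classifier scores for the queried
    attributes. Pairwise terms: a correlation matrix `corr[a, b]`
    that rewards (or penalizes) attribute co-occurrence."""
    unary = sum(attr_scores[a] for a in query_attrs)
    pairwise = sum(corr[a, b] * attr_scores[a] * attr_scores[b]
                   for a in query_attrs for b in query_attrs if a < b)
    return unary + pairwise

# Toy example: query = {attribute 0, attribute 1}, which are positively
# correlated, so an image scoring high on both gets an extra boost.
scores = np.array([0.9, 0.8, 0.1])
corr = np.array([[0.0, 0.5, 0.0],
                 [0.5, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
s = joint_attribute_score(scores, [0, 1], corr)
```

Ranking then simply sorts images by this joint score, which is how ranking and retrieval can share one formulation.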
384 citations
01 Aug 2013
TL;DR: This work demonstrates that it is possible to learn a semantic lexicon and a linear ranking function without manually annotating questions; the approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter-estimation scheme.
Abstract: We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.
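Per training example, perceptron parameter estimation for a linear ranking function reduces to a standard structured-perceptron update; this generic sketch (the names and toy feature vectors are illustrative, not the paper's feature set) shows one such step:

```python
import numpy as np

def perceptron_rank_update(w, f_correct, f_predicted, lr=1.0):
    """One structured-perceptron step: when the model's top-scoring
    candidate differs from the correct one, move the weights toward
    the correct candidate's features and away from the prediction."""
    return w + lr * (f_correct - f_predicted)

w = np.zeros(3)
f_gold = np.array([1.0, 0.0, 1.0])  # features of the correct database query
f_pred = np.array([0.0, 1.0, 1.0])  # features of the mistaken top prediction
w = perceptron_rank_update(w, f_gold, f_pred)
```

After the update the correct candidate outscores the previous mistaken one; parallelization typically means running such updates on data shards and averaging the resulting weight vectors.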
382 citations