Topic

Document retrieval

About: Document retrieval is a research topic. Over the lifetime, 6821 publications have been published within this topic receiving 214383 citations.


Papers
Report
01 Jan 2004
TL;DR: The High Accuracy Retrieval from Documents (HARD) track explores methods for improving the accuracy of document retrieval systems, asking whether additional metadata about the query, the searcher, or the context of the search can provide more focused and, therefore, more accurate results.
Abstract: The High Accuracy Retrieval from Documents (HARD) track explores methods for improving the accuracy of document retrieval systems. It does so by considering three questions. Can additional metadata about the query, the searcher, or the context of the search provide more focused and, therefore, more accurate results? These metadata items generally do not directly affect whether or not a document is on topic, but they do affect whether it is relevant. For example, a person looking for introductory material will not find an on-topic but highly technical document relevant. Can highly focused, short-duration, interaction with the searcher be used to improve the accuracy of a system? Participants created "clarification forms" generated in response to a query -- and leveraging any information available in the corpus -- that were filled out by the searcher. Typical clarification questions might ask whether some titles seem relevant, whether some words or names are on topic, or whether a short passage of text is related. Can passage retrieval be used to effectively focus attention on relevant material, increasing accuracy by eliminating unwanted text in an otherwise useful document? For this aspect of the problem, there are challenges in finding relevant passages, but also in determining how best to evaluate the results. The HARD track ran for the second time in TREC 2004. It used a new corpus and a new set of 50 topics for evaluation. All topics included metadata information and clarification forms were considered for each of them. Because of the expense of sub-document relevance judging, only half of the topics were used in the passage-level evaluation. A total of 16 sites participated in HARD, up from 14 sites the year before. Interest remains strong, so the HARD track will run again in TREC 2005, but because of funding uncertainties will only address a subset of the issues.
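The track's first question -- using searcher metadata to sharpen ranking -- can be illustrated with a small sketch. The track itself does not prescribe an algorithm; the `familiarity` field, the per-document `technicality` estimate, and the penalty weight below are all illustrative assumptions:

```python
# Illustrative sketch: re-rank a baseline result list using searcher
# metadata, in the spirit of the HARD track's first question. The
# "familiarity"/"technicality" signals and the 0.5 penalty weight are
# hypothetical, not part of the track definition.

def rerank_by_familiarity(results, searcher_familiarity):
    """results: list of (doc_id, topical_score, technicality) tuples,
    where technicality in [0, 1] estimates how technical a document is.
    searcher_familiarity in [0, 1]: 0 = novice, 1 = expert."""
    reranked = []
    for doc_id, score, technicality in results:
        # Penalize documents whose technicality exceeds the searcher's
        # familiarity: on-topic but too technical -> likely not relevant.
        mismatch = max(0.0, technicality - searcher_familiarity)
        reranked.append((doc_id, score * (1.0 - 0.5 * mismatch)))
    reranked.sort(key=lambda pair: pair[1], reverse=True)
    return reranked

# A highly technical document outscores an introductory one on topic alone,
# but a novice searcher's metadata flips the order.
results = [("d1", 0.9, 0.95), ("d2", 0.8, 0.2)]
print(rerank_by_familiarity(results, searcher_familiarity=0.1))
```

This captures the abstract's point that metadata does not change whether a document is on topic, only whether it is relevant to this searcher.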

208 citations

Journal Article
01 May 1995
TL;DR: Improvements to the probabilistic information retrieval system based upon a Bayesian inference network model are described, including transforming forms-based specifications of information needs into complex structured queries, automatic query expansion, automatic recognition of features in documents, relevance feedback, and simulated document routing.
Abstract: INQUERY is a probabilistic information retrieval system based upon a Bayesian inference network model. This paper describes recent improvements to the system as a result of participation in the TIPSTER project and the TREC-2 conference. Improvements include transforming forms-based specifications of information needs into complex structured queries, automatic query expansion, automatic recognition of features in documents, relevance feedback, and simulated document routing. Experiments with one- and two-gigabyte document collections are also described.
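In an inference-network retriever of this kind, each query node holds a belief that a document satisfies it, and query operators combine child beliefs probabilistically. A minimal sketch, using a widely cited simplified form of the INQUERY term-belief formula and the standard operator semantics (the exact estimation in the deployed system differs):

```python
# Sketch of belief estimation and combination in a Bayesian inference
# network retriever such as INQUERY. The tf/idf belief formula is the
# commonly cited simplified form (0.4 default belief); operator
# semantics follow the standard inference-network treatment.
import math

def belief(tf, dl, avgdl, df, N):
    """Belief that a document of length dl 'supports' a term with
    document frequency df, in a collection of N documents."""
    t = tf / (tf + 0.5 + 1.5 * dl / avgdl)           # length-normalized tf
    i = math.log((N + 0.5) / df) / math.log(N + 1)   # normalized idf
    return 0.4 + 0.6 * t * i                          # default belief 0.4

def op_sum(beliefs):   # #sum: mean of child beliefs
    return sum(beliefs) / len(beliefs)

def op_and(beliefs):   # #and: product of child beliefs
    p = 1.0
    for b in beliefs:
        p *= b
    return p

def op_or(beliefs):    # #or: complement of product of complements
    p = 1.0
    for b in beliefs:
        p *= 1.0 - b
    return 1.0 - p
```

Structured queries are then trees of such operators over term beliefs, which is what makes the forms-to-query transformation described in the abstract possible.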

206 citations

Patent
Doreen Y. Chen
28 Dec 1998
TL;DR: This patent presents an information organization and retrieval system that efficiently organizes documents for rapid search and retrieval based upon topical content; documents may be relevant to one or more topics and are associated with each topic via topical hierarchies maintained by the information servers.
Abstract: An information organization and retrieval system that efficiently organizes documents for rapid and efficient search and retrieval based upon topical content is presented. The information organization and retrieval system is optimized for the organization and retrieval of only those documents that are relevant to a given set of predefined topics. If a document does not have a topic that is included in the given set of topics, the document is excluded from the provided service. In like manner, if a document includes a topic that is specifically banned from the provided service, it is excluded. In this paradigm, the provider purposely limits the scope of the provided search and retrieval services, but in so doing provides a more efficient and effective service that is targeted to an expected user demand. The information organization and retrieval system also supports context-sensitive search and retrieval techniques, including the use of predefined or user-defined views for augmenting the search criteria, as well as the use of user specific vocabularies. In a preferred embodiment, the select set of topics are organized in multiple overlapping hierarchies, and a distributed software architecture is used to support the topic-based information organization, routing, and retrieval services. Documents may be relevant to one or more topics, and will be associated with each topic via the topical hierarchies that are maintained by the information servers.
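The core mechanics -- a predefined topic set, banned topics, and overlapping topic hierarchies that route documents to every ancestor topic -- can be sketched in a few lines. Class and method names below are illustrative, not taken from the patent:

```python
# Minimal sketch of topic-based document organization with overlapping
# hierarchies, in the spirit of the patent's design. All names here are
# hypothetical illustrations.
class TopicIndex:
    def __init__(self, allowed_topics, banned_topics=()):
        self.allowed = set(allowed_topics)
        self.banned = set(banned_topics)
        self.parents = {}         # topic -> set of parent topics (may overlap)
        self.docs_by_topic = {}   # topic -> set of doc ids

    def add_edge(self, child, parent):
        """A topic may sit under several parents: overlapping hierarchies."""
        self.parents.setdefault(child, set()).add(parent)

    def add_document(self, doc_id, topics):
        topics = set(topics)
        # Exclude documents carrying a banned topic, or none of the
        # predefined topics -- the patent's deliberate scope limitation.
        if self.banned & topics or not (self.allowed & topics):
            return False
        for t in self.allowed & topics:
            # Associate the document with the topic and all its ancestors,
            # so topic-level search finds it anywhere up the hierarchies.
            for ancestor in self._ancestors(t):
                self.docs_by_topic.setdefault(ancestor, set()).add(doc_id)
        return True

    def _ancestors(self, topic):
        seen, stack = set(), [topic]
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(self.parents.get(t, ()))
        return seen

    def search(self, topic):
        return self.docs_by_topic.get(topic, set())
```

Because a topic can have multiple parents, one document indexed under "python" becomes reachable from both a "programming" hierarchy and a "snakes" hierarchy, matching the patent's multiple-overlapping-hierarchy organization.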

205 citations

Book Chapter
05 Apr 2004
TL;DR: Phrases, word senses, and syntactic relations derived by Natural Language Processing techniques were observed to be ineffective at increasing retrieval accuracy.
Abstract: Previous research on advanced representations for document retrieval has shown that statistical state-of-the-art models are not improved by a variety of different linguistic representations. Phrases, word senses, and syntactic relations derived by Natural Language Processing (NLP) techniques were observed to be ineffective at increasing retrieval accuracy. For Text Categorization (TC), fewer and less definitive studies on the use of advanced document representations are available, as it is a relatively new research area (compared to document retrieval).
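The simplest of the "advanced representations" studied here is phrase indexing: adding contiguous word pairs as features alongside single terms. A purely illustrative sketch (function name and feature format are assumptions):

```python
# Illustrative sketch of a phrase-augmented bag-of-words representation:
# unigram features plus contiguous bigram "phrase" features. This is the
# kind of representation the study found did not beat plain bag-of-words.
def bow_with_bigrams(text):
    """Return unigram + contiguous-bigram features for a document."""
    tokens = text.lower().split()
    features = list(tokens)
    features += [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return features

print(bow_with_bigrams("information retrieval systems"))
```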

203 citations

01 Jan 2005
TL;DR: What Happened in CLEF 2004?
Abstract: What Happened in CLEF 2004?.-
I. Ad Hoc Text Retrieval Tracks.- CLEF 2004: Ad Hoc Track Overview and Results Analysis.- Selection and Merging Strategies for Multilingual Information Retrieval.- Using Surface-Syntactic Parser and Deviation from Randomness.- Cross-Language Retrieval Using HAIRCUT at CLEF 2004.- Experiments on Statistical Approaches to Compensate for Limited Linguistic Resources.- Application of Variable Length N-Gram Vectors to Monolingual and Bilingual Information Retrieval.- Integrating New Languages in a Multilingual Search System Based on a Deep Linguistic Analysis.- IR-n r2: Using Normalized Passages.- Using COTS Search Engines and Custom Query Strategies at CLEF.- Report on Thomson Legal and Regulatory Experiments at CLEF-2004.- Effective Translation, Tokenization and Combination for Cross-Lingual Retrieval.- Two-Stage Refinement of Transitive Query Translation with English Disambiguation for Cross-Language Information Retrieval: An Experiment at CLEF 2004.- Dictionary-Based Amharic - English Information Retrieval.- Dynamic Lexica for Query Translation.- SINAI at CLEF 2004: Using Machine Translation Resources with a Mixed 2-Step RSV Merging Algorithm.- Mono- and Crosslingual Retrieval Experiments at the University of Hildesheim.- University of Chicago at CLEF2004: Cross-Language Text and Spoken Document Retrieval.- UB at CLEF2004: Cross Language Information Retrieval Using Statistical Language Models.- MIRACLE's Hybrid Approach to Bilingual and Monolingual Information Retrieval.- Searching a Russian Document Collection Using English, Chinese and Japanese Queries.- Dublin City University at CLEF 2004: Experiments in Monolingual, Bilingual and Multilingual Retrieval.- Finnish, Portuguese and Russian Retrieval with Hummingbird SearchServerTM at CLEF 2004.- Data Fusion for Effective European Monolingual Information Retrieval.- The XLDB Group at CLEF 2004.- The University of Glasgow at CLEF 2004: French Monolingual Information Retrieval with Terrier.-
II. Domain-Specific Document Retrieval.- The Domain-Specific Track in CLEF 2004: Overview of the Results and Remarks on the Assessment Process.- University of Hagen at CLEF 2004: Indexing and Translating Concepts for the GIRT Task.- IRIT at CLEF 2004: The English GIRT Task.- Ricoh at CLEF 2004.- GIRT and the Use of Subject Metadata for Retrieval.-
III. Interactive Cross-Language Information Retrieval.- iCLEF 2004 Track Overview: Pilot Experiments in Interactive Cross-Language Question Answering.- Interactive Cross-Language Question Answering: Searching Passages Versus Searching Documents.- Improving Interaction with the User in Cross-Language Question Answering Through Relevant Domains and Syntactic Semantic Patterns.- Cooperation, Bookmarking, and Thesaurus in Interactive Bilingual Question Answering.- Summarization Design for Interactive Cross-Language Question Answering.- Interactive and Bilingual Question Answering Using Term Suggestion and Passage Retrieval.-
IV. Multiple Language Question Answering.- Overview of the CLEF 2004 Multilingual Question Answering Track.- A Question Answering System for French.- Cross-Language French-English Question Answering Using the DLT System at CLEF 2004.- Experiments on Robust NL Question Interpretation and Multi-layered Document Annotation for a Cross-Language Question/Answering System.- Making Stone Soup: Evaluating a Recall-Oriented Multi-stream Question Answering System for Dutch.- The DIOGENE Question Answering System at CLEF-2004.- Cross-Lingual Question Answering Using Off-the-Shelf Machine Translation.- Bulgarian-English Question Answering: Adaptation of Language Resources.- Answering French Questions in English by Exploiting Results from Several Sources of Information.- Finnish as Source Language in Bilingual Question Answering.- miraQA: Experiments with Learning Answer Context Patterns from the Web.- Question Answering for Spanish Supported by Lexical Context Annotation.- Question Answering Using Sentence Parsing and Semantic Network Matching.- First Evaluation of Esfinge - A Question Answering System for Portuguese.- University of Evora in QA@CLEF-2004.- COLE Experiments at QA@CLEF 2004 Spanish Monolingual Track.- Does English Help Question Answering in Spanish?.- The TALP-QA System for Spanish at CLEF 2004: Structural and Hierarchical Relaxing of Semantic Constraints.- ILC-UniPI Italian QA.- Question Answering Pilot Task at CLEF 2004.- Evaluation of Complex Temporal Questions in CLEF-QA.-
V. Cross-Language Retrieval in Image Collections.- The CLEF 2004 Cross-Language Image Retrieval Track.- Caption and Query Translation for Cross-Language Image Retrieval.- Pattern-Based Image Retrieval with Constraints and Preferences on ImageCLEF 2004.- How to Visually Retrieve Images from the St. Andrews Collection Using GIFT.- UNED at ImageCLEF 2004: Detecting Named Entities and Noun Phrases for Automatic Query Expansion and Structuring.- Dublin City University at CLEF 2004: Experiments with the ImageCLEF St. Andrew's Collection.- From Text to Image: Generating Visual Query for Image Retrieval.- Toward Cross-Language and Cross-Media Image Retrieval.- FIRE - Flexible Image Retrieval Engine: ImageCLEF 2004 Evaluation.- MIRACLE Approach to ImageCLEF 2004: Merging Textual and Content-Based Image Retrieval.- Cross-Media Feedback Strategies: Merging Text and Image Information to Improve Image Retrieval.- ImageCLEF 2004: Combining Image and Multi-lingual Search for Medical Image Retrieval.- Multi-modal Information Retrieval Using FINT.- Medical Image Retrieval Using Texture, Locality and Colour.- SMIRE: Similar Medical Image Retrieval Engine.- A Probabilistic Approach to Medical Image Retrieval.- UB at CLEF2004 Cross Language Medical Image Retrieval.- Content-Based Queries on the CasImage Database Within the IRMA Framework.- Comparison and Combination of Textual and Visual Features for Interactive Cross-Language Image Retrieval.- MSU at ImageCLEF: Cross Language and Interactive Image Retrieval.-
VI. Cross-Language Spoken Document Retrieval.- CLEF 2004 Cross-Language Spoken Document Retrieval Track.-
VII. Issues in CLIR and in Evaluation.- The Key to the First CLEF with Portuguese: Topics, Questions and Answers in CHAVE.- How Do Named Entities Contribute to Retrieval Effectiveness?.

201 citations


Network Information
Related Topics (5)
- Web page: 50.3K papers, 975.1K citations, 81% related
- Metadata: 43.9K papers, 642.7K citations, 79% related
- Recommender system: 27.2K papers, 598K citations, 79% related
- Ontology (information science): 57K papers, 869.1K citations, 78% related
- Natural language: 31.1K papers, 806.8K citations, 77% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    39
2021    107
2020    130
2019    144
2018    111