Journal Article

User evaluation of information retrieval systems

Cyril W. Cleverdon
01 Feb 1974 - Vol. 30, Iss. 2, pp. 170-180
Abstract
While Fairthorne may not have been the first person to recognize it, he was, for this author, the first to make explicit the fundamental problems of information retrieval systems, namely the clash between OBNA and ABNO (Only-But-Not-All and All-But-Not-Only). Although the terms did not occur in Fairthorne's writings until 1958, the concept had been discussed in many meetings of the AGARD Documentation Panel and elsewhere. Originally it was considered that meeting these two requirements might call for two separate systems, and the test of the UNITERM system in 1954 was based on the hypothesis that a ‘Marshalling’ system (e.g. U.D.C.) was fundamentally different from a ‘Retrieval’ system (e.g. UNITERM). While the idea persisted in this form for some time, it gradually evolved into the inverse relationship of recall and precision: of the relevant documents, it is possible to obtain All-But-Not-Only, or alternatively Only-But-Not-All, but it is not possible to obtain All and Only.
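
For reference, the trade-off described above is usually stated in terms of the standard set-based definitions of recall and precision (a conventional formulation, not part of the original abstract), where R is the set of relevant documents and S the set of documents retrieved:

\text{recall} = \frac{|R \cap S|}{|R|}, \qquad \text{precision} = \frac{|R \cap S|}{|S|}

In this notation, All-But-Not-Only corresponds to recall = 1 with precision < 1, Only-But-Not-All to precision = 1 with recall < 1, and ‘All and Only’ would require both to equal 1 at once.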


Citations
Book

Information Retrieval Interaction

TL;DR: This electronic version was converted to PDF from the original manuscript with no changes apart from typographical adjustments; the page numbering of the electronic version matches that of the printed version.
Journal Article

Information retrieval through man‐machine dialogue

TL;DR: Initial tests with a prototype program indicate that a performance equal to that obtainable from a more conventional on‐line retrieval system is possible without obliging the user to formulate his query.
Journal Article

Evaluation measures for interactive information retrieval

TL;DR: It was shown that the value of search results as a whole is the best single measure of interactive IR performance among the measures selected, and that precision, one of the most important traditional measures of effectiveness, is not significantly correlated with success.
Journal Article

Interactive query expansion: a user-based evaluation in a relevance feedback environment

TL;DR: A user-centered investigation of interactive query expansion within the context of a relevance feedback system is presented, providing evidence for the effectiveness of interactive query expansion and highlighting the need for further research.
Book

Search User Interface Design

TL;DR: This book aims to provide the reader with a framework for thinking about how different innovations each contribute to the overall design of a search user interface (SUI), and provides a series of 20 SUI design recommendations, listed in the conclusions.
References
Journal Article

On selecting a measure of retrieval effectiveness

TL;DR: It is argued that a user's subjective evaluation of the personal utility of a retrieval system's output to him, if it could be properly quantified, would be a near-ideal measure of retrieval effectiveness.
Journal Article

Relevance assessments and retrieval system evaluation

TL;DR: It is found that large-scale differences in the relevance assessments do not produce significant variations in average recall and precision; properly computed recall and precision data may thus represent effectiveness indicators that are generally valid for many distinct user classes.
Journal Article

On the Inverse Relationship of Recall and Precision

TL;DR: In this paper it was shown that, within a single system, assuming that a sequence of sub-searches for a particular question is made in the logical order of expected decreasing precision and that the requirements are those stated in the question, there is an inverse relationship between recall and precision when the results of a number of different searches are averaged.
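
As a purely illustrative sketch of the averaging just described (hypothetical code and toy data, not drawn from the paper), the following computes recall and precision at successive cutoff depths of ranked result lists and averages them across queries:

def recall_precision_at_cutoffs(ranked, relevant):
    """Yield (recall, precision) after each retrieved document."""
    relevant = set(relevant)
    hits = 0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        yield hits / len(relevant), hits / k

# Toy data: two queries, each a ranked result list with its relevant set.
queries = [
    (["d1", "d2", "d3", "d4", "d5"], {"d1", "d3", "d5"}),
    (["d2", "d1", "d4", "d3", "d5"], {"d1", "d2"}),
]

per_query = [list(recall_precision_at_cutoffs(r, rel)) for r, rel in queries]

# Average recall and precision across queries at each cutoff depth.
for depth in range(5):
    avg_r = sum(q[depth][0] for q in per_query) / len(per_query)
    avg_p = sum(q[depth][1] for q in per_query) / len(per_query)
    print(f"depth {depth + 1}: recall={avg_r:.2f}, precision={avg_p:.2f}")

On this toy data the averaged precision falls monotonically (1.00, 0.75, 0.67, 0.50, 0.50) while the averaged recall climbs to 1.00, matching the inverse trend the TL;DR describes for averaged searches.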