
Showing papers on "Human–computer information retrieval published in 1994"


Proceedings ArticleDOI
24 May 1994
TL;DR: In this paper, the problem of incremental updates of inverted lists is addressed using a new dual-structure index that dynamically separates long and short inverted lists and optimizes retrieval, update, and storage of each type of list.
Abstract: With the proliferation of the world's “information highways” a renewed interest in efficient document indexing techniques has come about. In this paper, the problem of incremental updates of inverted lists is addressed using a new dual-structure index. The index dynamically separates long and short inverted lists and optimizes retrieval, update, and storage of each type of list. To study the behavior of the index, a space of engineering trade-offs which range from optimizing update time to optimizing query performance is described. We quantitatively explore this space by using actual data and hardware in combination with a simulation of an information retrieval system. We then describe the best algorithm for a variety of criteria.
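
To make the dual-structure idea concrete, the sketch below (not the paper's actual algorithm; the class name and the 128-posting threshold are illustrative assumptions) keeps short postings lists in a small per-term structure and promotes a list to a separate long-list store once it grows past the threshold:

```python
from collections import defaultdict

LONG_LIST_THRESHOLD = 128  # assumed cutoff between "short" and "long" lists

class DualStructureIndex:
    def __init__(self):
        self.short_lists = defaultdict(list)  # term -> small posting list
        self.long_lists = {}                  # term -> large posting list

    def add_document(self, doc_id, terms):
        """Incrementally add one document's terms to the index."""
        for term in set(terms):
            if term in self.long_lists:
                self.long_lists[term].append(doc_id)
            else:
                postings = self.short_lists[term]
                postings.append(doc_id)
                if len(postings) > LONG_LIST_THRESHOLD:
                    # promote: move the list to the long-list store
                    self.long_lists[term] = postings
                    del self.short_lists[term]

    def postings(self, term):
        return self.long_lists.get(term) or self.short_lists.get(term, [])
```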

200 citations


Proceedings ArticleDOI
01 Aug 1994
TL;DR: This paper applies LSI to the routing task, which operates under the assumption that a sample of relevant and non-relevant documents is available to use in constructing the query, and finds that when LSI is used in conjunction with statistical classification, there is a dramatic improvement in performance.
Abstract: Latent Semantic Indexing (LSI) is a novel approach to information retrieval that attempts to model the underlying structure of term associations by transforming the traditional representation of documents as vectors of weighted term frequencies to a new coordinate space where both documents and terms are represented as linear combinations of underlying semantic factors. In previous research, LSI has produced a small improvement in retrieval performance. In this paper, we apply LSI to the routing task, which operates under the assumption that a sample of relevant and non-relevant documents is available to use in constructing the query. Once again, LSI slightly improves performance. However, when LSI is used in conjunction with statistical classification, there is a dramatic improvement in performance.
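
As a rough illustration of the LSI step described above (a minimal sketch assuming a small dense term-document matrix; production systems of the period used sparse matrices and iterative SVD), the query is folded into the latent space and documents are ranked by cosine similarity:

```python
import numpy as np

def lsi_fit(term_doc, k):
    """Truncated SVD of a term-by-document matrix, keeping k latent factors."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def fold_in_query(q_vec, U_k, s_k):
    """Project a query term-frequency vector into the latent space."""
    return (q_vec @ U_k) / s_k

def rank_documents(q_latent, Vt_k):
    """Rank documents by cosine similarity to the folded-in query."""
    docs = Vt_k.T  # one row of latent coordinates per document
    sims = docs @ q_latent / (
        np.linalg.norm(docs, axis=1) * np.linalg.norm(q_latent) + 1e-12)
    return np.argsort(-sims)
```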

197 citations


Proceedings Article
12 Sep 1994
TL;DR: This work describes the system and presents experimental results showing superior incremental indexing and competitive query processing performance, using a traditional inverted file index built on top of a persistent object store.
Abstract: Full-text information retrieval systems have traditionally been designed for archival environments. They often provide little or no support for adding new documents to an existing document collection, requiring instead that the entire collection be re-indexed. Modern applications, such as information filtering, operate in dynamic environments that require frequent additions to document collections. We provide this ability using a traditional inverted file index built on top of a persistent object store. The data management facilities of the persistent object store are used to produce efficient incremental update of the inverted lists. We describe our system and present experimental results showing superior incremental indexing and competitive query processing performance.
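
The sketch below is only a loose analogy to the approach described (it is not the authors' system): Python's standard-library shelve module stands in for a persistent object store, and new documents are merged into the on-disk inverted lists without re-indexing the collection:

```python
import shelve

def add_documents(index_path, docs):
    """Incrementally merge new documents into a persistent inverted file."""
    with shelve.open(index_path) as store:
        for doc_id, text in docs.items():
            for term in set(text.lower().split()):
                postings = store.get(term, [])
                postings.append(doc_id)
                store[term] = postings  # written back to the on-disk store

def lookup(index_path, term):
    with shelve.open(index_path) as store:
        return store.get(term, [])

# usage (hypothetical file name and documents):
# add_documents("inv_index", {"d1": "incremental indexing works", "d2": "indexing text"})
# print(lookup("inv_index", "indexing"))
```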

149 citations


Proceedings ArticleDOI
01 Aug 1994
TL;DR: The paper outlines the principles underlying the theory of polyrepresentation and suggests to represent the current user’s information need, problem state, and domain work task or interest in a structure of causality as well as to embody semantic full-text entities by means of the principle of 'intentional redundancy'.
Abstract: The paper outlines the principles underlying the theory of polyrepresentation applied to the user’s cognitive space and the information space of IR systems, set in a cognitive framework. By means of polyrepresentation it is suggested to represent the current user’s information need, problem state, and domain work task or interest in a structure of causality as well as to embody semantic full-text entities by means of the principle of ‘intentional redundancy’. hi IR systems this principle implies simultaneously to apply different methods of representation and a variety of IR techniques of different cognitive origin to each entity. The objective is to aproximate as close as possible text retrieval to retrieval of information in a cognitive sense.

109 citations


Journal ArticleDOI
TL;DR: CodeFinder is a retrieval system that combines retrieval by reformulation and spreading activation and retrieves items related to the query to help users find information, and was more helpful to users seeking relevant information with ill-defined tasks and vocabulary mismatches than other query systems.
Abstract: Component libraries are the dominant paradigm for software reuse, but they suffer from a lack of tools that support the problem-solving process of locating relevant components. Most retrieval tools assume that retrieval is a simple matter of matching well-formed queries to a repository. But forming queries can be difficult. A designer's understanding of the problem evolves while searching for a component, and large repositories often use an esoteric vocabulary. CodeFinder is a retrieval system that combines retrieval by reformulation (which supports incremental query construction) and spreading activation (which retrieves items related to the query) to help users find information. I designed it to investigate the hypothesis that this design makes for a more effective retrieval system. My study confirmed that it was more helpful to users seeking relevant information with ill-defined tasks and vocabulary mismatches than other query systems. The study supports the hypothesis that combining techniques effectively satisfies the kind of information needs typically encountered in software design.
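
A compact illustration of the spreading-activation half of this design (a sketch, not CodeFinder itself; the pulse count and decay factor are assumed values): activation starts at the query terms and flows along weighted association links, so related items surface even when the query vocabulary does not match exactly:

```python
from collections import defaultdict

def spread_activation(graph, query_terms, pulses=3, decay=0.5):
    """graph: node -> list of (neighbour, weight). Returns nodes by activation."""
    activation = defaultdict(float)
    for t in query_terms:
        activation[t] = 1.0
    for _ in range(pulses):
        new_act = defaultdict(float, activation)
        for node, level in activation.items():
            for neighbour, weight in graph.get(node, []):
                new_act[neighbour] += decay * weight * level
        activation = new_act
    return sorted(activation.items(), key=lambda kv: -kv[1])
```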

108 citations


ReportDOI
01 May 1994
TL;DR: This paper describes how information agents represent their knowledge, communicate with other agents, dynamically construct information retrieval plans, and learn about other agents to improve efficiency in a network of cooperating information agents.
Abstract: With the vast number of information resources available today, a critical problem is how to locate, retrieve and process information. It would be impractical to build a single unified system that combines all of these information resources. A more promising approach is to build specialized information retrieval agents that provide access to a subset of the information resources and can send requests to other information retrieval agents when appropriate. In this paper we present the architecture of the individual information retrieval agents and describe how this architecture supports a network of cooperating information agents. We describe how these information agents represent their knowledge, communicate with other agents, dynamically construct information retrieval plans, and learn about other agents to improve efficiency. We have already built a small network of agents that have these capabilities and provide access to information for transportation planning. Keywords: Information agents, Information access, Distributed, Heterogeneous, Autonomous, Knowledge representation, Planning.

104 citations


Journal ArticleDOI
TL;DR: An iterative model of retrieval evaluation is proposed, starting first with the use of topical relevance to ensure documents on the subject can be retrieved, followed by the use of situational relevance to show the user can interact positively with the system.
Abstract: The traditional notion of topical relevance has allowed much useful work to be done in the evaluation of retrieval systems, but has limitations for complete assessment of retrieval systems. While topical relevance can be effective in evaluating various indexing and retrieval approaches, it is ineffective for measuring the impact that systems have on users. An alternative is to use a more situational definition of relevance, which takes account of the impact of the system on the user. Both types of relevance are examined from the standpoint of the medical domain, concluding that each has its appropriate use. But in medicine there is increasing emphasis on outcomes-oriented research which, when applied to information science, requires that the impact of an information system on the activities which prompt its use be assessed. An iterative model of retrieval evaluation is proposed, starting first with the use of topical relevance to ensure documents on the subject can be retrieved. This is followed by the use of situational relevance to show the user can interact positively with the system. The final step is to study how the system impacts the user in the purpose for which the system was consulted, which can be done by methods such as protocol analysis and simulation. These diverse types of studies are necessary to increase our understanding of the nature of retrieval systems. © 1994 John Wiley & Sons, Inc.

64 citations


Proceedings ArticleDOI
13 Oct 1994
TL;DR: It is demonstrated that the use of syntactic compounds in the representation of database documents as well as in the user queries, coupled with an appropriate term weighting strategy, can considerably improve the effectiveness of retrospective search.
Abstract: We report on the results of a series of experiments with a prototype text retrieval system which uses relatively advanced natural language processing techniques in order to enhance the effectiveness of statistical document retrieval. In this paper we show that large-scale natural language processing (hundreds of millions of words and more) is not only required for better retrieval, but is also doable, given appropriate resources. In particular, we demonstrate that the use of syntactic compounds in the representation of database documents as well as in the user queries, coupled with an appropriate term weighting strategy, can considerably improve the effectiveness of retrospective search. The experiments reported here were conducted on the TIPSTER database in connection with the Text REtrieval Conference series (TREC).
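
The paper's compounds come from genuine syntactic analysis; purely as a crude stand-in, the sketch below indexes adjacent content-word pairs alongside single terms and gives them a reduced weight (the stopword list and the 0.5 weighting factor are assumptions):

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "for"}
COMPOUND_WEIGHT = 0.5  # assumed weight of a compound relative to a single term

def index_terms(text):
    """Return weighted index terms: single words plus adjacent-word 'compounds'."""
    words = [w for w in text.lower().split() if w.isalpha()]
    weights = Counter()
    for w in words:
        if w not in STOPWORDS:
            weights[w] += 1.0
    for w1, w2 in zip(words, words[1:]):
        if w1 not in STOPWORDS and w2 not in STOPWORDS:
            weights[f"{w1}+{w2}"] += COMPOUND_WEIGHT
    return weights
```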

58 citations


Proceedings Article
12 Sep 1994
TL;DR: The integration of a structured-text retrieval system (TextMachine) into an object-oriented database system (Op) is described, using the external function capability of the database system to encapsulate the text retrieval system as an external information source.
Abstract: We describe the integration of a structured-text retrieval system (TextMachine) into an object-oriented database system (Op…). Our approach is a lightweight one, using the external function capability of the database system to encapsulate the text retrieval system as an external information source. Yet, we are able to provide a tight integration in the query language and processing; the user can access the text retrieval system using a standard database query language. The efficient and effective retrieval of structured text performed by the text retrieval system is seamlessly combined with the rich modeling and general-purpose querying capabilities of the database system, resulting in an integrated system with querying power beyond that of the underlying systems. The integrated system also provides uniform access to textual data in the text retrieval system and structured data in the database system, thereby achieving information fusion. We discuss the design and implementation of our prototype system, and address issues such as the proper framework for external integration, the modeling of complex categorization and structure hierarchies of documents (under automatic document schema imp…), and techniques to reduce the performance overhead of accessing an external source.

55 citations


Proceedings ArticleDOI
01 Aug 1994
TL;DR: The results show that the most effective sources were the users' written question statements, user terms derived during the interaction, and terms selected from particular database fields.
Abstract: To improve information retrieval effectiveness, research in both the algorithmic and human approach to query expansion is required. This paper uses the human approach to examine the selection and effectiveness of search term sources for query expansion. The results show that the most effective sources were the users' written question statements, user terms derived during the interaction, and terms selected from particular database fields. These findings indicate the need for the design and testing of automatic relevance feedback techniques that place greater emphasis on these sources.

53 citations


01 Jan 1994
TL;DR: This dissertation examines the use of adaptive methods to automatically improve the performance of ranked text retrieval systems and proposes and empirically validates general adaptive methods which improve the ability of a large class of retrieval systems to rank documents effectively.
Abstract: This dissertation examines the use of adaptive methods to automatically improve the performance of ranked text retrieval systems. The goal of a ranked retrieval system is to manage a large collection of text documents and to order documents for a user based on the estimated relevance of the documents to the user's information need (or query). The ordering enables the user to quickly find documents of interest. Ranked retrieval is a difficult problem because of the ambiguity of natural language, the large size of the collections, and because of the varying needs of users and varying collection characteristics. We propose and empirically validate general adaptive methods which improve the ability of a large class of retrieval systems to rank documents effectively. Our main adaptive method is to numerically optimize free parameters in a retrieval system by minimizing a non-metric criterion function. The criterion measures how well the system is ranking documents relative to a target ordering, defined by a set of training queries which include the users' desired document orderings. Thus, the system learns parameter settings which better enable it to rank relevant documents before irrelevant. The non-metric approach is interesting because it is a general adaptive method, an alternative to supervised methods for training neural networks in domains in which rank order or prioritization is important. A second adaptive method is also examined, which is applicable to a restricted class of retrieval systems but which permits an analytic solution. The adaptive methods are applied to a number of problems in text retrieval to validate their utility and practical efficiency. The applications include: A dimensionality reduction of vector-based document representations to a vector space in which inter-document similarity more accurately predicts semantic association; the estimation of a similarity measure which better predicts the relevance of documents to queries; and the estimation of a high-performance neural network combination of multiple retrieval systems into a single overall system. The applications demonstrate that the approaches improve performance and adapt to varying retrieval environments. We also compare the methods to numerous alternative adaptive methods in the text retrieval literature, with very positive results.
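
As a toy illustration of tuning free parameters against a target ordering (a simplification: the dissertation numerically optimizes a non-metric rank criterion, whereas this sketch uses a pairwise perceptron-style update), the weights are nudged whenever a relevant document fails to score above an irrelevant one:

```python
import numpy as np

def train_ranker(doc_features, relevant, irrelevant, epochs=50, lr=0.1):
    """Learn a scoring weight vector from pairwise preferences.

    doc_features: dict doc_id -> numpy feature vector (assumed representation).
    relevant / irrelevant: lists of doc_ids from the training judgments.
    """
    dim = len(next(iter(doc_features.values())))
    w = np.zeros(dim)
    for _ in range(epochs):
        for r in relevant:
            for n in irrelevant:
                # if an irrelevant document scores at least as high, adjust w
                if w @ doc_features[r] <= w @ doc_features[n]:
                    w += lr * (doc_features[r] - doc_features[n])
    return w
```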

Journal ArticleDOI
01 Dec 1994
TL;DR: An information retrieval system that allows searching for text and speech documents simultaneously is presented, and it is shown that the retrieval effectiveness based on such a small indexing vocabulary is similar to the retrieval effectiveness of a Boolean retrieval system.
Abstract: We present an information retrieval system that allows users to search for text and speech documents simultaneously. The retrieval system accepts vague queries and performs a best-match search to find those documents that are relevant to the query. The output of the retrieval system is a list of ranked documents where the documents at the top of the list best satisfy the user's information need. The relevance of the documents is estimated by means of metadata (document description vectors). The metadata is automatically generated and it is organized such that queries can be processed efficiently. We introduce a controlled indexing vocabulary for both speech and text documents. The size of the new indexing vocabulary is small (1,000 features) compared with the sizes of indexing vocabularies in conventional text retrieval (10,000–100,000 features). We show that the retrieval effectiveness based on such a small indexing vocabulary is similar to the retrieval effectiveness of a Boolean retrieval system.

Proceedings Article
Peter E. Hart1, Jamey Graham
01 Jan 1994
TL;DR: The Fixit system as mentioned in this paper integrates an expert diagnostic system with a preexisting full-text database of maintenance manuals to liberate users from burdensome information retrieval activities while incurring minimal system development and runtime costs.
Abstract: To liberate users from burdensome information-retrieval activities while incurring minimal system-development and runtime costs, the authors present query-free information retrieval. Their system, Fixit, integrates an expert diagnostic system with a preexisting full-text database of maintenance manuals.

Book ChapterDOI
02 May 1994
TL;DR: This work has taken an existing information retrieval system (INQUERY) and substituted a persistent object store (Mneme) for the portion of the custom data management system that manages an inverted file index, resulting in an improvement in performance and significant opportunities for the information retrieval system to take advantage of the standard data management services provided by the persistent object store.
Abstract: Full-text information retrieval systems have unusual and challenging data management requirements. Attempts have been made to satisfy these requirements using traditional (e.g., relational) database management systems. Those attempts, however, have produced rather discouraging results. Instead, information retrieval systems typically use custom data management facilities that require significant development effort and usually do not provide all of the services available from a standard database management system. Advanced data management systems, such as object-oriented database management systems and persistent object stores, offer a reasonable alternative to the two previous approaches. We have taken an existing information retrieval system (INQUERY) and substituted a persistent object store (Mneme) for the portion of the custom data management system that manages an inverted file index. The result is an improvement in performance and significant opportunities for the information retrieval system to take advantage of the standard data management services provided by the persistent object store. We describe our implementation, present performance results on a variety of document collections, and discuss the advantages of using a persistent object store to support information retrieval.

01 Jan 1994
TL;DR: This paper presents an architecture for building specialized information retrieval agents that addresses the issues of representation, communication, problem solving, and learning; each agent is specialized to a particular area of expertise, which provides a modular organization of the vast number of information sources.
Abstract: With the vast number of information resources available today, a critical problem is how to locate, retrieve and process information. It would be impractical to build a single unified system that combines all of these information resources. A more promising approach is to build specialized information retrieval agents that provide access to a subset of the information resources and can send requests to other information retrieval agents when needed. Given an information request, an agent identifies an appropriate set of information sources, generates a plan to retrieve and process the data, uses knowledge about the data to reformulate the plan, and then executes it. Each information agent is specialized to a particular area of expertise, which provides a modular organization of the vast number of information sources. In this paper we present an architecture for building such agents that addresses the issues of representation, communication, problem solving, and learning, and describe how this architecture supports multiple, collaborating information retrieval agents.

Proceedings ArticleDOI
01 Aug 1994
TL;DR: The results indicate that searchers with higher levels of perceptual speed will learn additional search vocabulary, and use that vocabulary to complete higher quality searches, when they use a system designed to optimize scanning of subject descriptors, which supports the idea that cognitive abilities influence information system usability.
Abstract: Although the cognitive ability “perceptual speed” is known to influence search performance by end-users, previous research has not established the mechanism by which this influence occurred. Results from educational psychology suggest that learning that occurs during searching is likely to be influenced by perceptual speed. An experiment was designed to test how this cognitive ability would interact with a system feature designed to enhance learning of search vocabulary, specifically, presenting subject descriptors as the first element in the display of a reference. Results showed significant interactions between perceptual speed and the order of presentation of data elements in predicting both vocabulary learning and search performance. These results indicate that searchers with higher levels of perceptual speed will learn additional search vocabulary, and use that vocabulary to complete higher quality searches, when they use a system designed to optimize scanning of subject descriptors. This outcome supports the idea that cognitive abilities influence information system usability, and that usability is determined by interactions between characteristics of users and system features. The findings also suggest that system features that enhance the learning of search vocabulary, such as query expansion mechanisms, can have a significant positive effect on the quality of end-user searching.

Proceedings ArticleDOI
Peter Anick1
01 Aug 1994
TL;DR: The challenges of tuning an IR system to the domain of computer troubleshooting, where user queries tend to be very short and natural language query terms are intermixed with terminology from a variety of technical sublanguages are considered.
Abstract: There has been much research in full-text information retrieval on automated and semi-automated methods of query expansion to improve the effectiveness of user queries. In this paper we consider the challenges of tuning an IR system to the domain of computer troubleshooting, where user queries tend to be very short and natural language query terms are intermixed with terminology from a variety of technical sublanguages. A number of heuristic techniques for domain knowledge acquisition are described in which the complementary contributions of query log data and corpus analysis are exploited. We discuss the implications of sublanguage domain tuning for run-time query expansion tools and document indexing, arguing that the conventional devices for more purely “natural language” domains may be inadequate.

Journal ArticleDOI
TL;DR: An overview of newer techniques and their usage in information science research is provided and the algorithms adopted for a hybrid Genetic Algorithms and Neural Nets based system, called GANNET, are presented.
Abstract: Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also have made an impressive contribution to "intelligent" information retrieval and indexing. More recently, information science researchers have turned to other, newer artificial intelligence-based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. The newer techniques have provided great opportunities for researchers to experiment with diverse paradigms for effective information processing and retrieval. In this article we first provide an overview of newer techniques and their usage in information science research. We then present in detail the algorithms we adopted for a hybrid Genetic Algorithms and Neural Nets based system, called GANNET. GANNET performed concept (keyword) optimization for user-selected documents during information retrieval using the genetic algorithms. It then used the optimized concepts to perform concept exploration in a large network of related concepts through the Hopfield net parallel relaxation procedure. Based on a test collection of about 3,000 articles from DIALOG and an automatically created thesaurus, and using Jaccard's score as a performance measure, our experiment showed that GANNET improved the Jaccard's scores by about 50 percent and it helped identify the underlying concepts (keywords) that best describe the user-selected documents.
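
In the spirit of GANNET's genetic-algorithm step (a minimal sketch, not the published system; the population size, mutation rate, and Jaccard-based fitness form are assumptions), keyword subsets are evolved toward maximal average overlap with the user-selected documents' keyword sets:

```python
import random

def jaccard(a, b):
    """Jaccard overlap between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def evolve_keywords(candidate_terms, selected_docs, pop=20, gens=40, mut=0.1):
    """candidate_terms: iterable of keywords; selected_docs: list of keyword sets."""
    terms = list(candidate_terms)
    def random_chrom():
        return {t for t in terms if random.random() < 0.5}
    def fitness(chrom):
        return sum(jaccard(chrom, d) for d in selected_docs) / len(selected_docs)
    population = [random_chrom() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = random.sample(parents, 2)
            # uniform crossover: each term inherited from one parent at random
            child = {t for t in terms
                     if (t in p1 if random.random() < 0.5 else t in p2)}
            if terms and random.random() < mut:
                child ^= {random.choice(terms)}   # mutation: flip one term
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```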


Proceedings ArticleDOI
28 Apr 1994
TL;DR: This tutorial presents the key issues involved in the use and design of effective interfaces to information retrieval systems and outlines some user-centered design strategies for retrieval systems.
Abstract: The need for effective information retrieval systems becomes increasingly important as computer-based information repositories grow larger and more diverse. In this tutorial, we will present the key issues involved in the use and design of effective interfaces to information retrieval systems. The process of satisfying information needs is analyzed as a problem solving activity in which users learn and refine their needs as they interact with a repository. Current systems are analyzed in terms of key interface and interaction techniques such as querying, browsing, and relevance feedback. We will discuss the impact of information seeking strategies on the search process and what is needed to more effectively support the search process. Retrieval system evaluation techniques will be discussed in terms of their implications for users. We close by outlining some user-centered design strategies for retrieval systems.

INFORMATION RETRIEVAL AS A PROBLEM SOLVING PROCESS
The field of information retrieval can be divided along the lines of its system-based and user-based concerns. While the system-based view is concerned with efficient search techniques to match query and document representations, the user-based view must account for the cognitive state of the searcher and the problem solving context. People are drawn to an information retrieval system because they perceive that they lack some knowledge to solve a problem or perform a task. This creates an “anomalous state of knowledge” [1] or “situation of irresolution” [6] in which information seekers must find something they know little or nothing about. Information retrieval systems must not only provide efficient retrieval, but must also support the user in describing a problem that they do not understand well. The process is not only one of providing a good query language, but also supporting an iterative dialogue model. As users query and browse the repository, they learn more about the problem and potential solutions and therefore refine their conceptualization of the problem. The information being sought differs from that being sought at the beginning of the session. The user is engaged in a problem solving session in which the problem to be solved, that of finding relevant information, evolves and is refined through the process of seeing the results of intermediate queries.

THE VOCABULARY PROBLEM
Even in cases where the information is well-known, a vocabulary problem still exists. Users may know what they are looking for, but lack the knowledge needed to articulate the problem in terms and abstractions used by the retrieval system. An inherent problem is that people use a surprisingly diverse set of terms to refer to the same object, such that the probability of choosing the same term for a familiar object is less than 15 percent [3]. This problem is exacerbated by the fact that information repositories are often indexed by experts and by the inherent properties of the objects. Expert indexing causes problems because less knowledgeable users, who define the majority of people experiencing an anomalous state of knowledge, are less likely to understand the terminology used by experts. Indexing by inherent properties causes problems because most information seeking is engaged in some problem solving context. People are looking for information that is used for something and are therefore more concerned with how an object is used, not its inherent properties [4].

INTERFACES FOR RETRIEVAL SYSTEMS
Current information retrieval systems have addressed these inherent properties of information seeking and indexing in a variety of ways. Browsing has been employed to facilitate the iterative and ill-defined nature of information seeking, but can lead to a loss of direction and overly narrow…

Book
01 Jan 1994
TL;DR: A framework for the theoretical comparison of information retrieval models based on how the models decide aboutness is presented, based on concepts emerging from the field of situation theory.
Abstract: This paper presents a framework for the theoretical comparison of information retrieval models based on how the models decide aboutness. The framework is based on concepts emerging from the field of situation theory. So-called infons and profons represent elementary information carriers which can be manipulated by union and fusion operators. These operators allow relationships between information carriers to be established. Sets of infons form so-called situations which are used to model the information borne by objects such as documents. An arbitrary information retrieval model can be mapped down into the framework. Special functions are defined for this purpose depending on the model at hand. An important aspect is the inference mechanism which is mapped to inference between situations. Two examples are given based on the Boolean retrieval and coordination level matching models. The framework allows the comparison of retrieval models at an abstract level. Starting from an axiomatization of aboutness, retrieval models can be compared according to which axioms they are governed by. This approach is highlighted by the theoretical comparison of Boolean retrieval with coordinate level matching. This work was partly performed while employed at Utrecht University.

Book ChapterDOI
16 Aug 1994
TL;DR: A knowledge base consisting of over 12,000 case frames for verbs and a large number of other linguistic patterns that reveal conceptual relations was constructed and used to process a Wall Street Journal database covering a period of three years.
Abstract: This paper describes our large-scale effort to build a conceptual Information Retrieval system that converts a large volume of natural language text into Conceptual Graph representation by means of knowledge-based processing. In order to automatically extract concepts and conceptual relations between concepts from texts, we constructed a knowledge base consisting of over 12,000 case frames for verbs and a large number of other linguistic patterns that reveal conceptual relations. They were used to process a Wall Street Journal database covering a period of three years. We describe our methods for constructing the knowledge base, how the linguistic knowledge is used to process the text, and how the retrieval system makes use of the rich representation of documents and information needs.

31 Dec 1994
TL;DR: A collection of 46,000 documents from the Federal Register is used as a test database to demonstrate the usefulness of vector processing methods and to illustrate the text analysis and retrieval operations.
Abstract: The vector space model of retrieval has been in use for thirty years, and it has consistently produced superior retrieval results for collections of natural-language texts. A collection of 46,000 documents from the Federal Register is used as a test database to demonstrate the usefulness of vector processing methods and to illustrate the text analysis and retrieval operations.
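
A compact sketch of the vector-processing approach being evaluated (TF-IDF weighting with cosine ranking; SMART-style systems add stemming, alternative weighting schemes, and inverted-file query evaluation, none of which is shown here):

```python
import math
from collections import Counter

def build_vectors(docs):
    """docs: dict doc_id -> text. Returns (doc_vectors, idf)."""
    tokenized = {d: Counter(t.lower().split()) for d, t in docs.items()}
    df = Counter()
    for tf in tokenized.values():
        df.update(tf.keys())                 # document frequency per term
    n = len(docs)
    idf = {t: math.log(n / df_t) for t, df_t in df.items()}
    vectors = {d: {t: tf_t * idf[t] for t, tf_t in tf.items()}
               for d, tf in tokenized.items()}
    return vectors, idf

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, vectors, idf):
    """Rank document ids by cosine similarity to the TF-IDF-weighted query."""
    q = {t: c * idf.get(t, 0.0)
         for t, c in Counter(query.lower().split()).items()}
    return sorted(vectors, key=lambda d: cosine(q, vectors[d]), reverse=True)
```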

Proceedings Article
11 Oct 1994
TL;DR: An architecture for an interactive retrieval system based on abduction is proposed, comprising a schema-level representation of the documents' contents and structure, an abductive retrieval engine, and a user interface which allows the user to control the inference process.
Abstract: The problem of automatic query expansion is studied in the context of a logic-based information retrieval system that employs - in contrast to approaches based on deductive reasoning - an abductive inference engine. Given a query, the abduction process yields a set of possible expansions to the query. An architecture for an interactive retrieval system based on abduction is proposed, comprising a schema-level representation of the documents' contents and structure, an abductive retrieval engine, and a user interface which allows the user to control the inference process. The retrieval engine was tested on a collection of SGML-structured texts. We report on experimental results in the last section of the paper.

Journal ArticleDOI
TL;DR: The paper proposes the Interaction Information Retrieval model, in which documents are interconnected, queries and documents are treated in the same way, and in which retrieval is the result of the interconnection between query and documents.
Abstract: In existing information retrieval models there are three different ways documents are represented for retrieval purposes: vectors of weights, collections of sentences and artificial neurons. Accordingly, retrieval depends on a similarity function, or means an inference, or is a spreading of activation. Relevancy is considered to be a critical modelling parameter which is either given a priori or not treated at all. Assuming that relevancy may equally be an emergent entity, thus not requiring any a priori modelling, the paper proposes the Interaction Information Retrieval model in which documents are interconnected, queries and documents are treated in the same way, and in which retrieval is the result of the interconnection between query and documents. Algorithms and experiences gained with practical applications are presented. A theoretical mathematical formulation of this type of retrieval is also given.

Book ChapterDOI
07 Sep 1994
TL;DR: C-TORI (Cooperative TORI), a cooperative version of TORI (Task-Oriented Database Retrieval Interface), is presented in this paper and is based on the concept of shared UI objects as an application-independent cooperation and communication model.
Abstract: C-TORI (Cooperative TORI), a cooperative version of TORI (Task-Oriented Database Retrieval Interface), is presented in this paper. It extends interactive query formulation and result browsing by supporting cooperation between multiple users. In the cooperative environment, three basic additional operations are provided: copying, merging and coupling for three types of TORI objects (query forms, result forms, and query history windows). Cooperation with query forms allows end users to jointly formulate queries; cooperation with result forms supports users in jointly browsing through results and in sharing retrieved data without re-accessing the database; cooperative use of query histories yields a specific mechanism to share “memory” between users. The implementation is based on the concept of shared UI objects as an application-independent cooperation and communication model.

Journal ArticleDOI
TL;DR: Information customisation is characterised as the transformation of information into its most appropriate form, which makes existing information more useful.
Abstract: As we move further into the information age, it is becoming ever more apparent that society as a whole, and information and computing specialists as its agents, will have to confront the general problem of information overload. The rising flood of information will soon compel us to use techniques and resources aimed at maximising our information handling efficiency. Storing and retrieving digital information according to consumer requirements is only part of the equation; information must also be presented in a form suited to the consumer's needs at the time of consumption. We call this information customisation and characterise it as the transformation of information into its most appropriate form. Thus, customisation makes existing information more useful.

Journal ArticleDOI
TL;DR: It is argued that adequate information retrieval in hospital records will have to rely on the exploitation of the conceptual knowledge in those records rather than superficial string searches, and a retrieval system, called CONIR, is presented, which attempts to realise the second of these developments.