
Showing papers in "Journal of the Association for Information Science and Technology" in 1993


Journal ArticleDOI
TL;DR: The results showed that search experience affected searchers' use of many search tactics, and suggested that subject knowledge became a factor only after searchers had gained a certain amount of search experience.

Abstract: This study investigated the effects of subject knowledge and search experience on novices' and experienced searchers' use of search tactics in online searches. Novice and experienced searchers searched a practice question and two test questions in the ERIC database on the DIALOG system, and their use of search tactics was recorded by protocols, transaction logs, and observation. Search tactics were identified from the literature and verified in 10 pretests, and nine search tactics variables were operationalized to describe the differences between the two searcher groups. Data analyses showed that subject knowledge interacted with search experience, and both variables affected searchers' behavior in four ways: (1) when questions in their subject areas were searched, experience affected searchers' use of synonymous terms, monitoring of the search process, and combinations of search terms; (2) when questions outside their subject areas were searched, experience affected searchers' reliance on their own terminology, use of the thesaurus, offline term selection, use of synonymous terms, and combinations of search terms; (3) within the same experience group, subject knowledge had no effect on novice searchers; but (4) subject knowledge affected experienced searchers' reliance on their own language, use of the thesaurus, offline term selection, use of synonymous terms, monitoring of the search, and combinations of search terms. The results showed that search experience affected searchers' use of many search tactics, and suggested that subject knowledge became a factor only after searchers had gained a certain amount of search experience. © 1993 John Wiley & Sons, Inc.

340 citations


Journal ArticleDOI
TL;DR: A fuzzy linguistic model is defined: starting from an existing weighted Boolean retrieval model, a linguistic extension, formalized within fuzzy set theory, replaces numeric query weights with linguistic descriptors that specify the degree of importance of the terms.
Abstract: The generalization of Boolean Information Retrieval Systems (IRS) is still an open research field; in fact, though such systems are widespread on the market, they present some limitations; one of the main features lacking in these systems is the ability to deal with the “imprecision” and “subjectivity” characterizing retrieval activity. However, the replacement of such systems would be much more costly than their evolution through the incorporation of new features to enhance their efficiency and effectiveness. Previous efforts in this area have led to the introduction of numeric weights to improve both document representation and query language. By attaching a numeric weight to a term in a query, a user can provide a quantitative description of the “importance” of that term in the documents he or she is looking for. However, the use of weights requires a clear knowledge of their semantics for translating a fuzzy concept into a precise numeric value. Our acquaintance with these problems led us to define, starting from an existing weighted Boolean retrieval model, a linguistic extension, formalized within fuzzy set theory, in which numeric query weights are replaced by linguistic descriptors which specify the degree of importance of the terms. This fuzzy linguistic model is defined and an evaluation is made of its implementation on a Boolean IRS. © 1993 John Wiley & Sons, Inc.
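
A minimal sketch of the idea, assuming an invented three-level descriptor scale and one simple fuzzy semantics for term satisfaction (the article's own formalization is not reproduced here):

```python
# Linguistic query weights over a weighted Boolean model: descriptor
# scale and evaluation rule below are illustrative assumptions.

IMPORTANCE = {"not important": 0.1, "important": 0.5, "very important": 0.9}

def term_score(doc_weight: float, importance: float) -> float:
    """Degree to which a document satisfies one weighted query term:
    penalize documents whose term weight falls short of the requested
    importance (one of several possible semantics)."""
    return min(1.0, doc_weight / importance) if importance > 0 else 1.0

def and_query(doc: dict, query: dict) -> float:
    """Fuzzy AND = min over the per-term satisfaction degrees."""
    return min(term_score(doc.get(t, 0.0), IMPORTANCE[d]) for t, d in query.items())

doc = {"retrieval": 0.8, "fuzzy": 0.3}
query = {"retrieval": "very important", "fuzzy": "important"}
print(and_query(doc, query))  # 0.6
```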

285 citations



Journal ArticleDOI
TL;DR: A survey of the environmental scanning behavior of 207 CEOs in two Canadian industries found that the amount of scanning increases with perceived environmental uncertainty, and that the CEOs use a mix of internal and external, as well as personal and impersonal, sources to scan the environment.
Abstract: The work of managers is information-intensive. Managers receive a huge amount of information from a wide range of sources and use the information to make day-to-day decisions and to formulate longer-term strategies. Yet much remains to be learned about the information behavior of managers as a distinct user group. This article reports on how top managers acquire and use information about the external business environment. Today's firms have to adapt to turbulent environments in which the competition, market, technology, and social conditions are constantly changing. Environmental scanning is the activity of gaining information about events and relationships in the organization's environment, the knowledge of which would assist management in planning future courses of action. We present the findings of a survey of the environmental scanning behavior of 207 CEOs in two Canadian industries—publishing and telecommunications. The CEOs indicated their perceptions of the level of uncertainty in the external environment, which sources they used to scan the environment, and their perceptions of the accessibility and quality of various sources. The survey found that the amount of scanning increases with perceived environmental uncertainty, and that the CEOs use a mix of internal and external, as well as personal and impersonal sources, to scan the environment. Analysis suggests that between environmental uncertainty, source accessibility, and source quality, source quality is the most important factor in explaining source use in scanning. This runs contrary to earlier user studies, particularly those of engineers and scientists, which concluded that perceived source accessibility was the overwhelming factor in source selection. A number of plausible explanations for this difference are discussed. © 1993 John Wiley & Sons, Inc.

171 citations


Journal ArticleDOI
TL;DR: A graphical Filter/Flow representation of Boolean queries was designed to provide users with an interface that visually conveys the meaning of the Boolean operators (AND, OR, and NOT) by implementing a graphical interface prototype that uses the metaphor of water flowing through filters.
Abstract: One of the powerful applications of Boolean expression is to allow users to extract relevant information from a database. Unfortunately, previous research has shown that users have difficulty specifying Boolean queries. In an attempt to overcome this limitation, a graphical Filter/Flow representation of Boolean queries was designed to provide users with an interface that visually conveys the meaning of the Boolean operators (AND, OR, and NOT). This was accomplished by implementing a graphical interface prototype that uses the metaphor of water flowing through filters. Twenty subjects having no experience with Boolean logic participated in an experiment comparing the Boolean operations represented in the Filter/Flow interface with a text-only SQL interface. The subjects independently performed five comprehension tasks and five composition tasks in each of the interfaces. A significant difference (p < 0.05) in the total number of correct queries in each of the comprehension and composition tasks was found favoring Filter/Flow. © 1993 John Wiley & Sons, Inc.

161 citations


Journal ArticleDOI
TL;DR: Development of the Envision database, system software, and protocol for client-server communication builds upon work to identify and represent “objects” that will facilitate reuse and high-level communication of information from author to reader (user).
Abstract: Project Envision aims to build a “user-centered database from the computer science literature,” initially using the publications of the Association for Computing Machinery (ACM). Accordingly, we have interviewed potential users, as well as experts in library, information, and computer science—to understand their needs, to become aware of their perception of existing information systems, and to collect their recommendations. Design and formative usability evaluation of our interface have been based on those interviews, leading to innovative query formulation and search results screens that work well according to our usability testing. Our development of the Envision database, system software, and protocol for client-server communication builds upon work to identify and represent “objects” that will facilitate reuse and high-level communication of information from author to reader (user). All these efforts are leading not only to a usable prototype digital library but also to a set of nine principles for digital libraries, which we have tried to follow, covering issues of representation, architecture, and interfacing. © 1993 John Wiley & Sons, Inc.

157 citations


Journal ArticleDOI
TL;DR: This paper explored children's information retrieval behavior using an online public access catalog (OPAC) in an elementary school library and reported the overall patterns of children's behavior that influence success and breakdown in information retrieval as well as findings about the intentions, moves, plans, strategies, and search terms of children in grades one through six.
Abstract: This article reports research that explored children's information retrieval behavior using an online public access catalog (OPAC) in an elementary school library. The study considers the impact of a variety of factors including user characteristics, the school setting, interface usability, and information access features on children's information retrieval success and breakdown. The study reports the overall patterns of children's behavior that influence success and breakdown in information retrieval as well as findings about the intentions, moves, plans, strategies, and search terms of children in grades one through six. © 1993 John Wiley & Sons, Inc.

146 citations



Journal ArticleDOI
TL;DR: A review of approaches to Chinese text segmentation is provided, in which they are classified to give a general picture of the research activity in this area, and a discussion of the problems demonstrates that text segmentation remains one of the most challenging and interesting areas for Chinese text retrieval.
Abstract: Present text retrieval systems are generally built on the reductionist basis that words in texts (keywords) are used as indexing terms to represent the texts. A necessary precursor to these systems is word extraction which, for English texts, can be achieved automatically by using spaces and punctuation as word delimiters. This cannot be readily applied to Chinese texts because they do not have obvious word boundaries. A Chinese text consists of a linear sequence of nonspaced or equally spaced ideographic characters, which are similar to morphemes in English. Researchers of Chinese text retrieval have been seeking methods of text segmentation to divide Chinese texts automatically into words. First, a review of these methods is provided in which the various approaches to Chinese text segmentation have been classified in order to provide a general picture of the research activity in this area. Some of the most important work is described. There follows a discussion of the problems of Chinese text segmentation, with examples to illustrate. These problems include morphological complexities, segmentation ambiguity, and parsing problems, and demonstrate that text segmentation remains one of the most challenging and interesting areas for Chinese text retrieval. © 1993 John Wiley & Sons, Inc.
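
To make the task concrete, here is a toy forward maximum-matching segmenter, one of the classic dictionary-based approaches a review like this covers; the dictionary and word-length limit are invented for illustration:

```python
# Forward maximum matching: greedily take the longest dictionary word
# at each position, falling back to a single character.

DICTIONARY = {"中国", "人民", "中", "国", "人", "民"}  # toy word list
MAX_WORD_LEN = 2

def forward_max_match(text: str) -> list[str]:
    words, i = [], 0
    while i < len(text):
        for length in range(min(MAX_WORD_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in DICTIONARY or length == 1:
                words.append(candidate)
                i += length
                break
    return words

print(forward_max_match("中国人民"))  # ['中国', '人民']
```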

135 citations


Journal ArticleDOI
TL;DR: A two-year project to study how advanced humanities scholars operate as end users of online databases analyzes how much searching the scholars did, the kinds of search techniques and DIALOG features they used, and their learning curves.
Abstract: The Getty Art History Information Program carried out a two-year project to study how advanced humanities scholars operate as end users of online databases. Visiting Scholars at the Getty Center for the History of Art and the Humanities in Santa Monica, California, were offered the opportunity to do unlimited subsidized searching of DIALOG® databases. The second report from the project analyzes how much searching the scholars did, the kinds of search techniques and DIALOG features they used, and their learning curves. Search features studied included commands, Boolean logic, types of vocabulary, and proximity operators. Error rates were calculated, as well as how often the scholars used elementary search formulations and introduced new search features and capabilities into their searches. The amount of searching done ranged from none at all to dozens of hours. A typical search tended to be simple, using one-word search terms and little or no Boolean logic. Starting with a full day of DIALOG training, the scholars began their search experience at a reasonably high level of competence; in general, they maintained a stable level of competence throughout the early hours of their search experience. © 1993 John Wiley & Sons, Inc.

103 citations


Journal ArticleDOI
TL;DR: A suffixing algorithm is proposed which uses grammatical categories to enhance the stemming process; it always returns a linguistically correct lemma, but not necessarily the “right” one.
Abstract: Automatic indexing systems use suffix stripping algorithms to cluster various words derived from a common root under the same stem. Currently, removing affixes is either a context-free or a context-sensitive operation, where the context refers to the remaining stem. In this article, we propose a suffixing algorithm which uses grammatical categories to enhance the stemming process. This approach supports the use of foreign languages. In our case, the language is French, and a morphological analysis is required for removing inflectional suffixes or morphosyntactic variants of a lemma. After this analysis, we implement a suffix stripping algorithm which uses a dictionary and the grammatical categories to remove derivational suffixes. Our approach always returns a linguistically correct lemma, but not necessarily the “right” one. Based on our tests, this solution is an attractive one, with a mean error rate of 16%. We finish by explaining why we cannot expect significantly better results with this approach.
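
A minimal sketch of category-conditioned suffix stripping in the spirit of the approach described; the rules and lexicon below are invented French examples, not the authors' data:

```python
# (category, suffix, replacement) rules producing a candidate lemma,
# validated against a dictionary before being accepted.

RULES = [
    ("verb", "issons", "ir"),   # finissons -> finir
    ("noun", "ations", "er"),   # nationalisations -> nationaliser
    ("adj",  "euse",   "eux"),  # heureuse -> heureux
]
LEXICON = {"finir", "nationaliser", "heureux"}  # toy dictionary

def stem(word: str, category: str) -> str:
    """Apply the first category-compatible rule whose output is a
    dictionary word; otherwise return the word unchanged."""
    for cat, suffix, repl in RULES:
        if cat == category and word.endswith(suffix):
            lemma = word[: -len(suffix)] + repl
            if lemma in LEXICON:
                return lemma
    return word

print(stem("finissons", "verb"))  # finir
```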


Journal ArticleDOI
TL;DR: Results are reported on the application of the compression methods to several substantial full-text databases, and show that a large, unindexed text can be stored, along with indexes that facilitate fast searching, in less than half its original size—at some appreciable cost in primary memory requirements.
Abstract: When data compression is applied to full-text retrieval systems, intricate relationships emerge between the amount of compression, access speed, and computing resources required. We propose compression methods, and explore corresponding tradeoffs, for all components of static full-text systems such as text databases on CD-ROM. These components include lexical indexes, inverted files, bitmaps, signature files, and the main text itself. Results are reported on the application of the methods to several substantial full-text databases, and show that a large, unindexed text can be stored, along with indexes that facilitate fast searching, in less than half its original size—at some appreciable cost in primary memory requirements. © 1993 John Wiley & Sons, Inc.
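
As one simple representative of the index-compression techniques such systems combine, here is a sketch of variable-byte coding of d-gaps in an inverted list (the article's own methods and parameters are not reproduced):

```python
# Variable-byte coding: 7 data bits per byte, high bit set on the final
# byte; inverted lists store differences (d-gaps) rather than raw ids.

def vbyte_encode(n: int) -> bytes:
    """Encode one positive integer as a variable-byte code."""
    out = []
    while True:
        out.insert(0, n % 128)
        if n < 128:
            break
        n //= 128
    out[-1] += 128  # terminator flag on the last byte
    return bytes(out)

def encode_postings(doc_ids: list[int]) -> bytes:
    """Frequent terms have small gaps and therefore short codes."""
    encoded, prev = bytearray(), 0
    for d in sorted(doc_ids):
        encoded += vbyte_encode(d - prev)
        prev = d
    return bytes(encoded)

print(len(encode_postings([1, 3, 7, 1000000])))  # 6 bytes vs. 16 for raw 32-bit ids
```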

Journal ArticleDOI
TL;DR: Shannon's theory of communication is discussed from the point of view of his concept of uncertainty, and it is suggested that there are two information concepts in Shannon, two different uncertainties, and at least two different entropy concepts.
Abstract: Shannon's theory of communication is discussed from the point of view of his concept of uncertainty. It is suggested that there are two information concepts in Shannon, two different uncertainties, and at least two different entropy concepts. Information science focuses on the uncertainty associated with the transmission of the signal rather than the uncertainty associated with the selection of a message from a set of possible messages. The author believes the latter information concept, which is from the sender's point of view, has more to say to information science about what information is than the former, which is from the receiver's point of view and is mainly concerned with “noise” reduction. © 1993 John Wiley & Sons, Inc.
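
For reference, the uncertainty Shannon attaches to the selection of a message from a set of possible messages is the entropy of the message probabilities:

```latex
H = -\sum_{i=1}^{n} p_i \log_2 p_i \qquad \text{(bits per message)}
```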

Journal ArticleDOI
TL;DR: An expert system for online search assistance automatically reformulates queries to improve the search results, and ranks the retrieved passages to speed the identification of relevant information.
Abstract: Unfamiliarity with search tactics creates difficulties for many users of online retrieval systems. User observations indicate that even experienced searchers use vocabulary incorrectly and rarely reformulate their queries. To address these problems, an expert system for online search assistance was developed. This prototype automatically reformulates queries to improve the search results, and ranks the retrieved passages to speed the identification of relevant information. Users' search performance using the expert system was compared with their search performance on their own, and their search performance using an online thesaurus. The following conclusions were reached: (1) The expert system significantly reduced the number of queries necessary to find relevant passages compared with the user searching alone or with the thesaurus. (2) The expert system produced marginally significant improvement in precision compared with the user searching on their own. There was no significant difference in recall achieved by the three system configurations. (3) Overall, the expert system ranked relevant passages above irrelevant passages.
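
A sketch of one reformulation tactic the prototype automates, thesaurus-based expansion of query terms into OR-groups; the toy thesaurus is invented, and the expert system's actual rule base is not given in the abstract:

```python
# Expand each query term into an OR-group of the term plus its
# thesaurus entries (one common automatic reformulation tactic).

THESAURUS = {"car": ["automobile", "vehicle"], "law": ["statute", "regulation"]}

def reformulate(query_terms: list[str]) -> list[list[str]]:
    """Each inner list is ORed; the groups are ANDed together."""
    return [[t] + THESAURUS.get(t, []) for t in query_terms]

print(reformulate(["car", "law"]))
# [['car', 'automobile', 'vehicle'], ['law', 'statute', 'regulation']]
```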

Journal ArticleDOI
TL;DR: A system is described which digests large volumes of text, filtering out irrelevant articles and distilling the remainder into templates that represent information from the articles in simple slot/filler pairs, taking advantage of simple string matching techniques to improve the effectiveness of more complex sentence‐level semantic processes.
Abstract: A system is described which digests large volumes of text, filtering out irrelevant articles and distilling the remainder into templates that represent information from the articles in simple slot/filler pairs. The system is highly modular in that it consists of a series of programs, each of which contributes information to the text to help in the final analysis of determining which strings constitute valid values for the slots in the template. This modular design has the dual advantage of allowing relatively easy debugging and of permitting many of the component programs to participate in other projects.
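
A toy illustration of slot/filler extraction by simple string matching, the technique the system builds on; the patterns and slots are invented for illustration:

```python
# Fill template slots with the first string matching each pattern.
import re

PATTERNS = {
    "company": re.compile(r"\b([A-Z][A-Za-z]+ (?:Inc|Corp|Ltd))\b"),
    "amount":  re.compile(r"\$[\d,]+(?:\.\d{2})?"),
}

def fill_template(article: str) -> dict:
    """Return a slot -> filler mapping for one article."""
    template = {}
    for slot, pattern in PATTERNS.items():
        match = pattern.search(article)
        if match:
            template[slot] = match.group(0)
    return template

print(fill_template("Acme Corp paid $1,200,000 for the unit."))
# {'company': 'Acme Corp', 'amount': '$1,200,000'}
```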

Journal ArticleDOI
Liwen Qiu1
TL;DR: The objective of this research is to discover the search state patterns through which users retrieve information in hypertext systems by comparing the corresponding transition probability matrices of different user groups.
Abstract: The objective of this research is to discover the search state patterns through which users retrieve information in hypertext systems. The Markov model is used to describe users' search behavior. As determined by the log-linear model test, the second-order Markov model is the best model. Search patterns of different user groups were studied by comparing the corresponding transition probability matrices. The comparisons were made based on the following factors: gender, search experience, search task, and the user's academic background. The statistical tests revealed that there were significant differences between all the groups being compared. © 1993 John Wiley & Sons, Inc.
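
A sketch of how such a model is fitted from logged search-state sequences; a first-order model is shown for brevity (the study found a second-order model fit best), and the state names are invented:

```python
# Maximum-likelihood transition probabilities P(next | current)
# estimated from observed state sequences.
from collections import Counter, defaultdict

def transition_matrix(sessions: list[list[str]]) -> dict:
    counts = defaultdict(Counter)
    for states in sessions:
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

logs = [["index", "browse", "node", "browse"], ["index", "node", "node"]]
print(transition_matrix(logs))
# {'index': {'browse': 0.5, 'node': 0.5}, 'browse': {'node': 1.0},
#  'node': {'browse': 0.5, 'node': 0.5}}
```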

Journal ArticleDOI
TL;DR: In this article, a generalized inverse Gaussian-Poisson (GIGP) model for informetric data sets is proposed, which allows a unified and theoretically sound approach to the fitting of the GIGP and is illustrated using several of the classic informetric datasets.
Abstract: The fact that many informetric data sets exclude the zero-category—corresponding to the nonproducers being unobserved—has led to difficulties in the implementation of Sichel's generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling, despite its theoretical attraction. These computational problems have been surmounted by the development of a program giving maximum likelihood estimates of the parameters of the zero-truncated GIGP. This allows a unified and theoretically sound approach to the fitting of the GIGP and is illustrated using several of the classic informetric data sets. The method also highlights situations in which the model motivating the GIGP is inappropriate. © 1993 John Wiley & Sons, Inc.
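
A sketch of the zero-truncation step in generic notation (not necessarily Sichel's parameterization): writing $p_n(\theta)$ for the GIGP probability that a source produces $n$ items and $f_n$ for the observed number of sources producing $n$ items, such a program maximizes the zero-truncated log-likelihood

```latex
p^{T}_{n}(\theta) = \frac{p_n(\theta)}{1 - p_0(\theta)}, \quad n \ge 1, \qquad
\ell(\theta) = \sum_{n \ge 1} f_n \left[ \log p_n(\theta) - \log\bigl(1 - p_0(\theta)\bigr) \right].
```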

Journal ArticleDOI
TL;DR: The purpose of the research is to investigate the semantic relationship between citing and cited documents for a sample of document pairs in three journals in library and information science: Library Journal, College and Research Libraries, and Journal of the American Society for Information Science.
Abstract: The act of referencing another author's work in a scholarly or research paper is usually assumed to signal a direct semantic relationship between the citing and cited work. The present article reports a study that examines this assumption directly. The purpose of the research is to investigate the semantic relationship between citing and cited documents for a sample of document pairs in three journals in library and information science: Library Journal, College and Research Libraries, and Journal of the American Society for Information Science. A macroanalysis, based on a comparison of the Library of Congress class numbers assigned citing and cited documents, and a microanalysis, based on a comparison of descriptors assigned citing and cited documents by three indexing and abstracting journals, ERIC, LISA, and Library Literature, were conducted. Both analyses suggest that the subject similarity among pairs of cited and citing documents is typically very small, supporting a subjective, psychological view of relevance and a trial-and-error, heuristic understanding of the information search and research processes. The results of the study have implications for collection development, for an understanding of psychological relevance, and for the results of doing information retrieval using cited references. Several intriguing methodological questions are raised for future research, including the role of indexing depth, specificity, and quality on the measurement of document similarity. © 1993 John Wiley & Sons, Inc.
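
The abstract does not specify the similarity measure used in the microanalysis; as an assumption, a simple descriptor-set overlap of the kind such a comparison can use is sketched below:

```python
# Jaccard overlap of the descriptor sets assigned to a citing and a
# cited document (an illustrative measure, not the authors' own).

def jaccard(citing: set[str], cited: set[str]) -> float:
    """|intersection| / |union| of descriptor sets; 0 = no overlap."""
    if not citing and not cited:
        return 0.0
    return len(citing & cited) / len(citing | cited)

print(jaccard({"online searching", "indexing"}, {"indexing", "OPACs"}))  # 0.33
```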

Journal ArticleDOI
TL;DR: The scale and significance of acknowledgment behavior in ten highly ranked sociology journals over a 10-year period is analyzed in this article, and the case for incorporating acknowledgment data into the academic audit process, along with more established bibliometric indicators, such as publication and citation counts, is considered.
Abstract: The scale and significance of acknowledgment behavior in ten highly ranked sociology journals over a 10-year period is analyzed. Almost three quarters of all articles (N = 4200) included an acknowledgment statement; more than half included an acknowledgment attesting to peer interactive communication. Functional and symbolic parallels between acknowledgment and citation are discussed. Almost 5000 individuals were explicitly acknowledged. Only a few were highly acknowledged. No correlation was found between frequency of acknowledgment and frequency of citation. Nor was there a correlation between frequency of acknowledgment and time-in-field as measured from date of terminal degree. The case for incorporating acknowledgment data into the academic audit process, along with more established bibliometric indicators, such as publication and citation counts, is considered. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The challenge of matching specific communication technologies to phases and functions of knowledge utilization is renewed by the present mix of analog and digital media.
Abstract: From the mid-1960s until the end of the 1970s, knowledge utilization was a framing concept for policy research on dissemination and social change in the U.S. The 1980s were a hiatus in the development of dissemination and social change strategies, but the present domestic refocusing of national policy brings knowledge utilization once again to the forefront. The communication technologies used in knowledge utilization programs of the 1960s and 1970s consisted of analog media such as printed materials and video. The technologies used in knowledge utilization programs of the 1990s will include several digital media such as ISDN, online search services, e-mail, facsimile, and CD-ROM. The sweeping claims made for digital media today are similar to those made for analog media 20 years ago, when in fact the analog media played only a secondary role to the prime movers of social networks and personal influence. Some properties of digital media such as asynchronicity and transformability will meet previously unmet needs in knowledge utilization. The challenge of matching specific communication technologies to phases and functions of knowledge utilization is renewed by the present mix of analog and digital media. Reasons why communication technologies succeed range from “meets an important need” to “avoids the technophobic pitfalls of deskilling, destatusing, undue technological literacy, and inhibition of human contact.” © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: Knowledge utilization refers to interventions aimed at increasing the use of knowledge to solve human problems. It is not limited to the health and social service fields; it also has areas of current and future interaction with other fields, such as information science.
Abstract: Strategies for knowledge utilization in the health and social service fields have many conceptual linkages with the field of information science. Knowledge utilization involves interventions aimed at increasing the use of knowledge to solve human problems. A review of definitions of various subfields included under this term is followed by a discussion of the historical evolution of knowledge utilization concepts and practices. Basic principles and strategies are presented, along with key issues confronting the field for the 1990s. Areas of current and future interaction with information science also are discussed. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The authors examine the future of the book publishing industry and present strategies for publishers to decrease risk and increase profit, strategies which also benefit education, science, and technology by making books cheaper, more flexible, and more easily and quickly available.
Abstract: This article examines the future of the book publishing industry and presents strategies for publishers to decrease risk and increase profit. These strategies also benefit education, science, and technology by making books cheaper, more flexible, and more easily and quickly available. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The authors propose a mathematical model of acknowledgment behavior that matches empirical data closely; the model shows promise for estimating individuals' influence in a field and may assist in determining cognitive interdependence among disciplines.
Abstract: Acknowledgments have received relatively little attention in spite of what at least one researcher has called their role as “super-citations.” Unlike many citations, such acknowledgments necessarily imply a high degree of social interaction. Examining those acknowledgments that suggest significant intellectual indebtedness, the authors propose a mathematical model that matches empirical data closely. The proposed model is one of several used to elucidate citation patterns. When applied to acknowledgments, it shows promise for estimating individuals' influence in a field and may assist in determining cognitive interdependence among disciplines. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The conclusion is that a more realistic and complete view of IR is obtained if documents and queries are not considered elements of the same space, which implies that certain restrictions usually applied in the design of an IR system are obviated.
Abstract: Many authors, who adopt the vector space model, take the view that documents, terms, queries, etc., are all elements within the same (conceptual) space. This view seems to be a natural one, given that documents and queries have the same vector notation. We show, however, that the structure of the query space can be very different from that of the document space. To this end, concepts like preference, similarity, term independence, and linearity, both in the document space and in the query space, are discussed. Our conclusion is that a more realistic and complete view of IR is obtained if we do not consider documents and queries to be elements of the same space.

Journal ArticleDOI
TL;DR: It is proposed that the set of documents be ranked according to their agreement with the given user query, and it is shown that the Belief Function Model is wider in scope than the Standard Vector Space Model.
Abstract: The Belief Function Model for automatic indexing and ranking of documents with respect to a given user query is proposed. The model is based on a controlled vocabulary, like a thesaurus, and on term frequencies in each document. Descriptors in this vocabulary are terms chosen from among their synonyms to be used as index terms. A descriptor can have a subset of broader descriptors, a subset of narrower descriptors, and a subset of related descriptors. Thus, descriptors are not mutually exclusive and naive probabilistic models are inadequate for handling them. However, a belief function can still be defined over a thesaurus of descriptors. Belief functions over the descriptors can represent a document or a user query. We can compute the agreement between a document belief function and a query belief function. Therefore, we propose that the set of documents be ranked according to their agreement with the given user query. We show that the Belief Function Model is wider in scope than the Standard Vector Space Model. © 1993 John Wiley & Sons, Inc.
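
For reference, a belief function in the Dempster–Shafer sense assigns to each set of descriptors the total mass committed to its subsets; the particular mass assignment from term frequencies and the agreement computation are the authors' own:

```latex
\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B),
\qquad m(\emptyset) = 0, \quad \sum_{B \subseteq \Theta} m(B) = 1,
```

where $\Theta$ is the frame of discernment (here, the set of thesaurus descriptors) and $m$ is the basic mass assignment.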

Journal ArticleDOI
TL;DR: Five interfaces to distributed systems of servers that have been designed and implemented are described; the challenges addressed include how to provide passive alerts, how to make information easily scannable, and how to support retrieval and browsing by nontechnical users.
Abstract: Interfaces for information access and retrieval are a long way from the ideal of the electronic book that you can cuddle up with in bed. Nevertheless, today's interfaces are coming closer to supporting browsing, selection, and retrieval of remote information by nontechnical users. This article describes five interfaces to distributed systems of servers that have been designed and implemented: WAIStation for the Macintosh, XWAIS for X-Windows, GWAIS for Gnu-Emacs, SWAIS for dumb terminals, and Rosebud for the Macintosh. These interfaces talk to one of two server systems: the Wide Area Information Server (WAIS) system on the internet, and the Rosebud Server System, on an internal network at Apple Computer. Both server systems are built on Z39.50, a standard protocol, and thus support access to a wide range of remote databases. The interfaces described here reflect a variety of design constraints. Such constraints range from the mundane—coping with dumb terminals and limited screen space—to the challenging. Among the challenges addressed are how to provide passive alerts, how to make information easily scannable, and how to support retrieval and browsing by nontechnical users. There are a variety of other issues which have received little or no attention, including budgeting money for access to “for pay” databases, privacy, and how to assist users in finding out which of a large (changing) set of databases holds relevant information. We hope that the challenges we have identified, as well as the existence and public availability of source code for the WAIS system, will serve as a stimulus for further design work on interfaces for information retrieval. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: In this article, the authors show that the evidence and arguments are inconclusive, and that the question of efficiency is an open one, which is also one which information science has an interest in pursuing.
Abstract: If communication in research and development is efficient, then the current cognitive situation in any specialty should fully reflect all available relevant information. Available evidence suggests that communication in R & D is not in that sense efficient, and a priori arguments seem to show that it could not be. But we try to show that the evidence and arguments are inconclusive, and that the question of efficiency is an open one. It is also one which information science has an interest in pursuing. © 1993 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: Work is going forward at CNRI on the design and implementation of an Electronic Copyright Management System (ECMS) to demonstrate the licensing of rights and permissions, payment of any royalties, and the electronic deposit, registration, and recordation of copyright works and related documents in a computer network environment.
Abstract: Work is going forward at CNRI, in conjunction with ARPA and the Library of Congress, on the design and implementation of an Electronic Copyright Management System (ECMS) to demonstrate the licensing of rights and permissions, payment of any royalties, and the electronic deposit, registration, and recordation of copyright works and related documents in a computer network environment. A variety of economic, social, and legal issues need to be addressed by both rightsholders and users in the course of this project. Cooperation among the copyright, computer, and communications industries concerned, taking into account the balance between protection for copyright works and the needs of the public for access to information, is essential if such a capability is to become a valuable marketplace tool.

Journal ArticleDOI
TL;DR: The RightPages™ Service brings the library to users' desktop workstations by incorporating ASCII text, bitmap images, and an alerting service, in a system that is comfortable and efficient to use.
Abstract: The AT&T Bell Laboratories Library Network is striving to fulfill its vision of providing all AT&T professional employees with an “electronic window” to meet their information needs. People must be able to access information quickly and easily without the fear of information overload. Taking these needs and requirements into account, the Library and computer researchers combined their interests and areas of expertise to create The RightPages™ Service. This service brings the library to users' desktop workstations by incorporating ASCII text, bitmap images, and an alerting service, in a system that is comfortable and efficient to use.