
Showing papers in "Information Technology and Libraries in 1992"


Journal Article
TL;DR: Browsing is an important aspect of the information-seeking activities of library users and is primarily visual, but second-generation OPACs lack the necessary visual characteristics for browsing; Public Access Catalog Extension (PACE) is designed as an alternative interface based on users' mental images and MARC records.
Abstract: Browsing is an important aspect of the information-seeking activities of library users and is primarily visual. Second-generation OPACs lack the necessary visual characteristics for browsing. These characteristics may be best implemented through simulation of images of books and library shelves on the computer monitors. To mimic users' mental models of the real world may be costly, however, unless new interfaces can tap into existing sources of information. A possible solution may be found in using the information embedded in the MARC record pertaining to the physical description of a book. Public Access Catalog Extension (PACE) is designed as an alternative interface based on mental images of users and MARC records. In the 1960s MARC was introduced to the library world; in the 1970s automated cataloging systems were developed; and in the 1980s came the introduction of online public access catalogs (OPACs). As each decade passed, sophisticated systems were produced and implemented to automate library functions and to provide end users with more efficient and effective services. Today, automated systems are used extensively in different information environments, and online catalogs have had overwhelming acceptance by the public, replacing the more traditional card catalogs. (1) OPACs have evolved from the first to the second generation, enabling users to search through keywords in a variety of fields using Boolean logic, truncation, and proximity operators. In addition, many second-generation systems enable end users to have access to two or more search modes, such as menus and commands, and several display options. These OPACs, however, are different from commercial databases available through systems such as DIALOG and BRS in several respects. They are primarily designed for end users. They do not have extensive descriptors, abstracts, or many other accessible fields as traditional online bibliographical databases do. Furthermore, they cover a variety of subjects and are not confined to a particular discipline. (2) Traditionally, OPACs have involved mini or mainframe computers in a multi-user environment. With the introduction of CD-ROMs, a new generation of online catalogs, sometimes referred to as PACs (public access catalogs), has emerged in the market. PACs have been associated with single-user computing, but some may be used in a network environment. The distinction between the two categories of online catalogs has become blurry; one CD-ROM vendor reports that its largest system serves over 250 stations and includes two million records. (3) Using the microcomputer's graphic capabilities, user interfaces of CD-ROM PACs are generally more "friendly" than OPACs, with such features as help windows and pull-down menus. Despite their sophisticated retrieval engines, research shows users have a number of problems in interacting with OPACs. An in-depth analysis of these problems may be found in a recent article by Martha M. Yee, who has reviewed over 150 studies in this area. (4) Yee summarizes the obstacles facing users of OPACs as: finding appropriate subject terms, large numbers of hits and failure to reduce the retrieval sets, zero hits and failure to increase the retrieval sets, failure to understand cataloging rules, and spelling and typographical errors. In addition, lack of understanding of the indexes, files, and the basic database structure has led to problems with articles and stop words, inputting the author's first name before the last name, and hyphenation.
Interfaces and retrieval systems have caused a few problems of their own--namely, complex interfaces and the need for training and relearning when used infrequently, incomprehensible error messages, problems associated with displays both for brief records and for complete cataloging information, incomprehensible HELP messages, and predicaments of Boolean logic. These problems have prompted one researcher to state that the second-generation OPACs, like many other online retrieval systems, are "powerful and efficient but are dumb, passive systems which require resourceful, active, intelligent human searchers to produce acceptable results. …
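
Since PACE's premise is that the MARC record already encodes what a book looks like, a concrete illustration may help. The sketch below is hedged Python, not PACE's actual rendering logic (which the abstract does not detail): the record value is invented, but subfield $c of MARC field 300 does standardly carry a book's dimensions, and a stated height such as "24 cm." could be scaled into an on-screen spine.

```python
import re

def spine_height_px(field_300c: str, px_per_cm: int = 10) -> int:
    """Derive a screen height for a simulated book spine from the
    MARC 300 $c dimensions subfield (e.g., '24 cm.')."""
    match = re.search(r"(\d+)\s*cm", field_300c)
    if match is None:
        return 20 * px_per_cm  # assume a typical 20 cm volume
    return int(match.group(1)) * px_per_cm

# Hypothetical record: 300 $a xii, 345 p. : $b ill. ; $c 24 cm.
print(spine_height_px("24 cm."))  # -> 240
```

Drawn at scale alongside its shelf neighbors, such a spine gives the browsing user the visual cue that the physical shelf would.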

29 citations


Journal Article
TL;DR: Given the serials pricing crisis, librarians should encourage the development of network-based electronic serials, which hold great promise, but a variety of problems currently limit their effectiveness.
Abstract: New forms of scholarly communication are evolving on international computer networks such as BITNET and Internet. Scholars are exchanging information on a daily basis via computer conferences, personal e-mail, and file transfers. Electronic serials are being distributed on networks, often at no charge to the subscriber. Electronic newsletters provide timely information about current topics of interest. Electronic journals, which are often refereed, provide scholarly articles, columns, and reviews. Utilizing computer networks, scholars have become electronic publishers, creating an alternative publication system. Electronic serials hold great promise, but a variety of problems currently limit their effectiveness. Given the serials pricing crisis, librarians should encourage the development of network-based electronic serials.

23 citations


Journal Article
TL;DR: The findings of this study attempt to represent users' early reception of keyword searching; the results should be useful for future comparison to similar data collected about keyword searching and about user reactions to future new OPAC features.
Abstract: Keyword and Boolean searching modes are now becoming more commonly available on online public access catalogs, and questions have arisen regarding their use by library patrons. How difficult do users perceive these searches to be? Do those who use them tend to rely on them all the time to the exclusion of all other methods? This study attempts to provide answers to these questions in the context of an academic library that uses the Northwestern Online Total Integrated System online catalog. INTRODUCTION After the keyword/Boolean searching mode had been available for a while on Indiana State University (ISU) Libraries' online public access catalog (OPAC), questions began to arise. The percentage of searches done in keyword mode rose steadily from 15.6% in November 1988 to 21.4% in November 1989 before leveling off. How did those who used keyword/Boolean searching use and perceive it? Did they find it difficult? Did they prefer to use it most of the time? Were they satisfied with it? The author undertook a study of the practices and perceptions of the users of keyword searching on the Northwestern Online Total Integrated System (NOTIS) to answer these questions and others. The two guiding theses of the study were: (1) the use or nonuse of keyword searching on LUIS is related to variables such as age, computer experience, subject area, status, and frequency of searching, and (2) there are certain measures ISU Libraries can take to increase the chance that patrons will use keyword searching and to improve the quality of their keyword searching. The findings of this study attempt to represent users' early reception of keyword searching; the results should be useful for future comparison to similar data collected about keyword searching and about user reactions to future new OPAC features. (The full project report, submitted to ERIC, and an article published elsewhere also detail other aspects of patron keyword searching. (1)) The University Indiana State University has approximately 11,000 students, including about 2,000 graduate students. A small number of doctorates are offered in the fields of education and psychology. Master's degrees are awarded in all schools; they include the College of Arts and Sciences and the Schools of Business, Education, Nursing, Technology, Health, Physical Education, and Recreation. The University has approximately 700 faculty members. The Library Indiana State University Libraries include a main library, Cunningham Memorial Library, and a science library, which covers chemistry, biology, and geology. Since March 1985, the ISU Libraries have made the NOTIS online catalog, LUIS, available to the public. It lists over 99 percent of the library's holdings, with 1,751,000 bibliographic records. It also includes the holdings of two nearby smaller institutions: Rose-Hulman Institute of Technology, an engineering school, and St. Mary-of-the-Woods College, a liberal arts institution. Keyword Searching Indiana State University Libraries made the keyword mode of LUIS searching available in the late spring of 1988, so that it had been available for almost two years when this study was done. Prior to the introduction of keyword/Boolean searching, NOTIS had three modes of searching available: author, done by typing in "a=[author's last name first name]"; title, done by typing in "t=[title of work, omitting initial article]"; and subject, done by typing in "s=[Library of Congress subject heading]."
In early 1988, NOTIS introduced the keyword/Boolean search mode, done in its most basic form by entering "k=[word or phrase]." The syntax of this search mode is a very simplified form of the BRS search language. Library Instruction Library instruction at ISU Libraries is primarily carried out by the Library Instruction & Orientation Department, which has two librarians. The author and four other librarians participate in instruction when needed. …
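
The four search prefixes quoted above amount to a tiny command grammar, which a few lines can make concrete. This is an illustrative Python sketch: the prefixes are as documented in the abstract, but the parsing code is mine, not NOTIS internals.

```python
# NOTIS/LUIS search-command prefixes as described in the abstract.
SEARCH_MODES = {
    "a": "author",   # a=[author's last name first name]
    "t": "title",    # t=[title of work, omitting initial article]
    "s": "subject",  # s=[Library of Congress subject heading]
    "k": "keyword",  # k=[word or phrase], added in 1988
}

def parse_search(command: str) -> tuple[str, str]:
    """Split a command such as 'k=library automation' into
    (search mode, query string)."""
    prefix, _, query = command.partition("=")
    mode = SEARCH_MODES.get(prefix.strip().lower())
    if mode is None or not query:
        raise ValueError(f"unrecognized search command: {command!r}")
    return mode, query.strip()

print(parse_search("k=online catalogs"))  # ('keyword', 'online catalogs')
```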

21 citations


Journal Article
TL;DR: Efforts to develop a universal character set, and their potential effect on USMARC, are described; USMARC now provides for "alternate graphic representation," so that text in the authentic script(s) may be included in bibliographic records.
Abstract: The representation of nonroman scripts in Latin characters causes information to be distorted in various ways. USMARC now provides for "alternate graphic representation," so that text in the authentic script(s) may be included in bibliographic records. As more library systems with nonroman capability are developed, conformance to standards for the encoding of nonroman data becomes more critical. The development of a single global character set standard is a significant change that must be accommodated in USMARC. In Rule 1.0E, AACR2 mandates that the bibliographic description be written in the same script as the source of information "if practicable." (1) For more than a decade, machine-readable cataloging and bibliographic transcription in a nonroman script were mutually exclusive. During this period, the only way to represent nonroman data in machine-readable form was by transcription into Latin letters (romanization). The first part of this paper criticizes romanization as information distortion. The USMARC Format for Bibliographic Data was modified to accommodate nonroman scripts in 1984. (2) The previous September, a Chinese/Japanese/Korean (CJK) capability had been added to the Research Libraries Information Network (RLIN) system. (3) The USMARC modifications are outlined in the second part of this paper, since not all readers will be familiar with them. The remainder of this paper describes efforts to develop a universal character set, and its potential effect on USMARC. ROMANIZATION AS INFORMATION DISTORTION Currently, most local systems are limited to Latin script; romanization is necessary if the automated catalog is to be a comprehensive representation of the library's holdings. The practice of romanization has two causes: the lack of the proper typographical facilities and the concept of the "universal" catalog, "the catalog in which all items in the collection are entered in a single alphabet from A to Z, regardless of language, regardless of form, regardless of subject. The American ideal." (4) The deficiencies of romanization from the point of view of the reader have been documented. (5-7) However, many nonspecialist librarians are unaware of the deficiencies and still regard romanization as adequate for access. Language experts reject this view; they persuaded the Library of Congress (LC) to continue to provide original script cataloging on cards for material in the so-called JACKPHY languages: Japanese, Arabic, Chinese, Korean, Persian (Farsi), Hebrew, and Yiddish. Not only does romanization impede access, it distorts the presentation of information in a number of ways. The presentation of the text is unnatural. Distinctions present in the original language may be lost, or distinctions not present in the original script may be artificially created. Different transliteration schemes are used in different countries or contexts. Finally, the normalization used in automated indexing and searching, when applied to romanized text, introduces another layer of distortion. Unnatural Presentation Romanization is the presentation of language text in unfamiliar letters. Readers of a language may, in time, become used to a particular romanization scheme, and be able to read their language even when it is written in Latin letters. 
In the People's Republic of China, pinyin, the national standard for the romanization of Chinese, has a number of applications: it is used to show the pronunciation of ideographs (in which Chinese is normally written), and it underlies a system of finger-spelling for the blind. A reader faced with text rendered in an unfamiliar way may find it incomprehensible. This can be illustrated by the case of alternative romanization methods. Hebraica bibliographers in the United States have become used to reading Hebrew written in Library of Congress romanization (which includes the vowels that are usually omitted in Hebrew orthography). …
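
The "alternate graphic representation" mechanism mentioned above pairs each romanized field with an 880 field holding the original script, linked through subfield $6. The sketch below is Python with an invented record; the 880/$6 linkage convention itself is standard USMARC practice.

```python
# Each $6 holds the partner field's tag plus an occurrence number,
# e.g., the romanized 245 points to '880-01' and the 880 back to '245-01'.
record = [
    ("245", {"6": "880-01", "a": "Toshokan no jidoka /"}),  # romanized
    ("880", {"6": "245-01", "a": "図書館の自動化 /"}),        # original script
]

def paired_fields(fields):
    """Yield (romanized, vernacular) field pairs joined on the
    occurrence number carried in subfield $6."""
    by_occurrence = {}
    for tag, subs in fields:
        occurrence = subs.get("6", "").split("-")[-1][:2]  # e.g., '01'
        role = "vernacular" if tag == "880" else "romanized"
        by_occurrence.setdefault(occurrence, {})[role] = (tag, subs)
    for pair in by_occurrence.values():
        if "romanized" in pair and "vernacular" in pair:
            yield pair["romanized"], pair["vernacular"]

for roman, vernacular in paired_fields(record):
    print(roman[1]["a"], "<->", vernacular[1]["a"])
```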

13 citations


Journal Article
TL;DR: Advice from librarians who have gone through system migration is offered to those who face the process; candid analyses of mistakes, false assumptions, and delays provide information for those about to embark on it.
Abstract: Automation administrators of thirty-three libraries discussed challenges, rewards, and problems associated with migration to new automated systems. Interviews focused on motivation for migration, planning for implementation, technical decisions and considerations, training of staff and users, publicity used to promote the change, and relationships with vendors. Descriptions of successful experiences as well as candid analyses of mistakes, false assumptions, and delays provide information and advice for those about to embark on the migration process. Migration to a new automated system is a fact of life in the world of library automation. Sooner or later many libraries will conclude that, for a variety of reasons, their present system is inadequate. They may require additional functionality, or the current vendor may no longer be viable in terms of products or service. In some cases the library may simply have outgrown the present system; in others, ongoing costs may have become prohibitive. New and emerging technologies often provide the impetus for migration, making possible faster access, lower cost, and enhanced services. Maintaining the appropriate automated system for a library is an ongoing and never-ending process. The "final" system is seldom final. As needs change, libraries may upgrade their system with their current vendor or choose an entirely new automation solution. As Jacob points out: System migration is the evolutionary process that bridges one system to the next. It is an ongoing process of renewal. It makes available the latest computer applications while addressing traditional information needs. System migration is a continuing process that reaffirms a library's automation commitment. (1) Concern about the system migration process and its associated problems remains intense within the profession, as evidenced by the continuing flurry of conferences, journal articles, and workshops on this topic. One expression of this concern, overheard at a seminar on technostress, aptly sums up the feelings of many librarians and offers a rationale for research in the area: "We need a new term in this business: 'technodepression'--when you've just finished installing a new automated system and you realize that in a few years you'll have to do it all over again!" In general, system migration has been defined as the process followed by a library in (1) replacing one automated system with another from a different vendor or (2) remaining with the current vendor and upgrading the present system in order to obtain enhancements and improved performance. This study reports on an investigation of the first aspect of migration, i.e., when a new vendor is hired by the library. The emphasis of the study is on advice from librarians who have gone through system migration to those who face the migration process. By candidly describing mistakes and problems and answering the question, What would you do differently if you could do it all over again? participants paint an honest and realistic picture of their experience. METHODOLOGY Seven library automation vendors provided upon request the names of thirty-nine libraries that had recently migrated to their systems. A letter to each of the automation administrators of these libraries described the proposed research and requested their participation in the project. The letter explained that the administrator could expect a call from one of the researchers to set up a convenient time for a telephone interview. 
Thirty-three of the thirty-nine libraries participated in the study; a list of these appears in appendix A. Reasons for nonparticipation did not include disinterest or unwillingness to assist in the research, but such factors as the administrator's having moved to another library or the library's being in the very early stages of the migration process. Lasting about an hour, the interviews revealed details of each library's experiences with migration in the areas described below: * General facts about the institution (supplemented by the American Library Directory) such as staff size, number of titles, number of branches, name of the previous system, and year of installation * Motivation for migration * Planning for implementation including development of the implementation plan, staff involvement, timing, and schedules * Technical decisions and considerations such as migration of data, equipment, and telecommunication issues * Training of the staff and users * Publicity used to promote the change to the staff and users * Relations with vendors in such areas as performance and expectations The following sections of the paper provide an analysis and summary of the findings within these areas. …

12 citations


Journal Article
TL;DR: The article provides background information on the processing of Arabic materials using a combination of local and modified cataloging rules and the creation of the Arabic card catalog at the King Fahd University of Petroleum and Minerals Library (KFUPM), and presents various options considered for developing the Arabized version of DOBIS/LIBIS.
Abstract: The article provides background information on the processing of Arabic materials using a combination of local and modified cataloging rules and the creation of the Arabic card catalog at the King Fahd University of Petroleum and Minerals Library (KFUPM). It also gives a brief history of KFUPM library automation and then presents various options considered for developing the Arabized version of DOBIS/LIBIS. Finally, the functions and features of the Arabic online catalog are described. King Fahd University of Petroleum and Minerals (KFUPM, formerly UPM) was founded as a college in 1963. The status of the college was later changed to a university in 1975. KFUPM provides advanced training of students in the fields of science and engineering to prepare them for service and leadership in the Kingdom's petroleum and mineral industries. (1) The academic programs of the university are well supported by a central library with a strong collection of more than 250,000 volumes. Because the university's main focus is on study and teaching in scientific and technical areas, the library's collection is composed mostly of non-Arabic materials. Only about 7.5 percent of its collection is in Arabic, most of which supports the Islamic and Arabic studies programs. PROCESSING OF ARABIC MATERIALS The Arabic collection of the library received less attention than others in both development and processing. Until 1976, there was only one person with a professional degree in library science, responsible for acquisitions, bibliographic control, shipment clearance, and all paperwork regarding Arabic materials. For processing, a brief description of books was provided on cards without much attention to cataloging rules. For classification, a temporary modified Dewey Decimal Classification scheme was used, according to which a book was assigned a call number composed of a general number for the class, followed by a slash and an accession number. (2) It was soon realized that the local system for processing Arabic materials was creating access problems. For example, a second copy of a title would appear with a call number different from the first copy, whereas two separate books were at times assigned the same call number. A decision was therefore made to shift from the local system to a standard cataloging system using AACR, Library of Congress (LC) Subject Headings, and the LC Classification Scheme. The idea was also to make use of the LC card sets. To save effort and time in consulting two separate catalogs for Arabic and non-Arabic materials, integration of the two was considered necessary. The transliterated LC cards enabled us to interfile them with the non-Arabic catalog cards. However, to satisfy the library patrons, who still preferred access to the collection through the Arabic alphabet, an Arabic title file was provided wherein one card in every card set was arranged alphabetically by Arabic title. On the other hand, the Arabic collection was also integrated with the non-Arabic collection so that the readers could browse the library's holdings in their subject of interest in one area of the stacks. The decision to adopt a transliterated system for Arabic material was also taken in view of a growing backlog of Arabic books for cataloging and the shortage of Arabic catalogers. The idea was to make use of the LC catalog records and thus reduce the amount of original cataloging.
This decision ran against the general feeling of the Arabic speakers, who take pride in their native language and strongly oppose subordinating Arabic to another language in bibliographic records. They resented this scheme and were not willing to learn it, and it was observed that fewer and fewer people were using the card catalog. In 1979, the policy was modified to drop the transliteration practice in favor of vernacular script records, except for subject heading and class number, which remained in English. …

11 citations


Journal Article
TL;DR: It was found that loan requests most often fail because local policies prevent loan or the items were in use; by identifying such reasons, it may prove possible to reduce the probability of failure and, consequently, improve turnaround time.
Abstract: Interlibrary Loan Offices can supply documents for only a fraction of the lending requests received (i.e., requests for the loan of a book or photocopy of an article received from another library), despite sophisticated electronic verification/locating systems. Lending fill rates of 50 percent are common. As a consequence, an interlibrary loan request must be referred to several different libraries before being satisfied. This inefficiency significantly lengthens the time required to deliver a document to a patron. This study analyzes 7,587 failed OCLC interlibrary loan and copy requests to determine why the requests could not be supplied. It was found that loan requests most often fail because local policies prevent loan or the items were in use. Copy requests most often failed due to the requested volume of a serial not being owned. Although significant technological improvements have been made to the interlibrary loan (ILL) process to speed receipt of requested materials, at least one study has observed that turnaround time for ILL transactions has varied little since 1979. (1) One reason cited for long turnaround times is the inability of a requesting library to locate potential lenders of materials accurately. Despite sophisticated electronic verification and locating systems, ILL offices can often fill only 50 percent of the lending requests received. Each of these unfilled requests must be referred again and again until satisfied. This inefficiency is a source of considerable delay in the ILL process. Each time a request is referred, the total turnaround time of the request is measurably increased. Because multiple referrals add significantly to turnaround, one strategy to speed turnaround time is to request from as few institutions as possible. Identifying why libraries must refer 50 percent of their lending requests has been the subject of only a handful of recent research studies. Furthermore, previous studies have focused on failed book requests without examining the reasons serial requests fail. By identifying the reasons ILL and photocopy or copy requests are not filled, it may prove possible to reduce the probability of failure and, consequently, improve the turnaround time. Several earlier studies have shown that ILL success rates (the proportion of all ILL borrowing requests successfully completed) are between 80 and 90 percent. (2) Each of these studies based success on the final transaction, ignoring the number of referrals a request may have required. This is particularly true of libraries using the OCLC ILL Subsystem as a primary means for sending requests. However, when success is limited to that of being the first library in the OCLC lender string, Dodson et al. found that only 57.1% of requests were completed. (3) Nearly 43% were not filled and were referred to the next potential lender. Gorin and Kanen found similar results: 52.6% for the first library, 21.4% for the second, and 16.4% for the third. (4) Explanations as to why such a large proportion of OCLC requests must be referred, despite having accurate location information, are elusive. Robert B. Winger examined the book-lending requests of the University of Chicago's Joseph Regenstein Library. (5) Of approximately 8,061 book requests received, 55.5% (approximately 4,471) were not filled. 
Winger analyzed and grouped a sample of 347 of the unfilled book requests into five categories: not owned, 29.14% (volume or edition not owned, not owned as cited); no longer in collection, 14.28% (missing, lost, discarded, transferred); unavailable for lending, 49.43% (in use, at bindery, noncirculating); miscellaneous/policy, 6.27% (lent before to same reader, duplicate request, request cancelled); and no reason given, 0.50%. A 1986 study of the Illinois Library and Information Network (ILLINET) examined failed ILL book borrowing (not lending) requests to determine the reason for nonsupply. Again, copy requests were excluded from the study. …
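
The referral figures above imply a simple cumulative picture. If the Gorin and Kanen percentages are read as the share of all requests filled by the first, second, and third lender (an interpretive assumption for this illustration), a few lines of Python show how three referrals climb toward the 80 to 90 percent success rates reported elsewhere:

```python
# Share of requests filled by each successive lender (Gorin and Kanen,
# as cited above); treating these as shares of all requests is an
# assumption made for this sketch.
fill_share = [0.526, 0.214, 0.164]

cumulative = 0.0
for position, share in enumerate(fill_share, start=1):
    cumulative += share
    print(f"filled by lender {position} or earlier: {cumulative:.1%}")
# filled by lender 1 or earlier: 52.6%
# filled by lender 2 or earlier: 74.0%
# filled by lender 3 or earlier: 90.4%
```

Each added referral buys a smaller gain while adding a full round of turnaround time, which is why the paper argues for requesting from as few institutions as possible.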

11 citations


Journal Article
TL;DR: This paper views the design of the next generation of public access information retrieval (IR) systems in higher education from the perspective of a decade of development, deployment, and operation of the MELVYL online system at the University of California (UC).
Abstract: This paper views the design of the next generation of public access information retrieval (IR) systems in higher education from the perspective of a decade of development, deployment, and operation of the MELVYL online system at the University of California (UC). It highlights design decisions and assumptions that were made for the MELVYL system that have proved advantageous, as well as those that have proved limiting or have led to dead ends. Our design choices were probably similar to those made by most other online catalog designers at the time. Some decisions at UC that have proved in hindsight to be shortsighted or cowardly (and also a few that proved better than we might have hoped) were only guesswork, because there was no base of experience from which to work. Other decisions were artifacts of limited functionality and capability from the underlying base of information technology upon which the catalog was built, or of a limited budget to acquire resources. Particularly in the case of computing hardware, it was not that desired technology did not exist ten years ago (unlike certain supercomputing applications--visualization being the most striking example--that emerged during the 1980s), but that the cost of the desired computing cycles, memory, and mass storage was out of reach. Costs of these resources have now dropped sufficiently that they can be used more freely as we consider systems for the 1990s. The available base of software technology was a different matter. The limited functionality in the software components, such as database management systems (DBMS), that might be used to build an online catalog was a serious problem. In 1980, the DBMS choices were few, and none of them was entirely satisfactory. Interestingly, as we consider future directions for the MELVYL system in 1992, the choices seem to have improved little in terms of functionality, although the available commercial software has matured considerably in terms of stability and performance. The full set of functionality still seems tantalizingly out of reach, manifested most broadly in database systems that remain as research vehicles within the computer science research community, and thus unsuitable for production use in a system the scale of the MELVYL catalog. Finally, in terms of delivery platforms, we viewed the system as limited by the installed base of character mode ASCII terminals and so designed to the lowest common denominator "glass teletype." In theory, we might have procured a special terminal for use with the MELVYL system (as some other systems had done), since, as discussed in more detail later, our initial assumption was that most terminals for catalog access would be placed in libraries. But we felt it was important to be able to support the installed base, presuming that networking on the UC campuses would continue to improve and that over time more of this installed base would be able to reach the catalog. Given the explosion of networking that occurred later in the 1980s, this proved to be a very wise decision, as it greatly facilitated wide access to the catalog. The history and current status of the MELVYL system have been amply covered in the papers that have appeared in the two previous "MELVYL at Ten" special sections of Information Technology and Libraries and in the spring 1992 issue of the DLA Bulletin.
But a review of the design assumptions and system objectives for the original MELVYL online catalog, many of which, to my knowledge, were never explicitly articulated and debated as part of the planning process prior to its development, forms an essential part of the context for this paper. Thus, the first part of the paper reviews them with the benefit of ten years of hindsight, along with certain realities of the information technology base of the late 1970s. The remainder of the paper focuses on key problems that emerged as we gained experience with patron use of online catalogs at UC and elsewhere, and as the MELVYL system has grown larger, more complex, and more capable. …

10 citations


Journal Article
TL;DR: It is widely perceived that spelling errors in OPACs and other large databases are few in number, randomly distributed, and impossible to locate in any systematic fashion, but the results of this study demonstrate that these perceptions are incorrect.
Abstract: In order to find and correct spelling errors in the online public access catalog at Adelphi University, a visual inspection was performed of the 117,000 keywords indexed in the system. More than 1,000 errors were found. Certain long but common words such as administration, education, and commercial were found to generate many different misspellings. Most of the records were derived from bibliographic utilities, so the findings can be generalized to other OPACs. The same misspellings were also found in substantial numbers in CD-ROM databases. Misspellings were analyzed by the machine-readable catalog (MARC) field in which they were found, part of speech, and type of mistake. Lists of commonly misspelled root words and specific mistakes are included. In the years since the online public access catalog (OPAC) has replaced the card catalog as the primary source of bibliographic information in research libraries, much has been written about miskeyings of library users by Peters and Blazek, among others. (1,2) However, little attention has been paid to spelling errors that become a part of the database. Perhaps this is because researchers can work with logs of OPAC transactions to find the searching errors of OPAC users, but there is no easy way to get at misspellings that are in a database containing millions of words. It may be widely perceived that spelling errors in OPACs and other large databases are few in number, randomly distributed, and impossible to locate in any systematic fashion. The results of this study demonstrate that these perceptions are incorrect. HISTORY OF THE STUDY In a recent issue of American Libraries, (3) there appeared a short article describing how Jeffrey Beall at Harvard had found words that are prone to misspelling, such as Febuary or government. Librarians at Adelphi University Library in Garden City, New York, checked the keyword index in the library's Innovative Interfaces OPAC (Innopac) and found single examples of two of the ten words that were featured. According to a formula provided in the article, Adelphi had a very clean database. This was not surprising because the cataloging supervisor has described the operation as one with a history of perfectionism. However, misspelled words did show up occasionally, and the author found a way to make a thorough search of the database for such problems. In the Innopac system, a successful keyword search will display a record that contains the word that was queried or a menu if there is more than one hit. However, if there is no matching word, it will produce a screen of choices that are nearby in the alphabet. One may then browse forward or backward, looking at eight titles per screen. A trial search of the A's was performed by typing in aaaaa and browsing forward through hundreds of screens. The result was the identification of forty-two spelling errors. This justified a search of the 117,000 words that are contained in Adelphi's 310,000 bibliographic records. A complete visual check of the keyword index represented a large volume of work, but it seemed like the only reasonable way to get the problem solved. Normally, two letters were searched in a single workday. Once a potential spelling mistake was identified, the full record was called up and checked for context, e.g., langage is correct in French, but it is a misspelling in English. If it did turn out to be a mistake, the screen containing the incorrect word was printed along with the menu screen of eight words that contained the error.
At the end of reviewing a letter, the printout was marked for immediate correction by a student assistant. The system allows the staff member to call up a record, identify the field with the misspelling, and substitute the correct word for the misspelled one. In going through the screens, one major problem was non-English words that contained only a single letter's difference from the English equivalent. …
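
The study relied on human inspection, but its finding that long common words attract many misspellings suggests a screening heuristic. Here is a hedged sketch using Python's standard difflib; this is not the method used at Adelphi, only an illustration of flagging index terms suspiciously close to known trouble words:

```python
import difflib

# Long words the article names as frequent misspelling targets.
COMMON_WORDS = ["administration", "education", "commercial", "government"]

def suspect_misspellings(index_terms, cutoff=0.88):
    """Return (term, probable intended word) pairs for index terms that
    nearly, but not exactly, match a commonly misspelled word. As the
    article notes, candidates still need human review in context
    (e.g., 'langage' is correct in French)."""
    suspects = []
    for term in index_terms:
        close = difflib.get_close_matches(term, COMMON_WORDS, n=1, cutoff=cutoff)
        if close and term != close[0]:
            suspects.append((term, close[0]))
    return suspects

print(suspect_misspellings(["eduction", "education", "comercial", "library"]))
# [('eduction', 'education'), ('comercial', 'commercial')]
```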

9 citations


Journal Article
TL;DR: This article describes how to implement TCP/IP communications with HyperCard in three steps, and illustrates the implementation process with two stacks: Mini-Atlas and ListManager.
Abstract: This article describes how to implement TCP/IP communications with HyperCard in three steps. First, it briefly examines the tools used to access information resources available through the Internet. Second, it outlines the necessary hardware and software requirements to make TCP/IP communications possible on a Macintosh. Third, it illustrates the implementation process with two stacks: Mini-Atlas and ListManager.
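
For readers who want the shape of the underlying TCP exchange without HyperCard, here is a minimal sketch using Python sockets; the open-send-receive-close pattern is the generic one such stacks automate, and the host, port, and request below are placeholders, not values from the article:

```python
import socket

def fetch(host: str, port: int, request: bytes) -> bytes:
    """Open a TCP connection, send one request, and read the reply
    until the server closes the connection."""
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(request)
        chunks = []
        while chunk := conn.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

# Hypothetical use against an echo-style service:
# print(fetch("example.org", 7, b"hello\r\n"))
```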

6 citations


Journal Article
TL;DR: The time has come for more libraries to consider mainstreaming data services, while keeping in mind that there is a wealth of experience to draw on in the data library and archive community already well established across the United States and Canada.
Abstract: Libraries are increasingly aware that their role includes providing access to data in electronic form. Determining the level of service and acquiring the skills needed to provide effective access are key to integrating data services with success in any organization, as is having the administrative support to do so. The proliferation of electronic media, formats, hardware, and software requires new knowledge bases. Libraries that do not take a leadership role will forfeit a pivotal position in assisting patrons with accessing the wealth of information available in electronic form. For the past several years, U.S. academic libraries have been anticipating the inevitable--the receipt of large amounts of 1990 census data in electronic form. Statements of why libraries should be involved in providing access to computer-readable census data are not new, as this quote from Rowe and Ryan (1974) illustrates: Why not just store the tapes at the computer and let the computer people handle them? By doing this, the library would be abdicating its role as an information center. It would deny users the opportunity of locating information at the place we have trained them to look for it, the library.[1] What is new for the 1990s is the complication of a greater variety of electronic format, software, hardware, and network decisions to consider. The growth of involvement by academic libraries in the realm of computer-readable data, while slow in coming, has been incremental. Many of the libraries that formerly eschewed responsibility for providing services to computer-readable data are now facing the issues surrounding data files. The infusion of CD-ROMs containing numeric data in libraries as part of the U.S. Federal Depository Library Program has instilled a sense of urgency in those receiving them. The time has come for more libraries to consider mainstreaming data services, while keeping in mind that there is a wealth of experience from which to draw in the data library and archive community that is already well established across the United States and Canada.[2] WHAT CONSTITUTES DATA, AND HOW ARE THEY USED? "Words, Pictures, Numbers, and Sounds: Priorities for the 1990s" was the theme of the International Association of Social Science Information Services and Technology's (IASSIST) 1990 annual meeting and illustrates the breadth of data usage for research and teaching.[3] Data in electronic form can include public opinion surveys, hospital-admission records, digital cartographic data, literary works, digital storage of sound bites, video footage, photographs, and much more. Computer-readable data are used routinely in the humanities, social sciences, and sciences. Libraries cannot afford to neglect them, but rather need to understand how data are created and how they are used in order to respond to new service demands. Social Data Computer-readable social data are a proliferating body of information and research sources, derived from the surveys, censuses, and administrative records of a multitude of commercial research groups, government agencies, academic institutions, and private research agencies. The amount of numeric data produced and available for secondary data analysis has dramatically increased and is distributed in a variety of computer-readable formats.
Quantitative analysis of data is one of the essential methodological approaches used in a variety of social sciences and related disciplines, including anthropology, business, economics, education, geography, health-care fields, history, political science, psychology, and sociology. The scope of data available for secondary analysis has also grown tremendously. Common categories of data include economic, political, and social attitudes and behavior patterns; social indicators and quality of life; business and commerce; population and housing; education; employment; aging and life cycle; crime and criminal justice; and health care and health facilities. …


Journal Article
TL;DR: The development of the newest USMARC format, the format for community information, is addressed and some of its specifics are described, including the committee's goal of developing a standardized list of community information data elements.
Abstract: The Library of Congress Network Development and MARC Standards Office (Net Dev/MSO) is concerned with the development, publication, and maintenance of the USMARC communications formats. This paper addresses the development of the newest USMARC format--the USMARC format for community information--and describes some specifics of it. It is important to note that Net Dev/MSO's work in this area over the last two and one-half years has been done in conjunction with the Technologies Committee of the Community Information Section of the Public Library Association; throughout this paper it will be referred to as the Technologies Committee. Cecelia Staudt, who is employed by EKI, Inc., was the chairperson of the committee during the time period. DEFINITION What is community information? The definition found in the community information format is as follows: Community information records describe nonbibliographic resources that fulfill the information needs of a community. Currently, the format allows one to describe programs, services, organizations, single and ongoing events, and individuals (e.g., experts, public officials) about which people in a community might want information. These entities can be for-profit, not-for-profit, or governmental, with a wide variety of missions or purposes (e.g., charitable, educational, informational, social, health, leisure), for a variety of audiences (children, youth, singles, men, women, alcoholics, mentally ill, criminal, etc.). A library itself could be included as an entity in a community information file. A particular community information file could include entries for * a baseball league for boys 11-12, * an arts organization offering classes and exhibition space for artists, * an alcoholic treatment center for persons aged 18-65, * a nursery school for children, * a Jewish war veterans group, * an organization offering vocational and employment counseling, * a women's shelter, or * an annual fund drive to gather toys for needy children. For each entry, the description could include the name of the entity, its address, telephone number, hours, fees, contact person, requirements for admission, etc. It should be pointed out that a library other than a public library may maintain files of community information. Also, some libraries may only maintain one kind of community information. For instance, a university library may have a file of events going on at that university. HISTORY How did the format for community information come about? Around 1985, the Technologies Committee became aware that libraries were inputting their community information into machine-readable form. The committee compiled a directory of libraries having automated community services; twenty-seven libraries representing fifteen states were included. In 1988 the committee subsequently contacted libraries known to be automating community information and asked them to furnish it with a list of the data elements they were using; the committee's goal was to develop a standardized list of community information data elements. In compiling the information received, the committee found that there were indeed elements common to community information users. The resulting standardized list can be found in appendix A. The committee did not consider the list exhaustive; rather, it considered the information representative of the information being referred to as "community information"--and it did provide a beginning.
In doing this work, the committee observed that a number of libraries with automated bibliographic systems were attempting to accommodate their community information files on the same system. Since community information is not bibliographic data, the information could not easily be input into bibliographic fields. Some vendors responded to this situation by creating separate database management modules with unique formats for use with nonbibliographic files; others adopted modified versions of their systems' bibliographic formats and integrated this information into the bibliographic files. …
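
Because the format is meant to carry a standardized set of nonbibliographic data elements, a structured stand-in makes the idea tangible. A sketch in Python follows; the field names mirror the elements listed in the abstract and are illustrative, not actual USMARC tags:

```python
from dataclasses import dataclass, field

@dataclass
class CommunityInfoEntry:
    """Illustrative container for the data elements the abstract lists:
    name, address, telephone, hours, fees, contact, eligibility."""
    name: str
    address: str = ""
    telephone: str = ""
    hours: str = ""
    fees: str = ""
    contact_person: str = ""
    eligibility: str = ""
    audiences: list[str] = field(default_factory=list)

entry = CommunityInfoEntry(
    name="Boys' Baseball League",   # invented example entry
    eligibility="boys aged 11-12",
    audiences=["children", "youth"],
)
print(entry.name, "-", entry.eligibility)
```

A real implementation would map each of these elements to a field in the community information format rather than to an application-level record.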

Journal Article
TL;DR: The University Libraries of the University of Houston created an experimental Intelligent Reference Information System (IRIS) over a two-year period; a ten-workstation CD-ROM LAN provided access to nineteen citation, full-text, graphic, and numeric databases.
Abstract: The University Libraries of the University of Houston created an experimental Intelligent Reference Information System (IRIS) over a two-year period. A ten-workstation CD-ROM LAN was implemented that provided access to nineteen citation, full-text, graphic, and numeric databases. An expert system, Reference Expert, was developed to assist users in selecting appropriate printed and electronic reference sources. This expert system was made available on both network and stand-alone workstations. Three research studies were conducted.
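
The abstract gives no detail of Reference Expert's rule base, but the general shape of such a source-selection expert system can be suggested with a toy sketch in Python; the rules and recommendations below are invented for illustration and are not taken from Reference Expert itself:

```python
# Ordered (condition, recommendation) rules: the first match wins,
# the simplest form of a rule-based selection system.
RULES = [
    (lambda q: q["need"] == "articles" and q["subject"] == "business",
     "a business periodical index on the CD-ROM LAN"),
    (lambda q: q["need"] == "articles",
     "a general periodical index"),
    (lambda q: q["need"] == "facts",
     "a printed almanac or encyclopedia"),
]

def recommend(question: dict) -> str:
    for condition, source in RULES:
        if condition(question):
            return source
    return "refer to a reference librarian"

print(recommend({"need": "articles", "subject": "business"}))
# a business periodical index on the CD-ROM LAN
```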

Journal Article
TL;DR: This study examines the potential application of the ANSI/NISO Z39.50 library networking protocol as a client/server environment for a "scholar's workstation" and finds it well suited to the communication needs of this environment.
Abstract: This study examines the potential application of the ANSI/NISO Z39.50 library networking protocol as a client/server environment for a "scholar's workstation." As we will see, Z39.50 is well suited to the communication needs of this environment, and can provide a major building block for a flexible environment that is vendor-independent at both ends of the link. Scholar's Workstation: Theory and Practice The scholar's workstation is defined for our purposes as a single-user microcomputer equipped with a network or telephone communication interface, local storage, and software capable of displaying and manipulating bibliographic data in the USMARC or similar formats. Weissman identified twelve elements considered essential to a modern scholar's personal computing environment:

1. Provide windowing capability for multiple documents.
2. Integrate text and graphics when desired.
3. Support multimedia (sound, graphics, text).
4. Support complex documents with many parts.
5. Permit multitasking operations.
6. Accommodate large, fast mass storage devices.
7. Include connectivity to external databases.
8. Include electronic mail capabilities.
9. Support data acquisition devices (scanners, etc.).
10. Address substantial amounts of memory.
11. Permit the user to customize the environment readily.
12. Offer enough speed to permit intensive processing.

Multitasking is the key to usability, according to Weissman. Human activities and the world in which they take place are inherently multitasking operations. The single-threaded paradigm of personal computing as it has become familiar to most of us is not adequate for efficient and productive work. (1) In that context, the scholar may need, for example, to recheck a reference while a document is being prepared. Multitasking permits the microcomputer to connect to the library or other bibliographic source, retrieve the needed reference, and insert it directly into the document, all without ever closing down the word processing software or writing either the document or the reference to disk storage. Both the communications session and the document remain open and accessible on the screen, in a situation analogous to opening a reference book on the desk while a document is in the typewriter. Direct comparison and transfer of information between the two applications is a much more natural and human operation than the more typical microcomputer sequence of steps in which the communication session would be captured to a file, the file edited, and finally inserted into the main document. Multitasking environments are readily available today. They range from relatively inexpensive systems such as Amiga OS or Microsoft Windows to high-powered and costly environments like Unix or VMS. It is from the growing availability of powerful and multifaceted environments that the scholar's workstation will arise. Of particular interest to librarians and designers of library technology is the essential connection between scholarly research of all sorts and bibliographic information. Whatever form a scholar's workstation may take, it requires a reliable and flexible communication link that permits retrieval and manipulation of bibliographic data from external suppliers. Simple communication software, though it has served until now, does not realize the full potential of modern technology.
The user must still capture an entire session with the bibliographic source and later select and manually separate the needed data from the "chaff" inserted by the supplier's software to permit user communication. A further obstacle is posed by the fact that virtually every bibliographic source has a unique command language, and all are tied to textual commands that the user must remember and type in through a keyboard. Although a common command language has been designed and standardized, there has not yet been a concerted movement toward its implementation by data suppliers. …
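
Z39.50's appeal for the workstation scenario is precisely that it replaces those idiosyncratic command languages with a fixed request/response sequence. Below is a schematic sketch in Python; the class is an illustrative stand-in rather than a real protocol implementation, though the init/search/present sequence, named result sets, and port 210 are part of the standard:

```python
class Z3950Session:
    """Caricature of the client side of a Z39.50 session."""

    def initialize(self, host: str, port: int = 210) -> None:
        # InitRequest/InitResponse negotiate versions and options.
        print(f"InitRequest -> {host}:{port}")

    def search(self, query: str, result_set: str = "default") -> int:
        # The client sends a structured query; the server builds a
        # named result set and returns only the hit count.
        print(f"SearchRequest: {query!r} -> result set {result_set!r}")
        return 42  # hypothetical hit count

    def present(self, result_set: str, start: int, count: int,
                syntax: str = "USMARC") -> None:
        # Records are fetched separately, in a requested record syntax,
        # so the workstation can parse USMARC directly into a document.
        print(f"PresentRequest: {count} records from #{start} as {syntax}")

session = Z3950Session()
session.initialize("library.example.edu")
hits = session.search('title = "information retrieval"')
session.present("default", start=1, count=min(hits, 10))
```

Because the hit count comes back before any records are transferred, the client can decide how much to retrieve, which suits the document-in-progress scenario described above.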

Journal Article
TL;DR: The findings of this study will be of interest to library technical services or database maintenance staff, to managers who need to make cost estimates or plan work flows, and to systems staff and vendors who provide products and services based on the LC authority files.
Abstract: This paper presents new findings on how the Library of Congress (LC) authority files change over time. With the refinement of vendor services for one-time automated authority processing and local system authority control modules, there has been increasing interest in methods for keeping local system files up-to-date following an initial, costly database preparation. (1,2) However, there has been no research published on how the LC authority files change and how the changes might impact local databases. The authors investigated authority record transactions issued by LC to provide reliable data on:

1. The number and percentages of new, changed, and deleted name and subject authority records being issued by LC;
2. The percentage of changes affecting the authorized heading (1xx) field;
3. The daily rates of change of (a) heading fields and (b) all fields; and
4. The rate of occurrence of multiple changes to the same record over a thirty-day period.

Because the data can be used to build a model of how LC headings in local system databases age, the findings of this study will be of interest to library technical services or database maintenance staff, to managers who need to make cost estimates or plan work flows, and to systems staff and vendors who provide products and services based on the LC authority files. METHODOLOGY The authors based the analyses on updates to the LC Name Authority File (NAF) and LC Subject Authority File (SAF) that LC issued over thirty production days in spring 1991. The updates consisted of new authority records, changes to existing records, and deletions. OCLC receives updates to the LC NAF from LC daily via the Linked Systems Project (LSP) Authorities Implementation; updates to the LC SAF are received weekly on tape. The initial data file included all new records, changes, and delete transactions on NAF and SAF records over the thirty-day period. However, the only analyses of new records and delete transactions were simple frequency counts. The study concentrated primarily on change transactions--that is, changes to already existing LC authority records in the NAF and SAF. As a first step, a program removed Change Message Records (CMRs)--temporary records that are used by LC to alert catalogers that a change to a name authority record is in progress--from the data file of change transactions. Easily identified, CMRs contain code "b" in a fixed field element (008/31, Record update in process). The authors chose to exclude CMRs from the study because they exist only in LSP systems and because, even if CMRs were generally available, they are irrelevant for database maintenance. After the removal of CMRs, software developed by the OCLC Office of Research created pre- and postimage authority records for each change transaction. The pre-image record stored the record as it was prior to the change, and the postimage record stored the changed record. Next, for each pre- and postimage pair, comparison software created field change records for every field added or modified. The field change records were then input to several programs for analysis and printing. Because changes to authorized heading (1xx) fields have the greatest impact on bibliographic databases, the authors examined them extensively. The software selected the sample of heading changes by outputting change records whose field tags began with "1," the digit used for all authorized heading fields in authority records.
The last analysis was a longitudinal study, which was accomplished by separating out pre- and postimage records that reappeared two or more times during the thirty-day period. The comparison software then analyzed the changes in the multiple-occurring record file. FINDINGS New Records, Changes, and Deletions Table 1 gives counts and percentages of new records, changes, and deletions of name and subject authority record updates over the thirty production days. …
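
The pre-/postimage comparison step lends itself to a compact illustration. A sketch in Python follows; records are reduced to {tag: value} maps and the field contents are invented:

```python
def field_changes(pre: dict, post: dict):
    """Emit a change record for every field added, deleted, or
    modified between the pre- and postimage of an authority record."""
    for tag in sorted(set(pre) | set(post)):
        if pre.get(tag) != post.get(tag):
            yield {"tag": tag, "before": pre.get(tag), "after": post.get(tag)}

pre_image = {"008": "...b...", "100": "Smith, John, 1901-1985"}
post_image = {"008": "...a...", "100": "Smith, John, 1901-1984"}

for change in field_changes(pre_image, post_image):
    # As in the study, tags beginning with '1' are heading changes,
    # the kind with the greatest impact on local databases.
    flag = "HEADING " if change["tag"].startswith("1") else ""
    print(flag + str(change))
```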

Journal Article
TL;DR: A two-year project to improve the quality of a major bibliographic database resulted in corrections to 2.1 million data fields and removal of 8.2 percent of the records.
Abstract: A two-year project to improve the quality of a major bibliographic database resulted in corrections to 2.1 million data fields and removal of 8.2 percent of the records. Queries now retrieve 20 percent more hits than the same queries did before the project began, despite removal of duplicate records. The literature of duplicate removal is discussed, with emphasis on the trade-offs between human and computer methods. INTRODUCTION Description of Database The Conservation Information Network (Network) was created by an international collaborative effort managed by the Getty Conservation Institute, an entity of the J. Paul Getty Trust. The Network consists of an electronic messaging service and access to the following three databases: * The bibliographic database (BCIN), consisting of references to international conservation literature; all records contain abstracts * The materials database, containing records on products relevant to conservation practice * The product/supplier directory, containing names and addresses of manufacturers and suppliers of materials used in conservation The institutions contributing data records are the following: * Canadian Conservation Institute (CCI) * Smithsonian Institution's Conservation Analytical Laboratory (CAL) * Getty Conservation Institute (GCI) * International Centre for the Study of the Preservation and Restoration of Cultural Property (ICCROM) * International Council on Monuments and Sites (ICOMOS) * International Council of Museums (ICOM) Approximately four hundred institutions around the world use the Network regularly to improve their conservation of cultural property, especially art, archaeological sites, buildings, and museum collections. The Network is resident on a Control Data Corporation (CDC) mainframe managed by the Canadian Heritage Information Network (CHIN), a government installation in Ottawa. BCIN contained about 140,000 bibliographic records when this project began. Because BCIN was initially formed through the machine conversion of diverse files from the participants, numerous anomalies, errors, and duplicate records resulted. Furthermore, differences in cataloging standards between countries and over time led to variations in style (capitalization, punctuation, etc.). These factors contributed to the need for cleanup and deduplication. BCIN contains all of the normal data used for identifying and describing bibliographic records, except that LCCNs are absent, and ISBNs, ISSNs, and CODENs are rare. BCIN is not in a MARC format but is stored using Information Dimensions' BASIS database management system. Maximum possible record length is 15,000 bytes, although the longest record is 5,254 bytes. The shortest is 82 bytes; the mean length is 973 bytes. Purpose of Project The purpose of the cleanup and deduplication project was threefold:

1. To locate and correct data errors automatically, using computer programs
2. To flag records with data errors that the programs could detect but could not correct
3. To identify for human review records likely to be duplicates

During the summer of 1989, the author conducted a study of how these goals could be accomplished. Programs to implement the findings of the feasibility study were written during the winter of 1990 and were run against the database during April 1990-March 1991.
Basic Procedure An early decision was made to perform as much of the cleanup and deduplication as possible on PCs, with the assumption that this would be more expeditious than developing programs to run on the CDC mainframe in Ottawa. In fact, it turned out that the entire project was done on PCs. However, the time required and the disk space needed for work files would have been too great to process the 140-megabyte database in one batch. …
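The key-based pass at the heart of such a procedure is easy to sketch. Below is a minimal illustration of duplicate-candidate detection of the general kind the article describes, not the project's actual programs: the normalization rules and the author/title/date match key are illustrative assumptions, and a real run would partition the file into batches small enough for PC-class disks.

```python
import re
from collections import defaultdict

def match_key(record):
    """Build a normalized match key from hypothetical record fields.

    Lowercasing and stripping punctuation paper over the style
    variations (capitalization, punctuation) noted in the article."""
    def norm(s):
        return re.sub(r'[^a-z0-9 ]', '', (s or '').lower()).strip()
    title_words = norm(record.get('title', '')).split()[:8]  # truncate long titles
    return (norm(record.get('author', '')),
            ' '.join(title_words),
            record.get('date', ''))

def duplicate_candidates(records):
    """Group records sharing a match key; groups of 2+ go to human review."""
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec['id'])
    return [ids for ids in groups.values() if len(ids) > 1]
```

Keeping the final decision with a human reviewer reflects the trade-off the article emphasizes: the computer proposes candidate groups cheaply, and people confirm or reject them.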

Journal Article
TL;DR: The extent to which assigned subject headings in a large bibliographic database are subdivided by topical subdivisions and the source of those subdivisions in printed and online cataloging tools is described.
Abstract: This paper recognizes the limitations of the existing file of Library of Congress subject authority records for subject heading assignment and validation. It makes recommendations for a new machine-readable file of authority records for topical subdivisions and for enhancements to the existing subject authority file. The recommended changes would enable online systems to assist in subject heading formulation and verify, with limited assistance by human intermediaries, the individual components of subdivided headings. A study of subdivided subject headings in a large bibliographic database forms the basis of the recommendations. No comprehensive list of topical subdivisions is yet available in machine-readable form. The machine-readable Library of Congress Subject Headings (LCSH-mr) contain a few records for subdivisions when subdivisions are the same as main headings or see references. LCSH-mr also contain subdivided headings, but the subdivisions in these headings are only authorized with the particular main headings to which they are appended. Generally, catalogers consult the printed publication Subject Cataloging Manual: Subject Headings (SCM:SH) to find appropriate subdivisions to append to subject headings. (1) The availability of machine-readable records for topical subdivisions would enable catalogers to "cut" subdivisions from authority records and "paste" them into bibliographic records. In online cataloging systems, such a capability would reduce typographical errors and minimize the assignment of unauthorized subdivisions. It would not, however, enable systems to determine automatically whether the individual components of subdivided headings are correctly formulated and authorized with the particular main heading. Additional information needs to be incorporated into machine-readable subdivision records to enable systems to perform automatic verification of subdivided subject headings. The purpose of this paper is to demonstrate the need for machine-readable authority records for topical subdivisions to improve the quality and accuracy of subdivision assignment. It describes the extent to which assigned subject headings in a large bibliographic database are subdivided by topical subdivisions and the source of those subdivisions in printed and online cataloging tools. It makes recommendations for machine-readable authority records for subdivisions and for enhancements to existing files of subject authority records. Such enhancements would enable online systems to verify automatically whether the individual components of subdivided headings are correctly formulated and authorized with the particular main heading. PREVIOUS CALLS FOR A SUBDIVISIONS FILE The most widely used subject authority file is the LCSH-mr. Since early 1986, the Library of Congress' Cataloging Distribution Service (CDS) has made LCSH-mr available to subscribers in the form of a cumulative master tape and a weekly update service. Only one-third of the subject headings in this file are subdivided. (2) In contrast, about two-thirds of the assigned subject headings in bibliographic databases are subdivided. (3,4) The existing subject authority file is of limited use for assignment and validation of subject headings because of the many options available to catalogers for adding subdivisions to the headings printed in LCSH. (5) Since the early 1980s, the library community has called for machine-readable authority files to aid in subject heading assignment and validation.
Underlying such calls is indecision about the form of these files. Should these files consist of subdivision records or unique strings of subdivided headings? Such indecision is evident in the recommendation of an ALA subcommittee that encourages the Library of Congress (LC) to conduct research to determine whether separate authority records should be created for every unique heading or whether "separate files of authorized free-floating subject subdivisions would suffice. …
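To make the proposed verification concrete, here is a minimal sketch of how a system might validate the individual components of a subdivided heading, assuming a simplified model in which main headings and free-floating subdivisions live in separate files and applicability is recorded as a category code. The data structures and category scheme are illustrative assumptions, not LC's actual authority records.

```python
# Illustrative authority data: main headings with usage categories,
# and free-floating subdivisions tagged with the categories of main
# heading they may follow. (Hypothetical entries, not actual LCSH.)
MAIN_HEADINGS = {
    'Libraries': {'topical'},
    'Shakespeare, William, 1564-1616': {'personal-name'},
}
SUBDIVISIONS = {
    'Automation': {'topical'},
    'Criticism and interpretation': {'personal-name'},
}

def validate_heading(heading):
    """Split 'Main--Sub1--Sub2' and verify each component.

    Returns a list of problems; an empty list means the heading
    passes this (simplified) machine check."""
    parts = heading.split('--')
    main, subs = parts[0], parts[1:]
    categories = MAIN_HEADINGS.get(main)
    if categories is None:
        return [f'main heading not in authority file: {main}']
    problems = []
    for sub in subs:
        allowed = SUBDIVISIONS.get(sub)
        if allowed is None:
            problems.append(f'subdivision not in authority file: {sub}')
        elif not (allowed & categories):
            problems.append(f'subdivision not authorized after {main}: {sub}')
    return problems

# e.g. validate_heading('Libraries--Automation') -> []
```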

Journal Article
TL;DR: The paper describes an experiment currently being conducted at the Library of Congress to create USMARC classification records and use a classification database in classifying materials in the social sciences.
Abstract: This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards. It provides a summary of the fields in the format and considers the prospects for its implementation. The paper describes an experiment currently being conducted at the Library of Congress to create USMARC classification records and use a classification database in classifying materials in the social sciences. The Library of Congress recently completed the development of a machine-readable format for classification data to allow for the communication of classification records between systems and to provide a standard for the storage of classification data in the computer. The USMARC Format for Classification Data joins the family of machine-readable cataloging (MARC) formats: bibliographic, authority, and holdings formats. Implementation poses great challenges for institutions, particularly for those responsible for the maintenance of library classification schemes. POTENTIAL USES FOR ONLINE CLASSIFICATION Online classification data have many potential uses for information access. They may provide the authority for classification numbers, terms, and shelflist information; they may be used for printing and maintaining a classification scheme; and they may enhance subject retrieval, assist the classifier, facilitate maintenance tasks for classification numbers in bibliographic records, and provide the basis for an online shelflist. Authority Control for Classification Data Online classification data may provide authority control for the classification number and caption (a heading that corresponds to a classification number or numbers and describes the subject covered). An authoritative file of classification records may be used by the classifier to assign classification numbers to bibliographic records. It may also provide a system with the mechanism to validate the correct assignment of classification numbers. In addition, it can provide authority control for synthesized classification numbers, i.e., numbers that have been made more specific by adding other numbers from a table or other parts of the schedule to a base number. A synthesized classification number need not appear in the classification scheme itself, since it is built by following add instructions, which instruct the classifier to add or append other numbers from the schedule or a table to a base number. Creating a classification record for a synthesized number can provide an authority for that number and facilitate its further use. Printing and Maintenance of Classification Schedules Online classification data could be an efficient method for printing a classification schedule. However, a print program for publishing the schedules will have different system requirements than the program for online display. Specifications will need to be developed when implementing an online classification system and print program. The two major classification schemes in use in the United States, the Library of Congress Classification (LCC) and the Dewey Decimal Classification (DDC), have been developed, produced, and maintained very differently over the years. LCC is an enumerative scheme, with new classification numbers inserted where appropriate, and individual changes communicated through the publication LC Classification--Additions and Changes. DDC is hierarchical and uses number building extensively by appending numbers from other parts of the schedule onto a base number to create a more specific classification number. Revised editions of the whole scheme or of special sections have communicated changes to users; it is currently in its twentieth edition. The LCC, now consisting of forty-six separate schedules, was developed over a period of time by different people. It was designed as a shelf location and browsing device and has been maintained as such. …
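Number building of the kind described above is mechanical enough to sketch in code. The fragment below illustrates the general idea with invented notation, not actual DDC or LCC data: a synthesized number is produced by appending a table notation to a base number per an add instruction, and the result is recorded as an authority entry so that later assignments of the number can be validated rather than rebuilt.

```python
# Hypothetical schedule fragment: a base number with an add
# instruction drawing on an area table. (Invented notation.)
TABLE_AREAS = {'United States': '73', 'France': '44'}
BASE = {'number': '338.1', 'add_from': TABLE_AREAS}

synthesized_authority = {}   # synthesized number -> provenance record

def build_number(base, table_key):
    """Follow an 'add instruction': append a table notation to a base
    number and record the synthesized number as an authority entry."""
    notation = base['add_from'][table_key]
    number = f"{base['number']}{notation}"
    synthesized_authority[number] = {
        'base': base['number'],
        'added': notation,
        'caption_suffix': table_key,
    }
    return number

def is_authorized(number, enumerated=frozenset({'338.1'})):
    """A number validates if it is enumerated in the scheme or has a
    synthesized-number authority record."""
    return number in enumerated or number in synthesized_authority

# build_number(BASE, 'United States') -> '338.173'
```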

Journal Article
TL;DR: Fifteen recommendations are offered for the improvement of online catalogs within the categories of closer connections to the users' work environment, SDI, downloading, reform of LCSH, enhanced search capabilities, and linking with other bibliographies and text.
Abstract: Fifteen recommendations are offered for the improvement of online catalogs within the categories of closer connections to the users' work environment, SDI, downloading, reform of LCSH, enhanced search capabilities, and linking with other bibliographies and text. Recognition of the achievements of the first ten years of the MELVYL online system of the University of California occasions an excellent opportunity to examine what needs to be done in the next ten years of online catalog design and development. What follows is a personal selection of improvements not only for the MELVYL system but for online catalogs generally. USER ENVIRONMENT The online catalog has two quite different kinds of impact. For all who visit the library, it is a different sort of catalog, with a keyboard, screen, and a new way of searching that replaces passive trays of cards. A different impact arises with the growing proportion of library users whose work habits and working environments have changed to include routine use of computers. For these persons, the option of remote access to the library's catalog has constituted an important new extension of library service. Not since library catalogs were (infrequently) printed and distributed in book form in the nineteenth century has this kind of catalog access been possible. This second impact is selective, an enhancement of service for those whose work habits and equipment enable them to benefit. Library automation to improve library service within the library is clearly useful. However, the ability of the library to arrange for access from outside the library to materials stored electronically, such that users with suitable equipment and skills can use these resources by themselves, constitutes a much more substantial extension of library service. Because people have moved to a personal computing environment for their work, they need online access to the online catalog, online bibliographies, and any other online resources, because the effective performance of their work is based on access to electronic records. Their work is constrained if such access is not provided. For this reason library automation, hitherto based on factors internal to the library, should now be associated with and paced by the parallel shift in the "task environment" of the people the library serves. Once library users begin to work electronically, they are hindered by the lack of remote access to an online catalog and to materials in electronic form. This close coupling of library development with changes in users' working styles requires a new perspective. Any serious agenda for automation in library service should include enhancements designed to bring service to where the users are and into their personal working (and computing) environment. Our first four agenda items are in this class. Automatic SDI The Selective Dissemination of Information (SDI) is the notification of library users of selected, newly received items relevant to their personal interests. SDI is a well-established practice in small, specialized libraries but is labor intensive and, therefore, rarely found in large libraries. The idea of SDI has found new currency outside of libraries as "information filtering."
The (largely independent) developments of electronic mail and of online library catalogs can be combined to provide automatic SDI if the catalog has an "AND LOADED SINCE [date]" search limit capability (as the MELVYL system does) or can achieve a similar effect through, for example, record ID numbers in consecutive order. One feasible approach would be along the following lines. A library user's SDI profile can be expressed in terms of an online search statement (e.g., FIND SUBJECT CATALOGS, ONLINE) and identified by the user's electronic mail address (e.g., buckland@otlet.berkeley.edu). During off-peak periods, at intervals such as once a month, an SDI program would initiate each search with the AND LOADED SINCE search limit set so as to capture records added to the catalog since the previous running of the program. …
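The approach outlined above amounts to a small batch job. Here is a minimal sketch under stated assumptions: profiles pair a saved search statement with an e-mail address, `run_search` stands in for whatever catalog interface executes a query with a loaded-since limit, and `send_mail` for the mail transport; none of these are MELVYL's actual interfaces.

```python
import datetime

# Hypothetical profile store: saved search + mail address + last run.
PROFILES = [
    {'query': 'FIND SUBJECT CATALOGS, ONLINE',
     'email': 'user@example.edu',
     'last_run': datetime.date(1992, 1, 1)},
]

def run_sdi(profiles, run_search, send_mail, today=None):
    """Re-run each saved search limited to records loaded since the
    profile's last run, and mail any new hits to the profile owner."""
    today = today or datetime.date.today()
    for p in profiles:
        # Equivalent of appending 'AND LOADED SINCE <date>' to the search.
        hits = run_search(p['query'], loaded_since=p['last_run'])
        if hits:
            send_mail(to=p['email'],
                      subject=f"SDI results for: {p['query']}",
                      body='\n'.join(hits))
        p['last_run'] = today   # next month's run picks up from here
```

Running such a job off-peak, as the article suggests, keeps the recurring searches from competing with interactive catalog use.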

Journal Article
TL;DR: A project was undertaken by the staff in the Albert R. Mann Library to design and implement a technical services workstation (TSW) for cataloging and acquisitions staff, showing that microcomputers can provide significant benefits for processing staff.
Abstract: Although automation had an early impact on technical services operations, the rate of progress has been slow. Recently there has been a great deal of professional interest in the concept of a "cataloger's workstation," a customized configuration of hardware and software designed to enhance the processing environment. A project was undertaken by the staff in the Albert R. Mann Library to design and implement a technical services workstation (TSW) for cataloging and acquisitions staff. Using inexpensive and readily available products, the project showed that microcomputers can provide significant benefits for processing staff. Technical services staff have only begun to exploit the power of microcomputers, however, and new developments in computing and networking will have a significant impact on the technical services workstation of the future. At many libraries, automation came to technical services before it came to any other division. Beginning with the specification of the MARC record in 1968, followed by the creation of the bibliographic utilities and the implementation of local online cataloging and acquisitions systems, automation has had a profound impact on the conduct of technical services work. In some respects, however, the rate of progress in automated systems has been slower in technical services than in library departments that automated much later. For many staff, automation of technical services has meant primarily that cataloging records are stored and created in an online environment, using a terminal instead of a typewriter. Recently, the ultimate promise of the benefits from automation for technical services has been embodied in the concept of the "cataloger's workstation," a customized configuration of hardware and software designed to vastly simplify the life of original and copy cataloger alike. It has partial parallels in other areas of library operations in which the microcomputer has been used to soften or completely replace rigid mainframe environments. Examples include pre- and postprocessing software for online searching and CD-ROM search stations. Unfortunately, much of the promise of the cataloger's workstation remains undelivered. Although general-purpose microcomputers are increasingly replacing dedicated terminals on technical services desks, staff continue to be underserved by the power and flexibility of the microcomputer. This is particularly true in large research libraries that still rely heavily on mainframe-based online catalogs such as NOTIS and on the bibliographic utilities. However, it is not necessary for the cataloger's workstation to emerge fully formed from the laboratory before technical services staff can enjoy more of the benefits that microcomputers have brought to other departments. Microcomputers can offer relief from some of the more tedious and burdensome mechanical aspects of technical services work right now, using inexpensive and primarily off-the-shelf products. What follows is a description of the background and rationale for the development of Mann Library's technical services workstation and the details of the microcomputer-based enhancements provided to staff. A concluding section discusses the potential application of other existing and emerging technologies to simplify further the mechanical and intellectual aspects of technical services operations.
BACKGROUND Mann Library is the land-grant library of New York, serving the colleges of Agriculture and Life Sciences, and Human Ecology, as well as the Divisions of Biological Sciences and Nutritional Sciences at Cornell University. The project described here is a joint effort of the Information Technology Section, consisting of 6.5 FTE staff providing systems development and technical support for staff and patrons, and the Technical Services Division, consisting of 16.5 FTE staff, who carry out acquisitions, cataloging, and serials tasks. Those technical services staff who have worked at Mann since the early 1970s have witnessed and weathered four major shifts in the automated environment. …


Journal Article
TL;DR: Background information on Library of Congress preliminary cataloging for monographs into the online union catalog is presented and the records' impact on work flow in the cataloging department of a medium-sized research library is described and the results of a survey that queried ARL/OCLC libraries on the use of minimal-level/preliminary cataloging records are reported.
Abstract: OCLC's decision to load Library of Congress preliminary cataloging for monographs into the online union catalog resulted in the addition of a considerable number of these records to the database over a fifteen-month period before the project was suspended in March 1991. This paper presents background information on LC's decision to include these records as part of its tape-distribution service. It describes the records' impact on work flow in the cataloging department of a medium-sized research library and also reports the results of a survey that queried ARL/OCLC libraries on the use of minimal-level/preliminary cataloging records. OCLC users were notified in February 1990 (via a logon message) that the previous month the Library of Congress (LC) had begun to include Level 5 monographic records as part of its tape-distribution service. Since it had little information as to their nature, OCLC initially processed these as "normal records" until members complained that some of them had replaced member copy. More than 900 records were affected out of the first 6,800 loaded. OCLC quickly revised its tape-load procedures so that a Level 5 record was added only if matching member copy was not in the database; if a match occurred, the Library of Congress' "DLC" holding symbol was added to the existing record. In a network newsletter issued shortly thereafter, OCLC acknowledged member complaints and warned that "no assumptions can be made about whether the [Level 5] record is correct . . . no authority work has been done, nor subject analysis provided."[1] These brief records had no call numbers, no added entries, and no notes; they occasionally lacked series statements, and some contained typographic errors (see figure 1). The descriptive cataloging elements for Level 5 monographic records are even briefer than those required by National Bibliographic Record -- Books for standard minimal-level cataloging. They include only the following data:
1xx: first personal name on the title page
245: title and statement of responsibility in full
250: full edition statement
260: first place and publisher, one date except for multipart items
300: simplified description
4xx: series in full as it appears on the piece, whether traced or not
5xx: acquisitions data
020: first or most appropriate ISBN
010: LCCN supplied
050: IN PROCESS note supplied
Fixed fields: language, country of publication, priority[2]
Responding to those who might be concerned about the prospect of using such bare-bones records for copy cataloging, OCLC promised to monitor the situation and to consider "additional changes to record processing." Meanwhile, members had the option of upgrading Level 5 copy to K- or I-level standards. LC's plan to share information about its recent acquisitions via its tape-distribution service dated back a number of years. The Technical Services Directors of Large Research Libraries (TSDLRL), which met at the ALA Annual Conference in Dallas in 1984, reacted favorably to LC's proposal to distribute in-process data.[3] This proposal came at a time when some research libraries also were beginning to consider standardized minimal-level record creation as an effective means of attacking cataloging backlogs and promoting resource sharing.[4] A 1985 report prepared for the Association of Research Libraries (ARL) Committee on Bibliographic Control explicitly recommended inclusion of LC in-process data in the national utilities.
The expectation was that this information would decrease duplicate cataloging efforts by indicating the relative priority given to a work by LC. The report's authors also believed that the appearance of in-process data would allow libraries to "predict when LC copy may be forthcoming."[5] In the late 1980s LC reevaluated its methods for assigning cataloging priority levels to incoming materials (including monographs, serials, and microforms). …
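The revised tape-load rule described above (add a Level 5 record only when no member copy matches; otherwise just set the "DLC" holding symbol on the existing record) reduces to a simple branch. A minimal sketch follows, assuming a hypothetical `find_matching_record` matcher and dictionary-shaped records rather than OCLC's actual structures:

```python
def load_level5_record(incoming, database, find_matching_record):
    """Apply the revised tape-load rule for an LC Level 5 record.

    `find_matching_record` encapsulates whatever match algorithm
    pairs incoming copy with existing member records."""
    match = find_matching_record(incoming, database)
    if match is None:
        # No member copy: the brief Level 5 record enters the database.
        incoming.setdefault('holdings', set()).add('DLC')
        database.append(incoming)
        return 'added'
    # Member copy exists: never replace it; just record LC's holding.
    match.setdefault('holdings', set()).add('DLC')
    return 'symbol-added'
```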

Journal Article
TL;DR: Research into users' needs for searching and displaying the original nonroman alphabets has been under way for some time, and a simple process that eliminates the need for storing both original and transliterated forms in bibliographic records is introduced.
Abstract: Most libraries in English-speaking countries own materials in nonroman scripts. Access to these documents is provided through romanization. Online catalogs often hold a large number of romanized bibliographic records. Research into users' needs for searching and displaying the original nonroman alphabets has been under way for some time. In most libraries, users are not able to search and display a bibliographic record in the script of the original document. A simple process that eliminates the need for storing both original and transliterated forms in bibliographic records is introduced. The conversions are done locally using a microcomputer. The user can search both the romanized and the original alphabets. The system is based on the fact that transliteration must necessarily be a reversible process for alphabets with a limited number of graphemes. This system would fail for some of the simplified transliteration schemes and in these cases would only work with human intervention or with the use of a spelling dictionary. The Slavic languages with the Cyrillic alphabets are used as examples throughout. BIBLIOGRAPHIC ACCESS TO NONROMAN DOCUMENTS In Anglo-American libraries, most of the cataloging of documents written in nonroman scripts has been done in romanized (transliterated or transcribed) form. As a result, there now exists in card and online catalogs a large body of items cataloged in this way. A well-known but often ignored problem is native speakers' access to those vernacular documents through the transliterated bibliographic records. Early in this century, before library automation existed, linguists and librarians investigated the applicability and usefulness of transliteration. Sommer questions the usefulness of transliteration: For whose benefits is the transliteration made? Is it primarily for the readers or for the staff? On consideration, there can be but one answer: it is for the staff, or, more generally, for those who are unable to read the original script ... As to the foreign readers, they naturally prefer the original script and derive practically no benefit from the transliteration. (1) One of the many concerns of Wellisch is the incompatibility of transliteration schemes both nationally and worldwide. Furthermore, transliteration may result in the loss of information content of the original, nonroman scripts. For those able to read the original script, transliteration then serves as a barrier. (2) An example of this is found in ongoing research done with users. A sample of fifty undergraduate Russian-language students was randomly chosen from the Germanic and Slavic Languages Department at the University of Florida during the spring semester of 1991. None of them was familiar with the Library of Congress transliteration table. The students were asked to transliterate Russian data from Cyrillic into the roman alphabet in order to be able to search for the given item on a computer. In fact, they were asked to write down how they would enter the item into the computer in the roman alphabet. The data were collected using two tests that consisted of a list of titles and proper names in Russian. Two different conditions were observed in the same sample of students. Test A: How correct are the students' searches without knowledge of the transliteration table, and what are the problems involved? Test B: How correctly do students search after instruction and practice in the library? Do they still have problems? How much did they improve?
One finding that emerged from statistical analysis of the data was that without knowledge of the Library of Congress practice of transliterating, for example, the Russian я by ia, 80 percent of the students chose ya, whereas only 7 percent were correct. (For simplicity the diacritical marks were omitted from the test. The students were not asked to include these.) When the students were asked to transliterate the Cyrillic character phonetically similar to the English "sh" (as in the Russian word shuba, fur coat), 91 percent were successful even without knowing how to transliterate. …
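The reversibility argument is easy to demonstrate in code. Below is a minimal sketch using a tiny Cyrillic-to-roman table in the spirit of the LC scheme; the table itself is an illustrative assumption (a few letters only, diacritics omitted). Because each Cyrillic grapheme maps to a distinct roman string, longest-match-first decoding recovers the original; as the comments note, dropping the diacritics that disambiguate multi-letter values reintroduces exactly the ambiguity that, per the article, requires human intervention or a spelling dictionary.

```python
# Illustrative fragment of a reversible Cyrillic->roman table.
# The real LC table uses ligatures/diacritics to keep multi-letter
# values such as 'ia' unambiguous; they are omitted here.
TO_ROMAN = {'ш': 'sh', 'у': 'u', 'б': 'b', 'а': 'a', 'я': 'ia', 'и': 'i'}
TO_CYRILLIC = {v: k for k, v in TO_ROMAN.items()}

def romanize(text):
    return ''.join(TO_ROMAN.get(ch, ch) for ch in text)

def deromanize(text):
    """Invert the table with longest-match-first decoding."""
    out, i = [], 0
    keys = sorted(TO_CYRILLIC, key=len, reverse=True)
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(TO_CYRILLIC[k])
                i += len(k)
                break
        else:
            out.append(text[i])   # pass through unmapped characters
            i += 1
    return ''.join(out)

# romanize('шуба') -> 'shuba'; deromanize('shuba') -> 'шуба'.
# Caveat: without diacritics, deromanize('ia') yields 'я' even if the
# source was 'и'+'а' -- the simplified-scheme failure mode the article
# says needs human intervention or a spelling dictionary.
```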

Journal Article
TL;DR: The Coalition for Networked Information (CNI) and the National Research and Education Network (NREN) are "two of the most exciting initiatives" Jane Ryland has been involved with in her twenty-five years of working in the information technology and higher education fields.
Abstract: In her presentation Jane Ryland reviewed the Coalition for Networked Information (CNI) and the National Research and Education Network (NREN), "two of the most exciting initiatives (she has) been involved with in her twenty-five years of working in the information technology and higher education fields." Ryland also introduced many of the challenges that must be met in order to make NREN a reality. CNI was established by CAUSE, EDUCOM, and the Association for Research Libraries (ARL) in March 1990. The mission of CNI is to "promote the creation of and access to information resources in networked environments to enrich scholarship and to enhance intellectual productivity." Ryland reported, "CNI closely relates the mission of (our) libraries and higher education institutes which support and disseminate knowledge." The National Research and Education Network is the product of the National High-Performance Computing Technology Act of 1991, which recently passed in both houses. Ryland said the intent of the legislation is to build on existing infrastructures including NSFNET, regional networks, and all the connectivity that is currently available. The network will include common carriers and private lines. NREN is meant to be a partnership among government, education, and industry. According to Ryland, the future for higher education, libraries, and information and knowledge dissemination is the network: "The convergence of computers and communication technology will make this network possible." She cautioned librarians and information specialists to examine the current structure of the library within the information industry in order to maximize the benefits of the network. She said the library industry must successfully resolve issues, like the ones introduced at this conference, before it can consider how the network will help it. Ryland stated that three of the major concerns to librarians are (1) there is too much information available, (2) information printed in published form is highly specialized, and (3) information is too expensive. She said 20 percent of the purchased materials are circulated 80 percent of the time. Eighty percent of the materials purchased are used by only two hundred to three hundred people in the world, and 60 percent of the materials purchased don't even circulate after the first few months they are acquired. "How can (we) afford to continue to buy and put in our libraries materials that are underutilized and so expensive?" she asked. Ryland said at the current rate of increase of materials, the library will be the largest building on campus and will still be unable to house all the materials it owns. She said it is unrealistic to continue at this pace and that it is hoped the network will help. She indicated the network will help reduce costs associated with procurement and storage of materials by allowing a patron access to information that is not necessarily owned or stored by the library. NREN would allow a patron to browse the database and to select from the network only the information he or she needs. Ryland proposed that electronic storage would also help in preservation of information. She said, however, that the network is not a panacea, and there are many other factors that must be examined in order for NREN to be a successful solution to our present and future needs. It is imperative to establish what the charges will be for information on the network. 
Ryland said it was the dream that CD-ROM technology would provide at a very low cost an unlimited personal library. The value of the information was not considered. Ryland noted: "The owners of the information as well as the value adders put a cost on information; and this is fair. (We) must be careful that (we) don't confuse the possible price reduction that new technology allows with the fact that there is a price to pay for having access to valuable information. …

Journal Article
TL;DR: This paper describes one library's trouble and success in creating a customized online service that could bring research material to remote locations, into the hands of medical clients too busy to visit the library, so that other libraries wanting to provide a similar service can avoid the same technical problems.
Abstract: This is the description of one library's trouble and success in creating a customized online service that could bring published medical information to remote locations for physicians, researchers, and clinicians who were often too busy to visit the library. The staff at Lister Hill Library of the Health Sciences, serving the University of Alabama Medical Center at Birmingham, wanted to establish a dial-in modem system for clients to access twenty-five years of medical literature. They tried one method that failed and then developed a successful system. Today, clients are enthusiastically using so many hours on the system that the library may consider rationing time. This paper will describe one library's trouble and success in creating a customized online service that could bring research material to remote locations, into the hands of medical clients who are too busy to visit the library. The goal is to save other libraries that may want to provide a similar service from falling into technical problems while trying to provide the leading edge of technology. THE CHALLENGE As the library that serves one of the top medical centers in the nation, Lister Hill Library (LHL) faced a challenge. In order to serve clients at the University of Alabama (UAB) Medical Center at Birmingham, LHL wanted to find a way to bring research material into the hands of clients who needed the information but could not take time away from their work to visit the library. The staff wanted to provide remote access to more than twenty-five years of medical literature indexed by the National Library of Medicine in an online database called MEDLINE, several years of the seventy-seven journal titles and twenty book titles found in the Comprehensive Core Medical Library (CCML) full-text medical journals and books, Medical and Psychological Previews, and the AIDS Knowledge Base information from San Francisco General Hospital. The library staff believed their clients needed ready access to this information through modems, the UAB fiber optics network, and computer screens in their offices and homes. The problem: which of the several technical options would provide the easiest method for clients to use and yet remain within the library's budget? The answer took time, trial, and error. However, the solution brought a customized online search service that made LHL one of the first libraries in the nation to provide remote access to this extensive database of medical information. It also brought very enthusiastic client response. CHRONOLOGY OF THE SEARCH FOR A SOLUTION Step number one was finding out if any system on the market already had what LHL wanted. Several vendors had MEDLINE, but aspects of some systems did not meet LHL's goals or the cost for equipment and personnel was prohibitive for the library's budget. Consequently, LHL staff set up their own hardware, software, and communication system. The library became one of the pioneers of a tie-in with the host computer system that provides the full complement of medical information LHL wanted, according to Pat Ryan, marketing representative from Maxwell Online. Maxwell is the prime vendor for BRS Colleague, a search retrieval system that allows access to MEDLINE, CCML, and other databases. Designing the System Lister Hill Library began with a powerful microprocessor, the IBM RT 135, that has a multiprotocol adapter card.
It provides recovery backup to the library's DYNIX automated library system and allows access through fiber optics to the campus communication networks. In addition, its network software, Systems Network Architecture, together with a 3270 emulation program, provided the best emulation for the Customer Information Control System. The library established an arrangement with the vendor to access directly the BRS computer network based in Chicago. On the basis of BRS specifications, LHL's IBM RT was defined as a 3274 control unit with sixteen available 3270 ports. …

Journal Article
TL;DR: These papers are based on presentations made at a program titled "Using the Community Information Format to Access Nonbibliographic Data" sponsored jointly by the LITA Online Catalogs Interest Group and the PLA Community Information Section Technologies Committee at the ALA Annual Conference on June 28, 1992.
Abstract: These papers are based on presentations made at a program titled "Using the Community Information Format to Access Nonbibliographic Data." The program was sponsored jointly by the LITA Online Catalogs Interest Group and the PLA Community Information Section Technologies Committee at the ALA Annual Conference in San Francisco on June 28, 1992. Marilyn R. Lutz, Section Editor. Using the Community Information Format to Create a Public Service Resource Network Marilyn Lutz, Sharon Quinn Fitzgerald, and Thomas Zantow Institutions of higher education are experiencing increased demands to extend their resources beyond the traditional campus to meet the social, economic, and technological needs of society. A persistent problem in developing a workable interface between university resources and community needs, until recently, has been the issue of access, and the barriers that flow from isolation and unfamiliarity. Electronic information technologies are breaking down these barriers and have emerged as the mechanism for networking academic expertise within the university itself and linking that expertise with society at large. Electronic networks are being designed to expand the role of the university in addressing the needs of individuals, governments, businesses, and nonprofit organizations through greater accessibility to diverse resources. An information and referral (I & R) database is pivotal to the design of a public service resource network and a key component in making nonbibliographic information accessible via the network. The University of Maine System (UMS) is no exception in this effort to improve the linkage of systemwide resources with statewide needs. The university system is a seven-campus, public university system that provides a full range of higher education services to the citizens of Maine. The university employs over 1,400 faculty and nearly 1,200 professional and administrative personnel, and offers 260 undergraduate programs and more than 70 graduate programs to over 31,000 students. The university system was established to teach, conduct research, and provide public service. In the last category, each campus has both regional and statewide responsibilities. The University of Maine, the flagship campus located in Orono (twelve miles north of Bangor), given its land grant and sea grant responsibilities and the scope of its graduate programs, has the most substantial statewide programs. Each of the other campuses, though responsible for university services used by residents from all sections of the state, offers unique programs targeted primarily to the geographic area in which it is located. The flagship campus in Orono is also headquarters for URSUS, a Digital-based automated library system running Innovative Interfaces, Inc.'s Innopac software. The library system is accessible on a statewide telecommunications network that links, over T1 facilities, the seven campus libraries, the Maine state libraries, and the Bangor Public Library (all of which contribute holdings to the union catalog), and over eighty-three outreach sites of the Community College of Maine. The network is connected to the Internet. These characteristics represent a significant pool of resources to the people and economy of Maine. All campuses of the system share public service responsibilities for linking resources to community, regional, and state needs; however, there is no formal, statewide planning, coordination, or marketing of campus-based public service activities.
As a result, important public service programs may not be accessible to all areas of the state, and the smaller campuses may require assistance in the development and implementation of public service activities. The University of Maine System defines public service to mean "organized non-credit programs and courses, services, and activities which link and extend university and community resources for the intellectual and educational benefit of individual, government, business and other organizations. …

Journal Article
TL;DR: The article's primary focus is on ways in which the growth, refinement, and development of the MELVYL system have entailed adaptive design, flexible instruction, and user tolerance for change.
Abstract: The first ten years of the MELVYL system have profoundly affected the lives of University of California librarians. The rapid growth of the system's content, complexity, and use has required frequent modifications of its interface. These changes have required the continuous involvement of librarians in advising the system's designers on new features and new databases, in instructing users, and in observing user behavior. This article traces, from a librarian's perspective, the evolution of the system from its origins as a powerful prototype online catalog to its present role as a complex of multiple databases, services, and resources. The article's primary focus is on ways in which the growth, refinement, and development of the system have entailed adaptive design, flexible instruction, and user tolerance for change. ON FIRST SEEING UCPOLUC IN THE SPRING OF 1981 Ten years ago there were still fruit orchards in Northern California's Silicon Valley, now populated only by plants, not to mention apples, of a different kind. They were fragrantly and picturesquely in bloom on the March day carloads of librarians passed through on their way to West Valley College, Saratoga, for a workshop on bibliographic instruction, innocuously entitled "Teaching the Changing Catalog." I was there looking for new ways to teach the students and faculty of the University of California at Santa Cruz how to cope with the transition from a book catalog to a microfiche catalog. I was vaguely interested in the promised presentation by one Katharina Klemperer from the UC Division of Library Automation (DLA): "Teaching the Use of a Prototype, Online User-Friendly Catalog," evidently a preview of a forthcoming attraction. Obviously this cumbersome label--the University of California Prototype Online Union Catalog--badly needed an acronym. Little did we know that the name given to the strange new system, one more mystifying but more mellifluous than the expected UCPOLUC, would become a mantra for users of UC libraries. MELVYL has evolved from noun to adjective, from catalog to system, from tool to paradigm, and many of our professional lives have evolved with it, few more symbiotically or parasitically than my own. West Valley College in March 1981 already, precociously, had its own online catalog, a glittering new medium with a modestly familiar message. Its simple menu of choices reassuringly echoed the search options of the card catalog, allowing the novice to search for books by author, title, or subject. But when Klemperer, with Mike Berger, another member of the UC DLA team, gave her presentation on teaching the new UC catalog, the audience was instantly energized, not just by persuasive pedagogy but by another kind of power. Their teaching tool was a primitive Texas Instruments terminal with no screen. (This was, after all, very early in the year that saw the birth not just of the MELVYL system but of the IBM PC.) However, we became suddenly aware of a message that fully exploited the online medium. We were astonished to discover that online technology fostered a new breed of catalog, one that allowed virtually every word in the record to be retrieved, separately or in Boolean combination, that provided two modes of searching according to the user's experience, that delivered user assistance directly to the terminal through well-written help screens, that unified the holdings of all nine UC campuses, and that potentially standardized the most important tool for bibliographic access in UC libraries. As an instruction librarian, I was utterly exhilarated by the prospect. Suddenly there was much more to learn, much more to teach, and much more with which to teach. COOPERATIVE ENTERPRISE: "NOT AN EVENT BUT A PROCESS" I was even more enthralled by the prospect of a productively cooperative enterprise between the evidently approachable designers of the MELVYL catalog and its users--professional and nonprofessional, scholarly and casual--and among the librarians of the far-flung outposts of the university. …

Journal Article
TL;DR: The present strategy reaffirms the "one University, one library" objective of the Plan for Development, 1978--1988, the ten-year plan that led to the development of the MELVYL catalog, and describes options for furthering the role of the MELVYL system in a continually evolving environment.
Abstract: Planning for the next five years of the MELVYL system is described in the context of University of California information system planning. The planning environment is outlined, from which are derived the objectives for the continued growth of the MELVYL system. The technical evolution of the MELVYL system necessary to meet the objectives is also reviewed. Envisioned in this technical evolution is the conversion of the MELVYL system to a client/server architecture that includes a graphical interface. Future plans for the MELVYL system provide a basis for tackling the problems of fragmented databases and information overload. Four initiatives to alleviate these problems are briefly described. Over the last ten years, the MELVYL system has evolved into a key element in the university's plan for access to information. The University of California's strategy for library automation stresses the need for university-wide access through the continued deployment of University of California networks, the development of campus-integrated library automation systems, and the expansion of the MELVYL system into an information utility. The present strategy reaffirms the "one University, one library" objective of the Plan for Development, 1978--1988, the ten-year plan that led to the development of the MELVYL catalog. (1) The location of information resources, once constrained by technical limitations, can now be determined by the needs of users, the quality of services, the economies of procuring the resources, and the strength of networks. The MELVYL system is only one component of the university's effort to coordinate access to scholarly information, a role it shares with the other components--campus libraries, computer centers, and departments. Future growth depends on coordination and cooperation among all components. This paper describes options for furthering the role of the MELVYL system in a continually evolving environment. First, the current environment and planning assumptions about the future are reviewed, placing the growth of the MELVYL system in the context of the overall planning for the development of university automated information systems. Next, the objectives underlying the continued development of the MELVYL system are proposed. These objectives suggest a shift of emphasis by giving the MELVYL system a more active role in mounting databases, making more effective use of network resources, and incorporating new technologies for access and the display of information. Then, the technical evolution of the MELVYL system necessary to achieve its role in the university's information structure is described. Finally, challenges beyond the next five years are suggested. CURRENT ENVIRONMENT AND PLANNING ASSUMPTIONS At the University of California, users access information resources through a mosaic of loosely connected systems. The university's strategy for access to information differentiates where information is stored from where it is accessed. Though the physical storage of information may be centralized within the MELVYL system, at the campus, or at the national level, access should be universal, limited only by user affiliation as appropriate for some restricted data such as personal files. Within this framework, the MELVYL system will continue to provide access to the monographs and periodicals holdings of the university and a broad spectrum of abstracting and indexing (A&I) databases covering the major disciplines of interest to the university community.
In addition, access may also be provided to nonbibliographic information sources, such as source material in electronic form, electronic journals, and selected scientific and image databases of general universitywide interest. The strategy for coordinated access to information resources maximizes the university's investment in existing systems by drawing together the various aspects of the present environment into a coordinated system. …

Journal Article
TL;DR: The responses of vendors of automated library systems to questions regarding their attitudes about and plans to accommodate the new format are characterized, and the capabilities of their systems for storing and providing access to machine-readable community information records are described.
Abstract: The representation of community information, or information and referral (I & R), records in machine-readable form in online electronic databases was a concern for vendors and users of automated library systems for several years prior to the development of the MARC format for community information. This article characterizes the responses of vendors of automated library systems to questions regarding their attitudes about and plans to accommodate the new format, and describes the capabilities of their systems for storing and providing access to machine-readable community information records. A brief discussion of some issues related to the use of online community information is also included. BACKGROUND When the Public Library Association (PLA) Community Information Section (CIS) Technologies Committee began in the late 1980s to consider the development of a set of common data elements for I & R records, there was little consistency in the ways libraries and other institutions dealt with online I & R. Libraries that provided outreach services or were active referral agents were probably most interested in maintaining online I & R databases, and most of those databases were probably represented on microcomputer-based systems utilizing locally developed software. Individual libraries determined the data elements that constituted online I & R records. As the use of automated library systems, especially turnkey integrated systems (those sold by vendors as a package including hardware and software), began to increase, the number of libraries interested in online I & R also began to increase. In addition to libraries and other institutions that emphasized the referral side of I & R, there were now those interested in representing and providing access to files and records of local agencies, clubs, organizations, etc.--the Rolodex files on the desk of the reference department. For a library with an automated system and its online bibliographic database accessible by staff and patrons, it was logical to attempt to convert manual I & R files to machine-readable form, even though they had to be "shoehorned" into the database record format designed for bibliographic information. Each library still decided on an individual basis which data elements would be contained in the records and which bibliographic record fields would contain the data. It was with these issues and developments in mind that the CIS Technologies Committee began to develop a set of common data elements for I & R records. With the encouragement of AVIAC, the ad hoc Automation Vendor Information Advisory Committee, the Technologies Committee submitted the set of data elements to LC's Network Development and MARC Standards Office and asked that a new MARC (Z39.2) format incorporating the data elements be developed. SYSTEM VENDORS AND I & R Automated systems and system vendors have played a continuing role in the development and use of online I & R records, but it has not been until recently that the role has been active. Representatives of automated system vendors participated in the final development of the format by the CIS Technologies Committee, and the role of AVIAC was mentioned above. As the new record format was being developed, an interest was expressed in how vendors deal with I & R functions and records, and what they thought about the new format.
The program for the PLA third national conference in March 1991 included a presentation on the new format, "New Directions for I & R in the 90s," in which the views of automated system vendors were characterized. Information for the presentation was obtained by issuing a request for information (RFI) to all system vendors. The RFI was reissued in January 1992, and the results reported in a presentation at the Illinois Library Association annual conference in March 1992. Those same results were incorporated for the June 1992 ALA presentation, on which this article is based. …