
Showing papers in "Information Technology and Libraries in 2003"


Journal Article
TL;DR: Several methods are outlined by which libraries and librarians can insert themselves into the courseware domain, with the focus exclusively on courseware used to enhance traditional classes.
Abstract: Course management systems and software (courseware) are increasingly being used to enhance traditional college courses, yet library resources and services are noticeably missing from this venue. Libraries risk being bypassed by this technology and losing relevance to students and faculty if they do not establish their presence in courseware. Librarians need to be proactive in inserting links to resources and to library assistance within the courseware domain in order to retain visibility, increase relevance with students, and strengthen relationships with faculty. ********** Courseware products such as Blackboard, WebCT, and others are increasingly being used by college and university faculty across the country to augment their traditional classroom courses. (1) According to the 2001 National Survey of Information Technology in U.S. Higher Education, nearly one out of every five college courses now makes use of courseware. Also, approximately 70 percent of private universities and 80 percent of public four-year colleges participating in the survey responded that their institution has purchased courseware. (2) Cohen notes that "Though course-management software is generally considered in connection with Web courses and distributed education, such software is actually used most often in traditional courses, to make them Web-assisted." (3) For the most part, unfortunately, academic libraries have been all too absent in the design, development, and implementation of courseware. (4) As a result, faculty do not think of integrating library resources directly into their courseware-enhanced courses. It is possible that faculty (who increasingly make course-related resources--reading and research assignments--available to students through courseware) and students (who use courseware in conjunction with the Web to search for and obtain their course-related materials) may not see the library as the first or even a relevant place to obtain the scholarly resources needed for their courses. Librarians may well find themselves and their services being ignored in a world where library services and resources are not included in the courseware domain. Academic libraries across the country are becoming increasingly aware that they must be included in the courseware domain in order to further assist faculty and students in locating and accessing appropriate library resources. (5) Long asserts that librarians "... need to think hard about what services they wish to deliver to online environments and clearly articulate how they might be accessed from courseware systems." (6) Currently, there is little consensus on how to fit the library into courseware. In this article, several methods will be outlined by which libraries and librarians can insert themselves into the courseware domain. The focus will exclusively be on courseware used to enhance traditional classes. One broadly defined method, titled Macro-Level Library Courseware Involvement (MaLLCI), entails working with the developers and programmers of courseware to integrate a generic, global library presence into the software. Another method, titled Micro-Level Library Courseware Involvement (MiLLCI), involves individual librarians teaming up with faculty as consultants to participate in developing a customized library instruction and resource component for the courseware-enhanced courses. Benefits of Courseware It is important to understand some of the tools and benefits that courseware offers to the faculty and students who use it.
This will enable a better understanding of the methods by which libraries can insert their presence and services into the courseware domain. There are many different types of course-management software systems available. However, most courseware shares a certain core set of basic features, including powerful resource sharing, communication, and assessment tools. Dabbagh, Bannan-Ritland, and Silc assert that the "use of these features can promote collaborative learning, enhance critical thinking skills, and give every student an equal opportunity to participate in classroom discussions. …

115 citations


Journal Article
TL;DR: The bibliomining process consists of determining areas of focus; identifying internal and external data sources; collecting, cleaning, and anonymizing the data into a data warehouse; selecting appropriate analysis tools; and discovering patterns through data mining and creation of reports with traditional analytical tools.
Abstract: Bibliomining, or data mining for libraries, is the application of data mining and bibliometric tools to data produced from library services. This article outlines the bibliomining process with emphasis on data warehousing issues. Methods for cleaning and anonymizing library data are presented with examples. *********** One of the challenges put forth to the library profession by Michael Buckland is to gain a better understanding of library user communities. (1) Most current library evaluation techniques focus on frequencies and aggregate measures; these statistics hide underlying patterns. Discovering these patterns is key in understanding the communities that use library services. In order to tailor services to meet the needs of different user groups, library decision-makers can use the bibliomining process to uncover patterns of data-based artifacts of use. The bibliomining process, which consists of data warehousing and data mining, will be explored in this brief article. The term bibliomining was first used by Nicholson and Stanton in discussing data mining for libraries. (2) In the research literature, most works that contain the terms library and data mining are not talking about traditional library data, but rather using library in the context of software libraries, as data mining is the application of techniques from a large library of tools. In order to make it more conducive for those concerned with data mining in a library setting to locate other works and other researchers, the term bibliomining was created. The term pays homage to bibliometrics, which is the science of pattern discovery in scientific communication. Bibliomining is the application of statistical and pattern-recognition tools to large amounts of data associated with library systems in order to aid decision-making or justify services. The bibliomining process consists of: * determining areas of focus; * identifying internal and external data sources; * collecting, cleaning, and anonymizing the data into a data warehouse; * selecting appropriate analysis tools; * discovery of patterns through data mining and creation of reports with traditional analytical tools; and * analyzing and implementing the results. The process is cyclical in nature: as patterns are discovered, more questions will be raised that will start the process again. As additional areas of the library are explored, the data warehouse will become more complete, which will make the exploration of other issues much easier. Determining Areas of Focus The first step in the bibliomining process is to determine the area of focus. This area might come from a specific problem in the library or may be a general area requiring exploration and decision-making. The first decision is whether to conduct directed or undirected data mining. (3) Directed data mining is problem-focused: there is a specific problem that drives the exploration--for example, "Budget cuts have reduced the staff time for contacting patrons about delinquent materials. Is there a way to predict the chance patrons will return material once it is one week late in order to prioritize our calling lists?" Undirected data mining is used when a library manager wants to get a better idea of a general topical area; one possible area of focus would be: "How are different departments and types of patrons using the electronic journals?" There are several dangers with undirected data mining, as this type of exploration involves the use of many tools repeatedly to locate any available patterns. 
Many data-mining techniques are based on probability, and therefore there is a small chance that each pattern found is incorrect: the more times this type of tool is used, the greater the chance of finding an invalid pattern. Another problem arises with data-mining tools that locate the most frequently occurring patterns: in a complex data set, the most frequent pattern may only occur a few times. …
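
To make the cleaning and anonymization step more concrete, the sketch below (plain Python, with invented field names rather than any real ILS export format) shows one way a circulation record could be anonymized before loading into a data warehouse: the patron identifier is replaced by a salted one-way hash so that usage patterns can still be grouped by patron without exposing identities.

```python
import hashlib

# A secret salt known only to the warehouse loader; prevents trivial
# re-identification by re-hashing known patron IDs. Hypothetical value.
SALT = "library-warehouse-salt"

def anonymize_record(record: dict) -> dict:
    """Replace the patron ID with a one-way hash and drop direct identifiers."""
    token = hashlib.sha256((SALT + record["patron_id"]).encode()).hexdigest()[:16]
    return {
        "patron_token": token,                    # stable pseudonym for pattern analysis
        "patron_type": record["patron_type"],     # e.g. undergraduate, faculty
        "item_class": record["call_number"][:2],  # keep only the broad classification
        "checkout_date": record["checkout_date"],
    }

# Example circulation record (fields are illustrative, not a real ILS export).
raw = {"patron_id": "P0012345", "patron_type": "undergraduate",
       "call_number": "QA76.9 .D343", "checkout_date": "2003-02-14"}

print(anonymize_record(raw))
```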

49 citations


Journal Article
TL;DR: HistCite is a system that generates chronological maps of subject (topical) collections resulting from searches of the Institute for Scientific Information Web of Science (WoS), or of the Science Citation Index, Social Sciences Citation Index, and Arts and Humanities Citation Index on CD-ROM; it uses a visual data-mining method based on the analysis of citation links between documents in a collection.
Abstract: HistCite[TM] is a system that generates chronological maps of subject (topical) collections resulting from searches of the Institute for Scientific Information Web of Science (WoS) or Science Citation Index, Social Sciences Citation Index, and Arts and Humanities Citation Index on CD-ROM. WoS export files are created in which all cited references for source documents are captured. These bibliographic collections are processed by HistCite, which generates chronological tables as well as historiographs that highlight the most-cited works in and outside the collection. Articles citing the 1953 primordial Watson-Crick paper on the structure of DNA will be used as a demonstration. Real-time dynamic genealogical historiographs will be shown. HistCite also includes a module for detecting and editing errors or variations in cited references. Export files of five thousand or more records are processed in minutes on a PC. Ideally the system will be used to help the searcher quickly identify the most significant work on a topic and enable the searcher to trace its year-by-year historical development. ********** The HistCite system resulted from a long-term-needs assessment of users of bibliographic databases. Librarians and users need to identify the key works on a particular subject, while scholars and editors desire rapid historical reviews of new topics. HistCite is designed to satisfy both of these requirements, whether for reference service or in writing review articles or historical introductions to new manuscripts. It uses a visual data-mining method based on the analysis of citation links between various documents in an academic library, making it appropriate as a topic for bibliomining. History: Even before the advent of the Science Citation Index (SCI), the use of citation data was discussed in the 1964 report "The Use of Citation Data in Writing the History of Science." This included a historiograph that sketched the history of DNA from Gregor Mendel in 1865 to Marshall Nirenberg in 1961, through various stages including Avery-MacLeod-McCarty in 1944 and Watson-Crick in 1953. Flow charts of the papers were created manually, based solely on the references cited in a set of core source papers identified in a book by Isaac Asimov on the genetic code. This gave rise to speculation on the potential use of citation indexes for historiography. (2) Similar maps were later created by Tony Cawkell. (3) In an information retrieval course taught at the University of Pennsylvania Moore School of Electrical Engineering, students were required to create similar topical historiographs. It was believed that these historiographs would aid in studying the contemporary history of science. Since history and bibliography were intimately linked, the term "historiobibliography" was coined. (4) A frequent topic of discussion during the DNA mapping project was the idea of writing computer programs that would create such maps directly from the electronic files of SCI. There was a possibility that this would require random access to ISI's massive files so that cited and citing documents could be retrieved in real time. In the 1960s, however, low-cost gigabyte memories were still a dream. The implementation of real-time mapping had to wait for the time when computer memories were large and cheap enough to handle retrospective files covering many decades of literature. Though the PC had not yet come along, online searches were possible in the 1970s; mapping in real time, however, was still not feasible. Only very recently, when PCs could handle the output of a completely linked large file of thousands of records, did the creation of historiographs in real time become feasible. There have been many different types of mapping exercises performed on a relatively small scale. In the past, co-citation clustering required mainframe computers and, in most cases, still does. (5) These ideas were later extended to creating small cluster maps online, as in the SciMap system developed by Small at ISI. …
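
As a rough illustration of what a historiograph is built from, the following sketch (toy data, not WoS export records) orders a small citation collection chronologically and counts how often each item is cited within the collection, which is the kind of information HistCite uses to highlight the most-cited works and to draw links between years.

```python
from collections import Counter

# Toy collection: each record has an ID, a year, and the IDs it cites.
# The data are invented purely to illustrate the idea of a historiograph.
records = [
    {"id": "A", "year": 1953, "cites": []},
    {"id": "B", "year": 1961, "cites": ["A"]},
    {"id": "C", "year": 1964, "cites": ["A", "B"]},
    {"id": "D", "year": 1970, "cites": ["A", "C"]},
]

# Local citation score: how often each paper is cited *within* the collection.
local_counts = Counter(c for r in records for c in r["cites"])

# A chronological table: papers ordered by year, with their local citation score.
for r in sorted(records, key=lambda r: r["year"]):
    print(f'{r["year"]}  {r["id"]}  cited {local_counts[r["id"]]} times in collection')

# The citation links themselves (citing -> cited) are the edges a
# historiograph would draw between years.
edges = [(r["id"], c) for r in records for c in r["cites"]]
print(edges)
```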

47 citations


Journal Article
TL;DR: The impact of the use of IT in libraries on job requirements and qualifications for catalogers is assessed by analyzing job advertisements published in C&RL News and AL over a two-year period (2000 and 2001).
Abstract: Information technology (IT), encompassing an integrated library system, computer hardware and software, CD-ROM, Internet, and other domains, including MARC 21 formats, CORC, and metadata standards (Dublin Core, TEI, XML, RDF), has produced far-reaching changes in the job functions of catalogers. Libraries are now coming up with a new set of recruiting requirements for these positions. This paper aims to review job advertisements published in American Libraries (AL) and College and Research Libraries News (C&RL NEWS) to assess the impact of the use of IT in libraries on job requirements and qualifications for catalogers. ********** Three major developments in library automation and IT have brought sweeping changes in cataloging during the last four decades. The first was the development of the MARC format by the Library of Congress (LC) in the early 1960s. It formed the basis of library automation systems and led to the creation of bibliographic utilities in the 1970s, the use of which not only freed catalogers from clerical aspects of their duties, but also increased cataloging productivity. Professional catalogers were then able to concentrate on original cataloging and even took up difficult materials, such as theses and dissertations, technical reports, and nonbook materials, which they were unable to catalog before. The effect of bibliographic utilities on cataloging became evident in job advertisements in which experience with a utility was either required or desired. Phrases like "experience with OCLC operations" and "familiarity with RLIN or similar systems" are listed as qualifications for catalogers and managers alike, revealing the impact of the new technology on practice and workflow from top to bottom. (1) The second important development, which took place in the early to mid-1980s, was the introduction of microcomputer and optical disc technologies. Bibliographic utilities and vendors of MARC records started distributing records on CD-ROM, thus allowing even smaller libraries, which could not afford expensive online access to OCLC and other utilities, to install CD-ROM-based bibliographic databases on local area networks for copy cataloging. The success of these CD-ROM databases encouraged the LC and other agencies responsible for developing and distributing various cataloging tools to issue them on CD-ROM. Catalogers found CDMARC (discontinued in 1997), CatCD, Classification Plus, Dewey for Windows, Cataloger's Desktop, and others easier to store and more up-to-date than the print versions. However, installing and using these products effectively required catalogers to have knowledge of such topics as computing, desktop applications, and network-based tools. The job advertisements, therefore, required computer skills for catalog librarians, including knowledge of PC-based applications and bibliographic utilities as well as CD-ROM experience. The emergence of Internet technologies in the 1990s, markup languages, and non-MARC standards are the third group of developments that have impacted cataloging jobs. As a result, some cataloging positions now require proficiency with computer applications (Internet, integrated library system [ILS], e-mail, and PC software packages), knowledge of markup languages (HTML, SGML, and XML), and experience or familiarity with emerging metadata schemes and tools (Dublin Core, CORC, EAD, TEI, RDF).
This paper aims to trace the impact of all these developments in library automation and IT on position titles, degree requirements, and required skills of catalogers by analyzing job advertisements published in C&RL News and AL over a two-year period (2000 and 2001). * Literature Review Several articles have appeared during the last ten years or more discussing the changing and evolving roles of catalogers and the impact of automation on job requirements and qualifications for catalogers. Furuta's study revealed that bibliographic utilities in the 1970s produced far-reaching changes in cataloging departments by allowing the bulk of the material to be processed more quickly and cost effectively by nonprofessionals. …

37 citations


Journal Article
TL;DR: While the concept of computer literacy has existed for some time, the name has certainly changed: the skills once used to define computer literacy are now called computer competency, or go by one of a host of terms that have been in use for more than two decades.
Abstract: While the concept of computer literacy has existed for some time, the name has certainly changed. Whatever the name, the concept of computer literacy still has merit. By looking at the history of the computer literacy movement for grounding, we can build a definition for the next century and affirm that learning computer basics is a good thing for library staff to do. The term computer literacy seems to have faded from library literature, but has the belief that the general populace should possess a basic computer-skill level faded as well? Have we already achieved this nebulous goal, or has the goal been redefined into something else? Are the skills we used to define computer literacy now called computer competency or possibly one of a host of terms, such as digital literacy, computer skills, Internet literacy, informatics, computer proficiency, and others that have been used for more than two decades? Whatever the name, the concept of computer literacy still has merit. By looking at the history of the computer literacy movement for grounding, we can build a definition for the next century and know that learning computer basics is a good thing for library staff to do. History of the Computer Literacy Movement People have been trying to define computer literacy for some time. As early as 1968, the National Science Foundation (NSF), at the urging of President Nixon and Congress, took a leadership role by adding the study of computers to the science curriculum of the United States. NSF held a 1980 conference that gathered computer scientists and classroom teachers to make the first attempts at defining computer literacy, as well as indicating that it was a multifaceted idea. (1) Another component in the rise of the computer literacy movement was the marketing of desktop computers to both businesses and individuals in the early 1980s. The general populace was just being introduced to the idea of owning their own computers, corresponding with the much-publicized introductions of the IBM PC and the Apple Macintosh. Time magazine even named the computer its Machine of the Year in 1982. (2) The eighties brought the computer out of laboratories and into homes, setting the stage for a new era of thinking about these machines. A brief look at the number of articles indexed under the heading "Computers--Study and Teaching," the subject heading most closely related to computer literacy in the Reader's Guide to Periodical Literature, shows a dramatic increase in the mid-1980s (see figure 1). [FIGURE 1 OMITTED] In 1984 Donald Norman said: Computer literacy is a common catch phrase, a popular slogan that whets the appetite of politicians and academics. But what does it mean? How would we produce it? Computer literacy can mean a hundred different things; there is not just a single concept involved, but a large variety of them. (3) At this time he also proposed a scheme for four levels of computer literacy. The first level consisted of mastering what Norman believed to be basic, general concepts, to which the understanding of algorithms, architecture, and databases was key. The second level required an understanding of how to use a computer and accomplish something useful with it. The third level of computer literacy was the ability to program, and the fourth level was the understanding of the science of computation, or "where the professional resides." Norman opined that everyone should achieve at least the second level of his computer literacy scale.
(4) Almost a decade later, Howard Besser noted: Anyone involved in discussions around the development of a computer literacy curriculum in the 1980s recognizes the ambiguity of the term. Courses in programming, word processing, and even in explanations of basic components (such as how to use a floppy disk) all were termed computer literacy. (5) He also made the observation that most hardware and software being used to train people would be obsolete in the future, just as Apple IIs and WordStar are obsolete now, so it is better to teach computer concepts instead of specifics. …

34 citations


Journal Article
TL;DR: In this article, the authors focus on the benefits and drawbacks of using WebCT for such a library instruction program, and the support provided to the instructors of the courses using the module.
Abstract: Rising enrollments at Oakland University (OU) have required librarians to decrease instruction time with each basic writing class in order to preserve contact with all sections. As a result, the faculty at Kresge Library developed an online instruction module to familiarize students with library research. Using WebCT course management software, the librarians are able to introduce students to basic library skills so that in-class time can be used to teach more advanced research techniques. This article focuses on the benefits and drawbacks of using WebCT for such a library instruction program, and the support provided to the instructors of the courses using the module. ********** Oakland University (OU), a public institution located in Rochester, Michigan, about thirty miles north of Detroit, has experienced steady growth for the past six years, bringing fall 2001 enrollment to 15,875 students. (1) Oakland's administration, faculty, and staff are dedicated to continually strengthening the educational experience of students. With this goal in mind, a two-day "Teaching with Technology" seminar for faculty members was held on campus in the winter 2001 semester, in great part to embolden instructors to reach beyond traditional teaching methods in the classroom. The OU administration also began promoting the use of WebCT course management software that semester, even offering financial bonuses to faculty members who developed WebCT-based instructional programs and courses over the summer of 2001. Coinciding with this push from the administration was an effort within the library to reorganize the information literacy instruction program offered to the university's freshman writing course, Rhetoric 160 (RHT160). The library faculty at OU typically provided multiday instruction to each section of RHT160, but rising enrollments had increased the number of sections of this class, making it difficult for a small library faculty to maintain that standard. In order to preserve contact with all sections, the librarians had to reduce face-to-face teaching time for each class from three hours to two hours. WebCT provided a platform for organizing the decreased amount of face-to-face time efficiently and effectively. To develop an online instruction module using WebCT, a three-person design team was created: a senior member of the library faculty would assist with content and course design; this author, then the most-junior faculty member, would manage the technical aspects, including any hand-coding, file uploading, and actual construction of the course; and the then-interim associate dean of the library would advise on the project. Academic libraries of all types have begun to implement similar online instruction programs in order to reach large-scale student populations, and some have determined that an online instruction project positions them strategically on campus. (2) The goal of the OU library design team was to create an instructional module that would both familiarize students with library services and also teach them the basics of library research before ever coming into the building for in-class instruction. This would allow the librarians time to present other types of material face-to-face in the subsequent class sessions, and also place Kresge Library among the leaders in instructional technology on campus. The WebCT module ultimately was divided into three tutorials, each of which was followed by a short quiz.
The online course also included a pretest, to be taken at the outset of the course, and an identical post-test which followed the last tutorial to measure the students' improvement. The first of the three modules was entitled Library Basics, where students learned about the library's layout, building and reference desk hours, interlibrary loans, and one-on-one research consultations. The second tutorial introduced Kresge Library's online catalog and showed students how to perform title, author, and keyword searches. …

30 citations


Journal Article
TL;DR: An architecture for distributed recommender services based on a stochastic purchase incidence model is presented, along with a strategy for generating behavior-based recommendations efficiently from anonymous session data on off-the-shelf PC systems.
Abstract: Library systems are a very promising application area for behavior-based recommender services. By utilizing lending and searching log files from online public access catalogs through data mining, customer-oriented service portals in the style of Amazon.com could easily be developed. Reductions in the search and evaluation costs of documents for readers, as well as an improvement in customer support and collection management for the librarians, are some of the possible benefits. In this article, an architecture for distributed recommender services based on a stochastic purchase incidence model is presented. Experiences with a recommender service that has been operational within the scientific library system of the Universität Karlsruhe since June 2002 are described. ********** Almost all scientific libraries feature electronic library management systems. With their online public access catalogs (OPACs), they possess all the requirements in almost the same manner as digital libraries for electronic value-added services. A very promising add-on for traditional libraries is the recommender system, the necessity for which arises from the need of scientists and students for efficient literature research, as shown by the survey of Klatt et al. (1) Due to--among other things--information overload and difficult quality assessment, information seekers are more and more incapable of compiling relevant literature from conventional database-oriented catalog systems in a time-efficient manner. Therefore, as the survey reveals, they rely heavily on peers for recommendations. Considering the tight schedule of many students, university teachers, and researchers, it is worth the effort to free up the valuable time consumed in steering each other to the standard literature of their fields, which could be done easily by behavior-based expert advice services. Moreover, in this scenario, they can also profit from the combined knowledge of all library users in contrast to the more restricted knowledge within their personal networks. Consumer acceptance and convenience of recommender systems are shown by the huge success of the broad variety of different services offered at commercial bookstore sites (such as Amazon.com). People are getting used to these services and appreciate them. So the question to ask is: Why are these services not offered on a broader scale within scientific libraries? Discussing this question with librarians and computer scientists, the following reasons were discovered: * Privacy. Librarians are very considerate of the privacy of their patrons. Transaction-level data as well as reading histories must be protected. * Budget restrictions. Public libraries in general run under tight budget restrictions. New electronic services for millions of users might require prohibitively high additional information technology (IT) investments. * Data size. The number of documents contained in many public or academic library systems is at least one order of magnitude higher than in most commercial organizations. This implies that transaction-level data is scattered across more documents. While one would expect that more data implies a better chance for finding meaningful patterns, it becomes increasingly difficult to detect these patterns due to their sparsity, and because the computational complexity of counting such association rules is exponential in the number of objects. Standard association-rule algorithms reduce the complexity by deleting all objects that do not receive sufficient support.
In a library context, the sparsity of the data, unfortunately, makes this approach not feasible. Increasing the support threshold to reduce the computational complexity will lead to pruning all meaningful but weak association patterns that may be below the support threshold, but that are still statistically significant. This article presents a strategy to overcome these obstacles with behavior-based recommendations that can be efficiently generated from anonymous session data on off-the-shelf PC systems. …
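
The following sketch illustrates the general idea of behavior-based recommendation from anonymous session data. It is a deliberately simplified stand-in for the stochastic purchase incidence model described in the article: instead of a raw support threshold, it keeps document pairs whose co-occurrence exceeds what independence would predict (a simple lift test), so that weak but meaningful patterns in sparse data are not pruned outright. The session data and threshold are invented.

```python
from collections import Counter
from itertools import combinations

# Anonymous OPAC sessions: each is just the set of documents inspected together.
# Invented data for illustration.
sessions = [
    {"doc1", "doc2"}, {"doc1", "doc2", "doc5"}, {"doc3", "doc4"},
    {"doc1", "doc2"}, {"doc2", "doc5"}, {"doc3", "doc4"}, {"doc1", "doc3"},
]

n = len(sessions)
doc_counts = Counter(d for s in sessions for d in s)
pair_counts = Counter(frozenset(p) for s in sessions for p in combinations(sorted(s), 2))

# Keep pairs that co-occur noticeably more often than independence would
# predict (lift above a threshold), even if their absolute support is small.
LIFT_THRESHOLD = 1.5
for pair, observed in pair_counts.items():
    a, b = tuple(pair)
    expected = doc_counts[a] * doc_counts[b] / n   # co-occurrences expected by chance
    lift = observed / expected
    if lift > LIFT_THRESHOLD:
        print(f"recommend {b} with {a}: observed={observed}, lift={lift:.2f}")
```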

29 citations


Journal Article
TL;DR: The article examines several reasons for the lack of portal development in libraries and concludes with a set of Web portal development guidelines for academic libraries.
Abstract: This article studies the history of Web portals widely used in business-to-business and business-to-consumer Web applications in the late 1990s. Web portals originated from Web search engines in the early 1990s and evolved through Web push technology in mid-1990s to its mature model in the late 1990s. This article also compares Web portals with other popular media, such as radios and televisions, for their audience base and content broadness. As of January 2003, only a few libraries had adopted Web portal technology despite the widespread use of my.yahoo.com-type Web portals in the business sector. The article examines several reasons for the lack of portal development in libraries and concludes with a set of Web portal development guidelines for academic libraries. Some of the pioneer library portals are also discussed, as well as the California State Government, the first government portal to offer customization and financial transactions for individuals and business. This article concludes by probing a more fundamental question about general information storage and retrieval processes. In the last several hundred years, libraries primarily built hierarchical data structures and librarians provided information service without any search engines. In the past ten years, Web business communities have primarily worked on developing fast search engines for information retrieval without paying much attention to data structure. Now with the exponential growth of data on the Web, it is time that librarians and computer engineers work together to improve both search mechanisms and data structures for a more effective and efficient information service. What is a portal? "Portal" has been the buzzword of the networked age since 1997. Portals were so popular in business-to-business (B2B) and business-to-consumers (B2C) applications that the business world borrowed an old jingle: "I'm a portal, he's a portal, she's a portal, we're a portal, wouldn't you like to be a portal, too?" Portal derives from the medieval Latin word portale, meaning "city gate." American Heritage Dictionary defines a portal as "a doorway or an entrance, or a gate, especially one that is large and imposing." New definitions for portals in the networked environment can be found on many Web sites. A synthesis of these new definitions is as follows: a Web portal is a doorway that can be customized by individual users to automatically filter information from the Web. It typically offers a search engine and links to useful pages, such as news, weather, travel, and stock quotes. A portal can also be defined as a customizable Web search engine to reflect the MY trend in current Web development. The platform for a portal Web site is a search engine, but a portal is different from a general search engine in that it can be customized by individuals for automatic, constant search for specific information, and it can deliver the results to individuals in a predefined way. A customizable search engine is unique to the user; it is different from anyone else's. The very early history of portals used by librarians can be traced back to the 1960s, when the first digital version of Index Medicus was created. (1) Some science librarians may still remember the customized weekly search in Medline for medical researchers and in INSPEC for physicists. This kind of canned search was predefined offline first by scientists and librarians together with a set of criteria. 
The canned search was performed by librarians against the weekly updated database tapes on IBM mainframes. Finally, the search result was delivered to scientists for the most recent developments in related fields. In the business community, CEOs often had various Executive Information Systems (EIS) before the Web came into existence in 1992. EIS was developed to provide top decision makers with broad, diverse content according to previously defined criteria. Both librarians' canned searches and the EIS service can be seen as human-controlled portals as they provided customized information in a timely manner through human mediation. …
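
The "canned search" described above can be pictured as stored interest profiles run against each new batch of records, with matches routed to the subscriber. The sketch below is a minimal illustration of that idea; the profiles, records, and matching rule are invented and far simpler than the Medline or INSPEC services mentioned.

```python
# Minimal sketch of a "canned search": stored interest profiles are run
# against each new batch of records and matches are routed to the subscriber.
profiles = {
    "dr_rao":   {"keywords": {"dna", "genetics"}},
    "dr_patel": {"keywords": {"superconductivity"}},
}

new_records = [
    {"title": "Advances in DNA sequencing", "keywords": {"dna", "sequencing"}},
    {"title": "High-Tc superconductivity review", "keywords": {"superconductivity"}},
]

def run_canned_searches(profiles, records):
    """Return, per subscriber, the records whose keywords overlap the profile."""
    alerts = {}
    for user, profile in profiles.items():
        hits = [r["title"] for r in records if profile["keywords"] & r["keywords"]]
        if hits:
            alerts[user] = hits
    return alerts

print(run_canned_searches(profiles, new_records))
```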

28 citations


Journal Article
TL;DR: The direct observation process typically used in library usability studies is reviewed, and an alternative method--remote observation--is introduced, along with the software tools examined.
Abstract: Observation is the cornerstone of usability testing and an important strategy in evaluating library Web sites. Traditionally, test administrators have directly observed test users as they interact with the Web site interface. Remote observation offers an alternative that may facilitate the testing process and offer additional capabilities. Usability testing during the California State University San Marcos (CSUSM) Library Web site redesign used a simple remote observation strategy to view the test user's screen on another computer removed from the test location. The library investigated Timbuktu, NetMeeting, and Camtasia as potential software tools to assist in remote observation. ********** Usability testing has become an important component of Web site development for many libraries. Libraries are user-centered organizations. They provide an entire service--reference--just to help users find information. It is important that their Web sites also meet their patrons' information needs in a user-friendly fashion. The best way to improve a library Web site's usability is to observe users interacting with it and then incorporate their feedback into the site's design. Norlin and Winters state, "the objective of usability testing is to evaluate the Web site from the user's perspective." (1) Usability testing uses a variety of methods to evaluate a Web site. Battleson, Booth, and Weintrop divide usability testing into three categories: (1) inquiry, which includes focus groups and questionnaires; (2) inspection, which includes heuristic evaluation (comparison of site elements with a list of usability design principles); and (3) formal usability testing, also known as formal observation. (2) Of the various usability testing techniques commonly used to evaluate Web interfaces, only a few, such as heuristic evaluation, solely depend on the Web developer's expertise. Most usability tests directly involve users in evaluating the interface. For example, card sorting asks users how they would organize the site; matching tests check if users can correctly associate the intended meanings with their icons; and questionnaires and focus groups solicit feedback on users' needs. (3) The best-known usability test is the formal observation of the user interacting with the product to be tested. It is the classic usability test. It is so central that the term "usability test" is often synonymous with user observation. It is so important that organizations are willing to hire full-time usability experts and build special laboratory environments to facilitate the observation process. Most usability experts value the feedback from observation more highly than that of other usability tests--so highly that they are willing to cut corners just to make sure it is done. Krug expresses it most passionately: "Testing one user is 100 percent better than testing none. Testing always works. Even the worst test with the wrong user will show you things you can do that will improve your site." (4) Libraries do not have special observation rooms or full-time experts, so they must make do with existing facilities and personnel to conduct usability observations. Doing usability testing on a budget has been a theme in the usability community since Nielsen's 1989 paper, "Usability Engineering at a Discount." (5) With the proliferation of networking technology and the advent of the Web, new ways of conducting usability observations have become possible.
Just because most libraries' usability efforts are on a budget does not mean they don't have access to powerful tools to facilitate and enhance user observations. This paper reviews the direct observation process typically used in library usability studies and introduces an alternative method--remote observation. How the California State University San Marcos (CSUSM) Library applied remote observation to usability testing is described, along with the software tools examined. …

26 citations


Journal Article
TL;DR: The history of the project, the function of each of its main application modules, and the various security roles required for administration of the application are detailed, along with similar initiatives and other activities within the library profession to streamline and automate the management of electronic resources.
Abstract: This article describes a project undertaken by the Johns Hopkins University libraries to develop a systemwide, Web-based application to facilitate the selection, procurement, implementation, and management of electronic resources and their licenses. The authors detail the history of the project, the function of each of its main application modules, and the various security roles required for administration of the application as well as note similar initiatives and other activities within the library profession to streamline and automate the management of electronic resources. ********** The Johns Hopkins University (JHU) libraries are nearing completion of a university-wide electronic resource management system. This application, the Hopkins Electronic Resource Management System (HERMES), will provide an easy and time-saving means for patrons to identify and access the electronic resources of JHU libraries, as well as facilitate the process of selecting, purchasing, and managing these resources. The scope of resources managed by HERMES will include all electronic resources to which JHU libraries provide access. * Description and Functional Requirements Early in the project, the following functional requirements were identified: * provide a full workflow and approvals process to support the selection, procurement, and implementation of e-resources; * enable dynamic generation of e-resource information for public display; * provide automatic notification to appropriate staff of changes of status and scope in e-resource ordering and licensing (including such items as when to renew and action required after a certain number of days); * provide for link management for e-resources, including the automatic updating of URLs in the backend database, on the campus proxy server, on library Web sites, and so on; * provide staff with a unified, Web-based means for viewing, updating, reporting, and administering e-resources, including custom report generation; * document and maintain information about e-resources; * be accessible to all staff and patrons in JHU libraries with appropriate restrictions for use of administrative modules; and * be interoperable with existing and future systems, including the integrated library system, the campus proxy server, and Web sites of the various campus libraries. * History of the Project In the spring of 1999, the electronic resources librarian at JHU's Milton S. Eisenhower Library approached the Web application developer and related her need for an easier way to manage the hundreds of links to both licensed and unlicensed electronic resources on the library's Web site. Her problem is probably familiar to anyone who must manage a significant number of Web pages: if a URL changes somewhere on the Web site, the administrator must then track down and update that URL wherever it appears throughout the entire site. This process was tedious and error-prone. The creation of a simple database-enabled Web application was proposed that would allow the administrator to create a bibliographic record for a particular e-resource and to subject-index the record using multiple subject headings. The public display for this application would pull data directly from the database and would dynamically generate subject-specific lists of e-resources, along with their corresponding URLs. 
And because the URL, as part of the bibliographic record, would be stored in a single place, any updates to that URL would automatically appear within each of the individual subject lists on the site. In this way, the need for locating and updating multiple instances of the same URL whenever a change to that URL occurred would be eliminated. An application was constructed and tested that accomplished this task, and a student was hired to perform data entry to populate the backend database. …
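
The single-source-URL idea behind the proposed application can be illustrated with a small relational sketch. The tables and field names below are hypothetical simplifications, not the HERMES schema; the point is that subject lists are generated dynamically from one resource table, so a single UPDATE to a URL is reflected in every list.

```python
import sqlite3

# Hypothetical, simplified schema for illustration only.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE resource (id INTEGER PRIMARY KEY, title TEXT, url TEXT);
CREATE TABLE resource_subject (resource_id INTEGER, subject TEXT);
""")
db.execute("INSERT INTO resource VALUES (1, 'PubMed', 'http://old.example.edu/pubmed')")
db.executemany("INSERT INTO resource_subject VALUES (1, ?)",
               [("Biology",), ("Medicine",)])

def subject_list(subject):
    """Dynamically generate the (title, url) pairs for one subject page."""
    return db.execute(
        "SELECT r.title, r.url FROM resource r "
        "JOIN resource_subject s ON s.resource_id = r.id WHERE s.subject = ?",
        (subject,)).fetchall()

print(subject_list("Biology"))

# Because the URL is stored once, a single UPDATE changes every subject list.
db.execute("UPDATE resource SET url = 'http://new.example.edu/pubmed' WHERE id = 1")
print(subject_list("Medicine"))
```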

16 citations


Journal Article
TL;DR: The origin and evolution of Penn's management information system (MIS) program, known as the data farm, are traced; the article addresses problems pertaining to the collection, storage, anonymization, and normalization of data, and looks at current work on a database-driven model for future MIS functions.
Abstract: For the past three years, the University of Pennsylvania (Penn) library has been building a data repository and developing computer functionality to support management information needs. This article traces the origin and evolution of Penn's management information system (MIS) program, known as the data farm. It addresses problems pertaining to the collection, storage, anonymization, and normalization of data, and looks at current work on a database-driven model for future MIS functions. ********** In recent years, the range of academic library services powered by relational databases and servers has increased substantially. Our customers access electronic journals and search article indexes on the Web. They wand barcodes to charge out books, use debit cards to copy or print documents, swipe ID badges to enter library buildings ... and the list goes on. These interactions with the library give rise to a flood of transaction data recorded by the databases and computers that drive service. The machines register when and where services are used and contain information about individuals and the resources people access. Occasionally they even capture clues about an individual's work environment; the traces of a cookie or a referring URL can provide such indicators; the location of a photocopier may offer others. The powerful tools of library service can support equally powerful tools of library assessment. In the right organizational culture, those assessment tools can inform and thus improve planning and decision-making. They can serve less benign purposes as well, absent policies and procedures designed to safeguard identities. Three years ago, motivated by the need to improve the measurement of electronic resource use, the Penn library began working with various kinds of transaction data, and in particular Web server logs, as sources of management information. Though narrowly defined in its early stage, this effort at log analysis is beginning to resemble the form and function of an MIS. The following is a brief survey of the embryonic MIS at the Penn library, an aggregation of data files and databases, form interfaces, and Web pages called the library data farm. Initial Challenges By 1998, the Penn library offered a large collection of online article indexes, full-text databases, and electronic journals through its Web site. As this quickly evolving, costly set of services expanded, frustrations mounted over the difficulty of measuring its use and impact. The lack of good management information, if only in the form of frequencies or other descriptive statistics, was viewed as a serious shortcoming, one that could hamper the library's accountability to the university's schools and impede planning and budgeting. For several years, the library staff has worked diligently to compile the few statistics that vendors provide and trace the spiral of Web activity based on Web-site visits and page counts. But these attempts at measurement have had obvious drawbacks. Vendor statistics have been meager, erratic, poorly defined, and incompatible across products. Web measures have provided information about machine load, but have revealed nothing about resource use. Together, the external and internal sources have contributed little clarity to the picture of digital information use. In addition, the compilation of these crudest of measures has proved too time consuming and labor intensive to carry out with any regularity.
Given the absence of third-party solutions or working models, any approach to the management information problem would require a lot of experimentation with local data sources and systems. The approach would have to be independent of information providers and sufficiently robust to generate useful statistics with a modicum of labor. In short, it would have to provide a means of: 1. increasing the resolution of library statistics, especially with regard to demographics and cost analysis; 2. …
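
As a minimal illustration of the kind of raw material the data farm works from, the sketch below tallies e-resource gateway requests from a few Apache-style log lines. The log lines and paths are fabricated; a real analysis would join such frequencies to demographic and cost tables as described above.

```python
import re
from collections import Counter

# A few Apache common-log-format lines (fabricated for illustration).
log_lines = [
    '130.91.x.x - - [12/Mar/2003:10:01:22 -0500] "GET /eresources/jstor HTTP/1.0" 200 512',
    '130.91.x.x - - [12/Mar/2003:10:02:10 -0500] "GET /eresources/ovid HTTP/1.0" 200 734',
    '130.91.x.x - - [12/Mar/2003:11:15:03 -0500] "GET /eresources/jstor HTTP/1.0" 200 498',
]

LOG_RE = re.compile(r'"GET (\S+) HTTP')

# Frequency of use per gateway path: the kind of descriptive statistic
# the data farm starts from before joining in other data sources.
usage = Counter()
for line in log_lines:
    m = LOG_RE.search(line)
    if m:
        usage[m.group(1)] += 1

print(usage.most_common())
```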

Journal Article
TL;DR: The social and economic issues that need to be considered to bridge this digital divide between rural and urban populations in India are discussed in order to ensure sustainable development of the country.
Abstract: A digital library is a capital- and technology-intensive project. It is a great challenge to develop digital libraries in countries like India in light of the problems of shrinking budgets; high initial and recurring expenditure; and social and economic problems of illiteracy, population growth, poor health conditions, inadequacy of resources for development programs, and weak infrastructure. While access to digital information is possible in more than two hundred cities in India, "voice telephony" is still a major medium of information transfer in villages. This paper discusses the social and economic issues that need to be considered to bridge this digital divide between rural and urban populations in order to ensure sustainable development of the country. ********** The digital revolution has altered the way society functions at global, local, and personal levels. It has led to changes in the collection, storage, processing, and transmission of information, changes that have resulted in the evolution of libraries into digital libraries. Many definitions of digital libraries have been put forth, and terms such as digital libraries, electronic libraries, and virtual libraries are being widely used. While digital libraries and electronic libraries contain digitized information along with print-based publications, and may not necessarily be networked, virtual libraries are often defined as "libraries without walls," spread across the globe through communication networks. India has a population of 1.02 billion (as of March 2001), with an average literacy rate of 65.38 percent. However, the literacy rate varies from 47.53 percent in Bihar State to 96.92 percent in Kerala State. (1) On average, at present, 18.2 percent of rural and 36.9 percent of urban populations have completed at least ten years of schooling, are able to read and write English, and hence, are capable of using information and communication technologies, including digital libraries. (2) While 36 percent of the world population has access to the Internet, 2.3 million Indian users, or a mere 0.37 percent of Internet users, accessed the Internet in 2000. (3) The majority of these users were professionals from the corporate segment (43 percent) and students (38 percent). (4) The National Association of Software and Service Companies (NASSCOM) estimates that Internet users in India would increase to 23 million by 2003. This, of course, is provided the projections are transformed into reality by extending Internet coverage from its almost urban, English-centric setting to rural and Indian-language-speaking populations; improvement in bandwidth; and penetration of the Internet through the use of personal computers and cable television. (5) Most villagers still depend upon personal interaction to share information within their community. (6) This disparity in the use of digital information can be bridged to ensure sustainable development of India by addressing the social and economic issues discussed in this paper. If these inequalities are not dealt with at their roots, then technological access and use will be limited to urban areas and the computer literates, further marginalizing the rural population. This digital divide can be narrowed through digital literacy and training, content development in local languages, development of multimedia products that even those who are illiterate can understand, and utilization of digital technology at the community level.
Hence, the objectives of a digital library in the Indian context should be: * bringing the benefits of IT-based services to people; * making available the latest information communication technologies (ICTs) to both the business and non-business populations; * creating awareness in the underserved populations about their basic value; * offering services which are beneficial to them; and * making these services and facilities self-sustaining in the long run. …

Journal Article
TL;DR: In this article, the authors analyze the queries posed to a digital library and recorded into the Z39.50 session log files, and construct communities of users with common interests, using data-mining techniques.
Abstract: The interest in the analysis of library user behavior has been increasing rapidly since the advent of digital libraries and the Internet. In this context, the authors analyze the queries posed to a digital library and recorded into the Z39.50 session log files, and construct communities of users with common interests, using data-mining techniques. One of the main concerns of this study is the construction of meaningful communities that can be used for improving information access. Analysis of the results brings to the surface some of the important properties of the task, suggesting the feasibility of a common methodology. ********** Information services aim to satisfy the needs of their users in a way that ensures precision and effectiveness. Many of these services use intelligent information retrieval and filtering techniques to personalize and customize their content to the users' interests and preferences. (1) Several information providers exploit user modeling techniques to understand and evaluate the usage of their services. The study of user behavior has become a crucial point in a number of digital library projects, and specifies a number of critical factors during the design and development process of a digital library. (2) Data mining offers powerful techniques for discovering nontrivial and useful patterns in voluminous datasets. Many such techniques have been applied to information services and especially to the Web, offering personalization and improving information access. (3) The application of such techniques in library systems comprises the bibliomining domain, which aims to upgrade almost all the decision-making processes concerning library and information management. The goal of this paper is to show how the administrator of a digital library can analyze user behavior and extract the data necessary for improving information access. In particular, the authors are interested in formulating community models, which represent patterns of usage of digital libraries and can be associated with different types of users. The authors' main concern is the association of the digital library queries. Similar queries, recorded into Z39.50 session log files, are grouped into clusters. The clusters map user community models that represent the users' demands and querying habits. Such models could be used in a query-expansion process contributing to efficient retrieval. Generally, the query analysis can be beneficial to both the digital library and its users in many ways: * Service optimization--helping the administrators reorganize the digital library content, authorities, and user interfaces, making them more suitable for different user groups; * Decision support--helping the administrator form an effective query expansion strategy for the digital library; * Personalization--helping the users identify information of interest to them by recommending similar subjects. This project is motivated by the need to improve the querying mechanisms provided by the digital library of the Hellenic National Documentation Center (NDC) (http://theses.ndc.gr), which is one of the most significant in Greece, consisting of many collections that are unique worldwide and of international interest. The digital library of NDC is targeted to a number of user groups, mainly in Greece and from a variety of scientific domains, including students, researchers, professionals, and librarians. In the following section the problem of creating user communities is described together with the methods followed.
In addition, the digital collections of NDC, their characteristics, the targeted user groups they refer to, and the functionality of the available operations by the system are discussed. In the Experimental Results section, the authors' methods are applied to two different collections of the NDC. Finally, a number of interesting issues derived from this work are presented for further research. …
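
A very small sketch of the community-building idea follows: logged queries are grouped by term overlap (Jaccard similarity), and each resulting group stands in for a community of shared interests. The queries, the similarity measure, and the threshold are illustrative choices, not the clustering method actually used by the authors.

```python
# Toy clustering of logged queries by term overlap (Jaccard similarity).
# Queries are invented; a stand-in for the data-mining tools described above.
queries = [
    "greek history byzantine", "byzantine art history",
    "protein folding simulation", "protein structure prediction",
    "byzantine architecture",
]

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

THRESHOLD = 0.2
clusters = []                      # each cluster is a list of similar queries
for q in queries:
    for cluster in clusters:
        if any(jaccard(q, other) >= THRESHOLD for other in cluster):
            cluster.append(q)
            break
    else:
        clusters.append([q])       # no similar cluster found: start a new one

# Each cluster approximates a "community" of interests among library users.
for i, cluster in enumerate(clusters, 1):
    print(f"community {i}: {cluster}")
```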

Journal Article
TL;DR: The planning and initial implementation process of the SFX server at the University of Iowa Libraries is described, and a number of phase 2 services currently in development are identified.
Abstract: In January 2002, the University of Iowa Libraries introduced its link server--linking related content from one information provider to another--using Ex Libris SFX software. Three basic services appeared on day 1 of the link server's implementation: (1) citation reference linking to full-text electronic journal articles; (2) linking to holdings in the local catalog; and (3) persistent linking to an electronic reference service. The system is now integrated with more than seventy-five licensed databases and includes links to more than 16,000 fulltext journal subscriptions. New developments beyond citation reference linking include links to Journal Citation Reports, Ulrichsweb, and interlibrary loan. This article describes the planning and initial implementation process of the SFX server. By providing library patrons with the ability to link easily from citation to full text, link servers hold the promise of improving the integration of electronic information sources and are quickly becoming a mainstream service in academic libraries throughout the country. Many libraries are now in the evaluation stage or have already selected a link server product and are moving to the implementation stage. The University of Iowa Libraries (UIL) introduced its link server, which uses Ex Libris SFX software, in January 2002. This article describes the planning and initial implementation process at UIL, and identifies a number of phase 2 services currently in development. Put briefly, link servers provide libraries with the ability to link from one electronic resource to another. In its simplest application, a link server can provide citation reference linking--the ability to link from a citation, typically in an abstract and index database, to the full-text article. While a variety of licensed databases have embedded citation reference linking, they typically assume a one-to-one relationship between source and target. In reality, multiple copies of electronic journal articles often exist, and the most appropriate copy often depends on the location or affiliation of the library user. As Beit-Arie et al. note, "The 'appropriate copy' problem, then, is essentially the issue of where and how to insert localization into the linking process." (1) Libraries are thus motivated to install and configure their own link servers to address the issue of appropriate copy for their clientele. Pre-Implementation UIL started to investigate link server technology in fall 2000. Having recently implemented Ex Libris's integrated library system, Aleph, they were aware of Ex Libris's acquisition of the SFX technology initially developed by Herbert Van de Sompel at the University of Ghent. (2) In September 2000, UIL had their first demonstration of SFX (and MetaLib, Ex Libris's federated search and library portal product). UIL closely followed the development of SFX through its beta testing and were poised to be an early implementer by the following fall. The decision to use SFX was simple. At the time of UIL's implementation, it was the only deliverable product on the market. Most link servers today, including SFX, SIRSI's OpenURL Resolver, Endeavor's LinkFinder Plus, and 1Cat, rely on OpenURL architecture. (3) SFX not only was built upon the OpenURL framework, it was actually the technology for which OpenURL was originally defined. (4) This was the deciding factor for UIL, as the development of OpenURL was seen as an example of Ex Libris's commitment to open standards and transparent technology. 
In summer 2001, UIL reached a critical point in managing their electronic resources. Information about UIL's licensed resources existed in a variety of locations, some more hidden than others. For some products, the library catalog had the most accurate and complete information. For other products, the locally developed gateway database was the most authoritative resource. In a few situations, the only access to a product was squirreled away on branch library Web pages or in other elusive places. …
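To make the citation-reference-linking idea concrete, the sketch below shows how a source database might package a citation's metadata into an OpenURL-style query string aimed at a library's link server. The resolver address, helper function, and citation values are invented for illustration; the field names follow the OpenURL 0.1 key/value conventions, not any particular vendor's implementation.

```python
# A minimal sketch, assuming a hypothetical resolver address (sfx.example.edu) and
# invented citation values, of how a source database packages metadata into an
# OpenURL-style query string. Field names follow OpenURL 0.1 key/value conventions.
from urllib.parse import urlencode

RESOLVER_BASE = "http://sfx.example.edu/sfx_local"   # hypothetical local link server

def make_openurl(citation, sid="example:database"):
    """Package one citation as an OpenURL query aimed at the library's link server."""
    fields = {
        "sid": sid,                          # identifies the source that sent the user
        "genre": "article",
        "issn": citation["issn"],
        "date": citation["year"],
        "volume": citation["volume"],
        "issue": citation["issue"],
        "spage": citation["start_page"],
        "atitle": citation["article_title"],
        "aulast": citation["author_last"],
    }
    return RESOLVER_BASE + "?" + urlencode(fields)

citation = {"issn": "0000-0000", "year": "2002", "volume": "21", "issue": "1",
            "start_page": "1", "article_title": "An example article", "author_last": "Smith"}
print(make_openurl(citation))
```

Because the resolver is operated by the library, the same packaged citation can be routed to whichever full-text copy, catalog record, or interlibrary loan form is appropriate for that library's users.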

Journal Article
TL;DR: This article describes two examples using relational databases to streamline the creation and management of active, Web-based subject bibliographies, contrasting in-house versus outsourcing approaches, an independent database versus one built from the OPAC, and open source versus proprietary software.
Abstract: This article describes two examples using relational databases to streamline the creation and management of active, Web-based subject bibliographies. Before the database approach, library staff expended considerable time and effort compiling subject Web-resource pages to guide users to high-quality resources. The process of producing subject guides was tedious, repetitive, and labor intensive, requiring librarians to become proficient at the intricate task of Web-page creation. Since identical resources, descriptions, and links frequently appeared on several different pages, there was considerable duplication of information. Wesleyan University and the Tri-College Consortium each, independently, sought to solve this problem by creating a database of resource information and a process for mapping guide pages. This report compares their different approaches, contrasting in-house versus outsourced development, an independent database versus one built from the OPAC, and open source versus proprietary software. ********** Wesleyan University Library (WUL) in Middletown, Connecticut, and the Tri-College Consortium (TCC) of Bryn Mawr, Haverford, and Swarthmore Colleges near Philadelphia faced a challenge common to many libraries in their need to create subject-specific Web pages for library users. Creation of these pages by appropriate subject specialists required that they either learn to manipulate HTML coding or use a Web composition software program. Although the latter option is easier than direct coding, it still requires mastery of a new software application for the sole purpose, in all likelihood, of producing these subject guides. The subject specialist must spend considerable time formatting pages, keying descriptive data about the resources, and troubleshooting unexpected problems with online displays. Not all librarians are equally comfortable with writing Web pages, and individual comfort levels discourage and delay both creation and timely updates of existing pages. A small college library staff is simply unable to develop and maintain Web research guides in this complicated manner. Many of the same resources, such as major reference works, indexes, journals, and meta Web sites, are duplicated on different subject pages. As the Web resources change their URLs, coverage, or other characteristics, each occurrence of the identical data in different pages needs first to be located, and then appropriately updated. Subject guide pages tend to quickly become outdated. Since several different librarians write the pages, they may even be unaware of updates needed, and of descriptions previously written by their colleagues. This factor contributes to a considerable duplication of effort. The goal of WUL and TCC was to find efficient ways for librarians to create and update subject research guides. Both institutions addressed this issue independently in 2000, unaware of the other's involvement with it. In January 2001, through a chance discussion, it was discovered that both institutions were working toward similar solutions to the same problem, although with very different approaches. * The Two Approaches The solutions employed by WUL and TCC are based on the creation of relational databases for tracking resources and building page content to enable the dynamic generation of research guides. The goal is to enable librarians, instead of writing separate and static research guides, to quickly enter or select resources and arrange them on a page through a simple staff interface.
Updates of URLs and other resource information can be made once on the database record. Since page displays are created dynamically from the database, the updates take effect immediately on all relevant pages. Outdated resources can be quickly deleted from the database and thus from the pages. It is also possible to compile electronic guides for users on demand by allowing them to search the database itself. …
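The passage above describes the core design: store each resource once and generate guide pages dynamically from a mapping table. The sketch below is a minimal illustration of that idea using SQLite; the table and column names are invented and are not the schema used by either WUL or TCC.

```python
# A minimal sketch (not either institution's actual schema) of the database-driven
# approach: resources are stored once, mapped to any number of subject guides, and
# each guide page is generated dynamically from the mapping.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE resource (id INTEGER PRIMARY KEY, title TEXT, url TEXT, description TEXT);
CREATE TABLE guide    (id INTEGER PRIMARY KEY, subject TEXT);
CREATE TABLE guide_resource (guide_id INTEGER, resource_id INTEGER, section TEXT);
""")
db.execute("INSERT INTO resource VALUES (1, 'JSTOR', 'https://www.jstor.org', 'Scholarly journal archive')")
db.execute("INSERT INTO guide VALUES (1, 'Political Science')")
db.execute("INSERT INTO guide_resource VALUES (1, 1, 'Indexes and Databases')")

def render_guide(subject):
    """Build a guide page on the fly from the mapping table."""
    rows = db.execute("""
        SELECT r.title, r.url, r.description, gr.section
        FROM guide g JOIN guide_resource gr ON gr.guide_id = g.id
                     JOIN resource r ON r.id = gr.resource_id
        WHERE g.subject = ? ORDER BY gr.section, r.title""", (subject,)).fetchall()
    lines = [f"Research Guide: {subject}"]
    for title, url, desc, section in rows:
        lines.append(f"[{section}] {title} -- {desc} ({url})")
    return "\n".join(lines)

print(render_guide("Political Science"))
```

Because the guide text is assembled at request time, correcting a URL or description once in the resource table updates every guide that references it.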

Journal Article
TL;DR: Text and media digitization at Northwestern University dates to the mid-1990s, when the library began digitizing reserve material and the Oyez Project released its first all-streaming version; this article traces that history as context for the emerging issues libraries face as digitization enters a new era.
Abstract: The Northwestern University Library has been a pioneer in text and media digitization. From early efforts primarily focused on enhancing access to reserve material to current projects involving vast quantities of streaming media, in great part these projects have been the result of close collaboration between the library and other units on campus, particularly Academic Technologies. As the depth and breadth of digitization efforts have increased, so have the technological and organizational issues. This article examines the history of digitization efforts at Northwestern University as a context for exploring the emerging issues most libraries face as digitization enters a new era. ********** Northwestern University Library was an early pioneer in text electronic reserves, and has had a fully functioning service to digitize articles and book chapters for classroom use since 1995. The library also has an active digital library program, through which unique or rare pieces from the collection are digitized for delivery to the wide world of scholars. The Siege and Commune of Paris photograph digitization project, completed in 1995, is the earliest example, and the spectacular collection of Edward Curtis's early-1900s photographs, The North American Indian, is the latest. Northwestern University was also a pioneering user of streaming media. Political Science professor Jerry Goldman, whose Oyez Project is now the authoritative site for Supreme Court oral argument audio materials, began using RealAudio when it was first introduced in the mid-1990s, and released the first all-streaming version of Oyez in January 1996. Other faculty projects soon followed, including fellow Political Science professor Ken Janda's Videopaths Through U.S. Politics. Janda's project was built around news archive footage from the Video Encyclopedia of the Twentieth Century, and was designed to give his American government class first-hand exposure to important historical events such as the Watergate scandal and Nixon resignation, struggles of the 1950s and '60s civil rights movement, and the Vietnam War. Building on the success of the Goldman and Janda projects, Northwestern secured permission in 1999 to digitize the entire Video Encyclopedia, and now serves all eighty-three hours of that important resource freely to the campus community as streamed MPEG-1. Northwestern University Information Technology (NUIT) has been an active partner with the library on many technology projects and has been instrumental in assembling the systems and infrastructure to sustain their growth from experimental to production status. One of the most visible collaborations was the offering of a faculty boot camp, Technology in Learning and Teaching (TiLT), four times a year between 1993 and 2000. The four-day TiLT program introduced faculty to the technology to build instructional tools for their courses and provided a forum in which to discuss effective uses of technology in the classroom with their colleagues, campus library and technology specialists, and outside experts. As the years passed and the Internet became ubiquitous, Web-based instructional technologies became more prevalent. As a result, the focus of TiLT shifted to course Web-site development. Eventually, however, the need for this type of training decreased due to two factors: the increasing sophistication of the development tools, which made them easier to use, and the greater sophistication of new faculty in using information technologies.
With the introduction of the Blackboard CourseInfo system, the focus of training shifted from being primarily technical to concentrating on the integration of course materials. This shift from individually crafted course Web pages to a Web-based course management system allowed faculty, librarians, and information technologists to emphasize the content of these courses rather than the mechanics of building sites. This renewed the importance of Electronic Reserve, which expanded its services to include Blackboard delivery of scanned material and links to full-text articles in databases and journals to which the library subscribes electronically. …

Journal Article
TL;DR: The authors found that, until recently, studio and survey faculty were satisfied to continue teaching using tried and true instruction methodologies that required little or no technological skill or knowledge: group lecture and critique sessions, analog slides and in-library audio listening assignments, library print collections, and term papers printed on good, old-fashioned twenty-pound white stock.
Abstract: Where students once came into higher learning equipped with pencils and protractors, paintbrushes and easels, scores and record player, today's art student arrives armed with laptop, speakers, and wireless card. Just as academic institutions must adapt and restructure instruction modules around the twenty-first-century student, so must university libraries provide new services to support studio and survey faculty as they change teaching methodologies and pedagogies. At Carnegie Mellon University Libraries, services to support technology in education include digitization workstations, creating and maintaining digital image collections, and implementing audio e-reserves. ********** At Carnegie Mellon University, the libraries acknowledge that instructional methods are becoming increasingly technological. To that end, each of the three facilities that comprise the University Libraries (Hunt Library, Engineering and Science Library, and Mellon Institute Library) has been supplying electronic databases and online resources for the campus community for some time. For the College of Fine Arts studio and survey courses, however, it became clear that in order for faculty to be successful using technology in the classroom, additional library services were going to be needed beyond changing subscription formats from print to electronic resources. Providing these new services for the art faculty are the staff in the Arts and Special Collections department in Hunt Library, whose areas of specialization and collection responsibilities support the arts disciplines. Canvas versus Computer Screen Until recently, studio and survey faculty in the College of Fine Arts at Carnegie Mellon were satisfied to continue teaching using tried and true instruction methodologies that required little or no technological skill or knowledge: group lecture and critique sessions, analog slides and in-library audio listening assignments, library print collections, and term papers printed on good, old-fashioned twenty-pound white stock. It was obvious in isolated conversations that some faculty were proud of their luddite tendencies, but when informal polling by library staff began in earnest, another side of the story began to emerge: most studio and survey faculty were simply uncertain as to how the library could help them to begin using technology for teaching, and up until this juncture, it didn't really seem to matter. Students seemed content learning in the old, precomputer ways--but were they? Students Want Digital In 2000 and 2001, Carnegie Mellon University was ranked number one by Yahoo! Internet Life magazine in its annual survey of the one hundred "most wired" colleges and universities in the United States. (1) It's no surprise then that within the College of Fine Arts, we found students have their feet planted in both old and new learning environments. Not a day goes by when you won't find students sitting cross-legged on the floor in front of some section of the library stacks, leafing through page after page of books old and new, searching for inspirations for their own work. When it comes to classroom presentations and learning styles, however, most students we talked to preferred the digital world as opposed to analog presentation. Library staff soon began to notice that students were often converting print images or sound files to digital for projects. 
Immediate access to resources also seemed a primary concern when it came to writing a paper or presenting supporting materials: time is clearly a critical factor in preparation and often the students turned first to online resources. From our observations it is clear that these students are well-equipped to use technology for gathering information and representing points of view. Over the past couple of years, faculty have begun to notice an increase in the number of students requesting the use of classroom facilities equipped with computer projection to present materials. …

Journal Article
TL;DR: The Dorothea June Grossbart Historic Costume Collection (HCC) at Wayne State University (WSU) comprises five hundred pieces of Western dress and accessories, ethnic garments, and historic textiles; this article describes the collaborative project to digitize it.
Abstract: Many libraries assist faculty in the development of digital materials for instruction, with services ranging from scanning documents for electronic course reserves to providing digital production centers for faculty use. But what types of services are best offered by librarians when the development of instructional materials takes the form of formal, more complex digitization projects? This article describes one such collaborative project, the Dorothea June Grossbart Historic Costume Collection (HCC) at Wayne State University (WSU), and examines how building this digital resource has offered new opportunities for librarians to expand their partnerships with faculty and meet shared educational goals. ********** Digitization projects are now commonplace in much of the library world. A search of the library literature and the Web will show that local digital collections, both large and small, have proliferated and are well documented. But for the Wayne State University Library System (WSULS) in Detroit, and perhaps other research and academic libraries, the collection-development answers to the question, Why digitize? must be compelling enough to compete with other expensive library and information technology initiatives. Digitization--beyond a few demonstration projects and temporary online exhibitions--is hard pressed to win such a difficult competition in an era of stagnant or shrinking budgets. Yet, there are many reasons to support the management of selected collaborative digitization projects as natural extensions of the library's existing instructional support alliances with faculty. Some libraries, including WSULS, are exploring the potential of digitization partnerships to improve teaching and learning and are providing digital imaging centers and digital media services for faculty. As Rockman asserts, digitization partnerships with faculty are opportunities "outside of the traditional teaching and learning arena," which can lead to improved library involvement and visibility. (1) Libraries that are relative newcomers to digitizing educational materials can look to the experience and expertise of a growing number of institutions and libraries that are developing digital repositories of instructional, cultural, and scholarly materials for educational purposes. Digitizing local resources improves scholarly use by helping preserve fragile artifacts and increasing access to the materials, among other benefits, but it also represents a collaborative process that requires closer, more sustained relationships with faculty than some librarians may have experienced in the past. While working together on the Dorothea June Grossbart Historic Costume Collection (HCC), for example, WSULS librarians and faculty combined their strengths to develop grant proposals, explore copyright issues, devise project goals, create databases and metadata, configure searches and interfaces, integrate the new digital images and related library materials into a course management system, and perform many other tasks over the period of one year. They are now designing evaluation tools and promotional programs together to ensure that the collection reaches its full research and educational-use potential. Project Background HCC is maintained by the Fashion Design and Merchandising Department of WSU's College of Fine, Performing, and Communication Arts (CFPCA). The collection consists of five hundred pieces of Western dress and accessories, ethnic garments, and historic textiles.
Some highlights from the collection include nineteenth- and early-twentieth-century clothing previously owned by historic Detroit figures, unique beaded garments, and various examples of designer wear. The collection was begun by former CFPCA professor Dorothea June Grossbart in 1982 after she received several donated pieces from the Chicago Historical Society, but it is now under the care of Jane Hooper and Rayneld Rolak Johnson, retired and current CFPCA professors respectively, and CFPCA faculty member Susan Widawski. …

Journal Article
TL;DR: The OpenURL syntax is designed to enable transportation of metadata and identifiers about referenced works and their context from any information resource to a local link server, allowing libraries to deliver context-sensitive linking in and across their collections.
Abstract: This paper describes the development of the OpenURL standard and how it will impact librarians and information technologists. This article is based on information provided via e-mail inquiries sent to members of the National Information Standards Organization (NISO) Committee AX responsible for producing this standard. The OpenURL syntax is designed to enable transportation of metadata and identifiers about referenced works and their context from any information resource to a local link server. This allows libraries to create locally controlled and managed link servers that enable the delivery of context-sensitive linking in and across their collections. OpenURL is a developing standard that provides a mechanism for transporting bibliographic metadata about objects between information services via the Internet. (1) The standard is based on the idea that links should lead a user to appropriate resources. Currently, Web links do not take into account the identity of the user; every user who follows a link is sent to the same Web page. When more than one institution provides access to copies of the same electronic article, the link from the citation to the full-text article should point to a copy that is available to the user. Since different users have access to different resources, the link should resolve who gets what. The link must be able to package metadata and identifiers describing the information object, and send this package to a server that resolves the link. The resolver should take into account the user's identity when resolving the metadata into specific articles. In the OpenURL framework, information resources allow for open linking by including a hook, a programmer-defined customization, along with each metadata description that they present to users. This hook presents itself in the user's browser as a clickable link called an OpenURL. (2) This article describes the development of the OpenURL standard and how it will impact librarians and information technologists. Much of the information provided here was obtained via e-mail inquiries sent to members of the National Information Standards Organization (NISO) Committee AX responsible for producing the OpenURL standard. See appendix A for the list of questions submitted to the committee members. This standards committee, formally designated NISO AX, consists of seventeen members and four observers from diverse backgrounds and workplaces (libraries, publishers, and service providers). See appendix B for complete biographical information on the respondents. History The OpenURL concept evolved from research by Herbert Van de Sompel and his team at the University of Ghent in Belgium. (3) In 1998 they began to explore the role of local link servers for libraries to facilitate context-sensitive linking between heterogeneous scholarly resources. (4) As a result of this work, the first context-sensitive link server, titled SFX, was developed. Ex Libris was one of the technology partners involved in this experimental work, and Oren Beit-Arie, Ex Libris Group vice president for research, was assigned to the project. In February 2000, Ex Libris purchased all rights to develop and market the SFX technology. (5) SFX-URL was developed as a protocol for transporting metadata from sources to the SFX server. In March 2000, Van de Sompel, Hochstenbach, and Beit-Arie began work on a general framework to enable a standardized infrastructure for open and context-sensitive linking.
This led to the creation of the first draft of the OpenURL standard, which was posted publicly in April 2000 at www.sfxit.com/openurl.html, and they subsequently published an article summarizing the OpenURL standard in D-Lib Magazine. (6) In December 2000, Van de Sompel and Beit-Arie submitted the OpenURL specifications to NISO for its official standardization. The OpenURL standard was accepted as a fast track work item. (7) NISO is a nonprofit association accredited by the American National Standards Institute (ANSI). …
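The hook-and-resolver roles described above can be illustrated with a short sketch of what a local resolver does once an OpenURL arrives: it parses the key/value metadata and chooses a target appropriate to its own users. The holdings table, URLs, and fallback behavior below are invented placeholders, not the behavior of any particular link server installation.

```python
# A schematic sketch of the resolver side: parse incoming OpenURL key/value pairs and
# choose the "appropriate copy" from a locally maintained holdings table. The holdings
# data, URLs, and fallback behavior are invented for illustration only.
from urllib.parse import urlparse, parse_qs

LOCAL_HOLDINGS = {
    # hypothetical knowledge base: ISSN -> full-text platform this library licenses
    "0000-0000": "https://fulltext.example.org/journals/0000-0000",
}

def resolve(openurl):
    """Turn a packaged citation into a link suited to this library's own users."""
    meta = {k: v[0] for k, v in parse_qs(urlparse(openurl).query).items()}
    issn = meta.get("issn", "")
    if issn in LOCAL_HOLDINGS:
        # Appropriate copy: point at the provider the library actually licenses.
        return f"{LOCAL_HOLDINGS[issn]}/vol{meta.get('volume', '')}/page{meta.get('spage', '')}"
    # No licensed copy: fall back to a catalog search or an interlibrary loan form.
    return f"https://catalog.example.edu/search?issn={issn}"

print(resolve("http://sfx.example.edu/sfx_local?genre=article&issn=0000-0000&volume=21&spage=1"))
```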

Journal Article
TL;DR: This article presents methods for testing the usefulness of bibliometric methods for the evaluation of information resources located at subject portals, illustrated with analyses targeting one of the selected institutions, Gothenburg University's Department of Political Sciences.
Abstract: This article presents methods for testing the usefulness of bibliometric methods for the evaluation of information resources located at subject portals. Two subject portals for social sciences have been selected as objects for the study: SamWebb at Gothenburg University Library in Sweden and Bisigate at the Aarhus Business School Library, Denmark. To show how to capture the local users' views and requirements in the development of portals, this article explores the results of the analyses targeting one of the selected institutions, Gothenburg University's Department of Political Sciences. The study produced various types of lists as well as maps for monitoring the research and publication pattern of the department. These reports allow exploration and visualization of the research results of the institution in a form that is easy for portal users to read and understand. The content of the lists and maps was designed to provide information about which journals are relevant for the ongoing research activities in the department, and to identify useful links to professional institutions, organizations, persons, most cited publications, and authors. The study gathered quantitative data to measure how well the information resources of the portals match the research profile of the institutions. ********** Within the frame of the current digital libraries program in the Nordic countries, several subject portals have been set up for the purpose of making the collective information resources of digital materials available to users on a nationwide basis. Besides providing access, these portals present new services in libraries and the reallocation of human resources. This study was designed to test the usefulness of bibliometric methods for the evaluation of information resources located at the subject portals, and for raising interest in portals in the local research community. SamWebb at Gothenburg University Library in Sweden and Bisigate at the Aarhus Business School Library, Denmark, both subject portals for social sciences, were selected for the study. To capture the local users' views and requirements of the portals, the analyses targeted two selected institutions: Gothenburg University's Department of Political Sciences, and Aarhus School of Business, department of organization and management. Librarians, Web managers, and a group of faculty members have been involved in the discussions and interactions at every step of the evaluation of results. This article presents, however, the results of the study related to only one department, the Department of Political Sciences at Gothenburg University. The study has been carried out at the Center for Informetric Studies, Swedish School of Library and Information Studies in Boras, and focuses on the usability of various bibliometric methods in the management of information resources in electronic libraries. Method A bibliometric approach was chosen to collect evidence leading to journals, publications, researchers, and institutions of potential interest for those using portals. The bibliometric analyses were based mainly on the internal information received from the selected departments in electronic and printed form: these consisted of lists of publications and annual reports with various categories of publications listed for 1999 to 2000. Starting with these lists, relevant authors and publications have been identified using international bibliographic databases.
The citation database, Social Science Citation Index (SSCI), File 7 in Dialog, provided the source articles and the citation analyses. To trace the publications of the departments--also outside citation databases--the analyses were extended to other core databases within the field: Sociological Abstracts, File 37, and Gale Group Business ARTS (Applied Research, Theory, and Scholarship), File 88, a scholarly business database designed to help identify current research, analysis, trends, and expert opinions in such disciplines as management, science, humanities, and social sciences. …
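The kind of report this method produces can be sketched very simply: tally which journals the department's publications cite, and rank them as candidate links for the subject portal. The records below are invented placeholders, not the study's data, and the tally stands in for the much richer citation analyses the authors ran against SSCI and the other databases.

```python
# A small illustrative sketch (invented records, not the study's method or data) of one
# kind of bibliometric report described above: tallying the journals cited by a
# department's publications to suggest titles a subject portal should link prominently.
from collections import Counter

dept_publications = [
    {"author": "Researcher A", "cited_journals": ["European Journal of Political Research",
                                                  "American Political Science Review"]},
    {"author": "Researcher B", "cited_journals": ["American Political Science Review",
                                                  "Scandinavian Political Studies"]},
]

citation_counts = Counter(j for pub in dept_publications for j in pub["cited_journals"])
for journal, count in citation_counts.most_common():
    print(f"{count:3d}  {journal}")    # the ranked list suggests candidate portal links
```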

Journal Article
TL;DR: The possibility of using Microsoft SharePoint Team Services, a team Web site solution, for easier and more centralized management of library committees is explored.
Abstract: Library committees work for the improvement and technological advancement of library services. Managing these committees is not an easy task, especially when there are subcommittees within a larger committee. Inefficient management often leads to the disorganization of information and ultimately affects the objectives of the committee. This article explores the possibility of using Microsoft SharePoint Team Services, a team Web site solution, for easier and more centralized management of library committees. Library staff work in groups to improve library services and ensure the best service for their patrons. These groups, also known as committees, have goals to realize and deadlines to meet. They are involved in regular meetings, training and discussion sessions, and many other developmental activities. Due to the heterogeneous nature of this type of committee, where members are both within and outside the organization, library committee management demands careful organization. Working as a graduate assistant to the University of Illinois at Urbana-Champaign library webmaster enabled the author to become aware of various issues related to committee management. An opportunity to work with a team Web site solution, namely Microsoft SharePoint Team Services (STS), for a class project also presented the possibility of using STS for easier committee management. This article is the result of such an investigation. Committee Management Committee management depends heavily on the activity and people involved. The three domains of committee management are as follows: * Information management--information produced from committee activities requires organization to make efficient retrieval and perusal of this information possible. This information should be readily accessible to all members of the committee irrespective of whether they are within or outside the library organization. * Time management--library staff are busy individuals irrespective of their position and tasks. Time management is critical when deciding the time and date for a meeting or a committee activity. * Communication management--smooth communication between committee members helps the committee solve problems and eliminate various hindrances. Channels of communication should always be open and conveniently accessible to all committee members. What does a committee do? How does it function? What are its activities? The following items are some of the prominent tasks committees perform: * Committee administration--one of the most important and well-elaborated features of any committee is its administrative capability. Committees normally begin by formulating their policies, rules and regulations, aims and objectives, ethics, standards, and so on. Budgeting is also perceived as another significant administrative effort on the part of committee management. * Member profiling--committee members and their contact and background information are profiled for easy communication. Most importantly, member profiling helps evaluate the members' backgrounds, so that their expertise is of benefit to the committee. * Record publishing--publishing reports and meeting minutes are a committee's two most common publishing activities. Publication can be in electronic or print form. These publications are then made accessible to the members of the committee for approval. * Record storage--information gathered during the operation of the committee is summarized and stored for future reference.
This information is usually a collection of subreports or one large report that discusses all the decisions taken during the lifespan of the committee. * Tracking and reviewing committee activities--the person or group managing the committee keeps track of the committee's development and makes necessary reviews from time to time. This helps the committee to remain focused on its goals. …


Journal Article
TL;DR: The Digital Projects Department at Northern Illinois University (NIU) Libraries, which has developed a series of multimedia Web sites dedicated to Illinois history, is now partnering with NIU's Art History Department to provide access to searchable slides of artwork via the Web.
Abstract: This article discusses the provenance of a partnership between the Digital Projects Department (DPD) at Northern Illinois University (NIU) Libraries and NIU's Art History Department that seeks to improve art education at NIU. Academic librarians and other library personnel have unique skills, which, along with providing traditional library services, should be utilized to meet instructional and educational challenges. Since DPD has a history of providing access to multimedia content via the Internet, it seemed natural to partner with the art history department to create a tool for accessing slides of artwork via the Web. ********** In an age when students and faculty underutilize library services, librarians need to better market their skills in order to remain relevant on today's campuses. Many articles routinely cite the need for library-faculty collaboration in the pursuit of this goal, but these calls generally describe programs of traditional library instruction and information literacy. (1) While these are important objectives, today's librarian can offer much more. Academic librarians in particular have a technical skill set that can be used not only for providing access to materials, but also for developing tools for instructors to be used in the classroom. (2) As an example, the Digital Projects Department (DPD) at Northern Illinois University (NIU) Libraries is currently working with the Art History Department to offer image slides via the Web that can be searched and integrated into classes. This paper reviews how this library-faculty collaboration emerged, and how all parties are working together to make this a reality. DPD Experience The DPD at Northern Illinois University Libraries (NIUL) has produced a series of multimedia Web sites dedicated to Illinois history. (3) The sites provide searchable databases of primary documents and images, historians' video and textual evaluations of important events, interactive maps that show demographic and voting information for the United States and Illinois from the years 1820-1860, and lesson plans that integrate these materials for use in the classroom. The success of the DPD comes from its collaboration with other departments on campus and with other institutions throughout the state of Illinois. DPD staff works with partner institutions to provide the technology and primary sources that make up the Web sites. The database and search scripts come from a partnership with the University of Chicago. Other partner institutions provide content, including the Newberry Library, the Illinois State Archives, and Illinois State University. In addition to working with other institutions, DPD has worked with departments on campus to develop digital resources that incorporate their unique skills and knowledge. The Communication Department, with its experience in film production, assisted DPD in creating original video and sound files. The Faculty Development Office trained project staff in Adobe Premiere and RealProducer to offer these files on project Web sites. The response of these departments showed that the university community is a supportive, collaborative environment. Within the library, many people and departments contributed to make projects successful. The systems department offered technical support. Much of the material came from Rare Books and Special Collections. Art librarian Charles Larry created graphics and assisted in the design and layout of the Web site.
The diverse skill set and material resources found within the library illustrated how all library departments can contribute to the success of the whole. The experience suggested that this type of technological collaboration with the rest of the university might be successful as well. Problem and Possible Options The opportunity for testing the hypothesis came about when a new art history professor told the art librarian about an e-reserves collection of images at her former institution, and how she would like to see that offered at NIU. …

Journal Article
TL;DR: Morris Library's Instructional Support Services provides instructors with technical advice and access to current hardware, software, and multimedia techniques to meet their teaching objectives, and is a one-stop shop for instructors who want to add technology to their courses.
Abstract: Institutions want courses that incorporate the latest instructional technology. Instructors cannot take advantage of new technology if they are unfamiliar with the tools and have limited experience with online learning pedagogies. It has been said that it takes a village to build a curriculum in the information age. Morris Library's Instructional Support Services (ISS) is such a village. ISS provides instructors with technical advice and access to current hardware, software, and multimedia techniques to meet their teaching objectives. Offering instructional designers, Web programmers, and video and graphics professionals, ISS is a one-stop shop for instructors who want to add technology to their courses. ********** Many colleges and universities want to offer courses that take advantage of the benefits of technology, yet often their faculty are unfamiliar with that technology. Having grown up with the Web, today's students expect more than a traditional lecture format from their classes. (1) Many facets of new technology can enrich course content, engage students, and encourage interaction among them. (2) Institutions must find ways to help faculty members use new technologies to create new courses and redesign existing ones. Instructors will need help using new skills, a hospitable environment for innovation, and a reliable infrastructure to support the endeavor. Few faculty members have knowledge of the pedagogical issues related to online learning. Someone has to help them decide which courses would benefit from integrating Web-based components such as online course syllabus; schedules; content; and intracourse communications, like e-mail, bulletin boards, and chat sessions. The instructional support system that best aids faculty combines an understanding of the components of sound instructional design with expertise in applying the latest instructional technology. (3) For the last five years, the national Campus Computing Project, a group made up of senior information technology (IT) officials from academic institutions, has made helping faculty integrate technology with their instruction a top priority. In many cases, instructors are offered limited opportunities to consult with IT staff, which usually focuses on issues of hardware and software. That level of assistance can work well if the instructor already has a good understanding of course-related technology or if the project is of limited scope. Instructors with limited knowledge or with larger, more complex projects will need a much higher level of support. Many resources are required to help instructors incorporate technology into their classrooms and courses, but the final number of specialists an instructor will need depends on the size and scope of each project. Examples of specialists include instructional designers, Web programmers, graphic artists, photographers, and video producers who may be found in the IT department; the campus center for teaching; and various academic departments such as art and design, photography, journalism, radio/television, and computer science. The specialists at the library help instructors by identifying resources, securing copyright permissions, and helping to create online material that will aid instruction. According to Altschuler and McClure, "to build a curriculum in the information age, it takes a village." (4) At the Morris Library of Southern Illinois University Carbondale (SIUC), that village is called Instructional Support Services (ISS). 
Library-based Instructional Support Institutions have tried many approaches to promote technology in the classroom. Some have developed incentive programs to increase the use of technology in teaching. When Johns Hopkins University determined that faculty needed help from a support staff with expertise in both technology and pedagogy, they established minigrants to encourage faculty to explore technological solutions to instructional problems. …

Journal Article
TL;DR: The Instructional and Information Support Services (IISS) division at North Seattle (Wash.) Community College brings together the college's Library, Media Services, and Distance Learning (DL) units, and the Teaching and Learning Center to support instruction campus-wide under a dean with a required MLS as mentioned in this paper.
Abstract: The Instructional and Information Support Services (IISS) division at North Seattle (Wash.) Community College brings together the college's Library, Media Services, and Distance Learning (DL) units, and the Teaching and Learning Center to support instruction campus-wide under a dean with a required MLS. With its active instructional focus, the Library is integral to the division. IISS is also the administrative home of Interdisciplinary Studies. This organizational model promotes interaction, collaboration, and innovation among disparate units that have the same overall goal of fostering teaching excellence and student success. A connection to Internet II and a campus gigabit backbone make possible a variety of advanced technological options to enhance instruction. ********** One of the ways North Seattle (Wash.) Community College strives to achieve its mission of being "a supportive, responsive teaching and learning environment distinguished by its commitment to openness, innovation, and excellence in education" is through a newly structured division. (1) The Instructional and Information Support Services (IISS) division brings together administratively a variety of units with campus-wide instructional support responsibilities. Included are the college's Library, Media Services, and Distance Learning (DL) units, and the Teaching and Learning Center (TLC), which provides professional development for faculty and staff. (2) The managers of these units and all the librarians report to the IISS dean, who reports to the vice president for instruction. During the 2002 reorganization of divisions, it was hoped that the campus-wide, subject-neutral focus of IISS would also provide fertile ground for growth of interdisciplinary studies programs at the college, and it became the administrative home of Integrated Studies, U.S. Cultures, and Global Studies programs. This is also a good fit because instructors involved with the Integrated Studies program, a National Learning Communities Project, are very actively engaged with other elements of the division including the library, DL, and TLC. The Integrated Studies program has pioneered the use of online components to enhance or fully deliver team-taught courses. Participating faculty have received accolades from colleagues at other institutions for this forward-looking work (see figure 1). [FIGURE 1 OMITTED] The paramount objective of fostering student success is deeply rooted in this community college's culture. The challenges around achieving it are typical for the type of institution it is. Students' educational needs fluctuate greatly with the economy and the job market. For example, North is experiencing a surge of interest in academic transfer versus professional/technical courses, and is now attracting younger, full-time students wanting to take courses during the day. Instructional support mechanisms must be able to respond quickly to changing needs of the student population. Organization of the division to include campus-wide instructional support elements enhances collaboration and facilitates building and strengthening relationships among disparate units with the same ultimate objective. There are many ways that the organizational model influences relationships and collaboration, but the focus of this article is how it enhances instructional support for integrating technology in the classroom. 
Making Use of Special Opportunities Seattle was the first community college district nationally to be connected to the Pacific Northwest Gigapop regional data transfer center (GIGAPOP Internet II). With its connection to this network, North upgraded the campus to a gigabit backbone. The college has utilized its Internet capability to offer online courses that feature video streaming (video-on-demand) and other multimedia materials developed by telecourse companies or by the college's own faculty. The Seattle Community College District was given its own television cable channel by the City of Seattle as part of an arrangement with AT&T Broadband. …

Journal Article
TL;DR: This paper explores the relationships between the use of the information resources available through the Carlos III University of Madrid Library's Online Database Service and the results of scientific endeavor within the institution.
Abstract: The identification of variables affecting university research is one of the chief factors in the evaluation of these institutions. This paper explores the relationships between the use of the information resources available through the Carlos III University of Madrid Library's Online Database Service and the results of scientific endeavor within the institution. A two-dimensional analysis was performed, combining the number of database accesses as identified in the monthly activity records furnished by the IRIS CD-ROM database management module with the level of research activity represented by an aggregate index of the results of scientific endeavor, calculated by principal components methods. ********** University research is conditioned by a number of factors, both objective and subjective. Such factors have a direct impact on the development of the research culture underlying the intellectual dynamics needed for dialogue and collaboration among research scholars. The factors range from inherent institution and department features--number of research scientists, size of budget--to the motivations intrinsically related to the research endeavor itself, such as the pursuit of professional repute, the chance to work with other colleagues, and the opportunity to contribute to the training of new scientists. (1) In this context, studies on the impact of the university library on the results of scientific endeavor acquire increasing importance. They serve as a measure of how useful the information resources and services provided are for the university community. In this regard and given the importance in the academic domain of the publication of scientific papers, managers responsible for university resources, especially those dealing with information resources, should focus attention on the factors that support research and affect faculty research productivity. (2) According to Kyrillidou, this is a novel approach, since, generally speaking, libraries have not undertaken the systematic evaluation of their services from the standpoint of the effect that their use has on the results of user activities. (3) Moreover, the prevalence of new technologies has led to a heightened interest in studies on the usage and quality of electronic information services and their impact on research conducted in universities. (4) The purpose of this paper is to conduct a quantitative study of the relations between the results of scientific endeavor in Carlos III University of Madrid departments and the extent to which they use the online databases available in the university library. Method The data used for the study were taken from the Research Activity Information System (SINAI) developed by the Carlos III University of Madrid's Research Results Transfer Bureau (OTRI). The SINAI information is published yearly in the university's Research Memorandum and is likewise available on the Internet. (5) The information respecting online database service was provided by the Infoware integrated electronic information resource management module incorporated in the information technology (IT) system presently in place in the university library. This module is based on Citrix Systems, Inc., Citrix Independent Computing Architecture (ICA), and Citrix MetaFrame technologies. The daily use records stored in this system can be used to quantify the total number of database accesses.
In addition, from the terminal IP address and the user's system administrator-assigned password, they can identify the respective area or department from which the resource is accessed. In this respect, faculty activity has been clearly differentiated from student activity. Students can only access the databases from computers located in the library, while faculty use their own computers in the departments. Both can be fully identified by filtering IP addresses. The study covered all the university's areas of knowledge that offer diplomas to date, although aggregate categories were created to adjust the departmental structure for easier comprehension of the results. …
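As a rough illustration of the two quantities being related, the sketch below builds an aggregate research-output index with a principal components step and correlates it with per-department database accesses. The numbers and column choices are invented and the procedure is schematic; it is not the study's actual SINAI or Infoware data or its exact statistical treatment.

```python
# A schematic sketch (invented numbers, not the study's data) of the two steps described:
# (1) collapse several research-output measures into one aggregate index with principal
# components, and (2) compare that index with database-access counts per department.
import numpy as np

# rows = departments; columns = e.g. articles, conference papers, funded projects
outputs = np.array([[12.0, 5.0, 3.0],
                    [30.0, 9.0, 7.0],
                    [ 7.0, 2.0, 1.0],
                    [18.0, 6.0, 4.0]])
accesses = np.array([220.0, 610.0, 90.0, 340.0])   # database accesses per department

# Principal components on standardized outputs; the first component is the aggregate index.
z = (outputs - outputs.mean(axis=0)) / outputs.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]

# Pearson correlation between the aggregate research index and database use.
r = np.corrcoef(index, accesses)[0, 1]
print("aggregate index by department:", np.round(index, 2))
print("correlation with database accesses:", round(r, 3))
```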

Journal Article
TL;DR: It is hoped that this essay can help guide library IT strategy and assist in prioritizing purchases, suggesting what may be eliminated and helping focus on present and upcoming essentials.
Abstract: If the morning weather report predicted rain, would you carry an umbrella? The annual Technology Exchange Week (TECHXNY) conference predicts gales and calms in the information technology (IT) industry, and we in library IT who share the climate should listen. Whether you choose to carry an umbrella is up to you--and your patrons. ********** TECHXNY, with its emphasis on innovative business-related technology, commands a position to suggest IT trends. TECHXNY suggests what's hot, what's in, and what's on its way out. How might knowledge of IT trends help in the library? It might prevent us from planning--or worse, investing in--products that are too new or too old. Ironically, it is fortunate that library budgets do not in general allow us to sit at technology's cutting edge where we might be more likely to take a costly electronic risk. Our library missteps are perhaps more likely to lead us down the path of purchasing obsolescent equipment or omitting security features we ought not be without. Presented are some industry trends extracted from exhibitions, talks, trade publications, and promotional literature that flowed freely at the TECHXNY conference. These trends can, in turn, help set the agenda for the library. It is hoped that this essay can help guide library IT strategy and assist in prioritizing purchases, suggesting what may be eliminated and helping focus on present and upcoming essentials. TECHXNY, in its twentieth season, welcomes thousands of attendees annually. This year, exhibitors from over three hundred of the world's leading IT companies participated, including Compaq, HP, IBM, Intel, Iomega, Microsoft, Palm, Samsung, Sony, ViewSonic, and Xerox, to name a few. Microsoft, IBM, and others provided keynote speakers. Special sessions were held on topics such as Win2000, Linux, start-ups, security, storage and high availability, and IT infrastructure. Discussed below are a few of the conference themes that have particular bearing in a library setting: hardware (such as physical objects), telecommunications (for example, communication over a distance), and IT in the organization (that is, overall functioning of the system). The relevance of each theme is considered first for the industry in general and then specifically for a library setting. These observations are intended less as recommendations than as provocative ideas. (1) Hardware at TECHXNY In a departure from conventional usage, the exposition publicity proclaims that "PC" now stands for "Pervasive Computing," in that the desktop system is no longer the sole platform for productivity. Instances of pervasive computing represented at the conference included laptops, notebooks, and PDAs as well as desktop computers. Other products shown were digital video creators, speech-enabled server appliances, credit-card-sized digital cameras, color print/scan/fax copiers, and 3D-Album photo presentation software, to name a few. One of today's prime IT marketing conceits is to describe a product as a solution. In point of fact, many of the latest solutions have yet to settle on problems significant enough to warrant purchase of the gadget. It is difficult to arrive at any conclusions about the range of devices represented because a conference presence of some gadget does not necessarily reflect its acceptance or share in the market. Take DVD, for instance.
The Digital Video Expo was described in the conference program as "the largest professional video event on the East Coast focused exclusively on digital video tools and technology," and the extensive DVD sessions were open to all conference attendees free of charge. Representation of DVD at the conference, however, does not reflect its use at large. Show director Christina Condos mentioned to me that the CMP DV Media Group had paid for this promotion. Not represented at the conference was holographic storage technology that may soon compete with DVD. …

Journal Article
TL;DR: The goal of this special issue is to raise awareness of some current and forthcoming methods and tools useful in understanding library use; the introduction of digital library services through the Internet has greatly increased the amount of data available for such analysis.
Abstract: Libraries are under siege. Material costs continue to rise, forcing more selective collection development; funding agencies demand data-based justification for services; and the Internet is replacing the library as the primary source for information. In addition, legal threats to the privacy of patron records are pushing librarians to take extremely destructive action against their institutional records. One solution to many of the problems is the bibliomining process. Libraries can use a data warehouse to capture information about their materials without associating personally identifiable information with their use. Statistical and data-mining techniques can then be used with the data warehouse to understand patterns of use. Understanding these patterns allows for: * better decisions about collection development; * thorough data-based justification of library services; * customized library services to compete with the Internet; and * a more complete understanding of how the library is used. These methods and tools are heavily used in the corporate sector to systematically capture, clean, and customize services for users. The significant difference between corporate use and the use of these tools in libraries is that of user privacy; libraries are more interested in patterns of use exhibited by groups than in behaviors exhibited by individuals. By definition, a pattern is something done by a group of people; therefore, the goal is only to discover patterns of behavior. During the data-cleansing process, the connections are broken so that these patterns cannot be traced back to individual users. This is not the first special issue on this topic; in 1996, Library Administration and Management had a themed issue on the mining of library data. However, most of the data came from automation systems. While those systems are still a valuable resource for data, the introduction of digital library services through the Internet has greatly increased the amount of data available. As the percentage of library budgets spent on electronic materials grows, so does the need for updated management tools to understand the use of those materials. The goal of this special issue is to raise awareness of some current and forthcoming methods and tools useful in understanding library use. Two articles in this issue deal primarily with data warehousing. The first, "The Bibliomining Process: Data Warehousing and Data Mining for Library Decision Making," presents an overview of the entire process with some discussion of methods that can be employed to create the data warehouse. …
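To make the data-cleansing idea concrete, here is a small illustrative sketch of the privacy step described above: circulation records are stripped of personally identifiable fields before they are loaded into the warehouse, keeping only group-level attributes that still support pattern analysis. The field names and sample rows are invented, not drawn from any actual bibliomining implementation.

```python
# An illustrative sketch of the data-cleansing step described above: before circulation
# records enter the warehouse, the patron identifier is dropped and only non-identifying
# group attributes are kept, so usage patterns cannot be traced back to individuals.
# The field names and sample rows are invented, not taken from any real system.
import csv
import io
import itertools

raw = io.StringIO(
    "patron_id,patron_status,patron_dept,item_call_number,checkout_date\n"
    "9912345,undergraduate,History,D753.A2,2003-02-14\n"
    "7711222,faculty,Chemistry,QD31.2,2003-02-14\n"
)

warehouse_rows = []
for rec in csv.DictReader(raw):
    warehouse_rows.append({
        # patron_id is deliberately discarded here; only group-level fields survive
        "patron_status": rec["patron_status"],
        "patron_dept": rec["patron_dept"],
        # keep only the leading LC class letters (subject area), not the full call number
        "call_class": "".join(itertools.takewhile(str.isalpha, rec["item_call_number"])),
        "checkout_date": rec["checkout_date"],
    })

print(warehouse_rows)
```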

Journal Article
TL;DR: This case study, performed at Rutgers University to evaluate computer-supported serials management in an academic library context, found that serial staff and librarians are essentially satisfied with the centralized information distribution.
Abstract: This case study was performed at Rutgers University to evaluate computer-supported serials management in an academic library context. Information in an ExtInfo folder within each serial control record of an integrated library system was manually restructured to a standard language and formatted to establish a central location to perform collaborative serials management. From interviews and questionnaires, the authors learned that serial staff and librarians are essentially satisfied with the centralized information distribution. From their perspective, the standards applied to the ExtInfo folder reduce errors in serial management, improve and streamline routine work, and require little learning effort to master. Serials management is considered to be one of the most complex but important functions in a library. It is composed mainly of two parts: collection and technical services, and public services. While public services (such as reference and circulation) directly interface with library users, collection and technical services support public services through a series of behind-the-scenes operations. The responsibilities of collection services include acquisitions, check-in, claiming, binding, shelving, and their related workflows for serials processing. In a collection services department, each serial staff member performs one particular function. All of the operational functions are interwoven. (1) Weaknesses in the workflow in one function result in extra work in another, and consequently lower the overall quality of serials processing. Each serial staff member gathers individualized information through his or her work experience that can be shared among serial staff. (2) Not only is the processing of that information necessary to his or her own work, but it is also essential to the other processing steps of the serial management workflow. For example, if the check-in staff acquires first-hand information about a change in the publication frequency of a given title, then the claiming staff needs that information to determine the appropriate schedule for claiming an issue that has not been received. The bindery staff also needs that information to determine how many issues should be bound together. In comparison to other publication types (such as monographs), serial information may be unpredictable and vague, but it is critical because processing is ongoing for the term of the subscription. A serial may change title or publication frequency, merge with a different title, publish supplements, or suspend publication during the period of a subscription. The libraries may change fund allocation, subscription period, shelving location, or vendor during that same period. Serial staff must make corresponding managerial adjustments to respond to any of these changes. Figure 1 shows the general serial collection functions and the information needed for each process. The figure's rectangles represent the serial processing steps and the cylinders indicate the information needed for each step. By evaluating the texts in the cylinders, one can see that each piece of information is used for more than one processing step. * Background The serials processing operations of the New Brunswick Libraries Collection Services Department at Rutgers University were the object of this case study. Until 2000, serial subscriptions were received in each library on the New Brunswick campus of Rutgers University. Staff in each library processed their respective collections separately.
Three years ago, the research libraries took over the serials management of the unit libraries. Alexander Library (AL), a large research library for humanities and social sciences, manages the serial processing of the four smaller humanities and social science libraries. The Library of Science and Medicine (LSM), a large research library for the sciences, is responsible for the serial processing of the four science libraries. …
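The article does not publish the exact syntax of its standardized ExtInfo notes, so the following is a hypothetical illustration of what "restructuring to a standard language" might look like: labeled fields that check-in, claiming, and bindery staff can all read consistently from one central place. The labels, values, and parsing helper are invented for the sketch.

```python
# A hypothetical illustration (not the article's actual note syntax) of restructuring
# free-text ExtInfo notes into labeled fields that every serials function reads the
# same way, so a change recorded once is visible to check-in, claiming, and bindery staff.
EXTINFO_NOTE = """FREQ: quarterly
BIND: 4 issues per volume
CLAIM: after 60 days
NOTE: v.15 title changed from 'Journal of Example Studies'"""

def parse_extinfo(note):
    """Split 'LABEL: value' lines into a dictionary shared by all serials functions."""
    fields = {}
    for line in note.splitlines():
        label, _, value = line.partition(":")
        fields[label.strip()] = value.strip()
    return fields

info = parse_extinfo(EXTINFO_NOTE)
print(info["FREQ"])   # claiming staff use the frequency to schedule claims
print(info["BIND"])   # bindery staff use it to decide how many issues to bind together
```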