Author

Lyn Condron

Bio: Lyn Condron is an academic researcher. The author has contributed to research in the topics of library management and cataloging, has an h-index of 3, and has co-authored 8 publications receiving 114 citations.

Papers
Journal ArticleDOI
TL;DR: The IFLA FRBR study was begun in 1992 in a context of much questioning about how bibliographic records and catalogs would work in changing technology, questions that continue to be relevant even now as technology continues to evolve and reveal new possibilities.
Abstract: Pat Riva is chair of the FRBR Review Group and a member of the IFLA Cataloguing Section Standing Committee. She is also coordinator of the monographs section in the directorate for documentary processing of the heritage collection at the Bibliothèque et Archives nationales du Québec in Montreal, Québec, Canada. She can be reached by email at patricia.riva@banq.qc.ca. The Functional Requirements for Bibliographic Records (FRBR) study [1] was published by the International Federation of Library Associations (IFLA) in 1998, the final report of a study group reporting to the Cataloguing Section. Much more has been written on the origins and context for the study [2]. The IFLA FRBR study was begun in 1992 in a context of much questioning about how bibliographic records and catalogs would work in changing technology, questions that continue to be relevant even now as technology continues to evolve and reveal new possibilities. The concept of defining functional requirements is user-focused at its center; knowledge of the uses (and users) of the information system to be designed provides a basis for making informed decisions on design options. In daily work this reasoning is often implicit; the FRBR study sought to make these considerations explicit. When applied to bibliographic records, the functional requirements concept emphasizes the importance of understanding the function of the data elements being recorded and how each element contributes to meeting user needs. Once the fundamental question "Why?" has been answered, there is a sound and principled basis for making recommendations on what should be implemented and how. Users of bibliographic systems include both the end-users of information retrieval systems and the information workers who assist end-users and maintain the databases. The needs of both groups were considered by the FRBR study group as they worked to understand how resource discovery systems are used.
Uses which may seem infinitely varied on the surface do have common elements. The IFLA Study Group on the Functional Requirements for Bibliographic Records (1998) concluded that, in their most general form, there are four basic user tasks:

- to find entities that correspond to the user's stated search criteria (i.e., to locate either a single entity or a set of entities in a file or database as the result of a search using an attribute or relationship of the entity);
- to identify an entity (i.e., to confirm that the entity described corresponds to the entity sought or to …

110 citations

Journal ArticleDOI
TL;DR: Managers must not only encourage self-responsibility but also set expectations and empower both individuals and teams with the capability to take responsibility for and manage as much of their work life as possible.
Abstract: SUMMARY Focused and limited management theories generally do not cover many important aspects of staff members' and teams' working lives. While most managers implement specific tools that they find helpful from one theory or another, an overriding philosophy that has proven consistently effective for our team is that of self-responsibility by the manager, by the individuals, and by the team as a group. Managers must not only encourage self-responsibility but also set expectations and empower both individuals and teams with the capability to take responsibility for and manage as much of their work life as possible.

3 citations

Journal ArticleDOI
TL;DR: This is a collection of web addresses for essential resources in the area of metadata standards to assist library catalogers and a starting point for those of us who have embarked on educating ourselves and the authors' staff in these tools.
Abstract: In this, our third ERC Column to appear in CCQ, we present a collection of web addresses for essential resources in the area of metadata standards to assist library catalogers. As we've mentioned in other columns, production timelines create inevitable challenges in ensuring the timeliness of the information presented here. Metadata standards are an emerging field, where rapid change and development are continuous. By the time you read this, new standards or practices may have been added to, or have superseded, the list we provide here. Our hope is that this list is a starting point for those of us who have embarked on educating ourselves and our staff in these tools. Why are new and emerging metadata standards of interest to catalogers? Alternative methods for describing digital objects and storing/transmitting data about these objects are useful to explore for a number of reasons. Why metadata? Rapid record production: With the increased development of digital library efforts in many of our libraries, the breadth and variety of "collections," whether actual or virtual, that catalogers are charged to organize and control is growing quickly. In the digital arena, aggregations of content, whether e-journals, visual images, sound files or e-texts, present

2 citations

Journal ArticleDOI
TL;DR: A survey was designed for new department heads to use in facilitating introductions with a new staff, focused on eliciting information about individuals' skills, training needs, and job satisfaction.
Abstract: A survey was designed for new cataloging department heads to use in facilitating introductions with a new staff (here, a cataloging department). Questions focused on eliciting information about individuals' skills, training needs, and job satisfaction. Background, lessons learned, and the survey with results are included. The author presented this paper at the ALCTS/CCS Heads of Cataloging Departments Discussion Group, ALA Midwinter, 1999.

2 citations


Cited by
01 Jan 2008
TL;DR: This ontology of risk-relevance (henceforth known as the ORR) is a tool for both data extraction professionals and risk-assessment professionals that allows new entries to be added easily when the need for additional information arises.
Abstract: This paper describes the organization of extracted risk-relevant data in a relational database created at Regulatory Data Corporation for use by security professionals. The initial effort involved creating sets of data-extraction variables around a set of “risk relevant” keywords. The keywords clustered around events rather than entities and the data extraction variables that were developed centered on semantic roles of event participants. To facilitate future data extraction efforts in this genre, we organized events, participants, keywords and grammatical forms into an ontology. This ontology of risk-relevance (henceforth known as the ORR) is a tool for both data extraction professionals and risk-assessment professionals that allows new entries to be added easily when the need for additional information arises.

415 citations

Journal ArticleDOI
TL;DR: This work provides an overview and categorization of existing metadata interoperability techniques, and explicitly shows that metadata mapping is the appropriate technique in integration scenarios where an agreement on a certain metadata standard is not possible.
Abstract: Achieving uniform access to media objects in heterogeneous media repositories requires dealing with the problem of metadata interoperability. Currently there exist many interoperability techniques, with quite varying potential for resolving the structural and semantic heterogeneities that can exist between metadata stored in distinct repositories. Besides giving a general overview of the field of metadata interoperability, we provide a categorization of existing interoperability techniques, describe their characteristics, and compare their quality by analyzing their potential for resolving various types of heterogeneities. Based on our work, domain experts and technicians get an overview and categorization of existing metadata interoperability techniques and can select the appropriate approach for their specific metadata integration scenarios. Our analysis explicitly shows that metadata mapping is the appropriate technique in integration scenarios where an agreement on a certain metadata standard is not possible.
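The metadata mapping the abstract identifies is often implemented as a "crosswalk": a declared correspondence between fields in a local schema and elements of a target standard such as Dublin Core. The sketch below is illustrative only; the field names, the `CROSSWALK` table, and the sample record are hypothetical and not taken from the paper.

```python
# Hypothetical crosswalk from a local schema to Dublin Core elements.
CROSSWALK = {
    "main_title": "dc:title",
    "author_name": "dc:creator",
    "publication_year": "dc:date",
}

def map_record(record: dict) -> dict:
    """Translate a record from the local schema into Dublin Core,
    dropping fields the crosswalk does not cover."""
    mapped = {}
    for local_field, dc_element in CROSSWALK.items():
        if local_field in record:
            mapped[dc_element] = record[local_field]
    return mapped

source = {"main_title": "On Metadata", "author_name": "A. Writer",
          "publication_year": "2008", "shelf_location": "B-12"}
print(map_record(source))
# {'dc:title': 'On Metadata', 'dc:creator': 'A. Writer', 'dc:date': '2008'}
```

A static table like this resolves structural heterogeneity (different field names) but not semantic heterogeneity (fields whose meanings only partly overlap), which is why the paper compares mapping against other interoperability techniques.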

179 citations

Journal ArticleDOI
TL;DR: A framework for the classification of image descriptions by users is developed, based on various classification methods from the literature, which suggests that users prefer general descriptions as opposed to specific or abstract descriptions.
Abstract: In order to resolve the mismatch between user needs and current image retrieval techniques, we conducted a study to get more information about what users look for in images. First, we developed a framework for the classification of image descriptions by users, based on various classification methods from the literature. The classification framework distinguishes three related viewpoints on images, namely nonvisual metadata, perceptual descriptions and conceptual descriptions. For every viewpoint a set of descriptive classes and relations is specified. We used the framework in an empirical study, in which image descriptions were formulated by 30 participants. The resulting descriptions were split into fragments and categorized in the framework. The results suggest that users prefer general descriptions as opposed to specific or abstract descriptions. Frequently used categories were objects, events and relations between objects in the image.

160 citations

Book
Eero Hyvönen
19 Oct 2012
TL;DR: This book gives an overview on why, when, and how Linked (Open) Data and Semantic Web technologies can be employed in practice in publishing CH collections and other content on the Web, and motivates and presents a general semantic portal model and publishing framework as a solution approach to distributed semantic content creation, based on an ontology infrastructure.
Abstract: Cultural Heritage (CH) data is syntactically and semantically heterogeneous, multilingual, semantically rich, and highly interlinked. It is produced in a distributed, open fashion by museums, libraries, archives, and media organizations, as well as individual persons. Managing publication of such richness and variety of content on the Web, and at the same time supporting distributed, interoperable content creation processes, poses challenges where traditional publication approaches need to be re-thought. Application of the principles and technologies of Linked Data and the Semantic Web is a new, promising approach to address these problems. This development is leading to the creation of large national and international CH portals, such as Europeana, to large open data repositories, such as the Linked Open Data Cloud, and to massive publications of linked library data in the U.S., Europe, and Asia. Cultural Heritage has become one of the most successful application domains of Linked Data and Semantic Web technologies. This book gives an overview of why, when, and how Linked (Open) Data and Semantic Web technologies can be employed in practice in publishing CH collections and other content on the Web. The text first motivates and presents a general semantic portal model and publishing framework as a solution approach to distributed semantic content creation, based on an ontology infrastructure. On the Semantic Web, such an infrastructure includes shared metadata models, ontologies, and logical reasoning, and is supported by shared ontology and other Web services easing the use of the new technology and linked data in legacy cataloging systems. The goal of all this is to provide lay users and researchers with new, more intelligent and usable Web applications that can also be utilized by other Web applications via well-defined Application Programming Interfaces (APIs).
At the same time, it is possible to provide publishing organizations with more cost-efficient solutions for content creation and publication. This book is targeted at computer scientists, museum curators, librarians, archivists, and other CH professionals interested in Linked Data and CH applications on the Semantic Web. The text is focused on practice and applications, making it suitable for students, researchers, and practitioners developing Web services and applications for CH, as well as for CH managers wanting to understand the technical issues and challenges involved in linked data publication. Table of Contents: Cultural Heritage on the Semantic Web / Portal Model for Collaborative CH Publishing / Requirements for Publishing Linked Data / Metadata Schemas / Domain Vocabularies and Ontologies / Logic Rules for Cultural Heritage / Cultural Content Creation / Semantic Services for Human and Machine Users / Conclusions

155 citations

Journal ArticleDOI
TL;DR: The conclusion is that integrating extraction of harvesting methods will be the best approach to creating optimal metadata, and more research is needed to identify when to apply which method.
Abstract: This research explores the capabilities of two Dublin Core automatic metadata generation applications, Klarity and DC-dot. The top-level Web page for each resource, from a sample of 29 resources obtained from the National Institute of Environmental Health Sciences (NIEHS), was submitted to both generators. Results indicate that extraction processing algorithms can contribute to useful automatic metadata generation. Results also indicate that harvesting metadata from META tags created by humans can have a positive impact on automatic metadata generation. The study identifies several ways in which automatic metadata generation applications can be improved and highlights several important areas of research. The conclusion is that integrating extraction and harvesting methods will be the best approach to creating optimal metadata, and more research is needed to identify when to apply which method.
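The "harvesting" half of the approach described above amounts to reading human-authored META tags out of a page's head. The sketch below is a minimal, hypothetical illustration of that step using only Python's standard library; it is not the code used by Klarity or DC-dot, and the sample HTML is invented.

```python
from html.parser import HTMLParser

class MetaTagHarvester(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from a page,
    as a harvesting-based generator might, before any extraction
    heuristics are applied to the body text."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if "name" in attr_map and "content" in attr_map:
                # Normalize tag names so DC.Title and dc.title collide.
                self.meta[attr_map["name"].lower()] = attr_map["content"]

# Invented sample page for illustration.
html = ('<html><head>'
        '<meta name="DC.Title" content="Sample Page">'
        '<meta name="description" content="A demo.">'
        '</head><body>Body text.</body></html>')

harvester = MetaTagHarvester()
harvester.feed(html)
print(harvester.meta)
# {'dc.title': 'Sample Page', 'description': 'A demo.'}
```

An extraction-based generator would instead derive fields from the body text itself; the study's conclusion suggests combining both, using harvested tags when authors supplied them and falling back to extraction otherwise.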

112 citations