Author

Jane Greenberg

Bio: Jane Greenberg is an academic researcher at Drexel University. She has contributed to research on topics including Metadata and Meta Data Services. She has an h-index of 26 and has co-authored 141 publications receiving 2,163 citations. Previous affiliations of Jane Greenberg include the University of North Carolina at Chapel Hill.


Papers
Journal ArticleDOI
TL;DR: The conclusion is that integrating extraction and harvesting methods will be the best approach to creating optimal metadata, and more research is needed to identify when to apply which method.
Abstract: This research explores the capabilities of two Dublin Core automatic metadata generation applications, Klarity and DC-dot. The top-level Web page for each resource, from a sample of 29 resources obtained from the National Institute of Environmental Health Sciences (NIEHS), was submitted to both generators. Results indicate that extraction processing algorithms can contribute to useful automatic metadata generation. Results also indicate that harvesting metadata from META tags created by humans can have a positive impact on automatic metadata generation. The study identifies several ways in which automatic metadata generation applications can be improved and highlights several important areas of research. The conclusion is that integrating extraction and harvesting methods will be the best approach to creating optimal metadata, and more research is needed to identify when to apply which method.

112 citations
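The harvesting approach described in the entry above pulls Dublin Core metadata out of META tags that page authors embed in HTML. Below is a minimal sketch of that general technique in Python, using only the standard library; the file name sample.html is a hypothetical input, and this is not the Klarity or DC-dot implementation.

```python
from html.parser import HTMLParser

class DCMetaHarvester(HTMLParser):
    """Collect Dublin Core metadata embedded in <meta> tags (e.g., name="DC.title")."""

    def __init__(self):
        super().__init__()
        self.records = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = attrs.get("content") or ""
        # Dublin Core elements are conventionally prefixed "DC." in META tags.
        if name.startswith("dc.") and content:
            self.records.setdefault(name, []).append(content)

# Hypothetical usage: harvest whatever DC.* tags a local page declares.
harvester = DCMetaHarvester()
with open("sample.html", encoding="utf-8") as fh:
    harvester.feed(fh.read())
for element, values in harvester.records.items():
    print(element, "=", "; ".join(values))
```

In practice a generator combines this kind of harvesting with extraction (deriving values from the page content itself), which is the integration the study recommends.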

Proceedings Article
01 Jan 2001
TL;DR: The results indicate that authors can create good quality metadata when working with the Dublin Core, and in some cases they may be able to create metadata that is of better quality than a metadata professional can produce.
Abstract: This paper reports on a study that examined the ability of resource authors to create acceptable metadata in an organizational setting. The results indicate that authors can create good quality metadata when working with the Dublin Core, and that in some cases they may be able to create metadata of better quality than what a metadata professional can produce. This research suggests that authors think metadata is valuable for resource discovery, that it should be created for web resources, and that they, as authors, should be involved in metadata production for their works. The study also indicates that a simple web form, with textual guidance and selective use of features (e.g., popup windows, drop-down menus, etc.), can assist authors in generating good quality metadata.

111 citations

Journal ArticleDOI
TL;DR: The MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework is introduced as an approach for studying metadata schemes, including different types of metadata schemes.
Abstract: Although the development and implementation of metadata schemes over the last decade have been extensive, research examining the sum of these activities is limited. This limitation is likely due to the massive scope of the topic. A framework is needed to study the full extent of, and functionalities supported by, metadata schemes. Metadata schemes developed for information resources are analyzed. To begin, the author presents a review of the definition of metadata, metadata functions, and several metadata typologies. Next, a conceptualization for metadata schemes is presented. The emphasis is on semantic container-like metadata schemes (data structures). The last part of this paper introduces the MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework as an approach for studying metadata schemes. The paper concludes with a brief discussion on the value of frameworks for examining metadata schemes, including different types of metadata schemes.

107 citations
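The MODAL paper above treats metadata schemes as semantic, container-like data structures. Here is a small illustrative sketch of that conceptualization in Python, using hypothetical class and element names that are not part of the MODAL framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One slot in a metadata scheme, e.g., a Dublin Core element."""
    name: str
    definition: str
    repeatable: bool = True

@dataclass
class MetadataScheme:
    """A container-like scheme: stated objectives plus a set of elements."""
    name: str
    objectives: list[str] = field(default_factory=list)
    elements: list[Element] = field(default_factory=list)

# Illustrative instance loosely modeled on Dublin Core.
dc = MetadataScheme(
    name="Dublin Core (simplified)",
    objectives=["resource discovery"],
    elements=[
        Element("title", "A name given to the resource"),
        Element("creator", "An entity primarily responsible for making the resource"),
    ],
)
print(f"{dc.name}: {len(dc.elements)} elements; objectives: {', '.join(dc.objectives)}")
```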

Journal ArticleDOI
Jane Greenberg
Abstract: Data has become a major focus in the field of information and library science (ILS), motivated, to a large degree, by national data sharing policies, open access, and the knowledge that underlies the d…

107 citations

Journal ArticleDOI
TL;DR: This paper reports on the automatic metadata generation applications (AMeGA) project's metadata expert survey, which finds participants anticipate greater accuracy with automatic techniques for technical metadata compared to metadata requiring intellectual discretion.
Abstract: This paper reports on the automatic metadata generation applications (AMeGA) project's metadata expert survey. Automatic metadata generation research is reviewed and the study's methods, key findings and conclusions are presented. Participants anticipate greater accuracy with automatic techniques for technical metadata (e.g., ID, language, and format metadata) compared to metadata requiring intellectual discretion (e.g., subject and description metadata). Support for implementing automatic techniques paralleled anticipated accuracy results. Metadata experts are in favour of using automatic techniques, although they are generally not in favour of eliminating human evaluation or production for the more intellectually demanding metadata. Results are incorporated into Version 1.0 of the Recommended Functionalities for automatic metadata generation applications (Appendix A).

91 citations
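The AMeGA survey above distinguishes technical metadata (identifiers, language, format), which experts expect automatic techniques to handle well, from metadata requiring intellectual discretion (subject, description). Below is a brief sketch of automatic technical-metadata capture using only the Python standard library; it is a generic illustration rather than the AMeGA recommended functionalities, and the file path is a placeholder.

```python
import hashlib
import mimetypes
import os
from datetime import datetime, timezone

def technical_metadata(path: str) -> dict:
    """Derive technical metadata that requires no human judgment."""
    stat = os.stat(path)
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {
        "identifier": digest,                     # content hash used as an identifier
        "format": mimetypes.guess_type(path)[0],  # MIME type guessed from the extension
        "extent_bytes": stat.st_size,             # file size
        "date_modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }

print(technical_metadata("report.pdf"))  # placeholder file name
```

Subject and description metadata, by contrast, cannot simply be read off the file system; that asymmetry is what the surveyed experts' accuracy expectations reflect.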


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers--the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and an 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

29,323 citations
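The third algorithm in the paper above (spatial decomposition) assigns each processor a fixed region of the simulation box, so an atom's owner is determined purely by its coordinates. Below is a minimal sketch of that owner mapping for a cubic box split over a 3D processor grid; it is illustrative only and omits what the real algorithm also needs (ghost atoms, neighbor lists, and inter-processor communication).

```python
def owner_rank(position, box_length, procs_per_dim):
    """Map an atom's (x, y, z) position to the rank of the processor that
    owns the spatial sub-domain containing it, for a cubic box divided into
    a procs_per_dim x procs_per_dim x procs_per_dim grid."""
    cell = box_length / procs_per_dim
    ix, iy, iz = (min(int(c // cell), procs_per_dim - 1) for c in position)
    return (ix * procs_per_dim + iy) * procs_per_dim + iz

# Example: 8 processors arranged 2 x 2 x 2 over a box of side 10.0.
print(owner_rank((1.0, 6.0, 9.5), box_length=10.0, procs_per_dim=2))  # -> 3
```

Because short-range forces only couple nearby atoms, each processor mostly exchanges boundary ("ghost") atoms with its spatial neighbors, which is why this decomposition scales well for large problems.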

Journal ArticleDOI
Basics of Qualitative Research: Grounded Theory Procedures and Techniques

13,415 citations

Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.

7,602 citations
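The FAIR Principles above emphasize making data findable and reusable by machines, not just by human readers. One common way to move in that direction is to publish structured, standards-based metadata alongside a dataset; the sketch below emits a minimal schema.org Dataset description as JSON-LD. It is an illustration, not part of the FAIR publication itself, and the title, DOI, and URLs are placeholders.

```python
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example survey responses",               # placeholder title
    "identifier": "https://doi.org/10.0000/example",  # placeholder persistent identifier (FAIR F1/A1)
    "license": "https://creativecommons.org/licenses/by/4.0/",  # clear reuse terms (FAIR R1.1)
    "description": "Illustrative machine-readable description of a dataset.",
    "keywords": ["metadata", "FAIR"],
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/example.csv",  # placeholder access URL
    },
}

print(json.dumps(dataset_metadata, indent=2))
```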

Book
01 Jan 2008
TL;DR: Nonaka and Takeuchi as discussed by the authors argue that there are two types of knowledge: explicit knowledge, contained in manuals and procedures, and tacit knowledge, learned only by experience, and communicated only indirectly, through metaphor and analogy.
Abstract: How have Japanese companies become world leaders in the automotive and electronics industries, among others? What is the secret of their success? Two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, are the first to tie the success of Japanese companies to their ability to create new knowledge and use it to produce successful products and technologies. In The Knowledge-Creating Company, Nonaka and Takeuchi provide an inside look at how Japanese companies go about creating this new knowledge organizationally. The authors point out that there are two types of knowledge: explicit knowledge, contained in manuals and procedures, and tacit knowledge, learned only by experience, and communicated only indirectly, through metaphor and analogy. U.S. managers focus on explicit knowledge. The Japanese, on the other hand, focus on tacit knowledge. And this, the authors argue, is the key to their success--the Japanese have learned how to transform tacit into explicit knowledge. To explain how this is done--and illuminate Japanese business practices as they do so--the authors range from Greek philosophy to Zen Buddhism, from classical economists to modern management gurus, illustrating the theory of organizational knowledge creation with case studies drawn from such firms as Honda, Canon, Matsushita, NEC, Nissan, 3M, GE, and even the U.S. Marines. For instance, using Matsushita's development of the Home Bakery (the world's first fully automated bread-baking machine for home use), they show how tacit knowledge can be converted to explicit knowledge: when the designers couldn't perfect the dough kneading mechanism, a software programmer apprenticed herself with the master baker at Osaka International Hotel, gained a tacit understanding of kneading, and then conveyed this information to the engineers. In addition, the authors show that, to create knowledge, the best management style is neither top-down nor bottom-up, but rather what they call "middle-up-down," in which the middle managers form a bridge between the ideals of top management and the chaotic realities of the frontline. As we make the turn into the 21st century, a new society is emerging. Peter Drucker calls it the "knowledge society," one that is drastically different from the "industrial society," and one in which acquiring and applying knowledge will become key competitive factors. Nonaka and Takeuchi go a step further, arguing that creating knowledge will become the key to sustaining a competitive advantage in the future. Because the competitive environment and customer preferences change constantly, knowledge perishes quickly. With The Knowledge-Creating Company, managers have at their fingertips years of insight from Japanese firms that reveal how to create knowledge continuously, and how to exploit it to make successful new products, services, and systems.

3,668 citations

Journal ArticleDOI
TL;DR: In this article, the authors make a preliminary exploration of the economics of open source software and highlight the extent to which labor economics, especially the literature on "career concerns" and industrial organization theory can explain many of these projects' features.
Abstract: There has been a recent surge of interest in open source software development, which involves developers at many different locations and organizations sharing code to develop and refine programs. To an economist, the behavior of individual programmers and commercial companies engaged in open source projects is initially startling. This paper makes a preliminary exploration of the economics of open source software. We highlight the extent to which labor economics, especially the literature on ‘career concerns’, and industrial organization theory can explain many of these projects’ features. We conclude by listing interesting research questions related to open source software.

1,809 citations