
Showing papers on "Meta Data Services published in 2012"


Patent
02 Mar 2012
TL;DR: In this paper, a metabase formed from metadata can be used for various data management operations, such as enhanced data management, enhanced data identification, enhanced storage operations, data classification for organizing and storing the metadata, cataloging of metadata for the stored metadata, and/or user interfaces for managing data.
Abstract: Systems and methods for managing electronic data are disclosed. Various data management operations can be performed based on a metabase formed from metadata. Such metadata can be identified from an index of data interactions generated by a journaling module, and obtained from their associated data objects stored in one or more storage devices. In various embodiments, such processing of the index and storing of the metadata can facilitate, for example, enhanced data management operations, enhanced data identification operations, enhanced storage operations, data classification for organizing and storing the metadata, cataloging of metadata for the stored metadata, and/or user interfaces for managing data. In various embodiments, the metabase can be configured in different ways. For example, the metabase can be stored separately from the data objects so as to allow obtaining of information about the data objects without accessing the data objects or a data structure used by a file system.
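
The abstract describes the architecture only; as a loose illustration of the central idea (a metabase populated from a journal of data interactions and queried without touching the data objects), here is a minimal Python sketch in which every name and record is invented:

```python
# Loose illustration with invented names and records: a metabase is built
# from a journal of data interactions and queried on its own, without
# touching the data objects or the file system's structures.

journal = [
    # (operation, object_id, metadata) entries emitted by a journaling module
    ("create", "obj-1", {"owner": "alice", "type": "report", "size": 4096}),
    ("modify", "obj-2", {"owner": "bob", "type": "image", "size": 1048576}),
]

metabase = {}  # object_id -> metadata, stored apart from the objects

for op, obj_id, meta in journal:
    metabase.setdefault(obj_id, {}).update(meta, last_op=op)

# Answered entirely from the metabase: which objects are reports?
print([oid for oid, m in metabase.items() if m["type"] == "report"])
```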

238 citations


Patent
05 Nov 2012
TL;DR: In this paper, a method and system for processing network metadata is described, where metadata may be processed by dynamically instantiated executable software modules which make policy-based decisions about the character of the network metadata and about presentation of the metadata to consumers.
Abstract: A method and system for processing network metadata is described. Network metadata may be processed by dynamically instantiated executable software modules which make policy-based decisions about the character of the network metadata and about presentation of the network metadata to consumers of the information carried by the network metadata. The network metadata may be type classified and each subclass within a type may be mapped to a definition by a unique fingerprint value. The fingerprint value may be used for matching the network metadata subclasses against relevant policies and transformation rules. For template-based network metadata such as NetFlow v9, an embodiment of the invention can constantly monitor network traffic for unknown templates, capture template definitions, and inform administrators about templates for which custom policies and conversion rules do not exist. Conversion modules can efficiently convert selected types and/or subclasses of network metadata into alternative metadata formats.
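
The patent stays at the level of mechanism, but the classify-fingerprint-match pipeline it describes can be shown in a short sketch. The following Python is hypothetical throughout (field layout, fingerprint choice and policy table are all invented), a minimal sketch of the idea rather than the patented method:

```python
# Hypothetical sketch of the classify-fingerprint-match idea; the field
# layout, fingerprint choice and policy table are all invented.
import hashlib

policies = {}  # fingerprint -> handling/conversion rule

def fingerprint(template_fields):
    """Derive a stable fingerprint from a template definition."""
    canonical = ",".join(f"{name}:{size}" for name, size in sorted(template_fields))
    return hashlib.sha1(canonical.encode()).hexdigest()

def handle_template(template_fields):
    fp = fingerprint(template_fields)
    rule = policies.get(fp)
    if rule is None:
        # Unknown template: capture the definition and flag it for an
        # administrator, as the abstract describes for NetFlow v9 traffic.
        print(f"unknown template {fp[:8]}..., notifying administrator")
    return rule

handle_template([("srcaddr", 4), ("dstaddr", 4), ("bytes", 8)])
```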

157 citations


Patent
02 Feb 2012
TL;DR: In this paper, a system and methods are provided for a data and information repository with a semantic engine that enables users to easily capture information in various formats from various devices, along with rich metadata relating to that information.
Abstract: Systems and methods are provided that enable a data and information repository with a semantic engine that enables users to easily capture information in various formats from various devices along with rich metadata relating to that information. The information repository can be configured to query the captured information and any metadata to extrapolate new meaning, including semantic meaning, and to perform various tasks, including but not limited to sharing of the information and metadata. In some embodiments, the information repository is configured to generate recommendations to users based on analysis of the captured information.

107 citations


Journal ArticleDOI
TL;DR: Six key recommendations for libraries and standards agencies are provided, including rising to the challenges and embracing the opportunities presented by current technological trends, adopting minimal requirements of Linked Data principles, developing ontologies, deciding on what needs to be retained from current library models, becoming part of the Linked Data cloud, and developing mixed-metadata approaches.
Abstract: Contemporary metadata principles and standards have tended to result in document-centric rather than data-centric, and human-readable rather than machine-processable, metadata. In order for libraries to create and harness shareable, mashable and re-usable metadata, a conceptual shift can be achieved by adjusting current library models such as Resource Description and Access (RDA) and Functional Requirements for Bibliographic Records (FRBR) to models based on Linked Data principles. In relation to technical formats, libraries can leapfrog to Linked Data technical formats such as the Resource Description Framework (RDF), without disrupting current library metadata operations. This paper provides six key recommendations for libraries and standards agencies. These include rising to the challenges and embracing the opportunities presented by current technological trends, adopting minimal requirements of Linked Data principles, developing ontologies, deciding on what needs to be retained from current library models, becoming part of the Linked Data cloud, and developing mixed-metadata (standards-based and socially-constructed) approaches. Finally, the paper concludes by identifying and discussing five major benefits of such metadata re-conceptualisation. The benefits include metadata openness and sharing, serendipitous discovery of information resources, identification of zeitgeist and emergent metadata, facet-based navigation and metadata enriched with links.
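
As a concrete taste of the shift the paper argues for, a single bibliographic record can be expressed as machine-processable, linkable RDF. The sketch below uses the rdflib package (assumed installed; with rdflib 6+, serialize() returns a string) and invented example.org identifiers; it is illustrative, not a prescription from the paper:

```python
# Illustrative only: one bibliographic record as RDF triples, using the
# rdflib package (pip install rdflib). All identifiers are invented
# example.org URIs.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

g = Graph()
g.bind("dcterms", DCTERMS)

work = URIRef("http://example.org/work/moby-dick")
g.add((work, RDF.type, DCTERMS.BibliographicResource))
g.add((work, DCTERMS.title, Literal("Moby Dick")))
g.add((work, DCTERMS.creator, URIRef("http://example.org/person/melville")))

# The record is now machine-processable and linkable: any other dataset
# can point at the same URIs.
print(g.serialize(format="turtle"))
```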

94 citations


Journal ArticleDOI
TL;DR: The paper shows that using metadata with the appropriate metadata architecture can yield considerable benefits for LOD publication and use, including improving findability, accessibility, storing, preservation, analysing, comparing, reproducing, finding inconsistencies, correct interpretation, visualizing, linking data, assessing and ranking the quality of data and avoiding unnecessary duplication of data.
Abstract: Public and private organizations increasingly release their data to gain benefits such as transparency and economic growth. The use of these open data can be supported and stimulated by providing considerable metadata (data about the data), including discovery, contextual and detailed metadata. In this paper we argue that metadata are key enablers for the effective use of Linked Open Data (LOD). We illustrate the potential of metadata by 1) presenting an overview of advantages and disadvantages of metadata derived from literature, 2) presenting metadata requirements for LOD architectures derived from literature, workshops and a questionnaire, 3) describing a LOD metadata architecture that meets the requirements and 4) showing examples of the application of this architecture in the ENGAGE project. The paper shows that using metadata with the appropriate metadata architecture can yield considerable benefits for LOD publication and use, including improving findability, accessibility, storing, preservation, analysing, comparing, reproducing, finding inconsistencies, correct interpretation, visualizing, linking data, assessing and ranking the quality of data and avoiding unnecessary duplication of data. The Common European Research Information Format (CERIF) can be used to build the metadata architecture and achieve the advantages.

87 citations


Patent
Ruth M. Amaru, Joshua Fox, Benjamin Halberstadt, Boris Melamed, Zvi Schreiber
15 Mar 2012
TL;DR: In this article, a metadata management system for importing, integrating and federating metadata is described, including a configurable metamodel, a metadata repository for storing metadata whose structure reflects the metamodel, at least one external metadata source able to persist metadata in accordance with the structure of a meta-schema, a mapping module for mapping the meta-schema to the metamodel, and a transformation module operatively coupled to the metadata mapping module.
Abstract: A metadata management system for importing, integrating and federating metadata, including a configurable metamodel, a metadata repository for storing metadata whose structure reflects the metamodel, at least one external metadata source, which is able to persist metadata in accordance with the structure of a meta-schema, a mapping module for mapping the meta-schema to the metamodel, and a transformation module, operatively coupled to the metadata mapping module, for translating specific metadata from the at least one external metadata source to the metadata repository, for use in import, export or synchronization of metadata between the external metadata source and the metadata repository. A method and a computer-readable storage medium are also described.
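
A toy rendering of the mapping-plus-transformation pair the abstract names; all field names and the metamodel layout are invented for illustration:

```python
# Toy rendering of the mapping-plus-transformation pair; field names and
# the metamodel layout are invented for illustration.

# Mapping module: external meta-schema field -> metamodel field
mapping = {
    "TBL_NAME": "entity.name",
    "COL_NAME": "attribute.name",
    "COL_TYPE": "attribute.datatype",
}

def transform(external_record):
    """Translate one external metadata record into the repository's form."""
    return {mapping[k]: v for k, v in external_record.items() if k in mapping}

repository = []  # metadata repository whose structure reflects the metamodel
repository.append(transform(
    {"TBL_NAME": "CUSTOMER", "COL_NAME": "ID", "COL_TYPE": "INTEGER"}))
print(repository)
```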

47 citations


Proceedings Article
01 May 2012
TL;DR: This paper presents a metadata model for the description of language resources proposed in the framework of the META-SHARE infrastructure, aiming to cover both datasets and tools/technologies used for their processing.
Abstract: This paper presents a metadata model for the description of language resources proposed in the framework of the META-SHARE infrastructure, aiming to cover both datasets and tools/technologies used for their processing. It places the model in the overall framework of metadata models, describes the basic principles and features of the model, elaborates on the distinction between minimal and maximal versions thereof, briefly presents the integrated environment supporting the description, search and retrieval of LRs, and concludes with work to be done in the future to improve the model.

46 citations


Patent
Yohsuke Ishii, Shoji Kodama
13 Jul 2012
TL;DR: In this article, a search device receives a search request, extracts at least one of an alias or a metadata name from the request, and converts the alias to a metadata name by referring to metadata schema management information, which inclusively manages namespace aliases and metadata names so that the search device can identify a metadata schema definition defining the structure of a retrieval-target file that includes metadata.
Abstract: A search device receives a search request, extracts at least one of an alias or a metadata name from the search request, converts the alias to a metadata name by referring to metadata schema management information, which inclusively manages namespace aliases and metadata names so that the search device can identify a metadata schema definition defining the structure of a retrieval-target file that includes metadata, and specifies a field name from the metadata name by referring to schema mapping management information, which manages the correspondence between a metadata name of the metadata schema definition information and a field name of the retrieval index schema definition.
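
The two lookups the abstract chains together can be shown in a few lines. Everything in this sketch (table contents, naming) is invented; it only mirrors the alias-to-metadata-name and metadata-name-to-field-name steps:

```python
# Toy version of the two lookups the abstract describes; table contents
# are invented.

# Metadata schema management information: namespace alias -> metadata name
alias_to_metadata_name = {"dc:title": "dublincore.title"}

# Schema mapping management information: metadata name -> index field name
metadata_name_to_field = {"dublincore.title": "idx_title"}

def resolve(term):
    """Resolve a query term (alias or metadata name) to an index field name."""
    metadata_name = alias_to_metadata_name.get(term, term)  # step 1
    return metadata_name_to_field[metadata_name]            # step 2

print(resolve("dc:title"))          # 'idx_title', via the alias
print(resolve("dublincore.title"))  # 'idx_title', direct metadata name
```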

40 citations


Patent
Andrey Komarov
05 Jun 2012

30 citations


Proceedings Article
03 Sep 2012
TL;DR: This paper explores systems requirements essential for metadata supporting the discovery and management of scientific data, and presents a base-model with three chief principles: principle of least effort, infrastructure service, and portability.
Abstract: The tremendous growth in digital data has led to an increase in metadata initiatives for different types of scientific data, as evident in Ball's survey (2009). Although individual communities have specific needs, there are shared goals that need to be recognized if systems are to effectively support data sharing within and across all domains. This paper considers this need, and explores systems requirements that are essential for metadata supporting the discovery and management of scientific data. The paper begins with an introduction and a review of selected research specific to metadata modeling in the sciences. Next, the paper's goals are stated, followed by the presentation of valuable systems requirements. The results include a base-model with three chief principles: principle of least effort, infrastructure service, and portability. The principles are intended to support "data user" tasks. Results also include a set of defined user tasks and functions, and application scenarios.

29 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: This study represents geoscience data sets as an ontology based on an existing metadata description and on the nature of the data set, to showcase how forecast data can be represented in an ontology by using the existing metadata information.
Abstract: With the increasing amount of data generated in geoscience research, it becomes critical to describe data sets in meaningful ways. A large number of data sets are described using XML metadata, which has proved a useful means of expressing data characteristics. An ontological representation is another way of representing data sets, with the benefit of providing rich semantics, convenient linkage to other data sets, and good interoperability with other data. This study represents geoscience data sets as an ontology based on an existing metadata description and on the nature of the data set. It takes the case of Vortex2 data, a regional weather forecast data set collected in Summer 2010, to showcase how forecast data can be represented in an ontology by using the existing metadata information. It supplies another type of representation of the data set, with added semantics and potential functionalities compared to the previous metadata representation.

Patent
06 Nov 2012
TL;DR: In this paper, a repository receives metadata from databases associated with different service providers and converts the received metadata to a common format, such as MPEG7, and stores the converted metadata in a central database.
Abstract: A repository receives metadata from databases associated with different service providers. The repository converts the received metadata to a common format, such as MPEG7, and stores the converted metadata in a central database. The repository can also receive a query from a client device. The repository retrieves metadata associated with the query from the central database and provides it to the requesting client device. The repository can also convert the provided metadata to an appropriate format for the requesting device. Because the metadata is stored at a common location in a common format, content from different providers can be efficiently identified.
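
A schematic of the normalize-store-query flow the patent describes; the patent's common format is MPEG7, for which plain dictionaries stand in here, and both provider record formats are invented:

```python
# Schematic of the normalize-store-query flow; dictionaries stand in for
# the common format (MPEG7 in the patent), and both provider record
# formats are invented.

def from_provider_a(rec):
    return {"title": rec["name"], "genre": rec["cat"]}

def from_provider_b(rec):
    return {"title": rec["Title"], "genre": rec["Genre"]}

converters = {"a": from_provider_a, "b": from_provider_b}
central_db = []  # central database of common-format metadata

def ingest(provider, rec):
    central_db.append(converters[provider](rec))

ingest("a", {"name": "News at Nine", "cat": "news"})
ingest("b", {"Title": "Nine O'Clock News", "Genre": "news"})

# One query now spans content from both providers.
print([r["title"] for r in central_db if r["genre"] == "news"])
```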

Proceedings Article
01 May 2012
TL;DR: The status of the standardization efforts for a Component Metadata approach to describing Language Resources with metadata is described, and information about uptake and plans for the use of component metadata within the CLARIN, META-SHARE and NaLiDa communities is presented.
Abstract: This paper describes the status of the standardization efforts for a Component Metadata approach to describing Language Resources with metadata. Different linguistic and Language & Technology communities such as CLARIN, META-SHARE and NaLiDa use this component approach and see its standardization as a matter for cooperation that has the potential to create a large interoperable domain of joint metadata. Starting with an overview of the component metadata approach, together with related semantic interoperability tools and services such as the ISOcat data category registry and the relation registry, we explain the standardization plan and efforts for component metadata within ISO TC37/SC4. Finally, we present information about uptake and plans for the use of component metadata within the three linguistic and L&T communities mentioned.

Proceedings ArticleDOI
05 Nov 2012
TL;DR: Simulation results show that the traces generated by Mimesis mimic the original workload and can be used in place of the real trace providing accurate results.
Abstract: Efficient namespace metadata management is increasingly important as next-generation file systems are designed for peta- and exascale systems. New schemes have been proposed; however, their evaluation has been insufficient due to a lack of appropriate namespace metadata traces. Specifically, no Big Data storage system metadata trace is publicly available and existing ones are a poor replacement. We studied publicly available traces and one Big Data trace from Yahoo! and note some of the differences and their implications for metadata management studies. We discuss the insufficiency of existing evaluation approaches and present a first step towards a statistical metadata workload model that can capture the relevant characteristics of a workload and is suitable for synthetic workload generation. We describe Mimesis, a synthetic workload generator, and evaluate its usefulness through a case study in a least recently used metadata cache for the Hadoop Distributed File System. Simulation results show that the traces generated by Mimesis mimic the original workload and can be used in place of the real trace, providing accurate results.
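
Mimesis itself fits a rich statistical model; the toy below only mirrors the workflow the abstract describes: take a trace, fit a simple model (here, just path popularity), synthesize a trace from it, and check that both drive a least-recently-used metadata cache simulation to similar results. The "real" trace is itself randomly generated for the sketch:

```python
# Toy version of the fit-synthesize-validate workflow; not Mimesis itself.
import random
from collections import Counter, OrderedDict

random.seed(0)
# Stand-in "real" trace of metadata operations (paths accessed).
real_trace = [f"/data/f{random.randint(0, 200)}" for _ in range(5000)]

def lru_hit_rate(trace, capacity=64):
    """Replay a trace against an LRU metadata cache and report the hit rate."""
    cache, hits = OrderedDict(), 0
    for path in trace:
        if path in cache:
            hits += 1
            cache.move_to_end(path)
        else:
            cache[path] = True
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits / len(trace)

# The "model" here is just empirical path popularity; Mimesis fits much
# richer statistics, but the workflow is the same.
popularity = Counter(real_trace)
paths, weights = zip(*popularity.items())
synthetic = random.choices(paths, weights=weights, k=len(real_trace))

print(f"hit rate, real: {lru_hit_rate(real_trace):.3f} "
      f"synthetic: {lru_hit_rate(synthetic):.3f}")
```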

Journal ArticleDOI
06 Aug 2012
TL;DR: Adoption of an ETL (extract-transform-load) metadata model for the data warehouse that makes subject area refreshes metadata-driven, loads observation timestamps and other useful parameters, and minimizes consumption of database systems resources is proposed.
Abstract: Metadata is essential for understanding information stored in data warehouses. It helps increase levels of adoption and usage of data warehouse data by knowledge workers and decision makers. A metadata model is important to the implementation of a data warehouse; the lack of a metadata model can lead to quality concerns about the data warehouse. A highly successful data warehouse implementation depends on consistent metadata. This article proposes adoption of an ETL (extract-transform-load) metadata model for the data warehouse that makes subject area refreshes metadata-driven, loads observation timestamps and other useful parameters, and minimizes consumption of database systems resources. The ETL metadata model provides developers with a set of ETL development tools and delivers a user-friendly batch cycle refresh monitoring tool for the production support team.
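
A minimal sketch of what "metadata-driven" means here: a control table, whose layout and names are invented for illustration, decides which subject areas the batch cycle refreshes and records observation timestamps as it goes:

```python
# Invented, minimal rendering of a metadata-driven refresh cycle.
from datetime import datetime, timezone

etl_control = [  # one row per subject area; layout is hypothetical
    {"subject_area": "sales",   "enabled": True,  "last_refresh": None},
    {"subject_area": "finance", "enabled": False, "last_refresh": None},
]

def refresh(subject_area):
    # The actual extract-transform-load for the subject area would run here.
    print(f"refreshing {subject_area}")

for row in etl_control:
    if row["enabled"]:
        refresh(row["subject_area"])
        row["last_refresh"] = datetime.now(timezone.utc)  # observation timestamp

print([(r["subject_area"], r["last_refresh"] is not None) for r in etl_control])
```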

Journal ArticleDOI
TL;DR: A novel automatic metadata extraction framework is presented, based on a fuzzy-based method for automatic cognitive metadata generation; it uses different document parsing algorithms to extract rich metadata from multilingual enterprise content using the newly developed DocBook, Resource Type and Topic ontologies.


Proceedings ArticleDOI
04 May 2012
TL;DR: The paper presents new approaches to creating, updating and improving the content of metadata in an automated fashion to facilitate metadata management, and introduces a new synchronization approach to automate the spatial metadata updating process.
Abstract: Metadata is a vital tool for any spatially enabling platform, and helps the users to share, discover, assess and access data and services. However, the spatial industry faces different issues and challenges regarding metadata generation, updating and improvement; which could affect the quality of this crucial component of any sharing platform. The main issue is the lack of an appropriate approach to the automated metadata generating and updating process. Metadata and related spatial data are often managed and maintained separately. This issue involves different aspects, including the lack of proper methodologies to integrate metadata and spatial data in a common environment, the generation and updating of metadata outside the spatial dataset lifecycle and the dependency of metadata creation on the metadata authors’ knowledge of the dataset. In addition, the current data discovery services are not user-friendly and sufficiently efficient to serve the end users to easily find the most appropriate datasets and services to meet their needs in a spatially enabling platform. In response to these issues, this paper presents the new approaches to create, update and improve the content of metadata in an automated fashion to facilitate metadata management. The first approach relates to process-based metadata entry which aims at creating the ISO 19115:2003 metadata elements in parallel with the dataset lifecycle. This approach has the potential to overcome the problem of missing or incomplete metadata through identifying the stage to generate and update metadata within the dataset lifecycle. Also, the paper introduces a new synchronization approach to automate the spatial metadata updating process. This approach would aid the data custodians to update metadata on the fly whenever the dataset is modified. The synchronization is based on the GML technology to couple dataset and metadata and to exchange them over the Web. The paper also presents and discusses the prototype system implemented and based on the conceptual design of the automatic metadata updating. This system has been integrated with the GeoNetwork opensource and is now up and running. Finally, the paper demonstrates the prototype systems which have been designed and developed following the automatic metadata enrichment approach. This approach is based on Web 2.0 and Folksonomy concept and involves improving the content of descriptive keyword (as a metadata element) which is the first gateway for discovering existing datasets in a sharing platform. The prototype systems have been implemented within two different environments: Model Information Knowledge Environment (MIKE) and GeoNetwork opensource.
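
The synchronization idea reduces to one rule: a dataset edit and its metadata update happen in the same operation. The sketch below compresses that rule into a few lines with simplified, invented element names (the paper couples dataset and ISO 19115:2003 metadata via GML and exchanges them over the Web):

```python
# Sketch of the synchronization rule only: an edit to the dataset and the
# update of its coupled metadata happen in one step. Element names are
# simplified stand-ins for ISO 19115:2003 elements.
from datetime import date

dataset = {
    "features": [],
    "metadata": {"dateStamp": None, "featureCount": 0},
}

def add_feature(feature):
    """Modify the dataset and refresh its metadata in the same step."""
    dataset["features"].append(feature)
    dataset["metadata"]["dateStamp"] = date.today().isoformat()
    dataset["metadata"]["featureCount"] = len(dataset["features"])

add_feature({"id": 1, "geometry": "POINT(144.96 -37.81)"})
print(dataset["metadata"])
```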

Proceedings ArticleDOI
10 Sep 2012
TL;DR: The feasibility of the main ideas presented in this paper for providing a high-availability metadata service with only a slight overhead effect on I/O performance is shown.
Abstract: This paper presents PARTE, a prototype parallel file system with active/standby configured metadata servers (MDSs). PARTE replicates and distributes a part of files' metadata to the corresponding metadata stripes on the storage servers (OSTs) with a per-file granularity, while the client file system (client) keeps copies of certain metadata requests it has sent. If the active MDS has crashed for some reason, these client backup requests will be replayed by the standby MDS to restore the lost metadata. In case one or more backup requests are lost due to network problems or dead clients, the latest metadata saved in the associated metadata stripes will be used to construct consistent and up-to-date metadata on the standby MDS. Moreover, the clients and OSTs can work in both normal mode and recovery mode in the PARTE file system. This differs from conventional parallel file systems with active/standby configured MDSs, which hang all I/O requests and metadata requests during restoration of the lost metadata. In the PARTE file system, previously connected clients can continue to perform I/O operations and relevant metadata operations, because OSTs work as temporary MDSs during that period by using the replicated metadata in the relevant metadata stripes. Through examination of experimental results, we show the feasibility of the main ideas presented in this paper for providing a high-availability metadata service with only a slight overhead effect on I/O performance. Furthermore, since previously connected clients are never hung during metadata recovery, in contrast to conventional systems, a better overall I/O data throughput can be achieved with PARTE.
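
The recovery path can be sketched compactly: clients retain the metadata requests they have sent, and after a failover the standby MDS replays them. The Python below is a toy reduction of that protocol with invented interfaces, omitting the metadata stripes on OSTs that the paper uses to cover lost backup requests:

```python
# Toy reduction of the PARTE recovery idea, with invented interfaces.

class MDS:
    def __init__(self):
        self.metadata = {}

    def apply(self, request):
        op, path, value = request
        if op == "set":
            self.metadata[path] = value
        elif op == "delete":
            self.metadata.pop(path, None)

client_backlog = []  # the sent metadata requests a client keeps

def send(mds, request):
    client_backlog.append(request)  # kept until known durable
    mds.apply(request)

active = MDS()
send(active, ("set", "/a/file1", {"size": 10}))
send(active, ("set", "/a/file2", {"size": 20}))
send(active, ("delete", "/a/file1", None))

# Active MDS crashes; the standby restores state by replaying backlogs.
standby = MDS()
for request in client_backlog:
    standby.apply(request)

print(standby.metadata == active.metadata)  # True
```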

Proceedings Article
15 Mar 2012
TL;DR: An analysis of the data quality information distributed by the GEOSS Clearinghouse concludes that extra work can still be done to provide complete quality information in the metadata catalogues.
Abstract: The Global Earth Observation System of Systems (GEOSS) Clearinghouse is part of the GEOSS Common Infrastructure (GCI) that supports the discovery of the data made available by the Group on Earth Observations (GEO) members and participant organizations in GEOSS. It also acts as a unified metadata catalogue that stores complete metadata records, not only about datasets but also for other kinds of components and services. By exploring these records, users often try to find fit-for-use data. Quality indicators and provenance are included in the metadata and are potentially useful variables that allow users to make an informed decision, avoiding having to download and assess the data themselves. However, no previous studies have been made on the completeness and correctness of the metadata records in the Clearinghouse. The objective of this paper is to analyze the data quality information distributed by the GEOSS Clearinghouse. The aim is to quantify its completeness and to provide clues on how the current status of the Clearinghouse could be improved and how useful quality-aware tools could be. The methodology used in the current analysis consists of first harvesting the Clearinghouse and then quantifying the quality information found in 97,203 metadata records using a semi-automatic approach. The results reveal that the inclusion of quality information in metadata records is not rare: 19.66% of the metadata records contain some quality element. However, this is not general enough and several aspects could be improved. For instance, 77.78% of quantitative measures lack measure units. When quality indicators are not sufficient, the lineage metadata information could be used to mitigate this situation by analysing the process steps and sources used to create a dataset. However, even though lineage is reported in 15.55% of the records, only 1.27% of the cases return a complete list of process steps with sources. This paper also provides indications of what is lacking in the current producer metadata model and detects a gap in usage and user-feedback metadata in GEOSS. Moreover, information extracted from GeoViQua interviews with users indicates that they value informal comments and user feedback on datasets as a complement to the more formal producer-oriented metadata description of the data. Although many efforts within the scientific community and the Quality Assurance Framework for Earth Observation (QA4EO) group have been invested in describing how to parameterize data quality and uncertainty, we conclude that extra work can still be done to provide complete quality information in the metadata catalogues. In brief, since the GEOSS Clearinghouse references data from the most important agencies and research organizations, the results presented in this paper provide a perspective on how well quality is disseminated in the Earth observation community in general.
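
The core of the completeness analysis is a tally over harvested records. A sketch of that tally, with an invented record structure standing in for the 97,203 real ISO metadata records:

```python
# Sketch of the completeness tally; the record structure is invented.

records = [
    {"id": 1, "quality": {"measure": 0.9, "unit": "m"}, "lineage": None},
    {"id": 2, "quality": None,
     "lineage": {"steps": ["resample"], "sources": ["SRTM"]}},
    {"id": 3, "quality": None, "lineage": None},
]

n = len(records)
with_quality = sum(1 for r in records if r["quality"] is not None)
with_lineage = sum(1 for r in records if r["lineage"] is not None)
complete_lineage = sum(1 for r in records
                       if r["lineage"] and r["lineage"]["steps"]
                       and r["lineage"]["sources"])

print(f"quality element present: {100 * with_quality / n:.2f}%")
print(f"lineage reported:        {100 * with_lineage / n:.2f}%")
print(f"complete lineage:        {100 * complete_lineage / n:.2f}%")
```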

Proceedings ArticleDOI
02 Jun 2012
TL;DR: This paper presents metadata invariants, a new abstraction that codifies various naming and typing relationships between metadata and the main source code of a program, and reifies this abstraction as a domain-specific language.
Abstract: As the prevailing programming model of enterprise applications is becoming more declarative, programmers are spending an increasing amount of their time and efforts writing and maintaining metadata, such as XML or annotations. Although metadata is a cornerstone of modern software, automatic bug finding tools cannot ensure that metadata maintains its correctness during refactoring and enhancement. To address this shortcoming, this paper presents metadata invariants, a new abstraction that codifies various naming and typing relationships between metadata and the main source code of a program. We reify this abstraction as a domain-specific language. We also introduce algorithms to infer likely metadata invariants and to apply them to check metadata correctness in the presence of program evolution. We demonstrate how metadata invariant checking can help ensure that metadata remains consistent and correct during program evolution; it finds metadata-related inconsistencies and recommends how they should be corrected. Similar to static bug finding tools, a metadata invariant checker identifies metadata-related bugs as a program is being refactored and enhanced. Because metadata is omnipresent in modern software applications, our approach can help ensure the overall consistency and correctness of software as it evolves.
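
A toy metadata invariant makes the idea concrete: every field a metadata mapping names must exist in the mapped class with the declared type. The paper's invariant language and inference are far richer; this sketch, with invented classes and mappings, only shows the checking step that would flag drift during refactoring:

```python
# Toy invariant check with invented classes and mappings; the paper's
# domain-specific language expresses far richer invariants.

class Customer:
    id: int
    name: str

# (class, field, declared type) triples as an XML descriptor might state
xml_mapping = [(Customer, "id", int), (Customer, "name", str),
               (Customer, "email", str)]

for cls, field, typ in xml_mapping:
    declared = cls.__annotations__.get(field)
    if declared is None:
        print(f"invariant violated: {cls.__name__}.{field} "
              f"named in metadata but absent from the class")
    elif declared is not typ:
        print(f"invariant violated: {cls.__name__}.{field} is {declared}, "
              f"metadata declares {typ}")
```

Renaming or removing `email` in the metadata, or adding it to the class, restores the invariant; this is the kind of inconsistency the paper's checker reports during program evolution.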

Journal ArticleDOI
TL;DR: A step-by-step alignment method is developed that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application.
Abstract: The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is in particular useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experiences made during the integration process.

Proceedings ArticleDOI
26 Mar 2012
TL;DR: This work investigates the use of Conditional Random Fields and Support Vector Machines, implemented in two state-of-the-art real-world systems, namely ParsCit and the Mendeley Desktop, for automatically extracting bibliographic metadata.
Abstract: Social research networks such as Mendeley and CiteULike offer various services for collaboratively managing bibliographic metadata and uploading textual artifacts. One core problem is the extraction of bibliographic metadata from the textual artifacts. Our work investigates the use of Conditional Random Fields and Support Vector Machines, implemented in two state-of-the-art real-world systems, namely ParsCit and the Mendeley Desktop, for automatically extracting bibliographic metadata. We compare the systems' accuracy on two newly created real-world data sets gathered from Mendeley and Linked-Open-Data repositories. Our analysis shows that two-stage SVMs provide reasonable performance in solving the challenge of metadata extraction from user-provided textual artifacts.
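
A miniature two-stage SVM setup conveys the structure of the approach (it is not the ParsCit or Mendeley Desktop implementation): stage one decides whether a line is bibliographic at all, stage two labels tokens with metadata fields. It assumes scikit-learn is installed and trains on a few invented lines, so the learned model is meaningless beyond illustration:

```python
# Toy two-stage SVM setup in the spirit of the paper; trained on a few
# invented lines, for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stage 1: is a text line part of a bibliographic reference at all?
lines = ["Smith, J. (2010). A study of things. Journal of X.",
         "We thank our funding agencies.",
         "Doe, A. (2008). Another study. Conf. on Y.",
         "The results are shown in Figure 2."]
is_ref = [1, 0, 1, 0]
stage1 = make_pipeline(CountVectorizer(), LinearSVC()).fit(lines, is_ref)

# Stage 2: label tokens of reference lines with metadata fields.
tokens = ["Smith,", "(2010).", "Journal", "Doe,", "(2008).", "Conf."]
fields = ["author", "year", "venue", "author", "year", "venue"]
stage2 = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
                       LinearSVC()).fit(tokens, fields)

new_line = "Roe, B. (2011). Yet another study. Journal of Z."
if stage1.predict([new_line])[0]:
    print(list(zip(new_line.split(), stage2.predict(new_line.split()))))
```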

Journal ArticleDOI
TL;DR: The pilot of an ongoing digital library metadata audit project that was collaboratively launched by library school interns and full-time staff to alleviate poor recall, poor precision and metadata inconsistencies across digital collections currently published in the University of Houston Digital Library is discussed.
Abstract: As digital library collections grow in size, metadata issues such as inconsistencies, incompleteness and quality become increasingly difficult to manage over time. Unfortunately, successful user search and discoverability of digital collections relies almost entirely on the accuracy and robustness of metadata. This paper discusses the pilot of an ongoing digital library metadata audit project that was collaboratively launched by library school interns and full-time staff to alleviate poor recall, poor precision and metadata inconsistencies across digital collections currently published in the University of Houston Digital Library. Interns and staff designed a multi-step project that included metadata review of sample items from each collection, systematic revision of previously published metadata and recommendations for future metadata procedures and ongoing metadata audit initiatives. No such metadata audit efforts had previously been conducted on the UH Digital Library, and the project yielded data that provided staff with the opportunity to significantly improve the overall quality and consistency of metadata for collections published over the nearly three-year life of the repository. This article also contains lessons learned and suggestions on how a similar metadata audit project could be implemented in other libraries hosting digital collections.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper makes a case for a scalable metadata service middleware that layers on existing cluster file system deployments and distributes file system metadata, including the namespace tree, small directories and large directories, across many servers.
Abstract: Lack of a highly scalable and parallel metadata service is the Achilles heel for many cluster file system deployments in both the HPC world and the Internet services world. This is because most cluster file systems have focused on scaling the data path, i.e. providing high bandwidth parallel I/O to files that are gigabytes in size. But with proliferation of massively parallel applications that produce metadata-intensive workloads, such as large number of simultaneous file creates and large-scale storage management, cluster file systems also need to scale metadata performance. To realize these goals, this paper makes a case for a scalable metadata service middleware that layers on existing cluster file system deployments and distributes file system metadata, including the namespace tree, small directories and large directories, across many servers. Our key idea is to effectively synthesize a concurrent indexing technique to distribute metadata with a tabular, on-disk representation of all file system metadata.
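
The simplest possible rendering of the distribution goal is to place each directory entry on a metadata server chosen by hashing; the paper's concurrent indexing is considerably more sophisticated (it splits large directories incrementally and keeps a tabular on-disk representation), so treat this as a sketch of the goal, not the technique:

```python
# Sketch of the distribution goal only, with hypothetical server names:
# each directory entry lands on a metadata server chosen by hashing the
# parent path and entry name.
import hashlib

SERVERS = ["mds0", "mds1", "mds2", "mds3"]

def server_for(parent_dir, name):
    digest = hashlib.md5(f"{parent_dir}/{name}".encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

for entry in ["a.dat", "b.dat", "c.dat"]:
    print(entry, "->", server_for("/project/run42", entry))
```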

Patent
15 Jun 2012
TL;DR: In this paper, a mixed-mode authorization metadata manager for cloud computing environments is described; the system includes a plurality of service managers coordinating respective distributed multitenant services, and a metadata manager.
Abstract: Methods and apparatus for a mixed-mode authorization metadata manager for cloud computing environments are disclosed. A system includes a plurality of service managers coordinating respective distributed multitenant services, and a metadata manager. In response to a metadata request for an authorization entity, the metadata manager identifies a first and a second service manager coordinating services in use by a client account with which the authorization entity is affiliated. The first and second service managers implement respective authorization APIs. The metadata manager provides composite authorization metadata of the authorization entity based at least in part on (a) service authorization metadata provided by each of the first and second service managers and (b) identity authorization metadata provided by an identity manager.
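
The composition step might look like the following sketch, in which every interface is invented: the metadata manager queries each relevant service manager's authorization API plus the identity manager and merges the results into composite metadata:

```python
# Hypothetical shape of the composition step; every interface is invented.

def storage_service_authz(entity):   # one service manager's authorization API
    return {"storage": ["read", "write"]}

def compute_service_authz(entity):   # another service manager's authorization API
    return {"compute": ["launch"]}

def identity_authz(entity):          # the identity manager
    return {"groups": ["developers"]}

def composite_metadata(entity, service_managers):
    """Merge identity metadata with each service manager's answer."""
    merged = identity_authz(entity)
    for manager in service_managers:
        merged.update(manager(entity))
    return merged

print(composite_metadata("alice", [storage_service_authz, compute_service_authz]))
```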


Patent
29 Mar 2012
TL;DR: In this paper, infrastructure metadata information is collected about a virtual resource of a virtual cloud and a storage of infrastructure metadata is updated using the collected information, and the updating of the storage of metadata is performed with the collected metadata.
Abstract: Processing infrastructure metadata information about a virtual resource of a virtual cloud is disclosed. Infrastructure metadata information is collected. The collected metadata information is about a virtual resource of a virtual cloud. A storage of infrastructure metadata information is updated. The updating of the storage of infrastructure metadata information is performed using the collected information.

Book
31 May 2012
TL;DR: Information Resource Description is a vital book for information professionals learning to apply the most current metadata tools and skills in practice and is also essential reading for LIS students taking information organization courses.
Abstract: The introduction of RDA and the semantic web are significantly changing information organization. Keep your skills and services up to date with this start-to-finish primer. This new resource covers traditional, domain-specific practices as well as metadata's broader approach. Key topics include: information resource attributes; metadata for information retrieval; metadata sources and quality; knowledge organization systems; the semantic web; books and e-books, websites and audiovisual resources; business and government documents; learning resources. Author Philip Hider also examines the introduction of RDA, the integration of library cataloging with the semantic web, and pays specific attention to increasingly prevalent digital practices. Information Resource Description is a vital book for information professionals learning to apply the most current metadata tools and skills in practice. It is also essential reading for LIS students taking information organization courses.