
Showing papers on "Meta Data Services" published in 1999


Patent
01 Oct 1999
TL;DR: In this paper, the authors propose an extensible framework for the automatic extraction and transformation of metadata into logical annotations, where metadata embedded within a media file is extracted by a type-specific parsing module that is loaded and executed based on the MIME type of the media file being described.
Abstract: An extensible framework for the automatic extraction and transformation of metadata into logical annotations. Metadata embedded within a media file is extracted by a type-specific parsing module which is loaded and executed based on the MIME type of the media file being described. A content processor extracts information, typically in the form of time-based samples, from the media content. An auxiliary processing step is performed to collect additional metadata describing the media file from sources external to the file. All of the metadata thus collected is combined into a set of logical annotations, which may be supplemented by summary data generated from the metadata already collected. The annotations are then formatted into a standardized form, preferably XML, which is then mapped into a database schema. The database object also stores the source XML data as well as the original media file in addition to the annotation metadata. The system provides unified metadata repositories, which can then be used for indexing and searching.

343 citations
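As a rough sketch of the dispatch step described in the abstract above, the snippet below keeps a registry of type-specific parsing modules keyed by MIME type and merges their output with externally collected metadata into one logical annotation. All names and fields (PARSERS, extract_annotations, the sample parsers) are illustrative, not taken from the patent.

```python
# Illustrative sketch of MIME-type-driven metadata extraction; names are hypothetical.
import json

def parse_mp3(path):
    # A real module would read embedded ID3 frames; placeholder fields only.
    return {"format": "audio/mpeg", "duration_s": None}

def parse_jpeg(path):
    # A real module would read embedded EXIF tags.
    return {"format": "image/jpeg", "camera": None}

# Type-specific parsing modules, selected by the MIME type of the media file.
PARSERS = {
    "audio/mpeg": parse_mp3,
    "image/jpeg": parse_jpeg,
}

def extract_annotations(path, mime_type, external_metadata=None):
    """Combine embedded, external, and summary metadata into one logical annotation."""
    parser = PARSERS.get(mime_type)
    embedded = parser(path) if parser else {}
    annotation = {"source_file": path, **embedded, **(external_metadata or {})}
    annotation["field_count"] = len(annotation)   # simple example of derived summary data
    return annotation

if __name__ == "__main__":
    record = extract_annotations("clip.mp3", "audio/mpeg", {"owner": "archive"})
    print(json.dumps(record, indent=2))   # this record would then be serialized to XML
```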


Journal ArticleDOI
TL;DR: The programming interface and implementation of the repository engine, which implements a set of object-oriented interfaces on top of a SQL database system, and the Open Information Model are described.

113 citations


Patent
01 Jun 1999
TL;DR: In this paper, the format agent fulfills the portions of the client's request regarding metadata attributes included in the associated format of the file system, and the requested metadata attribute data is then returned to the client.
Abstract: In a computer, a system and a method handle requests from a client for accessing metadata attributes from at least one file system having an associated format containing specific metadata attributes. A format agent manages the file system. A client's request is received at an interface and forwarded to a dispatcher. The dispatcher routes the request to the format agent. The format agent fulfills the portions of the client's request regarding metadata attributes included in the associated format of the file system. If the client's request contains a metadata attribute that is not part of the file system's associated format, the format agent accesses a metadata attribute store to retrieve the metadata attribute data needed to fulfill the request. The requested metadata attribute data is then returned to the client. Multiple instances of the metadata attribute data are accessible by the client, the instances selected and/or assigned by the client and/or the system.

95 citations
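A minimal sketch of the dispatcher/format-agent interaction described in this abstract: attributes native to the file system's format are answered by the format agent, and anything outside that format falls back to a separate attribute store. The class names and request shape are invented for illustration.

```python
# Hypothetical sketch of a dispatcher routing metadata-attribute requests to a format agent,
# with a fallback attribute store for attributes the native format does not carry.
class AttributeStore:
    def __init__(self):
        self._extra = {}                              # (path, attribute) -> value

    def get(self, path, attribute):
        return self._extra.get((path, attribute))

    def put(self, path, attribute, value):
        self._extra[(path, attribute)] = value


class FormatAgent:
    def __init__(self, native_attributes, store):
        self.native_attributes = native_attributes    # attributes the format knows natively
        self.store = store

    def handle(self, path, requested):
        result = {}
        for attr in requested:
            if attr in self.native_attributes:
                result[attr] = f"<{attr} read from native format of {path}>"
            else:
                result[attr] = self.store.get(path, attr)   # fall back to the attribute store
        return result


class Dispatcher:
    def __init__(self):
        self.agents = {}

    def register(self, mount_point, agent):
        self.agents[mount_point] = agent

    def request(self, mount_point, path, attributes):
        return self.agents[mount_point].handle(path, attributes)


store = AttributeStore()
store.put("/vol1/report.doc", "review_status", "approved")
dispatcher = Dispatcher()
dispatcher.register("/vol1", FormatAgent({"size", "modified"}, store))
print(dispatcher.request("/vol1", "/vol1/report.doc", ["size", "review_status"]))
```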



01 Jan 1999
TL;DR: This model uses a uniform representation approach based on the Unified Modeling Language (UML) to integrate technical and semantic metadata and their interdependencies.
Abstract: Due to the increasing complexity of data warehouses, a centralized and declarative management of metadata is essential for data warehouse administration, maintenance and usage. Metadata are usually divided into technical and semantic metadata. Typically, current approaches only support subsets of these metadata types, such as data movement metadata or multidimensional metadata for OLAP. In particular, the interdependencies between technical and semantic metadata have not yet been investigated sufficiently. The representation of these interdependencies forms an important prerequisite for the translation of queries formulated at the business concept level into executable queries on physical data. Therefore, we suggest a uniform and integrative model for data warehouse metadata. This model uses a uniform representation approach based on the Unified Modeling Language (UML) to integrate technical and semantic metadata and their interdependencies.

68 citations
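One way to make the interdependency concrete is a pair of linked classes: a technical description of a physical column and a semantic description of a business concept, so that a business-level query term can be resolved to the columns that implement it. The paper expresses its model in UML; the class and attribute names below are invented purely for illustration.

```python
# Illustrative linkage between semantic (business) and technical (physical) metadata.
from dataclasses import dataclass, field

@dataclass
class Column:                        # technical metadata: where the data physically lives
    table: str
    name: str

@dataclass
class BusinessConcept:               # semantic metadata: the business-level term
    label: str
    columns: list = field(default_factory=list)   # interdependency: concept -> physical columns

def resolve(concepts, term):
    """Translate a business-concept-level query term into physical column references."""
    for concept in concepts:
        if concept.label == term:
            return [f"{c.table}.{c.name}" for c in concept.columns]
    return []

concepts = [BusinessConcept("net revenue", [Column("sales_fact", "amount_net")])]
print(resolve(concepts, "net revenue"))   # ['sales_fact.amount_net']
```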


Journal Article
TL;DR: This paper describes facilities that are integrated with Microsoft SQL Server 7.0, particularly as they relate to meta-data and information models in its shared repository, and discusses the use of XML to move meta-data between tools.
Abstract: Data warehousing requires facilities for moving and transforming data and meta-data. This paper describes such facilities that are integrated with Microsoft SQL Server 7.0, particularly as they relate to meta-data and information models in its shared repository. It also discusses the use of XML to move meta-data between tools.

59 citations
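A tiny sketch of the interchange idea: meta-data held by one tool (here, a table description) is serialized to XML so another tool can import it. The element and attribute names are invented; they are not the information-model format the paper uses.

```python
# Hypothetical meta-data export: a table description serialized as XML for tool interchange.
import xml.etree.ElementTree as ET

def table_to_xml(table_name, columns):
    root = ET.Element("Table", name=table_name)
    for col_name, col_type in columns:
        ET.SubElement(root, "Column", name=col_name, type=col_type)
    return ET.tostring(root, encoding="unicode")

xml_text = table_to_xml("Customer", [("id", "int"), ("name", "varchar(50)")])
print(xml_text)                           # e.g. <Table name="Customer"><Column ... /></Table>
parsed = ET.fromstring(xml_text)          # the receiving tool parses it back into its own model
print([c.get("name") for c in parsed])    # ['id', 'name']
```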


Journal ArticleDOI
01 Oct 1999
TL;DR: This work introduces metadata implantation and stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure and semantics of database elements (classes, attributes, and individual instances).
Abstract: A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme in which remote and local (meta)data are integrated in a stepwise manner over time. We introduce metadata implantation and stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure and semantics of database elements (classes, attributes, and individual instances). We employ a semantically rich canonical data model, and an incremental integration and semantic heterogeneity resolution scheme. In our approach, relationships between local and remote information units are determined whenever enough knowledge about their semantics is acquired. The metadata folding problem is solved by implanting remote database elements into the local database, a process that imports remote database elements into the local database environment, hypothesizes the relevance of local and remote classes, and customizes the organization of remote metadata. We have implemented a prototype system and demonstrated its use in an experimental neuroscience environment.

57 citations
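A toy illustration of the implantation step: remote class descriptions are copied into the local catalog, and relationships between local and remote classes are only hypothesized, to be confirmed later as more semantic knowledge is acquired. The catalog contents and matching rule are invented for this sketch.

```python
# Toy sketch of implanting remote schema elements into a local metadata catalog and
# recording hypothesized relationships until their semantics are confirmed.
local_catalog = {
    "Neuron": {"attributes": {"id", "region", "type"}},
}
remote_catalog = {
    "Cell": {"attributes": {"id", "brain_area", "cell_type"}},
}
relationships = []                      # (local_class, remote_class, relation, status)

def implant(remote):
    """Copy remote class descriptions into the local environment and hypothesize overlaps."""
    for r_name, r_desc in remote.items():
        local_catalog[f"remote::{r_name}"] = r_desc
        for l_name, l_desc in list(local_catalog.items()):
            if l_name.startswith("remote::"):
                continue
            if l_desc["attributes"] & r_desc["attributes"]:     # naive overlap heuristic
                relationships.append((l_name, r_name, "possibly-equivalent", "hypothesized"))

implant(remote_catalog)
print(relationships)    # [('Neuron', 'Cell', 'possibly-equivalent', 'hypothesized')]
```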


Book
30 Sep 1999
TL;DR: A book covering the concept of metadata, common factors affecting data quality, data flexibility and responsiveness to business change, active information management, metadata entity types, and an introduction to the enterprise metamodel.
Abstract: Contents: the concept of metadata; common factors affecting data quality; data flexibility and responsiveness to business change; active information management; metadata entity types; introduction to the enterprise metamodel; the challenges of information management; recognizing the fallacy of software engineering; distinguishing between data and information; Occam's dilemma - recognizing necessary complexity; establishing a common basic metamodel; metadata in business; managing the metadata; the role of metadata in application development and support; metadata in data warehousing and business intelligence; the role of metadata on the Internet; the basics of metamodelling; design and management of metadatabases; interaction between metamodels.

53 citations


Proceedings Article
01 Jan 1999
TL;DR: The key components of the SmartPush architecture have been implemented, and the focus in the project is shifting towards a pilot implementation and testing the ideas in practice.
Abstract: In the SmartPush project, professional editors add semantic metadata to the information flow when the content is created. This metadata is used to filter the information flow to provide the end users with a personalized news service. The personalization and delivery process is modeled as software agents, to whom the user delegates the task of sifting through incoming information. The key components of the SmartPush architecture have been implemented, and the focus in the project is shifting towards a pilot implementation and testing the ideas in practice.

48 citations
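A very small sketch of the filtering step: each incoming item carries editor-assigned semantic metadata, and an agent keeps only the items that overlap with the user's profile. The field names and scoring rule are invented, not SmartPush's actual model.

```python
# Hypothetical metadata-based filtering of a news stream against a user profile.
def score(item_metadata, profile):
    """Overlap between an item's editor-assigned topics and the user's interests."""
    return len(set(item_metadata["topics"]) & set(profile["interests"]))

def personalize(stream, profile, threshold=1):
    return [item for item in stream if score(item["metadata"], profile) >= threshold]

stream = [
    {"title": "Rate decision", "metadata": {"topics": ["economy", "banking"]}},
    {"title": "Cup final",     "metadata": {"topics": ["sports"]}},
]
profile = {"interests": ["economy", "technology"]}
print([i["title"] for i in personalize(stream, profile)])   # ['Rate decision']
```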


Journal ArticleDOI
TL;DR: A set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents to facilitate document retrieval by Internet search engines is defined.

47 citations


Patent
29 Dec 1999
TL;DR: In this paper, the authors present a system that retrieves metadata from a memory within a server, so that the server does not have to access a database in order to retrieve the metadata.
Abstract: One embodiment of the present invention provides a system that retrieves metadata from a memory within a server, so that the server does not have to access a database in order to retrieve the metadata. The system operates by receiving a request from a client, which causes an operation to be performed on data within the database. In response to the request, the system retrieves the metadata through a metadata object, which retrieves the metadata from a random access memory in the server. Note that this metadata specifies how the data is stored within the database. The system then performs the operation on the data within the database by using the metadata to determine how the data is stored within the database. Note that this metadata object can be used to service requests from a plurality of clients. Hence, client sessions can share the same metadata, which can greatly reduce the amount of memory used by client sessions. In one embodiment of the present invention, the metadata object contains static metadata specifying how tables and views are organized within the database. In one embodiment of the present invention, the system accesses the metadata object through a generic object on the server.
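The caching idea reads roughly like the sketch below: the metadata describing table and view layout is loaded once into server memory and the same object is handed to every client session. Class and function names are illustrative only.

```python
# Illustrative server-side metadata object shared across client sessions, so table/view
# layout is read from memory rather than fetched from the database on every request.
class MetadataObject:
    _instance = None                       # one shared copy in server memory

    def __init__(self, loader):
        self._layout = loader()            # how tables and views are organized

    @classmethod
    def get(cls, loader):
        if cls._instance is None:          # loaded once, then reused by every session
            cls._instance = cls(loader)
        return cls._instance

    def layout_of(self, table):
        return self._layout.get(table)

def load_layout_from_database():
    # Stand-in for the single (expensive) catalog query against the database.
    return {"orders": ["id", "customer_id", "total"]}

session_a = MetadataObject.get(load_layout_from_database)   # first session triggers the load
session_b = MetadataObject.get(load_layout_from_database)   # later sessions reuse the same object
print(session_a is session_b, session_a.layout_of("orders"))
```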

30 Aug 1999
TL;DR: The most important standards for representation and interchange of metadata, commercial products and research projects are presented and discussed for both, the general case and the particular case of data warehousing.
Abstract: This report gives an overview of metadata management in general (Part I) and of the role of metadata for data warehousing (Part II). Because of the complexity and extensive applicability of metadata, a compact, precise definition of the notion can hardly be provided. Therefore, we explain metadata by illustrating the use and the forms it may take within various application areas. In the case of data warehousing, we present a classification of metadata along certain dimensions and we discuss significant aspects of metadata management that have to be considered for the construction of a data warehouse system. Furthermore, this report provides a comprehensive survey and analysis of the state of the art of metadata management in industry and research. The most important standards for representation and interchange of metadata, commercial products and research projects are presented and discussed (as far as the available information allows) for both the general case and the particular case of data warehousing.

Proceedings ArticleDOI
01 Nov 1999
TL;DR: In this paper, the authors examine metadata and data-structure issues for the Historical Newspaper Digital Library and, in addition to frameworks for the logical structure and physical layout, propose metadata relevant to the image processing and to the historians who will use this collection.
Abstract: We examine metadata and data-structure issues for the Historical Newspaper Digital Library. This project proposes to digitize and then perform OCR and linguistic processing on several years' worth of historical newspapers. Newspapers are very complex information objects, so developing a rich description of their content is challenging. In addition to frameworks for the logical structure and physical layout, we propose metadata relevant to the image processing and to the historians who will use this collection. Finally, we consider how the metadata infrastructure might be managed as it evolves with improved text processing capabilities and how an infrastructure might be developed to support a community of users.

Patent
02 Feb 1999
TL;DR: In this paper, the authors describe a mechanism for associating metadata with network resources and for locating the network resources in a language-independent manner, including a natural language name of the network resource, its location, its language, its region or intended audience, and other descriptive information.
Abstract: Mechanisms for associating metadata with network resources, and for locating the network resources in a language-independent manner are disclosed. The metadata may include a natural language name of the network resource, its location, its language, its region or intended audience, and other descriptive information. The owners register the metadata in a registry (10). A copy of the metadata is stored on a server (60) associated with a group of the network resources and in a registry that is indexed at a central location (32). A crawler service (24) periodically updates the registry by polling the information on each server associated with registered metadata. To locate a selected network resource, a client (70) provides the name of the network resource to a resolver process. The resolver process provides to the client the network resource location corresponding to the network resource name. Multiple metadata mappings can be established for the same network resource.
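In outline, the registry/resolver split might look like the following: metadata records (including location, language, and region) are registered under a resource name, and a resolver maps a name back to the registered locations, allowing multiple mappings per name. The record fields and API are invented for illustration.

```python
# Toy registry/resolver: metadata records are registered under a resource name and a
# resolver returns the location(s) mapped to that name. Field names are illustrative.
class Registry:
    def __init__(self):
        self._records = {}                          # name -> list of metadata mappings

    def register(self, name, metadata):
        self._records.setdefault(name, []).append(metadata)

    def resolve(self, name):
        return [m["location"] for m in self._records.get(name, [])]

registry = Registry()
registry.register("weather service", {"location": "http://example.org/weather",
                                      "language": "en", "region": "UK"})
registry.register("weather service", {"location": "http://example.org/meteo",
                                      "language": "fr", "region": "FR"})
print(registry.resolve("weather service"))          # multiple mappings for one name
```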



Journal ArticleDOI
TL;DR: The context of the Australian SPIRT Recordkeeping Metadata Project, and the conceptual models developed by the SPIRT Research Team as a framework for standardising and defining recordkeeping metadata, are explored.
Abstract: In July 1999 the Australian Recordkeeping Metadata Schema (RKMS) was approved by its academic and industry steering group. The RKMS has inherited elements from and built on many other metadata standards associated with information management. It has also contributed to the development of subsequent sector-specific recordkeeping metadata sets. The importance of the RKMS as a framework for mapping or reading other sets, and also as a standardised set of metadata available for adoption in diverse implementation environments, is now emerging. This paper explores the context of the Australian SPIRT Recordkeeping Metadata Project, and the conceptual models developed by the SPIRT Research Team as a framework for standardising and defining recordkeeping metadata. It then introduces the elements of the SPIRT Recordkeeping Metadata Schema and explores its functionality, before discussing implementation issues and future directions.

11 Nov 1999
TL;DR: It turns out that an overall solution for managing all metadata in a central or federated repository is still missing regarding a global metadata schema as well as system aspects and interoperability among involved tools producing metadata.
Abstract: Metadata has been identified as a key success factor in data warehouse projects. It captures all kinds of information necessary to extract, transform and load data from source systems into the data warehouse, and afterwards to use and interpret the data warehouse contents. This paper gives an overview about the role metadata plays for data warehousing and reviews existing standards, commercial solutions and research actions relevant to metadata management. It turns out that an overall solution for managing all metadata in a central or federated repository is still missing regarding a global metadata schema as well as system aspects and interoperability among involved tools producing metadata. The divergence of proposed standards will probably prevent a breakthrough within the near future.

Proceedings ArticleDOI
19 Mar 1999
TL;DR: A Web-based software tool is developed that allows specialists with the requisite domain knowledge to populate the ontology independently and handles all the logistical issues of storage, maintenance and distribution.
Abstract: The objective of the Poseidon Coastal Zone Management System project is to develop a distributed data and software architecture to locate, retrieve, utilize, and visualize information about the coastal ocean environment. The article focuses on the issues related to efficiently identifying and creating the data needed for a given task. Since scientific data sets often do not contain information about the environment in which the data was obtained, we need a method of distinguishing data based on external information, or metadata. This metadata needs to be standardized in order to facilitate searching. While many metadata standards exist, none of them are adequate for representing all the information needed for coastal zone management. We have therefore implemented existing metadata standards in an expandable object oriented structure known as the Warwick Framework. Furthermore, since metadata is expensive and tedious to produce, we have developed a Web based software tool that simplifies the process and reduces the storage of redundant information. While we can search the metadata for the information we need, we still need a common vocabulary, or ontology, to ensure that we can identify data unambiguously. Since no existing vocabulary encapsulates all aspects of the ocean sciences and ocean systems management, we have facilitated the production of such a resource by creating a Web based tool that will allow specialists with the requisite domain knowledge to populate the ontology independently. The tool handles all the logistical issues of storage, maintenance and distribution.
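The container idea borrowed from the Warwick Framework can be sketched as one described object carrying several independent metadata packages, so that a new package type (say, instrument details) can be added without touching the others. The package names and fields below are invented.

```python
# Hypothetical Warwick-Framework-style container: one described object, several
# independent metadata packages that can be added or searched separately.
class MetadataContainer:
    def __init__(self, object_id):
        self.object_id = object_id
        self.packages = {}                          # package name -> dict of fields

    def add_package(self, name, fields):
        self.packages[name] = fields

    def search(self, field, value):
        """Names of packages whose given field matches the requested value."""
        return [n for n, p in self.packages.items() if p.get(field) == value]

container = MetadataContainer("ctd_cast_042")
container.add_package("discovery", {"title": "CTD cast, Gulf of Maine", "year": 1999})
container.add_package("instrument", {"sensor": "CTD", "depth_m": 120})
print(container.search("sensor", "CTD"))            # ['instrument']
```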

Journal ArticleDOI
TL;DR: A method for a scalable and decentralized system of interoperable digital libraries and data repositories is described, which includes a transportable metadata format, a persistent naming convention for arbitrary digital objects, and a protocol for the asynchronous distribution of metadata.

Proceedings Article
01 Oct 1999
TL;DR: A generic query tool that enables an end user to query a metadata store through filters that impose search criteria on attributes, specifically developed to query educational metadata stored in the ARIADNE Knowledge Pool System is discussed.
Abstract: This paper discusses a generic query tool that enables an end user to query a metadata store through filters that impose search criteria on attributes. The Metadata Query Tool (MQT) is generic in the sense that it dynamically creates its user interface, based on configuration files that define the metadata scheme and the query functionalities. Although, in principle, the tool can be used to query any relational database, it has been specifically developed to query educational metadata stored in the ARIADNE Knowledge Pool System. The first section of the paper is an introduction, followed by a section that describes the approach to filter-based queries. The third section explains the configurable aspect of MQT, both with respect to the metadata scheme, as well as with respect to the graphical user interface. Some details on the queries generated by MQT are presented in the fourth section. The fifth section covers some user interface aspects. The sixth section gives an overview of related research and tools. The current status is presented in the seventh section.
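The filter-to-query step could be sketched as follows: filters declared in a configuration are turned into a parameterized WHERE clause over whatever attributes the user actually set. The schema, filter definitions, and SQL shape are invented and are not MQT's actual configuration format.

```python
# Illustrative filter-based query generation from a configuration of attribute filters.
FILTERS = [
    {"attribute": "language",   "op": "="},
    {"attribute": "difficulty", "op": "<="},
]

def build_query(user_values, table="learning_object"):
    clauses, params = [], []
    for f in FILTERS:
        value = user_values.get(f["attribute"])
        if value is not None:                        # only filters the user actually set
            clauses.append(f"{f['attribute']} {f['op']} ?")
            params.append(value)
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT * FROM {table} WHERE {where}", params

print(build_query({"language": "en", "difficulty": 3}))
# ('SELECT * FROM learning_object WHERE language = ? AND difficulty <= ?', ['en', 3])
```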

Journal ArticleDOI
Michael Day1
01 Apr 1999-Vine
TL;DR: The UK Office for Library and Information Networking is engaged in a wide range of work in the area of metadata, in cooperation with various partners; its projects on metadata for Internet resource discovery, interoperability and digital preservation all point to the continuing need for something like traditional library services to organise, access and preserve networked information.
Abstract: The UK Office for Library and Information Networking is engaged in a wide range of work in the area of metadata, in cooperation with various partners. Projects on metadata for Internet resource discovery, interoperability and digital preservation all point to the continuing need for something like traditional library services to organise, access and preserve networked information.

01 Jan 1999
TL;DR: In this article, the authors present metadata management strategies that promote reuse and describe a repository-based vision for the development of interoperable systems, by gathering relevant metadata only once and representing it in ways that are amenable to reuse.
Abstract: Data integration is expensive, largely due to the difficulty of gathering relevant metadata. We observe that a database may participate in multiple integration efforts, and that much of the same metadata is required, regardless of which kind of effort is being undertaken. This presents a tremendous opportunity to reduce the long-term development and maintenance cost of interoperable systems, by gathering relevant metadata only once and representing it in ways that are amenable to reuse. We present metadata management strategies that promote reuse and describe a repository-based vision for the development of interoperable systems.

Journal ArticleDOI
01 Jun 1999
TL;DR: The version and workspace features of Microsoft Repository, a layer that implements fine-grained objects and relationships on top of Microsoft SQL Server, are described.
Abstract: This paper describes the version and workspace features of Microsoft Repository, a layer that implements fine-grained objects and relationships on top of Microsoft SQL Server. It supports branching and merging of versions, delta storage, checkout-checkin, and single-version views for version-unaware applications.
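As a rough sketch of checkout/checkin semantics over fine-grained objects (this is not Microsoft Repository's API, and it stores full copies where a real engine would store deltas):

```python
# Toy checkout/checkin versioning for a fine-grained repository object.
class VersionedObject:
    def __init__(self, initial):
        self.versions = [initial]                    # version history (full copies, not deltas)
        self.checked_out = None

    def checkout(self):
        self.checked_out = dict(self.versions[-1])   # private working copy
        return self.checked_out

    def checkin(self):
        self.versions.append(self.checked_out)       # becomes a new immutable version
        self.checked_out = None

    def view(self, version=-1):
        """Single-version view for version-unaware applications."""
        return self.versions[version]

obj = VersionedObject({"name": "Customer", "owner": "dw_team"})
work = obj.checkout()
work["owner"] = "integration_team"
obj.checkin()
print(obj.view(0)["owner"], obj.view()["owner"])     # dw_team integration_team
```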



Proceedings ArticleDOI
07 Nov 1999
TL;DR: The paper examines the research problems arising from the creation of a metadata propagation framework: a theory for inferring and reconciling metadata on views, rules for each property type and data derivation operator, efficient implementation, and ways to coordinate metadata administration across multiple schemas.
Abstract: Enterprise databases comprise multiple local databases that exchange information. The component databases will rarely have the same native form, so one must map between the native interfaces of the data suppliers and the recipients. An SQL view is a convenient and powerful way to define this map, because it provides not just an evaluation mechanism, but also query, and (to some degree) update and trigger capabilities. However, SQL views do not map the critical metadata (e.g., security, source attribution, and quality information) between data suppliers and their recipients. The paper examines the research problems arising from the creation of a metadata propagation framework: a theory for inferring and reconciling metadata on views, rules for each property type and data derivation operator, efficient implementation, and ways to coordinate metadata administration across multiple schemas.
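One way to picture the propagation problem is sketched below: column-level metadata (source attribution and a quality score) is reconciled across the inputs of a derived view column, with a rule chosen per derivation operator. The reconciliation rules here are invented placeholders, not the paper's theory.

```python
# Hypothetical propagation of column-level metadata through a view definition:
# each derivation operator gets a rule for reconciling its inputs' metadata.
base_metadata = {
    "orders.total":    {"source": "ERP", "quality": 0.95},
    "orders.discount": {"source": "CRM", "quality": 0.80},
}

def propagate(inputs, operator):
    """Reconcile input metadata for one derived column (illustrative rules only)."""
    sources = sorted({base_metadata[c]["source"] for c in inputs})
    quality = min(base_metadata[c]["quality"] for c in inputs)   # pessimistic rule for arithmetic
    return {"derived_by": operator, "sources": sources, "quality": quality}

# View column: net_total = orders.total - orders.discount
print(propagate(["orders.total", "orders.discount"], "subtract"))
# {'derived_by': 'subtract', 'sources': ['CRM', 'ERP'], 'quality': 0.8}
```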

Journal Article
TL;DR: Some of the issues that arise when migrating legacy systems are discussed and how repository technology can be used to address these issues are examined.
Abstract: Migrating legacy systems involves replacing (either wholly or partially) existing systems and databases, and complex transformations between old and new data, processes and systems. Correctly performing these activities depends on descriptions of data, and other aspects of the legacy and new systems, and the relationships between them, i.e., metadata. Metadata repositories provide tools for capturing, transforming, storing, and manipulating metadata. They can also store information for managing the migration process itself and for (re)use in other migrations. This paper discusses some of the issues that arise when migrating legacy systems and examines how repository technology can be used to address these issues.


Book
01 May 1999
TL;DR: A comprehensive statistical information system approach is proposed which thoroughly reengineers statistical information management.
Abstract: From the Publisher: By means of formal, computerized processing of information about data, metadata management enables, for the first time, a computer-aided integration of distributed heterogeneous statistical data in a semantically coherent way. In connection with a specifically designed semantic data model, a comprehensive statistical information system approach is proposed which, in fact, thoroughly reengineers statistical information management.