Author

Lothar Schmitz

Bio: Lothar Schmitz is an academic researcher from Bundeswehr University Munich. The author has contributed to research in topics including document management systems and data management. The author has an h-index of 6 and has co-authored 12 publications receiving 135 citations.

Papers
Book
07 Jul 2006
TL;DR: This book surveys the main technical approaches to the long-term preservation of digital documents, chiefly migration, emulation, and document markup, and reviews recent preservation initiatives such as the VERS project.
Abstract: Table of Contents (new sections and chapters with respect to the German version of this book are printed in bold face)

Part I: Approaches to Long-Term Preservation (approx. 155 pages)

1 Long-term Preservation of Digital Documents (approx. 23 pages)
  1.1 Blessing and Curse of Digital Documents
  1.2 Challenges, Terms, Concepts
  1.3 Preserving Byte Streams
  1.4 Technical Approaches to Long-term Preservation
  1.5 Legal and Social Issues
2 The OAIS Reference Model and the DSEP Process Model (approx. 13 pages)
  2.1 The OAIS Reference Model
    2.1.1 Background Information
    2.1.2 Information Model
    2.1.3 Process Model
  2.2 The DSEP Process Model for Libraries
3 Migration (approx. 23 pages)
  3.1 Migration: Notions and Goals
  3.2 Migration as a Means for Long-term Preservation
    3.2.1 Data Formats as Migration Targets
    3.2.2 Migration via Changing Media
    3.2.3 Migration via Changing Logical Structure
  3.3 Preservation Processes in Migration Approaches
  3.4 Migration: Pros and Cons
4 Emulation (approx. 27 pages)
  4.1 Emulation: Notions and Goals
  4.2 Emulation as a Means for Long-term Preservation
    4.2.1 What Exactly Does Emulation Mean?
    4.2.2 Variants of Emulation
    4.2.3 Exploiting Virtual Machines
  4.3 Preservation Processes in Emulation Approaches
  4.4 Emulation: Pros and Cons
5 Document Markup (approx. 25 pages)
  5.1 An Example
  5.2 Different Forms of Markup
    5.2.1 Procedural, Structural, Semantic Markup
    5.2.2 Embedded Markup Considered Harmful
    5.2.3 Levels of Markup
  5.3 Exploiting Markup for Long-term Preservation
    5.3.1 Requirements for Long-term Preservation
    5.3.2 Bibliographic Requirements
  5.4 Persistence is a Virtue
    5.4.1 Uniform Resource Identifier, -Name, -Locator
    5.4.2 Referencing Documents
    5.4.3 Handles and Digital Object Identifiers
    5.4.4 Summary
6 Standard Markup Languages (approx. 26 pages)
  6.1 Standards for Syntactic Document Markup
    6.1.1 Tagged Image File Format (TIFF)
    6.1.2 Portable Document Format (PDF)
    6.1.3 HyperText Markup Language (HTML)
    6.1.4 eXtensible Markup Language (XML)
  6.2 Standards for Semantic Document Markup
    6.2.1 Resource Description Framework (RDF)
    6.2.2 Topic Maps
    6.2.3 Ontologies
  6.3 Vision: The Semantic Web
7 Discussion (approx. 11 pages)
  7.1 Why Do You Need to Act NOW?
  7.2 What Do We Know Already, What Remains to Be Done?
  7.3 Facing Reality
  7.4 A Combined Approach

Part II: Recent Preservation Initiatives (approx. 157 pages; projects are subject to change)

8 Markup: Current Research and Development (approx. 50 pages)
  8.1 The Dublin Core Metadata Initiative (DCMI)
  8.2 The Metadata Encoding and Transmission Standard (METS)
  8.3 The Victorian Electronic Records Strategy (VERS)
  8.4 The Text Encoding Initiative (TEI)
  8.5 The Research Libraries Group (RLG)
  8.6 The Pandora Project
9 Migration: Current Research and Development (approx. 50 pages)
  9.1 Migration in the VERS Project
  9.2 Preserving the Whole
  9.3 Risk Management of Digital Information
  9.4 Database Migration
    9.4.1 Motivation
    9.4.2 Overview of the Architecture
    9.4.3 Detaching Digital Objects from Physical Media
    9.4.4 Services of Database Management Systems
    9.4.5 Experiments
    9.4.6 Discussion
10 Emulation: Current Research and Development (approx. 16 pages)
  10.1 Emulation Experiments by Rothenberg …

58 citations

Proceedings ArticleDOI
28 Oct 2004
TL;DR: This paper proposes the liberal use of formal consistency rules that permit inconsistencies, and derives repairs from DAGs rather than directly from documents, so the repository is locked only during DAG generation, which is performed incrementally.
Abstract: Whenever a group of authors collaboratively edits interrelated documents semantic consistency is a major goal. Current document management systems (DMS) lack adequate consistency management facilities. We propose liberal use of formal consistency rules which permits inconsistencies. In this paper we focus on deriving repairs for inconsistencies. Our major contributions are: (1) deriving (common) repairs for multiple rules (2) resolving conflicts between repairs (3)prioritizing repairs and (4) support for partial inconsistency resolution which resolves the most troubling inconsistencies and leaves less important inconsistencies for a later handling. The novel aspect of our approach is that we derive repairs from DAGs (directed acyclic graphs) and not from documents directly. That way the repository is locked during DAG generation only which is performed incrementally.

27 citations

Journal ArticleDOI
TL;DR: This paper presents Salespoint, a Java-based framework for creating business applications that has been jointly developed and maintained in Dresden and Munich since 1997, giving a technical overview of its architecture, an example application built with Salespoint, and some lessons learned so far.

14 citations

Proceedings ArticleDOI
20 Nov 2003
TL;DR: In this article, the authors propose to use explicit formal consistency rules for heterogeneous repositories that are managed by traditional document management systems (DMS) to improve consistency in document engineering processes.
Abstract: When a group of authors collaboratively edits interrelated documents, consistency problems occur almost immediately. Current document management systems (DMS) provide useful mechanisms such as document locking and version control, but often lack consistency management facilities. If consistency is "defined" at all, it is via informal guidelines, which do not support automatic consistency checks. In this paper, we propose to use explicit formal consistency rules for heterogeneous repositories that are managed by traditional DMS. Rules are formalized in a variant of first-order temporal logic. Functions and predicates, implemented in a full programming language, provide complex (even higher-order) functionality. A static type system supports rule formalization, where types also define (formal) document models. In the presence of types, the challenge is to smoothly combine a first-order logic with a useful type system including subtyping. In implementing a tolerant view of consistency, we do not expect that repositories satisfy consistency rules. Instead, a novel semantics precisely pinpoints inconsistent document parts and indicates when, where, and why a repository is inconsistent. Our major contributions are (1) the use of explicit formal rules giving a precise (and still comprehensible) notion of consistency, (2) a static type system securing the formalization process, (3) a novel semantics pinpointing inconsistent document (parts) precisely, and (4) a design of how to automatically check consistency for document engineering projects that use existing DMS. We have implemented a prototype of a consistency checker. Applied to real-world content, it shows that our contributions can significantly improve consistency in document engineering processes.
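As a rough illustration of the tolerant semantics described above, the following Java sketch (hypothetical names; the paper formalizes rules in a typed first-order temporal logic, not in Java) evaluates each rule against each document and collects violations instead of returning a single true/false verdict, so the output pinpoints where and why the repository is inconsistent:

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal sketch of a "tolerant" consistency check: names (Rule,
// Violation, checkAll) are illustrative, not the paper's API.
public class TolerantConsistencySketch {

    record Doc(String path, String title, int version) {}

    // A named rule over one document, with a human-readable reason.
    record Rule(String name, Predicate<Doc> holds, String reason) {}

    record Violation(String rule, String where, String why) {}

    // Evaluate every rule against every document; collect violations
    // instead of failing fast (the repository may be inconsistent).
    static List<Violation> checkAll(List<Doc> repo, List<Rule> rules) {
        List<Violation> out = new ArrayList<>();
        for (Rule r : rules)
            for (Doc d : repo)
                if (!r.holds().test(d))
                    out.add(new Violation(r.name(), d.path(), r.reason()));
        return out;
    }

    public static void main(String[] args) {
        List<Doc> repo = List.of(
            new Doc("spec/intro.xml", "", 3),
            new Doc("spec/design.xml", "Design", 2));
        List<Rule> rules = List.of(
            new Rule("nonEmptyTitle", d -> !d.title().isEmpty(),
                     "every document needs a title"));
        // Prints exactly where and why the repository is inconsistent.
        checkAll(repo, rules).forEach(System.out::println);
    }
}
```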

13 citations

Journal ArticleDOI
TL;DR: Jaccie is a compiler-compiler that generates the scanner, parser, and attribute evaluator components of a compiler and presents them in a visual debugging environment. It offers a number of alternative parser generators, producing both top-down parsers and bottom-up parsers of the LR variety, thus allowing users to experiment with different parsing strategies and to get a "feel" for their relative pros and cons.
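Jaccie's own sources are not reproduced here, but the top-down strategy that such parser generators produce is easy to illustrate. The following hand-written recursive-descent parser (purely illustrative, not Jaccie output) handles the toy grammar E -> NUM ('+' NUM)*, mapping each nonterminal to one method, which is roughly the structure a top-down parser generator emits:

```java
// Illustration of top-down (recursive-descent) parsing for the
// grammar  E -> NUM ('+' NUM)* ; not Jaccie's generated code.
public class TopDownSketch {

    private final String input;
    private int pos = 0;

    TopDownSketch(String input) { this.input = input.replace(" ", ""); }

    // E -> NUM ('+' NUM)*   -- each nonterminal becomes one method.
    int parseExpr() {
        int value = parseNum();
        while (pos < input.length() && input.charAt(pos) == '+') {
            pos++;                       // consume '+'
            value += parseNum();
        }
        return value;
    }

    // NUM -> digit+
    int parseNum() {
        int start = pos;
        while (pos < input.length() && Character.isDigit(input.charAt(pos)))
            pos++;
        if (start == pos)
            throw new IllegalStateException("digit expected at " + pos);
        return Integer.parseInt(input.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new TopDownSketch("1 + 2 + 39").parseExpr()); // 42
    }
}
```

A bottom-up (LR) parser for the same grammar would instead shift tokens onto a stack and reduce them by grammar rules, which is the trade-off Jaccie lets users explore interactively.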

9 citations


Cited by
Journal ArticleDOI
TL;DR: This article investigates digital preservation and its foundational role in ensuring that digital libraries have long-term viability at the center of the global information society.
Abstract: Digital libraries, whether commercial, public, or personal, lie at the heart of the information society. Yet, research into their long-term viability and the meaningful accessibility of their contents remains in its infancy. In general, as we have pointed out elsewhere, "after more than twenty years of research in digital curation and preservation the actual theories, methods and technologies that can either foster or ensure digital longevity remain startlingly limited." Research led by DigitalPreservationEurope (DPE) and the Digital Preservation Cluster of DELOS has allowed us to refine the key research challenges (theoretical, methodological, and technological) that need attention by researchers in digital libraries during the coming five to ten years, if we are to ensure that the materials held in our emerging digital libraries remain sustainable, authentic, accessible, and understandable over time. Building on this work and taking the theoretical framework of archival science as bedrock, this article investigates digital preservation and its foundational role if digital libraries are to have long-term viability at the center of the global information society.

107 citations

Proceedings ArticleDOI
02 Jun 2012
TL;DR: This paper proposes a novel concept for software configuration, the range fix, which specifies the options to change and the ranges of values for these options, and presents an algorithm that automatically generates range fixes for a violated constraint.
Abstract: To prevent ill-formed configurations, highly configurable software often allows defining constraints over the available options. As these constraints can be complex, fixing a configuration that violates one or more constraints can be challenging. Although several fix-generation approaches exist, their applicability is limited because (1) they typically generate only one fix, failing to cover the solution that the user wants; and (2) they do not fully support non-Boolean constraints, which contain arithmetic, inequality, and string operators. This paper proposes a novel concept, range fix, for software configuration. A range fix specifies the options to change and the ranges of values for these options. We also design an algorithm that automatically generates range fixes for a violated constraint. We have evaluated our approach with three different strategies for handling constraint interactions, on data from five open source projects. Our evaluation shows that, even with the most complex strategy, our approach generates complete fix lists that are mostly short and concise, in a fraction of a second.
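The flavor of a range fix is easy to convey with a toy example. The sketch below (hypothetical names; the paper's algorithm handles general arithmetic, inequality, and string constraints and constraint interactions, not just this case) takes a single violated constraint of the form x + y <= bound and, for each option in turn, computes the range of values that would restore satisfaction while the other options keep their current values:

```java
import java.util.*;

// Illustrative sketch of the "range fix" idea for one naive linear
// constraint (sum of options <= bound); names are hypothetical.
public class RangeFixSketch {

    record RangeFix(String option, int maxAllowed) {
        @Override public String toString() {
            return "set " + option + " to any value <= " + maxAllowed;
        }
    }

    // For each option in turn, compute the range satisfying the
    // constraint while all other options keep their current values.
    static List<RangeFix> fixes(Map<String, Integer> config, int bound) {
        int total = config.values().stream().mapToInt(Integer::intValue).sum();
        List<RangeFix> out = new ArrayList<>();
        if (total <= bound) return out;            // already satisfied
        for (var e : config.entrySet())
            out.add(new RangeFix(e.getKey(), bound - (total - e.getValue())));
        return out;
    }

    public static void main(String[] args) {
        // x=7, y=6 violates x + y <= 10; two alternative range fixes result.
        Map<String, Integer> config = Map.of("x", 7, "y", 6);
        fixes(config, 10).forEach(System.out::println);
    }
}
```

For x = 7 and y = 6 this yields the two alternatives "set x to any value <= 4" and "set y to any value <= 3", matching the intuition that a range fix names the options to change and their admissible ranges rather than a single prescribed value.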

87 citations

Journal ArticleDOI
TL;DR: This paper argues for a unified view of space and time that combines spatial, temporal, and thematic components for knowledge organization, representation, and reasoning on the Semantic Web.
Abstract: Space and time have not received much attention on the Semantic Web so far. While their importance has been recognized recently, existing work reduces them to simple latitude-longitude pairs and time stamps. In contrast, we argue that space and time are fundamental ordering relations for knowledge organization, representation, and reasoning. While most research on Semantic Web reasoning has focused on thematic aspects, this paper argues for a unified view combining a spatial, temporal, and thematic component. Besides their impact on the representation of and reasoning about individuals and classes, we outline the role of space and time for ontology modularization, evolution, and the handling of vague and contradictory knowledge. Instead of proposing yet another specific methodology, the presented work illustrates the relevance of space and time using various examples from the geo-sciences.

64 citations

Proceedings ArticleDOI
03 Sep 2012
TL;DR: This paper copes with the large number of repairs by focusing on what caused an inconsistency and presenting repairs as a linearly growing repair tree; the approach is shown to compute repair trees in milliseconds on average.
Abstract: Resolving inconsistencies in software models is a complex task because the number of repairs grows exponentially. Existing approaches thus emphasize selected repairs only, but doing so diminishes their usefulness. This paper copes with the large number of repairs by focusing on what caused an inconsistency and presenting repairs as a linearly growing repair tree. The cause is computed by examining the run-time evaluation of the inconsistency to understand where and why it failed. The individual changes that make up repairs are then modeled in a repair tree as alternatives and sequences reflecting the syntactic structure of the inconsistent design rule. The approach is automated and tool supported. Its scalability was empirically evaluated on 29 UML models and 18 OCL design rules, where we show that the approach computes repair trees in milliseconds on average. We believe that the approach is applicable to arbitrary modeling and constraint languages.
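A repair tree of this kind is naturally modeled as alternatives and sequences over atomic changes. The Java sketch below (illustrative class names, not the authors' implementation) mirrors a conjunctive design rule as a sequence node and a disjunctive one as an alternative node, so the tree grows with the rule's syntax rather than with the exponential set of flat repairs:

```java
import java.util.*;

// Sketch of a repair tree: alternatives and sequences of atomic changes,
// mirroring the syntactic structure of the violated design rule.
// Class names are illustrative, not the paper's implementation.
public class RepairTreeSketch {

    sealed interface RepairNode permits Change, Seq, Alt {}

    // An atomic change, e.g. "set message.name".
    record Change(String description) implements RepairNode {}

    // All children must be applied (rule was a conjunction: A and B).
    record Seq(List<RepairNode> children) implements RepairNode {}

    // Any one child suffices (rule was a disjunction: A or B).
    record Alt(List<RepairNode> children) implements RepairNode {}

    static void print(RepairNode n, String indent) {
        switch (n) {
            case Change c -> System.out.println(indent + "- " + c.description());
            case Seq s -> {
                System.out.println(indent + "ALL OF:");
                s.children().forEach(c -> print(c, indent + "  "));
            }
            case Alt a -> {
                System.out.println(indent + "ONE OF:");
                a.children().forEach(c -> print(c, indent + "  "));
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical violated UML-style rule: the message has a name
        // AND it maps to an operation OR the receiver declares one.
        RepairNode tree = new Seq(List.of(
            new Change("set message.name to a non-empty string"),
            new Alt(List.of(
                new Change("map the message to an existing operation"),
                new Change("add a matching operation to the receiver")))));
        print(tree, "");
    }
}
```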

64 citations