Author
Helmut Seidl
Other affiliations: Goethe University Frankfurt, Saarland University, University of Trier, …
Bio: Helmut Seidl is an academic researcher from Technische Universität München. The author has contributed to research in topics such as decidability and tree automata. The author has an h-index of 37, has co-authored 207 publications, and has received 4,410 citations. Previous affiliations of Helmut Seidl include Goethe University Frankfurt and Saarland University.
Papers published on a yearly basis
Papers
01 Jan 2004
TL;DR: In this paper, the authors apply linear algebra techniques to precise interprocedural dataflow analysis, and describe analyses that determine for each program point identities that are valid among the program variables whenever control reaches that program point.
Abstract: We apply linear algebra techniques to precise interprocedural dataflow analysis. Specifically, we describe analyses that determine for each program point identities that are valid among the program variables whenever control reaches that program point. Our analyses fully interpret assignment statements with affine expressions on the right hand side while considering other assignments as non-deterministic and ignoring conditions at branches. Under this abstraction, the analysis computes the set of all affine relations and, more generally, all polynomial relations of bounded degree precisely. The running time of our algorithms is linear in the program size and polynomial in the number of occurring variables. We also show how to deal with affine preconditions and local variables and indicate how to handle parameters and return values of procedures.
190 citations
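As a rough illustration of the linear-algebra idea behind such analyses (this is not the paper's interprocedural algorithm; the tiny program, the sampled states, and all helper names below are invented for the example), the affine relations that hold on a set of program states are exactly the null space of the matrix whose rows are the extended state vectors (1, x1, ..., xk) of those states:

```python
# Toy illustration of affine relations among program variables (not the
# paper's interprocedural algorithm).  All names here are made up.
from fractions import Fraction

def null_space(rows):
    """Basis of all coefficient vectors a with rows @ a = 0, computed by
    Gaussian elimination over exact rationals."""
    rows = [[Fraction(v) for v in r] for r in rows]
    n = len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]          # normalize pivot row
        for i in range(len(rows)):                           # eliminate the column
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):    # one vector per free column
        vec = [Fraction(0)] * n
        vec[free] = Fraction(1)
        for i, p in enumerate(pivots):
            vec[p] = -rows[i][free]
        basis.append(vec)
    return basis

# States (x, y) observed at a program point, e.g. after iterations of
# "x := x + 2; y := y + 1" started from (0, 0):
states = [(0, 0), (2, 1), (4, 2), (6, 3)]
rows = [(1,) + s for s in states]          # extended vectors (1, x, y)
for a0, ax, ay in null_space(rows):
    print(f"{a0} + {ax}*x + {ay}*y = 0")   # prints: 0 + -1/2*x + 1*y = 0, i.e. x = 2*y
```

The paper computes such relation spaces symbolically and precisely across procedure calls; the sketch above only recovers them from a handful of concrete states.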
TL;DR: For finite tree automata with weights in a field R, a polynomial time algorithm is presented for deciding ambiguity-equivalence, provided R-operations and R-tests for 0 can be performed in constant time.
Abstract: It is shown that for every constant m it can be decided in polynomial time whether or not two m-ambiguous finite tree automata are equivalent. In general, inequivalence for finite tree automata is DEXPTIME-complete with respect to logspace reductions, and PSPACE-complete with respect to logspace reductions, if the automata in question are supposed to accept only finite languages. For finite tree automata with weights in a field R, a polynomial time algorithm is presented for deciding ambiguity-equivalence, provided R-operations and R-tests for 0 can be performed in constant time. This result is used to construct an algorithm deciding ambiguity-inequivalence of finite tree automata in randomized polynomial time. Finally, for every constant m it is shown that it can be decided in polynomial time whether or not a given finite tree automaton is m-ambiguous.
176 citations
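To make the weighted setting concrete, here is a minimal sketch (states, symbols and weights invented for the example) of a bottom-up tree automaton with weights in the field of rationals: the weight of a tree is the sum, over all runs, of the product of the transition weights used, and two such automata are equivalent when they assign the same weight to every tree.

```python
# Minimal weighted tree automaton over the rationals (example data invented).
# Each tree is mapped to a vector indexed by states: component q is the sum,
# over all runs reaching q, of the product of the transition weights used.
from fractions import Fraction
from itertools import product

class WTA:
    def __init__(self, states, transitions, final_weights):
        self.states = states
        self.transitions = transitions      # (symbol, child_states, state) -> weight
        self.final = final_weights          # state -> weight

    def vector(self, tree):
        """Weight vector of `tree`, given as a pair (symbol, list_of_subtrees)."""
        sym, children = tree
        child_vecs = [self.vector(c) for c in children]
        vec = {q: Fraction(0) for q in self.states}
        for q in self.states:
            for child_states in product(self.states, repeat=len(children)):
                w = self.transitions.get((sym, child_states, q), Fraction(0))
                for v, p in zip(child_vecs, child_states):
                    w *= v[p]
                vec[q] += w
        return vec

    def weight(self, tree):
        v = self.vector(tree)
        return sum(self.final[q] * v[q] for q in self.states)

# Example: an automaton over {a, b, f(.,.)} whose weight counts the 'a' leaves.
A = WTA(
    states={"cnt", "any"},
    transitions={
        ("a", (), "cnt"): Fraction(1), ("a", (), "any"): Fraction(1),
        ("b", (), "cnt"): Fraction(0), ("b", (), "any"): Fraction(1),
        ("f", ("cnt", "any"), "cnt"): Fraction(1),
        ("f", ("any", "cnt"), "cnt"): Fraction(1),
        ("f", ("any", "any"), "any"): Fraction(1),
    },
    final_weights={"cnt": Fraction(1), "any": Fraction(0)},
)
t = ("f", [("a", []), ("f", [("a", []), ("b", [])])])
print(A.weight(t))   # 2  (two 'a' leaves)
```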
10 Jan 2007
TL;DR: This work presents a new adaptive type checking algorithm, based on forward type inference through exact characterizations of output languages, that correctly type-checks all call-by-value smtts.
Abstract: Stay macro tree transducers (smtts) are an expressive formalism for reasoning about XSLT-like document transformations. Here, we consider the exact type checking problem for smtts. While the problem is decidable, the involved technique of inverse type inference is known to have exponential worst-case complexity (already for top-down transformations without parameters). We present a new adaptive type checking algorithm based on forward type inference through exact characterizations of output languages. The new algorithm correctly type-checks all call-by-value smtts. Given that the output type is specified by a deterministic automaton, the algorithm is polynomial-time whenever the transducer uses only few parameters and visits every input node only constantly often. Our new approach can also be generalized from smtts to stay macro forest transducers which additionally support concatenation as built-in output operation.
141 citations
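Since the abstract assumes the output type is given by a deterministic automaton, the following sketch shows what such a type looks like operationally. The symbols, states and transitions are invented here (a deterministic bottom-up tree automaton accepting boolean expression trees that evaluate to true); the paper's contribution is to decide acceptance for all outputs of a transducer at once, not tree by tree.

```python
# A deterministic bottom-up tree automaton used as an "output type" (example
# data invented): it accepts boolean trees over and/or/true/false that
# evaluate to true.
ACCEPTING = {"q1"}

# delta[(symbol, child_states)] = next state (deterministic: at most one result)
DELTA = {
    ("true", ()): "q1", ("false", ()): "q0",
    ("and", ("q1", "q1")): "q1", ("and", ("q1", "q0")): "q0",
    ("and", ("q0", "q1")): "q0", ("and", ("q0", "q0")): "q0",
    ("or",  ("q1", "q1")): "q1", ("or",  ("q1", "q0")): "q1",
    ("or",  ("q0", "q1")): "q1", ("or",  ("q0", "q0")): "q0",
}

def run(tree):
    """Bottom-up run: a tree is (symbol, [subtrees]); returns the unique state."""
    sym, children = tree
    return DELTA[(sym, tuple(run(c) for c in children))]

def accepts(tree):
    return run(tree) in ACCEPTING

t = ("or", [("false", []), ("and", [("true", []), ("true", [])])])
print(accepts(t))   # True: the expression evaluates to true
```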
13 Jun 2005
TL;DR: It is proved that TL, and thus in particular DTL, still allows for effective inverse type inference despite its expressiveness; this result is obtained by means of a translation of TL programs into compositions of top-down finite-state tree transductions with parameters, also called (stay) macro tree transducers.
Abstract: MSO logic on unranked trees has been identified as a convenient theoretical framework for reasoning about expressiveness and implementations of practical XML query languages. As a corresponding theoretical foundation of XML transformation languages, the "transformation language" TL is proposed. This language is based on the "document transformation language" DTL of Maneth and Neven, which incorporates full MSO pattern matching, arbitrary navigation in the input tree using also MSO patterns, and named procedures. The new language generalizes DTL by additionally allowing procedures to accumulate intermediate results in parameters. It is proved that TL, and thus in particular DTL, still allows for effective inverse type inference despite its expressiveness. This result is obtained by means of a translation of TL programs into compositions of top-down finite-state tree transductions with parameters, also called (stay) macro tree transducers.
138 citations
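The accumulating parameters that TL adds to DTL can be illustrated with a small hand-written transformation in the style of a (stay) macro tree transducer; the encoding and names below are invented for the example and are not TL syntax.

```python
# A toy transformation in the style of a macro tree transducer (example
# encoding invented): lists are trees cons(head, tail) | nil, and `rev`
# carries the output built so far in an accumulating parameter `acc`.

def rev(tree, acc=("nil", [])):
    sym, children = tree
    if sym == "nil":                  # rule: rev(nil, y) -> y
        return acc
    head, tail = children             # rule: rev(cons(x, t), y) -> rev(t, cons(x, y))
    return rev(tail, ("cons", [head, acc]))

xs = ("cons", [("a", []), ("cons", [("b", []), ("cons", [("c", []), ("nil", [])])])])
print(rev(xs))
# ('cons', [('c', []), ('cons', [('b', []), ('cons', [('a', []), ('nil', [])])])])
```

Each rule of a macro tree transducer rewrites an input node into output in the same fashion, threading partial output trees through its parameters.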
Journal Article
TL;DR: In this article, an adaptive type checking algorithm based on forward type inference through exact characterizations of output languages is presented, which correctly type-checks all call-by-value stay macro tree transducers.
Abstract: Stay macro tree transducers (smtts) are an expressive formalism for reasoning about XSLT-like document transformations. Here, we consider the exact type checking problem for smtts. While the problem is decidable, the involved technique of inverse type inference is known to have exponential worst-case complexity (already for top-down transformations without parameters). We present a new adaptive type checking algorithm based on forward type inference through exact characterizations of output languages. The new algorithm correctly type-checks all call-by-value smtts. Given that the output type is specified by a deterministic automaton, the algorithm is polynomial-time whenever the transducer uses only few parameters and visits every input node only constantly often. Our new approach can also be generalized from smtts to stay macro forest transducers which additionally support concatenation as built-in output operation.
137 citations
Cited by
Book
22 Oct 1999
TL;DR: This book is unique in providing an overview of the four major approaches to program analysis: data flow analysis, constraint-based analysis, abstract interpretation, and type and effect systems.
Abstract: Program analysis utilizes static techniques for computing reliable information about the dynamic behavior of programs. Applications include compilers (for code improvement), software validation (for detecting errors) and transformations between data representations (for solving problems such as Y2K). This book is unique in providing an overview of the four major approaches to program analysis: data flow analysis, constraint-based analysis, abstract interpretation, and type and effect systems. The presentation illustrates the extensive similarities between the approaches, helping readers to choose the best one to utilize.
1,955 citations
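As a concrete taste of the first of those four approaches, the sketch below (control flow graph, node numbers and gen/kill sets are invented for the example) iterates a classic data flow analysis, live variables, to a fixed point:

```python
# Minimal data flow analysis sketch: live-variable analysis on a tiny,
# hand-made control flow graph (all data invented for the example).

# Program (informal):  1: x := 2;  2: y := x + 1;  3: if y > 0 goto 2;  4: ret y
succ = {1: [2], 2: [3], 3: [2, 4], 4: []}
gen  = {1: set(),  2: {"x"}, 3: {"y"}, 4: {"y"}}
kill = {1: {"x"},  2: {"y"}, 3: set(), 4: set()}

live_in  = {n: set() for n in succ}
live_out = {n: set() for n in succ}

changed = True
while changed:                        # round-robin iteration to a fixed point
    changed = False
    for n in sorted(succ, reverse=True):
        out = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
        inn = gen[n] | (out - kill[n])
        if out != live_out[n] or inn != live_in[n]:
            live_out[n], live_in[n] = out, inn
            changed = True

for n in sorted(succ):
    print(n, sorted(live_in[n]), sorted(live_out[n]))
# e.g. live_in[2] == {'x'}: x is live on entry to node 2
```

The book's monotone-framework treatment generalizes exactly this pattern: a lattice of facts, monotone transfer functions, and fixed-point iteration.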
Book
01 Jan 1997
TL;DR: The goal of this book is to provide a textbook which presents the basics of tree automata and several variants of tree automata that have been devised for applications in the aforementioned domains.
Abstract: During the past few years, several of us have been asked many times about references on finite tree automata. On the one hand, this is a witness of the liveliness of this field. On the other hand, it was difficult to answer. Besides several excellent survey chapters on more specific topics, there is only one monograph devoted to tree automata, by Gécseg and Steinby. Unfortunately, it is now impossible to find a copy of it, and a lot of work has been done on tree automata since its publication. Using tree automata has actually proved to be a powerful approach to simplify and extend previously known results, and also to find new ones. For instance, recent works use tree automata for applications in abstract interpretation using set constraints, rewriting, automated theorem proving and program verification, databases and XML schema languages. Tree automata were designed a long time ago in the context of circuit verification. Many famous researchers contributed to this school, which was headed by A. Church in the late 50's and early 60's: B. Trakhtenbrot and others. Many new ideas came out of this program, for instance the connections between automata and logic. Tree automata also first appeared in this framework, following the work of Doner, Thatcher and Wright. In the 70's many new results were established concerning tree automata, which lost a bit of their connection with the applications and were studied in their own right. In particular, a problem was the very high complexity of decision procedures for monadic second-order logic. Applications of tree automata to program verification revived in the 80's, after the relative failure of automated deduction in this field. It is possible to verify temporal logic formulas (which are particular monadic second-order formulas) on simpler (small) programs. Automata, and in particular tree automata, also appeared as an approximation of programs on which fully automated tools can be used. New results were obtained connecting properties of programs, type systems or rewrite systems with automata. Our goal is to fill the existing gap and to provide a textbook which presents the basics of tree automata and several variants of tree automata which have been devised for applications in the aforementioned domains. We shall discuss only finite tree automata, and the …
1,492 citations
22 May 2016
TL;DR: This paper presents a binary analysis framework that implements a number of previously proposed analysis techniques in a unifying framework, which allows other researchers to compose them and develop new approaches.
Abstract: Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past. We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these approaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community.
758 citations
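The framework described here was released to the community as the open-source angr project; the sketch below is a hedged usage example (the binary path and both addresses are placeholders, and the calls assume angr's public Python API) combining static CFG recovery with symbolic exploration:

```python
# Hedged usage sketch of the open-sourced framework (angr).  The binary path
# and the addresses below are placeholders; adjust them to a real target.
import angr

proj = angr.Project("./target_binary", auto_load_libs=False)

# Static analysis: recover a control-flow graph of the binary.
cfg = proj.analyses.CFGFast()
print("recovered %d basic blocks" % len(cfg.graph.nodes()))

# Dynamic symbolic execution: search for a path reaching a chosen address
# while avoiding another (both addresses are made up for this example).
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)
simgr.explore(find=0x400A00, avoid=0x400B00)

if simgr.found:
    # Concretize the stdin bytes that drive execution to the target.
    print(simgr.found[0].posix.dumps(0))
```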