Author

Viktor Schuppan

Bio: Viktor Schuppan is an academic researcher. The author has contributed to research in the topics of Conjunctive normal form and Constraint programming. The author has an h-index of 1 and has co-authored 1 publication receiving 36 citations.

Papers
Journal ArticleDOI
TL;DR: An investigation of notions of unsatisfiable cores for LTL that arise from the syntax tree of an LTL formula, from its conversion into a conjunctive normal form, and from proofs of its unsatisfiability; these notions are more fine-grained than existing ones.

40 citations


Cited by
Book ChapterDOI
11 Oct 2011
TL;DR: It turns out that even combining two solvers in a simple fashion significantly increases the share of solved instances while reducing CPU time spent, and no solver dominates or solves all instances.
Abstract: We perform a comprehensive experimental evaluation of off-the-shelf solvers for satisfiability of propositional LTL. We consider a wide range of solvers implementing three major classes of algorithms: reduction to model checking, tableau-based approaches, and temporal resolution. Our set of benchmark families is significantly more comprehensive than those in previous studies. It takes the benchmark families of previous studies, which only have a limited overlap, and adds benchmark families not used for that purpose before. We find that no solver dominates or solves all instances. Solvers focused on finding models and solvers using temporal resolution or fixed point computation show complementary strengths and weaknesses. This motivates and guides estimation of the potential of a portfolio solver. It turns out that even combining two solvers in a simple fashion significantly increases the share of solved instances while reducing CPU time spent.

58 citations
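The "simple fashion" of combining solvers can be illustrated with a minimal sequential-portfolio sketch. The instance names and per-solver verdicts below are invented for illustration, not taken from the paper's data; "unknown" stands for a timeout.

```python
# Hypothetical per-instance verdicts for two solvers on a tiny
# benchmark set; "unknown" marks a timeout. All names and results
# are invented for illustration.
tableau = {"phi1": "sat", "phi2": "unknown", "phi3": "sat"}
resolution = {"phi1": "unknown", "phi2": "unsat", "phi3": "sat"}

def portfolio(instance, solvers):
    """Sequential portfolio: try each solver until one returns a verdict."""
    for results in solvers:
        verdict = results.get(instance, "unknown")
        if verdict != "unknown":
            return verdict
    return "unknown"
```

Here neither solver alone decides every instance, but the two-solver portfolio does, mirroring the complementary strengths the evaluation reports.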

Book ChapterDOI
12 Oct 2015
TL;DR: An inductive procedure for finding temporal implicants is obtained by the introduction of selection functions that appear in a process equivalent to Skolemization in first-order logic; the procedure is able to generate concise implicants of a property, describing a small fragment of the input signal that causes the violation of a formula.
Abstract: Runtime verification and model checking are two important methods for assessing correctness of systems. In both techniques, detecting an error is witnessed by an execution that violates the system specification. However, a faulty execution on its own may not provide sufficiently precise insight into the causes of the reported violation. Additional, often manual, effort is required to properly diagnose the system. In this paper we present a method for analyzing such causes. The specifications we consider are expressed in LTL (Linear Temporal Logic) and MTL (Metric Temporal Logic), and the execution models are taken as ultimately periodic words and finite-variability continuous signals, respectively. The diagnostics problem is defined for the propositional case as the search for a small implicant of a formula which is satisfied by a given valuation, or equivalently a subset of that valuation sufficient to render the formula true. We propose a suitable notion of implicants for the temporal case that is semantically based on signal subsets and guarantees the existence of prime implicants for arbitrary temporal properties. An inductive procedure for finding temporal implicants is obtained by the introduction of selection functions that appear in a process equivalent to Skolemization in first-order logic. Through the model restrictions we impose for LTL and MTL, we are able to generate concise implicants of a property, describing a small fragment of the input signal that causes the violation of a formula.

38 citations
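The propositional core of the diagnostics problem — a subset of a satisfying valuation that already forces the formula true — can be sketched by greedy literal deletion. This is a generic textbook-style procedure, not the paper's inductive, selection-function-based algorithm; the formula and valuation are invented for illustration.

```python
from itertools import product

def forces(f, partial, variables):
    """True iff every total valuation extending `partial` satisfies f."""
    free = [x for x in variables if x not in partial]
    return all(f({**partial, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def prime_implicant(f, valuation):
    """Greedily drop variables from a satisfying valuation until no
    further variable can be dropped; the result is a prime implicant."""
    partial = dict(valuation)
    for var in list(partial):
        trial = {x: b for x, b in partial.items() if x != var}
        if forces(f, trial, list(valuation)):
            partial = trial
    return partial

# Toy formula: a or (b and c). From the satisfying valuation
# {a: True, b: True, c: False}, the literal a alone forces the formula.
f = lambda v: v["a"] or (v["b"] and v["c"])
```

Greedy deletion yields *a* prime (i.e. irreducible) implicant, not necessarily a smallest one; the paper's contribution is lifting this notion to temporal properties over signals.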

Proceedings Article
03 Aug 2013
TL;DR: This paper proposes a scenario-based diagnosis at a specification's operator level using weak or strong fault models, and shows how to achieve this effectively for specifications in LTL by drawing on efficient SAT encodings.
Abstract: Product defects and rework efforts due to flawed specifications represent major issues for a project's performance, so that there is a high motivation for providing effective means that assist designers in assessing and ensuring a specification's quality. Recent research in the context of formal specifications, e.g. on coverage and vacuity, offers important means to tackle related issues. In the currently underrepresented research direction of diagnostic reasoning on a specification, we propose a scenario-based diagnosis at a specification's operator level using weak or strong fault models. Drawing on efficient SAT encodings, we show in this paper how to achieve that effectively for specifications in LTL. Our experimental results illustrate our approach's validity and attractiveness.

30 citations
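The operator-level, scenario-based idea can be illustrated on a propositional toy. Purely as an assumption for this sketch, specs are binary and/or trees, the fault model is "a single operator was swapped", and a scenario pairs a valuation with the verdict the designer expected; the paper's SAT encodings and LTL operators are not modelled here.

```python
# Specs as nested tuples over "and" / "or" / "var"; a scenario pairs
# a valuation with the verdict the designer expected to observe.
def eval_tree(node, valuation):
    if node[0] == "var":
        return valuation[node[1]]
    a, b = (eval_tree(c, valuation) for c in node[1:])
    return a and b if node[0] == "and" else a or b

def flip(node, path):
    """Copy of `node` with the and/or operator at `path` swapped."""
    if not path:
        return ("or" if node[0] == "and" else "and",) + node[1:]
    i = path[0]
    return node[:i] + (flip(node[i], path[1:]),) + node[i + 1:]

def operator_paths(node, prefix=()):
    if node[0] in ("and", "or"):
        yield prefix
        for i, child in enumerate(node[1:], start=1):
            yield from operator_paths(child, prefix + (i,))

def diagnose(spec, scenarios):
    """Single-fault diagnoses: operator positions whose swap explains
    every scenario."""
    return [p for p in operator_paths(spec)
            if all(eval_tree(flip(spec, p), v) == expected
                   for v, expected in scenarios)]
```

For the spec `a and b` and a scenario where the designer expected `a=True, b=False` to satisfy the spec, the only single-fault diagnosis is the root operator (it should have been `or`).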

Journal ArticleDOI
TL;DR: This work proposes new sanity checking techniques that automatically detect flaws and suggest improvements of given requirements and describes a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate.
Abstract: In the last decade it became a common practice to formalise software requirements to improve the clarity of users' expectations. In this work we build on the fact that functional requirements can be expressed in temporal logic and we propose new sanity checking techniques that automatically detect flaws and suggest improvements of given requirements. Specifically, we describe and experimentally evaluate approaches to consistency and redundancy checking that identify all inconsistencies and pinpoint their exact source (the smallest inconsistent set). We further report on the experience obtained from employing the consistency and redundancy checking in an industrial environment. To complete the sanity checking we also describe a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate. The usefulness of our completeness evaluation is demonstrated in a case study of an aeroplane control system.

24 citations
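Pinpointing the source of an inconsistency as a minimal inconsistent set can be sketched with the standard deletion-based procedure. As a deliberate simplification, requirements are modelled here as predicates over a small finite set of candidate models rather than temporal formulas; all names are invented for illustration.

```python
from itertools import product

# Candidate models: all valuations of two atoms (a tiny stand-in for
# the temporal models of real requirements).
models = [dict(zip("ab", bits)) for bits in product([False, True], repeat=2)]

# Three toy "requirements" as predicates over a model; req_a and
# req_not_a contradict each other, req_b is innocent.
reqs = {
    "req_a": lambda m: m["a"],
    "req_not_a": lambda m: not m["a"],
    "req_b": lambda m: m["b"],
}

def consistent(names):
    """True iff some candidate model satisfies all named requirements."""
    return any(all(reqs[n](m) for n in names) for m in models)

def minimal_inconsistent(names):
    """Deletion-based extraction of a minimal inconsistent subset:
    drop a requirement whenever the rest is still inconsistent."""
    core = list(names)
    for n in list(core):
        rest = [x for x in core if x != n]
        if not consistent(rest):
            core = rest
    return core
```

On the toy set, the procedure discards `req_b` and returns exactly the two contradicting requirements, which is the kind of pinpointing the paper performs with real consistency checkers on temporal-logic requirements.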

Proceedings ArticleDOI
30 Oct 2011
TL;DR: In this article, the authors propose an approach to prove that a message sequence chart (MSC) cannot be satisfied by any trace of a given hybrid automata network, and explain why an MSC is unfeasible.
Abstract: Networks of Hybrid Automata (HA) are a clean modelling framework for complex systems with discrete and continuous dynamics. Message Sequence Charts (MSCs) are a consolidated language to describe desired behaviors of a network of interacting components. Techniques to analyze the feasibility of an MSC over a given HA network are based on specialized bounded model checking techniques, and focus on efficiently constructing traces of the network that witness the MSC behavior. Unfortunately, these techniques are unable to deal with the “unfeasibility” of the MSC, i.e., that no trace of the network satisfies the MSC. In this paper, we tackle the problem of MSC unfeasibility: first, we propose specialized techniques to prove that an MSC cannot be satisfied by any trace of a given HA network; second, we show how to explain why an MSC is unfeasible. The approach is cast in an SMT-based verification framework, using a local time semantics, where the timescales of the automata in the network are synchronized upon shared events. In order to prove unfeasibility, we generalize k-induction to deal with the structure of the MSC, so that the simple path condition is localized to each fragment of the MSC. The explanations are provided as formulas in the variables representing the time points of the events of the MSCs, and are generated using unsatisfiable core extraction and interpolation. An experimental evaluation demonstrates the effectiveness of the approach in proving unfeasibility, and the adequacy of the automatically generated explanations.

22 citations
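Plain k-induction with the simple-path condition — the starting point that the paper generalizes to MSC fragments and lifts to SMT — can be sketched explicit-state on a toy finite system. The transition system below is invented for illustration; real tools check the base and step cases symbolically with a solver instead of enumerating paths.

```python
# Toy four-state transition system, invented for illustration: states
# 2 and 3 are unreachable, and the property "never in state 3" is not
# 1-inductive (the transition 2 -> 3 breaks the step case) but is
# 2-inductive once the unreachable predecessor is ruled out.
INIT = {0}
STATES = {0, 1, 2, 3}
SUCC = {0: {1}, 1: {0}, 2: {3}, 3: {2}}

def prop(s):
    return s != 3

def paths(length, starts):
    """All state sequences of `length` states starting in `starts`."""
    if length == 1:
        return [[s] for s in starts]
    return [p + [t] for p in paths(length - 1, starts) for t in SUCC[p[-1]]]

def k_induction(k):
    # Base case: prop holds on the first k states of every initial path.
    base = all(prop(s) for p in paths(k, INIT) for s in p)
    # Step case with the simple-path condition: every loop-free path of
    # k+1 states whose first k states satisfy prop also satisfies prop
    # in its last state.
    step = all(prop(p[-1]) for p in paths(k + 1, STATES)
               if len(set(p)) == len(p) and all(prop(s) for s in p[:-1]))
    return base and step
```

On this system the proof fails at k=1 and succeeds at k=2; the paper's contribution is, among other things, localizing the simple-path condition to individual MSC fragments rather than applying it globally as done here.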