Author

Jean-François Raskin

Bio: Jean-François Raskin is an academic researcher from Université libre de Bruxelles. The author has contributed to research in topics including decidability and Markov decision processes. The author has an h-index of 47 and has co-authored 293 publications receiving 7,429 citations. Previous affiliations of Jean-François Raskin include the Free University of Brussels and the Université de Namur.


Papers
Proceedings ArticleDOI
01 May 2010
TL;DR: This paper first extends transition systems with features in order to describe the combined behaviour of an entire system family, and then defines and implements a model checking technique that makes it possible to verify such transition systems against temporal properties.
Abstract: In product line engineering, systems are developed in families and differences between family members are expressed in terms of features. Formal modelling and verification is an important issue in this context as more and more critical systems are developed this way. Since the number of systems in a family can be exponential in the number of features, two major challenges are the scalable modelling and the efficient verification of system behaviour. Currently, the few attempts to address them fail to recognise the importance of features as a unit of difference, or do not offer means for automated verification. In this paper, we tackle those challenges at a fundamental level. We first extend transition systems with features in order to describe the combined behaviour of an entire system family. We then define and implement a model checking technique that makes it possible to verify such transition systems against temporal properties. An empirical evaluation shows substantial gains over classical approaches.

359 citations
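
The featured transition system (FTS) described in the abstract above is, at its core, an ordinary transition system whose transitions carry feature expressions; projecting it onto one product's feature selection recovers that product's behaviour. Below is a minimal, illustrative Python sketch of that idea. It is not the authors' tooling: the vending-machine transitions, the FTS_TRANSITIONS and project names, and the use of Python predicates in place of boolean feature expressions are all assumptions made for the example.

```python
# Minimal, illustrative sketch of a featured transition system (FTS).
# Not the authors' implementation: names and representation are assumptions.

from itertools import chain, combinations

# A feature expression is modelled here as a predicate over the set of
# selected features; real tools use boolean formulas / BDDs instead.
FTS_TRANSITIONS = [
    # (source, action, target, feature_guard)
    ("idle",    "insert_coin", "paid",    lambda fs: True),         # present in every product
    ("paid",    "choose_tea",  "serving", lambda fs: "Tea" in fs),  # only if feature Tea selected
    ("paid",    "choose_soda", "serving", lambda fs: "Soda" in fs), # only if feature Soda selected
    ("serving", "take_drink",  "idle",    lambda fs: True),
]

def project(transitions, selected_features):
    """Project the FTS onto one product: keep transitions whose guard holds."""
    return [(s, a, t) for (s, a, t, guard) in transitions
            if guard(selected_features)]

def all_products(features):
    """Every subset of the optional features, i.e. every candidate product."""
    return chain.from_iterable(combinations(features, r)
                               for r in range(len(features) + 1))

if __name__ == "__main__":
    # One FTS describes the whole family; each product is a projection of it.
    for product in all_products(["Tea", "Soda"]):
        ts = project(FTS_TRANSITIONS, set(product))
        print(sorted(product), "->", ts)
```

Real FTS-based checkers avoid enumerating products like this; the point of the paper is precisely to verify the whole family at once rather than product by product.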

BookDOI
TL;DR: This paper presents an approach to performing SAT simplification (preprocessing of SAT formulas) in parallel on GPU architectures.
Abstract: Citation for published version (APA): Osama, M., & Wijs, A. (2019). Parallel SAT simplification on GPU architectures. In L. Zhang & T. Vojnar (Eds.), Tools and Algorithms for the Construction and Analysis of Systems: 25th International Conference, TACAS 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Proceedings (pp. 21-40). Lecture Notes in Computer Science, Vol. 11427. Springer. https://doi.org/10.1007/978-3-030-17462-0_2

320 citations

Journal ArticleDOI
TL;DR: This paper proposes an efficient automata-based approach to linear time logic (LTL) model checking of variability-intensive systems, and provides an in-depth treatment of the FTS model checking algorithm.
Abstract: The premise of variability-intensive systems, specifically in software product line engineering, is the ability to produce a large family of different systems efficiently. Many such systems are critical. Thorough quality assurance techniques are thus required. Unfortunately, most quality assurance techniques were not designed with variability in mind. They work for single systems, and are too costly to apply to the whole system family. In this paper, we propose an efficient automata-based approach to linear time logic (LTL) model checking of variability-intensive systems. We build on earlier work in which we proposed featured transition systems (FTSs), a compact mathematical model for representing the behaviors of a variability-intensive system. The FTS model checking algorithms verify all products of a family at once and pinpoint those that are faulty. This paper complements our earlier work, covering important theoretical aspects such as expressiveness and parallel composition, as well as more practical aspects such as vacuity detection and our logic, feature LTL. Furthermore, we provide an in-depth treatment of the FTS model checking algorithm. Finally, we present SNIP, a new model checker for variability-intensive systems. The benchmarks conducted with SNIP confirm the speedups reported previously.

239 citations
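
To illustrate what "verifying all products of a family at once and pinpointing those that are faulty" can look like, here is a toy sketch: the family model is explored a single time while each reached state carries the set of products under which it is reachable, so a bad state is reported together with the products that can reach it. This is not the SNIP implementation; real FTS algorithms propagate symbolic feature expressions rather than explicit product sets, and the reachable_by helper and the example transitions are invented for the sketch.

```python
# Illustrative sketch only: explore a family model once, carrying for each
# reached state the set of products that can reach it, so a violation is
# reported together with the faulty products. Real FTS checkers carry
# symbolic feature expressions instead of explicit product sets.

from collections import defaultdict

PRODUCTS = [frozenset(), frozenset({"Tea"}), frozenset({"Soda"}),
            frozenset({"Tea", "Soda"})]

TRANSITIONS = [
    # (source, target, feature_guard)
    ("idle", "paid",  lambda fs: True),
    ("paid", "error", lambda fs: "Soda" in fs and "Tea" not in fs),  # hypothetical bug
    ("paid", "idle",  lambda fs: True),
]

def reachable_by(initial, transitions, products):
    """Map each state to the set of products under which it is reachable."""
    reach = defaultdict(set)
    reach[initial] = set(products)
    worklist = [initial]
    while worklist:
        s = worklist.pop()
        for (src, dst, guard) in transitions:
            if src != s:
                continue
            # Only products that reach s AND enable this transition carry over.
            carried = {p for p in reach[s] if guard(p)}
            if not carried <= reach[dst]:
                reach[dst] |= carried
                worklist.append(dst)
    return reach

if __name__ == "__main__":
    reach = reachable_by("idle", TRANSITIONS, PRODUCTS)
    faulty = reach.get("error", set())
    print("products that can reach 'error':", [sorted(p) for p in faulty])
```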

Journal ArticleDOI
TL;DR: An algorithm is given for computing the set of states from which a player can win with probability 1 with a randomized observation-based strategy for a Büchi objective, and the algorithms are shown to be optimal by proving matching lower bounds.
Abstract: We study observation-based strategies for two-player turn-based games on graphs with omega-regular objectives. An observation-based strategy relies on imperfect information about the history of a play, namely, on the past sequence of observations. Such games occur in the synthesis of a controller that does not see the private state of the plant. Our main results are twofold. First, we give a fixed-point algorithm for computing the set of states from which a player can win with a deterministic observation-based strategy for any omega-regular objective. The fixed point is computed in the lattice of antichains of state sets. This algorithm has the advantages of being directed by the objective and of avoiding an explicit subset construction on the game graph. Second, we give an algorithm for computing the set of states from which a player can win with probability 1 with a randomized observation-based strategy for a Büchi objective. This set is of interest because in the absence of perfect information, randomized strategies are more powerful than deterministic ones. We show that our algorithms are optimal by proving matching lower bounds.

233 citations
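
The fixed point in the paper above is computed in the lattice of antichains of state sets: downward-closed families of sets are represented by their subset-maximal elements only, which is what avoids the explicit subset construction. The sketch below shows only that bookkeeping (insertion and union on antichains), not the controllable-predecessor operator or the game-solving fixed point itself; the antichain_insert and antichain_union names are illustrative, not from the paper.

```python
# Minimal sketch of antichain bookkeeping: a downward-closed family of state
# sets is represented by its subset-maximal elements only. Illustrative, not
# the authors' code.

def antichain_insert(antichain, new_set):
    """Insert new_set, keeping only subset-maximal sets."""
    new_set = frozenset(new_set)
    if any(new_set <= s for s in antichain):
        return antichain                      # already subsumed, nothing to do
    kept = {s for s in antichain if not s <= new_set}
    kept.add(new_set)                         # drop dominated sets, add the new one
    return kept

def antichain_union(a, b):
    """Union of two downward-closed families, represented as antichains."""
    result = set(a)
    for s in b:
        result = antichain_insert(result, s)
    return result

if __name__ == "__main__":
    chain = set()
    for candidate in [{1, 2}, {2}, {1, 2, 3}, {4}]:
        chain = antichain_insert(chain, candidate)
    print(sorted(sorted(s) for s in chain))   # [[1, 2, 3], [4]]
```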

Proceedings ArticleDOI
01 Jan 2010
TL;DR: For solving generalized mean-payoff and energy games with finite-memory strategies, the previously best known upper bound was EXPSPACE and no lower bound was known; an optimal coNP-complete bound is given, and for memoryless strategies the problem of deciding the existence of a winning strategy for the protagonist is shown to be NP-complete.
Abstract: In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Generalized mean-payoff and energy games replace individual weights by tuples, and the limit average (resp. running sum) of each coordinate must be (resp. remain) nonnegative. These games have applications in the synthesis of resource-bounded processes with multiple resources. We prove the finite-memory determinacy of generalized energy games and show the inter-reducibility of generalized mean-payoff and energy games for finite-memory strategies. We also improve the computational complexity for solving both classes of games with finite-memory strategies: while the previously best known upper bound was EXPSPACE, and no lower bound was known, we give an optimal coNP-complete bound. For memoryless strategies, we show that the problem of deciding the existence of a winning strategy for the protagonist is NP-complete.

163 citations
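
The two objectives above are easy to state concretely for a finite play prefix: the generalized energy condition asks that the running sum of the weight tuples, plus an initial credit vector, stays componentwise nonnegative, while the generalized mean-payoff condition constrains the coordinate-wise limit average. The Python sketch below just checks these conditions on a finite prefix; it illustrates the definitions only, not the coNP/NP decision procedures from the paper, and the function names are made up for the example.

```python
# Minimal sketch of the two objectives described above, evaluated on a finite
# play prefix given as a list of weight tuples. Illustrative only.

def satisfies_generalized_energy(weights, initial_credit):
    """Running sum (plus initial credit) must stay >= 0 in every coordinate."""
    credit = list(initial_credit)
    for step in weights:
        credit = [c + w for c, w in zip(credit, step)]
        if any(c < 0 for c in credit):
            return False
    return True

def mean_payoff(weights):
    """Coordinate-wise average of the weights seen so far (finite prefix)."""
    n = len(weights)
    return tuple(sum(step[i] for step in weights) / n
                 for i in range(len(weights[0])))

if __name__ == "__main__":
    play = [(1, -2), (-1, 1), (2, 1), (-1, 1)]
    print(satisfies_generalized_energy(play, (0, 2)))   # True: both sums stay >= 0
    print(satisfies_generalized_energy(play, (0, 0)))   # False: coordinate 2 dips below 0
    print(mean_payoff(play))                            # (0.25, 0.25)
```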


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones which are characterised by planar laminated and unlaminated mud-dominated facies; and 3) Calcidebrites which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

01 Jan 2009
TL;DR: A proceedings volume on tools and algorithms for system verification, covering the verification of parameterized and infinite-state systems, diagnostic and test generation, efficient model checking, and model-checking tools.
Abstract: Table of contents (excerpt): Abstracting WS1S Systems to Verify Parameterized Networks (Kai Baukus, Saddek Bensalem, Yassine Lakhnech and Karsten Stahl); FMona: A Tool for Expressing Validation Techniques over Infinite State Systems (J.-P. Bodeveix and M. Filali); Transitive Closures of Regular Relations for Verifying Infinite-State Systems (Bengt Jonsson and Marcus Nilsson); Diagnostic and Test Generation: Using Static Analysis to Improve Automatic Test Generation (Marius Bozga, Jean-Claude Fernandez and Lucian Ghirvu); Efficient Diagnostic Generation for Boolean Equation Systems (Radu Mateescu); Efficient Model-Checking: Compositional State Space Generation with Partial Order Reductions for Asynchronous Communicating Systems (Jean-Pierre Krimm and Laurent Mounier); Checking for CFFD-Preorder with Tester Processes (Juhana Helovuo and Antti Valmari); Fair Bisimulation (Thomas A. Henzinger and Sriram K. Rajamani); Integrating Low Level Symmetries into Reachability Analysis (Karsten Schmidt); Model-Checking Tools: Model Checking Support for the ASM High-Level Language (Giuseppe Del Castillo and Kirsten Winter).

1,687 citations

Journal ArticleDOI
TL;DR: PDDL2.1, as discussed by the authors, is a modelling language capable of expressing temporal and numeric properties of planning domains; it was developed for the third International Planning Competition (2002), part of the competition series that has driven progress in planning since 1998.
Abstract: In recent years research in the planning community has moved increasingly towards application of planners to realistic problems involving both time and many types of resources. For example, interest in planning demonstrated by the space research community has inspired work in observation scheduling, planetary rover exploration and spacecraft control domains. Other temporal and resource-intensive domains including logistics planning, plant control and manufacturing have also helped to focus the community on the modelling and reasoning issues that must be confronted to make planning technology meet the challenges of application. The International Planning Competitions have acted as an important motivating force behind the progress that has been made in planning since 1998. The third competition (held in 2002) set the planning community the challenge of handling time and numeric resources. This necessitated the development of a modelling language capable of expressing temporal and numeric properties of planning domains. In this paper we describe the language, PDDL2.1, that was used in the competition. We describe the syntax of the language, its formal semantics and the validation of concurrent plans. We observe that PDDL2.1 has considerable modelling power -- exceeding the capabilities of current planning technology -- and presents a number of important challenges to the research community.

1,420 citations

Proceedings ArticleDOI
01 Jan 2002
TL;DR: This work presents an algorithm for model checking safety properties using lazy abstraction, describes an implementation of the algorithm applied to C programs, and provides sufficient conditions for the termination of the method.
Abstract: One approach to model checking software is based on the abstract-check-refine paradigm: build an abstract model, then check the desired property, and if the check fails, refine the model and start over. We introduce the concept of lazy abstraction to integrate and optimize the three phases of the abstract-check-refine loop. Lazy abstraction continuously builds and refines a single abstract model on demand, driven by the model checker, so that different parts of the model may exhibit different degrees of precision, namely just enough to verify the desired property. We present an algorithm for model checking safety properties using lazy abstraction and describe an implementation of the algorithm applied to C programs. We also provide sufficient conditions for the termination of the method.

1,238 citations
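
For readers unfamiliar with the abstract-check-refine paradigm that lazy abstraction optimizes, the toy below sketches the loop in its eager, whole-model form: abstract the system with a set of predicates, check reachability of an error state in the abstraction, and if the alarm is spurious, refine by adding a predicate and repeat. It is deliberately simplified and is not the paper's lazy algorithm (which builds and refines a single abstract model on demand, only where needed); the concrete program, the candidate predicates, and all function names are invented for the example, and refinement here simply takes the next predicate from a fixed list rather than deriving one from the spurious counterexample.

```python
# Toy sketch of the abstract-check-refine loop (eager, whole-model form).
# Concrete states are small integers so everything is enumerable.

CONCRETE_STATES = range(0, 21)
INITIAL = 0

def step(x):                       # concrete transition: add 2, stay in range
    return [x + 2] if x + 2 <= 20 else []

def is_error(x):                   # "error" states: x == 7 (never reachable here)
    return x == 7

# Candidate predicates the refinement step may add, in order.
CANDIDATE_PREDICATES = [lambda x: x < 10, lambda x: x % 2 == 0]

def alpha(x, preds):
    """Abstract a concrete state to the tuple of its predicate truth values."""
    return tuple(p(x) for p in preds)

def abstract_reach(preds):
    """Abstract states reachable from the abstraction of INITIAL."""
    trans = set()
    for x in CONCRETE_STATES:
        for y in step(x):
            trans.add((alpha(x, preds), alpha(y, preds)))   # existential abstraction
    reach, frontier = {alpha(INITIAL, preds)}, [alpha(INITIAL, preds)]
    while frontier:
        a = frontier.pop()
        for (s, t) in trans:
            if s == a and t not in reach:
                reach.add(t)
                frontier.append(t)
    return reach

def concrete_error_reachable():
    """Ground truth, used only to classify an abstract alarm as spurious."""
    reach, frontier = {INITIAL}, [INITIAL]
    while frontier:
        x = frontier.pop()
        for y in step(x):
            if y not in reach:
                reach.add(y)
                frontier.append(y)
    return any(is_error(x) for x in reach)

def abstract_check_refine():
    preds = []
    while True:
        reach = abstract_reach(preds)                       # abstract
        alarm = any(alpha(x, preds) in reach and is_error(x)
                    for x in CONCRETE_STATES)               # check
        if not alarm:
            return "safe with %d predicate(s)" % len(preds)
        if concrete_error_reachable():
            return "real counterexample"
        if len(preds) == len(CANDIDATE_PREDICATES):
            return "out of candidate predicates"
        preds = CANDIDATE_PREDICATES[:len(preds) + 1]       # refine: add one predicate

if __name__ == "__main__":
    print(abstract_check_refine())    # prints: safe with 2 predicate(s)
```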