
Showing papers in "Formal Aspects of Computing in 2016"


Journal ArticleDOI
TL;DR: A black-box active learning algorithm for inferring extended finite state machines (EFSMs) by dynamic black-box analysis, built on a novel learning model of so-called tree queries that induces a generalization of the classical Nerode equivalence and canonical automata construction to the symbolic setting.
Abstract: We present a black-box active learning algorithm for inferring extended finite state machines (EFSMs) by dynamic black-box analysis. EFSMs can be used to model both data flow and control behavior of software and hardware components. Different dialects of EFSMs are widely used in tools for model-based software development, verification, and testing. Our algorithm infers a class of EFSMs called register automata. Register automata have a finite control structure, extended with variables (registers), assignments, and guards. Our algorithm is parameterized on a particular theory, i.e., a set of operations and tests on the data domain that can be used in guards. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We also show that, under these conditions, our framework induces a generalization of the classical Nerode equivalence and canonical automata construction to the symbolic setting. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.
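The register automata the algorithm infers combine finite control with registers, guards, and assignments. As a hypothetical illustration (not the paper's learning algorithm or its case studies), here is a Python sketch of such an automaton with a single register and equality guards:

```python
# Illustrative sketch of a register automaton: accepts Register(v) followed by
# Login(v') only when v' equals the stored register value. The guard is an
# equality test over the data domain; names and structure are hypothetical.

class RegisterAutomaton:
    def __init__(self):
        self.state = "q0"
        self.register = None               # single register x

    def step(self, action, param):
        if self.state == "q0" and action == "Register":
            self.register = param          # assignment: x := p
            self.state = "q1"
        elif self.state == "q1" and action == "Login":
            if param == self.register:     # guard: p = x
                self.state = "q2"          # accepting location
            else:
                self.state = "sink"
        else:
            self.state = "sink"

    def accepts(self, word):
        self.state, self.register = "q0", None
        for action, param in word:
            self.step(action, param)
        return self.state == "q2"

ra = RegisterAutomaton()
print(ra.accepts([("Register", 42), ("Login", 42)]))  # True
print(ra.accepts([("Register", 42), ("Login", 7)]))   # False
```

Because the guard compares against a stored value rather than a constant, no finite automaton over the (infinite) data domain captures this language, which is what makes the symbolic generalization of Nerode equivalence necessary.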

104 citations


Journal ArticleDOI
TL;DR: A formal and general framework for architecture composability based on an associative, commutative and idempotent architecture composition operator; the framework also establishes preservation of liveness properties by architecture composition.
Abstract: Architectures depict design principles: paradigms that can be understood by all, allow thinking on a higher plane and avoiding low-level mistakes. They provide means for ensuring correctness by construction by enforcing global properties characterizing the coordination between components. An architecture can be considered as an operator A that, applied to a set of components B, builds a composite component A(B) meeting a characteristic property Φ. Architecture composability is a basic and common problem faced by system designers. In this paper, we propose a formal and general framework for architecture composability based on an associative, commutative and idempotent architecture composition operator ⊕. The main result is that if two architectures A1 and A2 enforce respectively safety properties Φ1 and Φ2, the architecture A1 ⊕ A2 enforces the property Φ1 ∧ Φ2, that is, both properties are preserved by architecture composition. We also establish preservation of liveness properties by architecture composition. The presented results are illustrated by a running example and a case study.
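Treating an architecture as the set of global behaviors it permits gives a toy model of the composition result. The following Python sketch is an illustrative simplification, not the paper's formal operator ⊕: composition by intersection is associative, commutative and idempotent, and any safety property enforced by either architecture holds of the composition.

```python
from itertools import product

# Toy model: an architecture is the set of global behaviors it allows;
# composition restricts to behaviors allowed by both.

BEHAVIORS = set(product("ab", repeat=3))          # all length-3 traces over {a, b}

A1 = {t for t in BEHAVIORS if t[0] == "a"}        # enforces Phi1: starts with a
A2 = {t for t in BEHAVIORS if t.count("b") <= 1}  # enforces Phi2: at most one b

def compose(x, y):
    return x & y                                  # the toy composition operator

both = compose(A1, A2)
print(all(t[0] == "a" and t.count("b") <= 1 for t in both))  # True: Phi1 and Phi2
print(compose(A1, A1) == A1)                                 # True: idempotent
```

The sketch only mirrors the safety half of the result; preservation of liveness, which the paper also establishes, needs the full framework rather than plain intersection.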

44 citations


Journal ArticleDOI
TL;DR: A principled modular approach to the development of construction and verification tools for imperative programs, in which the control flow and the data flow are cleanly separated, is presented.
Abstract: We present a principled modular approach to the development of construction and verification tools for imperative programs, in which the control flow and the data flow are cleanly separated. Our simplest verification tool uses Kleene algebra with tests for the control flow of while-programs and their standard relational semantics for the data flow. It is expanded to a basic program construction tool by adding an operation for the specification statement and one single axiom. To include recursive procedures, Kleene algebras with tests are expanded further to quantales with tests. In this more expressive setting, iteration and the specification statement can be defined explicitly and stronger program transformation rules can be derived. Programming our approach in the Isabelle/HOL interactive theorem prover yields simple lightweight mathematical components as well as program construction and verification tools that are correct by construction themselves. Verification condition generation and program construction rules are based on equational reasoning and supported by powerful Isabelle tactics and automated theorem proving. A number of examples show our tools at work.

37 citations


Journal ArticleDOI
TL;DR: It is shown that for any set of behavioral relations, behavioral profiles are strictly less expressive than regular languages, entailing that behavioral profiles cannot be used to decide trace equivalence of finite automata and thus Petri nets.
Abstract: Behavioral profiles have been proposed as a behavioral abstraction of dynamic systems, specifically in the context of business process modeling. A behavioral profile can be seen as a complete graph over a set of task labels, where each edge is annotated with one relation from a given set of binary behavioral relations. Since their introduction, behavioral profiles were argued to provide a convenient way for comparing pairs of process models with respect to their behavior or computing behavioral similarity between process models. Still, as of today, there is little understanding of the expressive power of behavioral profiles. Via counter-examples, several authors have shown that behavioral profiles over various sets of behavioral relations cannot distinguish certain systems up to trace equivalence, even for restricted classes of systems represented as safe workflow nets. This paper studies the expressive power of behavioral profiles from two angles. Firstly, the paper investigates the expressive power of behavioral profiles and systems captured as acyclic workflow nets. It is shown that for unlabeled acyclic workflow net systems, behavioral profiles over a simple set of behavioral relations are expressive up to configuration equivalence. When systems are labeled, this result does not hold for any of several previously proposed sets of behavioral relations. Secondly, the paper compares the expressive power of behavioral profiles and regular languages. It is shown that for any set of behavioral relations, behavioral profiles are strictly less expressive than regular languages, entailing that behavioral profiles cannot be used to decide trace equivalence of finite automata and thus Petri nets.
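The notion of a behavioral profile can be made concrete with a small sketch. The Python code below is an illustration using one common set of weak-order relations from the literature, not the paper's formal development: from a finite set of traces it derives, for each pair of tasks, strict order, reverse strict order, interleaving, or exclusiveness.

```python
from itertools import product

def weak_order(traces):
    """Pairs (a, b) such that a occurs before b in some trace."""
    wo = set()
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                wo.add((a, b))
    return wo

def behavioral_profile(traces):
    tasks = {t for trace in traces for t in trace}
    wo = weak_order(traces)
    profile = {}
    for a, b in product(tasks, repeat=2):
        ab, ba = (a, b) in wo, (b, a) in wo
        if ab and not ba:
            profile[a, b] = "->"        # strict order
        elif ba and not ab:
            profile[a, b] = "<-"        # reverse strict order
        elif ab and ba:
            profile[a, b] = "||"        # interleaving
        else:
            profile[a, b] = "+"         # exclusiveness
    return profile

p = behavioral_profile([("a", "b", "c"), ("a", "c", "b")])
print(p["a", "b"])   # '->'  a always precedes b
print(p["b", "c"])   # '||'  b and c occur in both orders
```

The paper's inexpressiveness result can be seen in miniature here: distinct trace sets can induce the same profile, so the profile cannot in general decide trace equivalence.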

32 citations



Journal ArticleDOI
TL;DR: Decidability and undecidability results are obtained via a translation to data-centric dynamic systems, a recently devised framework for the formal specification and verification of data-aware business processes working over full-fledged relational databases with constraints.
Abstract: Petri nets with name creation and management (ν-PNs) have been recently introduced as an expressive model for dynamic (distributed) systems, whose dynamics are determined not only by how tokens flow in the system, but also by the pure names they carry. On the one hand, this extension makes the resulting nets strictly more expressive than P/T nets: they can be exploited to capture a plethora of interesting systems, such as distributed systems enriched with channels and name passing, service interaction with correlation mechanisms, and resource-constrained workflow nets that explicitly account for process instances. On the other hand, fundamental properties like coverability, termination and boundedness are decidable for ν-PNs. In this work, we go one step beyond the verification of such general properties, and provide decidability and undecidability results of model checking ν-PNs against variants of first-order μ-calculus, recently proposed in the area of data-aware process analysis. While this model checking problem is undecidable in the general case, decidability can be obtained by considering different forms of boundedness, which still give rise to an infinite-state transition system. We then ground our framework to tackle the problem of soundness checking over workflow nets enriched with explicit process instances and resources. Notably, our decidability results are obtained via a translation to data-centric dynamic systems, a recently devised framework for the formal specification and verification of data-aware business processes working over full-fledged relational databases with constraints. In this light, our results contribute to the cross-fertilization between the area of formal methods for concurrent systems and that of foundations of data-aware processes, which has not been extensively investigated so far.

26 citations


Journal ArticleDOI
TL;DR: This paper allows interference to be represented by a process rather than a relation and hence derives more general rely-guarantee laws, introducing a rely quotient operator which generalises a rely relation to a process.
Abstract: The rely-guarantee technique allows one to reason compositionally about concurrent programs. To handle interference the technique makes use of rely and guarantee conditions, both of which are binary relations on states. A rely condition is an assumption that the environment performs only atomic steps satisfying the rely relation and a guarantee is a commitment that every atomic step the program makes satisfies the guarantee relation. In order to investigate rely-guarantee reasoning more generally, in this paper we allow interference to be represented by a process rather than a relation and hence derive more general rely-guarantee laws. The paper makes use of a weak conjunction operator between processes, which generalises a guarantee relation to a guarantee process, and introduces a rely quotient operator, which generalises a rely relation to a process. The paper focuses on the algebraic properties of the general rely-guarantee theory. The Jones-style rely-guarantee theory can be interpreted as a model of the general algebraic theory and hence the general laws presented here hold for that theory.

25 citations


Journal ArticleDOI
TL;DR: This paper introduces a formally precise approach to separate architectural style design decisions from application-specific decisions, and then uses these separate decisions as inputs to an automated synthesizer that supports a model-driven development (MDD) approach to architecture synthesis with style as a separate design variable.
Abstract: Reliably producing software architectures in selected architectural styles requires significant expertise yet remains difficult and error-prone. Our research goals are to better understand the nature of style-specific architectures, and relieve architects of the need to produce such architectures by hand. To achieve our goals, this paper introduces a formally precise approach to separate architectural style design decisions from application-specific decisions, and then uses these separate decisions as inputs to an automated synthesizer. This in effect supports a model-driven development (MDD) approach to architecture synthesis with style as a separate design variable. We claim that it is possible to formalize this separation of concerns, long implicit in software engineering research; to automatically synthesize style-specific architectures; and thereby to improve software design productivity and quality. To test these claims, we employed a combination of experimental systems and case study methods: we developed an MDD tool and used it to carry out case studies using Kitchenham's methods. Our contributions include: a theoretical framework formalizing our separation of concerns and synthesis approach; an MDD framework, Monarch; and results of case studies that we interpret as supporting our claims. This work advances our understanding of software architectural style as a formal refinement; makes application descriptions an explicit subject of study; and suggests that synthesis of architectures can improve software productivity and quality.

24 citations


Journal ArticleDOI
TL;DR: This work proposes new sanity checking techniques that automatically detect flaws and suggest improvements of given requirements and describes a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate.
Abstract: In the last decade it became a common practice to formalise software requirements to improve the clarity of users' expectations. In this work we build on the fact that functional requirements can be expressed in temporal logic and we propose new sanity checking techniques that automatically detect flaws and suggest improvements of given requirements. Specifically, we describe and experimentally evaluate approaches to consistency and redundancy checking that identify all inconsistencies and pinpoint their exact source (the smallest inconsistent set). We further report on the experience obtained from employing the consistency and redundancy checking in an industrial environment. To complete the sanity checking we also describe a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate. The usefulness of our completeness evaluation is demonstrated in a case study of an aeroplane control system.

24 citations


Journal ArticleDOI
TL;DR: This work proposes a model of multiparty, self-adaptive communications with access control and secure information flow guarantees, and is equipped with local and global adaptation mechanisms for reacting to security violations of different gravity.
Abstract: We present a comprehensive model of structured communications in which self-adaptation and security concerns are jointly addressed. More specifically, we propose a model of multiparty, self-adaptive communications with access control and secure information flow guarantees. In our model, multiparty protocols (choreographies) are described as global types; security violations occur when process implementations of protocol participants attempt to read or write messages of inappropriate security levels within directed exchanges. Such violations trigger adaptation mechanisms that prevent the violations from occurring and/or from propagating their effect in the choreography. Our model is equipped with local and global adaptation mechanisms for reacting to security violations of different gravity; type soundness results ensure that the overall multiparty protocol is still correctly executed while the system adapts itself to preserve the participants' security.

24 citations


Journal ArticleDOI
TL;DR: This paper relies on the foundational notions of ASM ground model and model refinement to obtain a precise model for a client-server application for Cloud systems, to tackle the problem of making Cloud services usable to different end-devices by adapting on-the-fly the content coming from the Cloud to the different devices contexts.
Abstract: The demand for formal methods for the specification and analysis of distributed systems is nowadays increasing, especially when considering the development of Cloud systems and Web applications. This is due to the fact that modeling languages currently used in these areas have informal definitions and ambiguous semantics, and therefore their use may be unreliable. Thanks to their mathematical foundation, formal methods can guarantee rigorous system design, leading to precise models where requirements can be validated and properties can be assured, already at the early stages of the system development. In this paper, we present a rigorous engineering process for distributed systems, based on the Abstract State Machines (ASM) formal method. We rely on the foundational notions of ASM ground model and model refinement to obtain a precise model for a client-server application for Cloud systems. This application has been proposed to tackle the problem of making Cloud services usable to different end-devices by adapting on-the-fly the content coming from the Cloud to the contexts of the different devices. The ASM-based modeling process is supported by a number of validation and verification activities that have been exploited on the component under development to guarantee consistency, correctness, and reliability properties.

Journal ArticleDOI
TL;DR: This paper completes existing work in the area by introducing more asynchronous communication models and showing their differences, and proposes and illustrates an implemented tool chain based on the TLA+ formalism and model checking.
Abstract: Asynchronous communication is often viewed as a single entity, the counterpart of synchronous communication. Although the basic concept of asynchronous communication is the decoupling of send and receive events, there is actually room for a variety of additional specification of the communication, for instance in terms of ordering. Yet, these different asynchronous communications are used interchangeably and seldom distinguished. This paper is a contribution to the study of these models, their differences, and how they are related. In this paper, the variety of point-to-point asynchronous communication paradigms is considered with two approaches. In the first and theoretical one, communication models are specified as properties on the ordering of events in distributed executions. In the second and more practical approach that involves composition of peers, they are modeled with transition systems and message histories as part of a framework. The described framework enables the modeling of peer composition and compatibility properties. Besides, an implemented tool chain based on the TLA+ formalism and model checking is also proposed and illustrated. The conformance of the two approaches is highlighted. A hierarchy is established between the studied communication models. From the execution viewpoint, it completes existing work in the area by introducing more asynchronous communication models and showing their differences. The framework is shown to offer abstract implementations of the communication models. Both the correctness and the completeness of the descriptions in the framework are studied. This reveals necessary restrictions on the behavior of the peers so that the communication models are actually implementable.

Journal ArticleDOI
TL;DR: A language-independent proof system for full equivalence is introduced, which is parametric in the operational semantics of two languages and in a state-similarity relation and illustrated on two programs in two different languages that both compute the Collatz sequence.
Abstract: Two programs are fully equivalent if, for the same input, either they both diverge or they both terminate with the same result. Full equivalence is an adequate notion of equivalence for programs written in deterministic languages. It is useful in many contexts, such as capturing the correctness of program transformations within the same language, or capturing the correctness of compilers between two different languages. In this paper we introduce a language-independent proof system for full equivalence, which is parametric in the operational semantics of two languages and in a state-similarity relation. The proof system is sound: a proof tree establishes the full equivalence of the programs given to it as input. We illustrate it on two programs in two different languages (an imperative one and a functional one), that both compute the Collatz sequence. The Collatz sequence is an interesting case study since it is not known whether the sequence terminates or not; nevertheless, our proof system shows that the two programs are fully equivalent (even if we cannot establish termination or divergence of either one).
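The Collatz case study can be mirrored in a single language. The following Python sketch is a hypothetical analogue, with the sequence length as the assumed observable result, not the paper's actual programs: it pairs an imperative and a functional implementation and compares them pointwise.

```python
import sys

def collatz_imperative(n):
    steps = 0
    while n != 1:                       # termination for all n is an open problem
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def collatz_functional(n):
    return 0 if n == 1 else 1 + collatz_functional(
        n // 2 if n % 2 == 0 else 3 * n + 1)

# Full equivalence: for every input, both diverge or both yield the same result.
# Testing can only check this pointwise; a proof system covers all inputs at once.
sys.setrecursionlimit(10000)
print(all(collatz_imperative(n) == collatz_functional(n) for n in range(1, 200)))
```

This gap between pointwise testing and a universally quantified claim is exactly why the paper's proof system is interesting: it establishes full equivalence without deciding termination of either program.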

Journal ArticleDOI
TL;DR: Over the European part of Russia, the hydrothermal coefficient (HTC) correlates most closely with the standardized precipitation index (SPI) and the standardized precipitation-evapotranspiration index (SPEI), yet substantially overestimates drought frequency, motivating a new method for grading drought intensity by HTC.

Abstract: The article analyses the best-known Russian and foreign quantitative drought indices. It is established that over the European part of Russia south of 55° N, the time series of the hydrothermal coefficient (HTC) correlate most closely with the standardized precipitation index (SPI) and the standardized precipitation-evapotranspiration index (SPEI). However, high correlation coefficients between the time series of indices reflecting both aridity and excessive moisture do not guarantee an identical interpretation of drought characteristics. It is found that, compared with the other indices, the HTC substantially overestimates the frequency of droughts during the active growing season over almost the entire study area. A new method for determining drought intensity grades from the HTC is proposed, which improves the comparability of drought characteristics when the HTC and SPI indices are used. Keywords: quantitative drought indices, hydrothermal coefficient, standardized precipitation index, European part of Russia.

Journal ArticleDOI
TL;DR: A new way of reconciling Event-B refinement with linear temporal logic (LTL) properties is presented, covering liveness in the context of anticipated events, and relaxing constraints between adjacent refinement levels.
Abstract: In this paper we present a new way of reconciling Event-B refinement with linear temporal logic (LTL) properties. In particular, the results presented in this paper allow properties to be established for abstract system models, and identify conditions to ensure that the properties (suitably translated) continue to hold as those models are developed through refinement. There are several novel elements to this achievement: (1) we identify conditions that allow LTL properties to be mapped across refinement chains; (2) we provide translations of LTL predicates to reflect the introduction through refinement of new events and the renaming and splitting of existing events; (3) we do this for an extended version of LTL particularly suited to Event-B, including state predicates and enabledness of events, which can be model-checked at the abstract level. Our results are more general than any previous work in this area, covering liveness in the context of anticipated events, and relaxing constraints between adjacent refinement levels. The approach is illustrated with a case study. This enables designers to develop event based models and to consider their execution patterns so that liveness and fairness properties can be verified for Event-B systems.

Journal ArticleDOI
TL;DR: A wireless fire alarm system is considered, regulated by the EN 54 standard, and formal requirements engineering, modeling and verification are performed and severe design flaws are uncovered that would have prevented its certification.
Abstract: The design of distributed, safety-critical real-time systems is challenging due to their high complexity, the potentially large number of components, and complicated requirements and environment assumptions that stem from international standards. We present a case study that shows that despite those challenges, the automated formal verification of such systems is not only possible, but practicable even in the context of small to medium-sized enterprises. We considered a wireless fire alarm system, regulated by the EN 54 standard. We performed formal requirements engineering, modeling and verification and uncovered severe design flaws that would have prevented its certification. For an improved design, we provided dependable verification results which in particular ensure that certification tests for a relevant regulation standard will be passed. In general we observe that if system tests are specified by generalized test procedures, then verifying that a system will pass any test following those test procedures is a cost-efficient approach to improve the product quality based on formal methods. Based on our experience, we propose an approach useful to integrate the application of formal methods to product development in SME.

Journal ArticleDOI
TL;DR: This contribution presents a formalised algorithm in the Isabelle/HOL proof assistant to compute echelon forms, and, as a consequence, characteristic polynomials of matrices, and proves its correctness over Bézout domains.
Abstract: In this contribution we present a formalised algorithm in the Isabelle/HOL proof assistant to compute echelon forms, and, as a consequence, characteristic polynomials of matrices. We have proved its correctness over Bézout domains, but its executability is only guaranteed over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This is possible since the algorithm has been parameterised by a (possibly non-computable) operation that returns the Bézout coefficients of a pair of elements of a ring. The echelon form is also used to compute determinants and inverses of matrices. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains, etc.). In order to improve performance, the algorithm has been refined to immutable arrays inside of Isabelle and code can be generated to functional languages as well.
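The role of the Bézout coefficients in the elimination step can be sketched over the integers, where the extended Euclidean algorithm computes them. The Python code below is an illustrative analogue of one column-elimination step, not the formalised Isabelle/HOL algorithm: coefficients (p, q) with p·a + q·b = gcd(a, b) yield a row operation that turns the pivot into the gcd and zeroes the entry below it while keeping integer entries.

```python
def bezout(a, b):
    """Extended Euclid: returns (g, p, q) with p*a + q*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, p, q = bezout(b, a % b)
    return g, q, p - (a // b) * q

def eliminate(row_i, row_j, col):
    """Replace rows i, j so that position (j, col) becomes 0."""
    a, b = row_i[col], row_j[col]
    g, p, q = bezout(a, b)
    new_i = [p * x + q * y for x, y in zip(row_i, row_j)]
    new_j = [(-b // g) * x + (a // g) * y for x, y in zip(row_i, row_j)]
    return new_i, new_j

r1, r2 = eliminate([4, 2], [6, 8], 0)
print(r1, r2)   # pivot becomes gcd(4, 6) = 2, the entry below it becomes 0
```

The 2x2 row transformation has determinant (p·a + q·b)/g = 1, so it is invertible over the ring, which is what makes the step usable in an echelon-form algorithm over any Bézout domain, computable or not.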

Journal ArticleDOI
TL;DR: The specific complexity class of the weak probabilistic bisimulation problem is discussed, several practical algorithms and linear programming problem transformations that enable an efficient solution are considered, and empirical results demonstrate the effectiveness of the minimization approach on standard benchmarks.
Abstract: Weak probabilistic bisimulation on probabilistic automata can be decided by an algorithm that needs to check a polynomial number of linear programming problems encoding weak transitions. It is hence of polynomial complexity. This paper discusses the specific complexity class of the weak probabilistic bisimulation problem, and it considers several practical algorithms and linear programming problem transformations that enable an efficient solution. We then discuss two different implementations of a probabilistic automata weak probabilistic bisimulation minimizer, one of them employing SAT modulo linear arithmetic as the solver technology. Empirical results demonstrate the effectiveness of the minimization approach on standard benchmarks, also highlighting the benefits of compositional minimization.

Journal ArticleDOI
TL;DR: Atmospheric heat and moisture transport across 70° N, computed from ERA-Interim reanalysis data for 1979–2014, explains more than 40% of the winter surface air temperature variability in the high-latitude Arctic during the recent period of amplified warming.

Abstract: The aim of this work is to assess the role of atmospheric heat and moisture transport in shaping the variability of the winter temperature regime in the high latitudes of the Arctic, including the amplified warming of the last two decades. To this end, atmospheric heat and moisture transports across the 70° N parallel into the 70–90° N region were computed at various isobaric levels from ERA-Interim reanalysis data for 1979–2014. It is shown that the main winter influx of sensible and latent heat enters through the Atlantic sector of the 70th parallel, from 0 to 80° E, in the layer from the surface up to 750 hPa with a maximum at 1000 hPa. Fluctuations of the atmospheric influx through this "gateway" explain more than 40% of the changes in the winter surface air temperature averaged over the 70–90° N region during the period of strongest temperature growth, from 1997 to 2014. In the spatial distribution of its influence on winter surface temperature, a region extending from the Norwegian Sea to the East Siberian Sea stands out, with maxima over the Barents and Kara Seas reaching all the way to the North Pole. Keywords: Arctic, climate, warming, atmospheric heat transport.

Journal ArticleDOI
TL;DR: The performance of the analysis is addressed and a significantly more efficient alternative to the verification of the rule side conditions is presented, which is improved by carrying out partial verification on component metadata throughout component compositions and by using behavioural patterns.
Abstract: In previous work we presented a CSP-based systematic approach that fosters the rigorous design of component-based development. Our approach is strictly defined in terms of composition rules, which are the only permitted way to compose components. These rules guarantee the preservation of properties (particularly deadlock freedom) by construction in component composition. Nevertheless, their application is allowed only under certain conditions whose verification via model checking turned out impracticable even for some simple designs, and particularly those involving cyclic topologies. In this paper, we address the performance of the analysis and present a significantly more efficient alternative to the verification of the rule side conditions, which are improved by carrying out partial verification on component metadata throughout component compositions and by using behavioural patterns. The use of metadata, together with behavioural patterns, demands new composition rules, which allow previous exponential time verifications to be carried out now in linear time. Two case studies (the classical dining philosophers, also used as a running example, and an industrial version of a leadership election algorithm) are presented to illustrate and validate the overall approach.

Journal ArticleDOI
TL;DR: After adding checkpoints to the syntax of session behaviours, the operational semantics is formalised via an LTS, and natural notions of checkpoint compliance and sub-behaviour are defined, both of which are proved decidable.
Abstract: In the setting of session behaviours, we study an extension of the concept of compliance when a disciplined form of backtracking and of output skipping is present. After adding checkpoints to the syntax of session behaviours, we formalise the operational semantics via an LTS, and define natural notions of checkpoint compliance and sub-behaviour, which we prove to be both decidable. Then we extend the operational semantics with skips and we show the decidability of the obtained compliance.

Journal ArticleDOI
TL;DR: This work presents a formal model, named data-flow reactive system (DFRS), which can be automatically obtained from natural-language requirements that describe functional, reactive and temporal properties and shows that an e-DFRS can be encoded as a TIOTS: an alternative timed model based on the widely used IOLTS and ioco.
Abstract: At the very beginning of system development, typically only natural-language requirements are documented. As an informal source of information, however, natural-language specifications may be ambiguous and incomplete; this can be hard to detect by means of manual inspection. In this work, we present a formal model, named data-flow reactive system (DFRS), which can be automatically obtained from natural-language requirements that describe functional, reactive and temporal properties. A DFRS can also be used to assess whether the requirements are consistent and complete. We define two variations of DFRS: a symbolic and an expanded version. A symbolic DFRS (s-DFRS) is a concise representation that inherently avoids an explicit representation of (possibly infinite) sets of states and, thus, the state space-explosion problem. We use s-DFRS as part of a technique for test-case generation from natural-language requirements. In our approach, an expanded DFRS (e-DFRS) is built dynamically from a symbolic one, possibly limited to some bound; in this way, bounded analysis (e.g., reachability, determinism, completeness) can be performed. We adopt the s-DFRS as an intermediary representation from which models, for instance, SCR and CSP, are obtained for the purpose of test generation. An e-DFRS can also be viewed as the semantics of the s-DFRS from which it is generated. In order to connect such a semantic representation to established ones in the literature, we show that an e-DFRS can be encoded as a TIOTS: an alternative timed model based on the widely used IOLTS and ioco. To validate our overall approach, we consider two toy examples and two examples from the aerospace and automotive industry. Test cases are independently created and we verify that they are all compatible with the corresponding e-DFRS models generated from symbolic ones. This verification is performed mechanically with the aid of the NAT2TEST tool, which supports the manipulation of such models.

Journal ArticleDOI
TL;DR: This work provides a detailed description of a partial order reduction for explicit-state model checking in ProB and describes how the implementation is integrated into the LTL model checker of ProB for checking LTL−X formulae.
Abstract: Partial order reduction has been very successful at combatting the state-explosion problem for lower-level formalisms, but has thus far made hardly any impact on model checking higher-level formalisms such as B, Z or TLA+. This paper attempts to remedy this issue in the context of Event-B, with its much more fine-grained events and thus increased potential for event independence and partial order reduction. In this work, we provide a detailed description of a partial order reduction for explicit-state model checking in ProB. The technique is evaluated on a variety of models. The implementation of the method, which is based on new constraint-based analyses, is discussed. Further, we describe how the implementation is integrated into the LTL model checker of ProB for checking LTL−X formulae.
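The intuition behind partial order reduction can be sketched independently of ProB: when several enabled events are mutually independent and invisible, it can suffice to expand only a subset of them (an "ample set"), shrinking the explored state space while preserving the properties of interest. The toy model and the drastically simplified ample-set condition below are illustrative assumptions, not ProB's constraint-based analyses:

```python
# Toy model: two independent events, incx and incy, on state (x, y), bound N.
N = 3

def enabled(state):
    x, y = state
    return [ev for ev, ok in (("incx", x < N), ("incy", y < N)) if ok]

def step(state, ev):
    x, y = state
    return (x + 1, y) if ev == "incx" else (x, y + 1)

def explore(select):
    """DFS over reachable states; `select` picks which events to expand."""
    seen, stack = set(), [(0, 0)]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        for ev in select(s):
            stack.append(step(s, ev))
    return len(seen)

def ample(state):
    evs = enabled(state)
    # incx and incy are independent and invisible in this toy model, so it
    # is safe to expand just one of them (a drastically simplified ample-set
    # condition; real soundness conditions are much more involved).
    return evs[:1] if len(evs) == 2 else evs

full = explore(enabled)      # all interleavings: (N + 1) ** 2 states
reduced = explore(ample)     # one representative interleaving
```

With N = 3, full exploration visits 16 states while the reduced search visits only 7, illustrating why independence between fine-grained events pays off.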

Journal ArticleDOI
TL;DR: This paper proposes a formalisation of UML state machines using coloured Petri nets, and considers in particular concurrent aspects, the hierarchy induced by composite states and their associated activities, external, local or inter-level transitions, entry/exit/do behaviours, transition priorities, and shallow history pseudostates.
Abstract: With the increasing complexity of dynamic concurrent systems, a phase of formal specification and formal verification is needed. UML state machines are widely used to specify dynamic systems behaviours. However, the official semantics of UML is described in a semi-formal manner, which renders the formal verification of complex systems delicate. In this paper, we propose a formalisation of UML state machines using coloured Petri nets. We consider in particular concurrent aspects (orthogonal regions, forks, joins, variables), the hierarchy induced by composite states and their associated activities, external, local or inter-level transitions, entry/exit/do behaviours, transition priorities, and shallow history pseudostates. We use a CD player as a motivating example, and run various verifications using CPN Tools.
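For readers unfamiliar with the target formalism: a Petri net is a set of places holding tokens plus transitions that consume and produce them. The toy net below mimics two UML states of the CD-player example ("stopped", "playing") and the "play"/"stop" transitions; it is an illustration only, with plain rather than coloured tokens and none of the hierarchy, history or orthogonal regions handled by the paper's translation:

```python
# A tiny place/transition net mimicking two UML states of a CD player.
marking = {"stopped": 1, "playing": 0}

TRANSITIONS = {  # name -> (tokens consumed, tokens produced)
    "play": ({"stopped": 1}, {"playing": 1}),
    "stop": ({"playing": 1}, {"stopped": 1}),
}

def is_enabled(name):
    consumed, _ = TRANSITIONS[name]
    return all(marking[place] >= n for place, n in consumed.items())

def fire(name):
    if not is_enabled(name):
        raise RuntimeError(f"transition {name} is not enabled")
    consumed, produced = TRANSITIONS[name]
    for place, n in consumed.items():
        marking[place] -= n
    for place, n in produced.items():
        marking[place] += n

fire("play")  # token moves: like taking the UML transition stopped -> playing
```

The state of the UML machine is recovered from the marking, which is exactly what makes the translated net amenable to verification with CPN Tools.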

Journal ArticleDOI
TL;DR: A new approach is proposed that leverages the saturation algorithm both as an iteration strategy constructing the product directly, as well as in a new fixed-point computation algorithm to find strongly connected components on-the-fly by incrementally processing the components of the model.
Abstract: Efficient symbolic and explicit-state model checking approaches have been developed for the verification of linear-time temporal logic (LTL) properties, and several attempts have been made to combine the advantages of the various algorithms. Model checking LTL properties usually poses two challenges: one must compute the synchronous product of the state space and the automaton model of the desired property, and then look for counterexamples, a task that reduces to finding strongly connected components (SCCs) in the state space of the product. For concurrent systems, where state-space explosion often prevents successful verification, the so-called saturation algorithm has proved its efficiency in state-space exploration. This paper proposes a new approach that leverages the saturation algorithm both as an iteration strategy constructing the product directly, and in a new fixed-point computation algorithm that finds strongly connected components on-the-fly by incrementally processing the components of the model. Complementing the search for SCCs, explicit techniques and component-wise abstractions are used to prove the absence of counterexamples. The resulting on-the-fly, incremental LTL model checking algorithm proved to scale well with the size of models, as the evaluation on models of the Model Checking Contest suggests.
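The reduction of counterexample search to SCC detection can be made concrete with the textbook explicit-state view (not the paper's symbolic saturation algorithm): a lasso-shaped counterexample exists iff some nontrivial SCC of the product contains an accepting automaton state. The product graph and acceptance set below are made up for illustration:

```python
# Explicit-state view of LTL counterexample search on a toy product graph.

def sccs(graph):
    """Tarjan's algorithm: return the list of SCCs of `graph`."""
    index, low, on_stack, stack = {}, {}, set(), []
    result, counter = [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            result.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return result

# Product states are (model state, automaton state) pairs.
product = {
    ("s0", "q0"): [("s1", "q0")],
    ("s1", "q0"): [("s2", "q1")],
    ("s2", "q1"): [("s1", "q0")],        # cycle s1 -> s2 -> s1
}
accepting = {("s2", "q1")}               # q1 is an accepting automaton state

witness = [c for c in sccs(product) if len(c) > 1 and c & accepting]
```

The paper's contribution is doing this search symbolically and incrementally, rather than on an explicitly constructed product as above.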

Journal ArticleDOI
TL;DR: The Coq proof assistant is used to formalize the language-independent elementary composition operators Union and Substitution and the proof that the conformance of models with respect to metamodels is preserved during composition, and it is shown that more sophisticated composition operators that share parts of the implementation and have several properties in common can be built from the basic ones.
Abstract: Model composition is a crucial activity in Model Driven Engineering, both to reuse validated and verified model elements and to handle the various aspects of a complex system separately and then weave them together while preserving their properties. Many research activities target this compositional validation and verification (V & V) strategy: allow the independent assessment of components and minimize the residual V & V activities at assembly time. However, there is a continuous and increasing need for the definition of new composition operators that allow the reconciliation of existing models to build new systems according to various requirements. These operators are usually built from scratch and must be systematically verified to ensure that they preserve the properties of the assembled elements. This verification is usually tedious but is mandatory to avoid verifying the composite system for each use of the operators. Our work addresses these issues. We first target the use of proof assistants for specifying and verifying compositional verification frameworks, relying on formal verification techniques instead of testing and proofreading. Then, using a divide-and-conquer approach, we focus on the development of elementary composition operators that are easy to verify and can be used to further define complex composition operators. In our approach, proofs for the complex operators are then obtained by assembling the proofs of the basic operators. To illustrate our proposal, we use the Coq proof assistant to formalize the language-independent elementary composition operators Union and Substitution, and to prove that the conformance of models with respect to metamodels is preserved during composition.
We show that more sophisticated composition operators that share parts of the implementation and have several properties in common (in particular, the aspect-oriented modeling composition approach, invasive software composition, and package merge) can then be built from the basic ones, and that the proof of conformance preservation can also be built from the proofs of the basic operators.
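A toy analogue of the two elementary operators: if a model is a set of typed elements and a metamodel is the set of admissible types, conformance means every element's type is admissible, and both Union and Substitution visibly preserve it. This Python sketch only illustrates the idea; the paper's formalisation is mechanised in Coq over full (meta)models:

```python
# Toy models: sets of (name, type) elements; a metamodel is the set of
# admissible types; conformance = every element's type is admissible.
METAMODEL = {"Class", "Attribute"}

def conforms(model):
    return all(typ in METAMODEL for _name, typ in model)

def union(m1, m2):
    """Elementary composition: set-union of the elements."""
    return m1 | m2

def substitution(model, old, new):
    """Elementary composition: replace element `old` by `new`."""
    return (model - {old}) | {new}

a = {("Person", "Class"), ("name", "Attribute")}
b = {("Company", "Class")}

composed = union(a, b)
renamed = substitution(composed, ("name", "Attribute"),
                       ("fullName", "Attribute"))
# Both operators preserve conformance when their arguments conform.
```

In the same spirit, a proof about a complex operator defined as a combination of `union` and `substitution` can be assembled from the two elementary preservation proofs.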

Journal ArticleDOI
TL;DR: A typed framework for the analysis of multiparty interaction with dynamic role authorization and delegation and introduces a typing discipline that ensures that processes never reduce to authorization errors, including when parties dynamically acquire authorizations.
Abstract: Protocols in distributed settings usually rely on the interaction of several parties and often identify the roles involved in communications. Roles may have a behavioral interpretation, as they do not necessarily correspond to sites or physical devices. Notions of role authorization thus become necessary in settings in which, e.g., different sites may be authorized to act on behalf of a single role, or one site may be authorized to act on behalf of different roles. This flexibility must be equipped with ways of controlling the roles that the different parties are authorized to represent, including the challenging case in which role authorizations are determined only at runtime. We present a typed framework for the analysis of multiparty interaction with dynamic role authorization and delegation. Building on previous work on conversation types with role assignment, our formal model is based on an extension of the π-calculus in which the basic resources are channel-role pairs, denoting the right to interact along a given channel representing the given role. To specify dynamic authorization control, our process model includes (1) a novel scoping construct for authorization domains, and (2) communication primitives for authorizations, which allow authorizations to act on a given channel to be passed around. An authorization error then corresponds to an action involving a channel and a role not enclosed by an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, even when parties dynamically acquire authorizations.
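The runtime notions of authorization error and delegation can be sketched as follows (purely illustrative Python with hypothetical party and channel names, far removed from the typed π-calculus of the paper):

```python
# Parties hold (channel, role) authorizations; acting without the matching
# authorization is an error, and delegation moves the right between parties.

class Party:
    def __init__(self, name, authorizations):
        self.name = name
        self.auth = set(authorizations)      # set of (channel, role) pairs

    def act(self, channel, role):
        if (channel, role) not in self.auth:
            raise PermissionError(
                f"{self.name}: unauthorized action on ({channel}, {role})")
        return f"{self.name} acts on {channel} as {role}"

    def delegate(self, other, channel, role):
        """Communicate the authorization: the right moves to `other`."""
        if (channel, role) not in self.auth:
            raise PermissionError("cannot delegate a right you do not hold")
        self.auth.discard((channel, role))
        other.auth.add((channel, role))

alice = Party("alice", {("chat", "buyer")})
bob = Party("bob", set())
alice.delegate(bob, "chat", "buyer")   # authorization acquired at runtime
```

The paper's type system rules out the `PermissionError` situations statically, including those arising after runtime delegation.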

Journal ArticleDOI
TL;DR: The relative contributions of natural-climatic and anthropogenic factors to long-term changes in the annual and seasonal runoff of the Volga and Don rivers are analysed, showing that in some periods the anthropogenic impact is comparable to, or even exceeds, the climatic one.
Abstract: We analyse the relative contributions of natural-climatic and anthropogenic factors to the long-term changes in the annual and seasonal runoff of the Volga and Don rivers that occurred from the end of the 19th to the beginning of the 21st century. Long-term phases of rising and falling runoff are taken into account, as well as the influence on their characteristics of economic activity both in the hydrographic network and on the catchments. It is shown that in certain periods the anthropogenic impact on the runoff of these rivers is comparable to the natural-climatic one, or even exceeds it. Keywords: annual and seasonal river runoff, long-term changes, natural-climatic and anthropogenic factors, indicator rivers, difference-integral curve, water-balance methods, water-management statistics.

Journal ArticleDOI
TL;DR: A classical theory of supervisory control is adapted to synthesize a controller for systems modeled as graph transition systems; the synthesized controller can impose both behavioral and structural constraints on the system during an adaptation.
Abstract: Ensuring the correctness of the behavior of an adaptive system during dynamic adaptation is an important challenge in realizing correct adaptive systems. Dynamic adaptation refers to changes to both the functionality of the computational entities that comprise a composite system and the structure of their interconnections, in response to variations in the environment, e.g., the load of requests on a server system. In this research, we view the problem of correct structural adaptation as a supervisory control problem and synthesize a reconfiguration controller that guides the behavior of a system during adaptation. The reconfiguration controller observes the system behavior during an adaptation and controls it by allowing/disallowing actions in a way that ensures that a given property is satisfied and deadlock is avoided. The system during adaptation is modeled using a graph transition system, and the properties to be enforced are specified using a graph automaton. We adapt a classical theory of supervisory control for synthesizing a controller for systems modeled as graph transition systems. This theory is used to synthesize a controller that can impose both behavioral and structural constraints on the system during an adaptation. We apply a tool that we have implemented to support our approach on a case study involving HTTPS servers.
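The core synthesis step of classical supervisory control — computing the largest set of states from which no uncontrollable action can force the system into a bad state, then disabling controllable actions that leave that set — can be sketched as below. The plant, action names, and controllability partition are hypothetical, not the paper's graph-transition model:

```python
# Plant: state -> [(action, target)]; the supervisor may disable only
# controllable actions. BAD states must never be reached.
TRANS = {
    "idle":     [("start", "working")],
    "working":  [("finish", "idle"), ("fail", "error"), ("push", "overload")],
    "error":    [("reset", "idle"), ("panic", "crash")],
    "overload": [("burst", "crash")],
    "crash":    [],
}
UNCONTROLLABLE = {"fail", "burst"}   # cannot be disabled
BAD = {"crash"}

def synthesize(trans, bad):
    """Fixed point: drop states from which an uncontrollable step escapes."""
    safe = set(trans) - bad
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            for act, t in trans[s]:
                if act in UNCONTROLLABLE and t not in safe:
                    safe.discard(s)
                    changed = True
    return safe

def supervisor(state, safe):
    """Allowed actions: all uncontrollable ones, plus controllable steps
    that stay inside the safe set."""
    return [act for act, t in TRANS[state]
            if act in UNCONTROLLABLE or t in safe]

safe = synthesize(TRANS, BAD)
```

Here "overload" is pruned because the uncontrollable "burst" escapes to a bad state, so the supervisor disables the controllable "push" and "panic" steps that would reach unsafe territory.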

Journal ArticleDOI
TL;DR: This paper presents a model of session-based concurrency with mechanisms for run-time adaptation, and equips the model with a type system that ensures communication safety and consistency properties: while safety guarantees absence of run- time communication errors, consistency ensures that update actions do not disrupt already established session protocols.
Abstract: Communication-centric systems are software systems built as assemblies of distributed artifacts that interact following predefined communication protocols. Session-based concurrency is a type-based approach to ensure the conformance of communication-centric systems to such protocols. This paper presents a model of session-based concurrency with mechanisms for run-time adaptation. Our model allows us to specify communication-centric systems whose session behavior can be dynamically updated at run-time. We improve on previous work by proposing an event-based approach: adaptation requests, issued by the system itself or by its context, are assimilated to events which may trigger adaptation routines. These routines exploit type-directed checks to enable the reconfiguration of processes with active protocols. We equip our model with a type system that ensures communication safety and consistency properties: while safety guarantees absence of run-time communication errors, consistency ensures that update actions do not disrupt already established session protocols. We provide soundness results for binary and multiparty protocols.
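The consistency property — updates must not disrupt already established sessions — can be illustrated by a toy monitor that tracks a binary session as a list of remaining actions and only permits updates at quiescent points. The names and the "safe update" condition are illustrative assumptions, not the paper's type system:

```python
# A session as a list of expected actions; updates are only consistent at
# quiescent points (before the protocol starts or after it completes).

class Session:
    def __init__(self, protocol):
        self.protocol = list(protocol)   # e.g. ["!int", "?bool"]
        self.pos = 0

    def step(self, action):
        expected = (self.protocol[self.pos]
                    if self.pos < len(self.protocol) else None)
        if action != expected:
            raise RuntimeError(
                f"communication error: got {action}, expected {expected}")
        self.pos += 1

    def can_update(self):
        return self.pos == 0 or self.pos == len(self.protocol)

    def update(self, new_protocol):
        if not self.can_update():
            raise RuntimeError("update would disrupt an active session")
        self.protocol, self.pos = list(new_protocol), 0

s = Session(["!int", "?bool"])
s.step("!int")
mid_update_ok = s.can_update()   # False: the session is active
s.step("?bool")
s.update(["!str"])               # allowed: previous protocol completed
```

Where this sketch checks consistency at runtime, the paper's type system guarantees both communication safety and update consistency statically.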