
Showing papers in "Formal Aspects of Computing in 2009"


Journal ArticleDOI
TL;DR: This work presents a final reference for the Circus denotational semantics based on Hoare and He’s Unifying Theories of Programming (UTP), which allows the proof of meta-theorems about Circus, including the refinement laws of interest.
Abstract: Circus specifications define both data and behavioural aspects of systems using a combination of Z and CSP constructs. Previously, a denotational semantics had been given to Circus as a shallow embedding in Z, in which Circus constructs were mapped to their semantic representation as a Z specification, with yet another language used as a meta-language; this embedding was not useful for proving properties like the refinement laws that justify the distinguishing development technique associated with Circus. This work presents a final reference for the Circus denotational semantics based on Hoare and He’s Unifying Theories of Programming (UTP); as such, it allows the proof of meta-theorems about Circus, including the refinement laws in which we are interested. Its correspondence with the CSP semantics is illustrated with some examples. We also discuss the library of lemmas and theorems used in the proofs of the refinement laws. Finally, we give an account of the mechanisation of the Circus semantics and of the mechanical proofs of the refinement laws.
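
For orientation, the building block underlying such a UTP semantics is the design; the following is the textbook UTP definition due to Hoare and He, not anything specific to this paper's development:

```latex
% A UTP design: if the program has started (ok) in a state satisfying
% precondition P, it will terminate (ok') in a state satisfying Q.
P \vdash Q \;\triangleq\; (ok \land P) \Rightarrow (ok' \land Q)
```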

137 citations


Journal ArticleDOI
TL;DR: This work describes another approach to program construction, referred to as invariant-based programming, in which the specifications and the internal loop invariants are formulated before the program code itself is written.
Abstract: Program verification is usually done by adding specifications and invariants to the program and then proving that the verification conditions are all true. This makes program verification an alternative to, or a complement to, testing. We describe here another approach to program construction, which we refer to as invariant-based programming, where we start by formulating the specifications and the internal loop invariants for the program before we write the program code itself. The correctness of the code is then easy to check at the same time as one is constructing it. In this approach, program verification becomes a complement to coding rather than to testing. The purpose is to produce programs and software that are correct by construction. We present a new kind of diagram, nested invariant diagrams, where program specifications and invariants (rather than the control) provide the main organizing structure. Nesting of invariants provides an extension hierarchy that allows us to express the invariants in a very compact manner. We have studied, in a number of case studies, the feasibility of formulating specifications and loop invariants before the code itself has been written. Our experience is that a systematic use of figures, in combination with a rough idea of the intended behavior of the algorithm, makes it rather straightforward to formulate the invariants needed for the program, to construct the code around these invariants, and to check that the resulting program is indeed correct. We describe our experiences from using invariant-based programming in practice, both from teaching programmers how to construct programs that they prove correct themselves, and from teaching invariant-based programming to CS students in class.
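
As a flavour of the approach, here is a minimal sketch in Python (our illustration, not the paper's diagrammatic notation): the invariant is written down before the loop body, and assertions make it checkable while the code is being constructed.

```python
def int_sqrt(n: int) -> int:
    """Largest r with r*r <= n, built around an invariant stated first."""
    assert n >= 0                      # precondition
    r = 0
    # Invariant, formulated before the loop body was written:
    #   0 <= r and r*r <= n
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert 0 <= r and r * r <= n   # invariant maintained by the body
    # Invariant plus negated loop guard gives the postcondition:
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```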

48 citations


Journal ArticleDOI
TL;DR: TIC, a real-time specification language, is applied to complement Simulink with formal verification capability; the framework enlarges the design space by representing environment properties of open systems, and handles complex diagrams, as analysis of both continuous and discrete behaviour is supported.
Abstract: Simulink has been widely used in industry to model and simulate embedded systems. With the increasing use of embedded systems in real-time safety-critical situations, Simulink falls short in analyzing (timing) requirements with a high level of assurance. In this article, we apply Timed Interval Calculus (TIC), a real-time specification language, to complement Simulink with TIC formal verification capability. We construct TIC library functions to model the Simulink library blocks that are used to compose Simulink diagrams. Next, Simulink diagrams are automatically transformed into TIC models which preserve both functional and timing aspects. Important requirements such as timing bounded liveness can be precisely specified in TIC for whole diagrams or individual components. Lastly, validation of TIC models can be rigorously conducted with a high degree of automation using a generic theorem prover. Our framework enlarges the design space by representing environment properties of open systems, and handles complex diagrams, as the analysis of continuous and discrete behavior is supported.
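
To convey what a timing bounded-liveness requirement amounts to, here is a toy discretised check in Python (our illustrative sketch; TIC itself expresses such properties as predicates over continuous time intervals, and the names below are hypothetical):

```python
def bounded_liveness(trace, trigger, response, bound, dt):
    """Every sample where `trigger` holds is followed, within `bound`
    seconds, by a sample where `response` holds (discretised check)."""
    window = round(bound / dt)
    for i, state in enumerate(trace):
        if trigger(state):
            if not any(response(s) for s in trace[i:i + window + 1]):
                return False
    return True

# Toy example: an alarm must be raised within 0.3 s of overheating.
trace = [{"temp": t, "alarm": t >= 100} for t in range(90, 120)]
ok = bounded_liveness(trace,
                      trigger=lambda s: s["temp"] >= 100,
                      response=lambda s: s["alarm"],
                      bound=0.3, dt=0.1)
```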

45 citations


Journal ArticleDOI
TL;DR: This work formalises, as rules, the relationship between salience and cognitive load revealed by empirical data, and adds these rules to an abstract cognitive architecture, based on higher-order logic and developed for the formal verification of usability properties.
Abstract: Well-designed interfaces use procedural and sensory cues to increase the cognitive salience of appropriate actions. However, empirical studies suggest that cognitive load can influence the strength of those cues. We formalise the relationship between salience and cognitive load revealed by empirical data. We add these rules to our abstract cognitive architecture, based on higher-order logic and developed for the formal verification of usability properties. The interface of a fire engine dispatch task from the empirical studies is then formally modelled and verified. The outcomes of this verification and their comparison with the empirical data provide a way of assessing our salience and load rules. They also guide further iterative refinements of these rules. Furthermore, the juxtaposition of the outcomes of formal analysis and empirical studies suggests new experimental hypotheses, thus providing input to researchers in cognitive science.

40 citations


Journal ArticleDOI
TL;DR: A theory of testing is presented that integrates into Hoare and He’s Unifying Theories of Programming (UTP) and gives test cases a denotational semantics by viewing them as specification predicates, which allows test cases to be related via refinement to specifications and programs.
Abstract: This paper presents a theory of testing that integrates into Hoare and He’s Unifying Theories of Programming (UTP). We give test cases a denotational semantics by viewing them as specification predicates. This reformulation of test cases allows for relating test cases via refinement to specifications and programs. Having such a refinement order that integrates test cases, we develop a testing theory for fault-based testing. Fault-based testing uses test data designed to demonstrate the absence of a set of pre-specified faults. A well-known fault-based technique is mutation testing. In mutation testing, first, faults are injected into a program by altering (mutating) its source code. Then, test cases that can detect these faults are designed. The assumption is that other faults will be caught, too. In this paper, we apply the mutation technique to both specifications and programs. Using our theory of testing, two new test case generation laws for detecting injected (anticipated) faults are presented: one is based on the semantic level of UTP design predicates, the other on the algebraic properties of a small programming language.
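
The mutation idea itself is easy to demonstrate outside UTP; the following Python sketch (our illustration) shows an injected boundary fault and the single test input that kills the mutant:

```python
def grade_pass(score):
    """Original program: the pass mark is 50."""
    return score >= 50

def grade_pass_mutant(score):
    """Mutant: '>=' altered to '>' (a typical injected fault)."""
    return score > 50

# Only the boundary input distinguishes the two programs, so a
# fault-based test suite must contain it:
killing_test = 50
assert grade_pass(killing_test) is True
assert grade_pass_mutant(killing_test) is False   # mutant detected (killed)
```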

35 citations


Journal ArticleDOI
TL;DR: This article outlines a new contract semantics which applies equally well in concurrent and sequential contexts and permits a flexible use of contracts for specifying the mutual rights and obligations of clients and suppliers while preserving the potential for parallelism.
Abstract: The SCOOP model extends the Eiffel programming language to provide support for concurrent programming. The model is based on the principles of Design by Contract. The semantics of contracts used in the original proposal (SCOOP_97) is not suitable for concurrent programming because it restricts parallelism and complicates reasoning about program correctness. This article outlines a new contract semantics which applies equally well in concurrent and sequential contexts and permits a flexible use of contracts for specifying the mutual rights and obligations of clients and suppliers while preserving the potential for parallelism. We argue that it is indeed a generalisation of the traditional correctness semantics. We also propose a proof technique for concurrent programs which supports proofs—similar to those for traditional non-concurrent programs—of partial correctness and loop termination in the presence of asynchrony.
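
The heart of the generalised semantics can be caricatured in Python (our sketch, not SCOOP syntax or the paper's formalisation): a precondition that a sequential client must establish is reinterpreted, in a concurrent context, as a wait condition that delays the call until it holds.

```python
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
        self.cond = threading.Condition()

    def put(self, x):
        with self.cond:
            # Sequential reading: 'buffer not full' is the client's
            # correctness obligation (an assertion on the caller).
            # Concurrent reading: the same precondition becomes a wait
            # condition -- the call is simply delayed until it holds.
            self.cond.wait_for(lambda: len(self.items) < self.capacity)
            self.items.append(x)
            self.cond.notify_all()
```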

33 citations


Journal ArticleDOI
TL;DR: This work defines an innovative framework where generic pattern concepts based on roles are precisely defined as a formal role metamodel using Object-Z, and formalizes the properties that must be captured in a class model when a design pattern is deployed.
Abstract: Design patterns are typically defined imprecisely using natural language descriptions with graphical annotations. It is also common to describe patterns using a concrete design example with implementation details. Several approaches have been proposed to describe design patterns abstractly based on role concepts. However, the notion of role differs in each approach. The behavioral aspects of patterns are not addressed in the role-based approaches. This paper presents a rigorous approach to describe design patterns based on role concepts. Adopting metamodeling and formalism, our work defines an innovative framework where generic pattern concepts based on roles are precisely defined as a formal role metamodel using Object-Z. Individual patterns are specified using these generic role concepts in terms of pattern role models. Temporal behaviors of patterns are also specified using Object-Z and integrated in the pattern role models. Patterns described this way are abstract, separating pattern realization information from the pattern description. They are also precise, providing a rigorous foundation for reasoning about pattern properties. This paper also formalizes the properties that must be captured in a class model when a design pattern is deployed. These properties are defined generically in terms of role bindings from a pattern role model to a class model. They provide a precise basis for checking the validity of pattern utilisation in designs. Our work is supported by tools. We have developed an initial role metamodel using an existing modeling framework, Eclipse Modeling Framework (EMF), and have transformed the metamodel to Object-Z using model transformation techniques. Complex constraints are added to the transformed Object-Z model. More importantly, we implement the role metamodel. Using this implementation, pattern authors can develop an initial pattern role model in the same modeling framework and convert the initial model to Object-Z using our transformation rules. The transformed Object-Z model is then enhanced with behavioral features of the pattern. This tool support significantly improves the practicability of applying formalism to design patterns.
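
A toy picture of role binding and its validity check (our sketch with hypothetical names; the paper's actual formalisation is in Object-Z):

```python
# A pattern role model lists roles and required relations between them;
# a binding maps roles to classes of a concrete class model.
pattern_roles = {
    "Subject":  {"must_reference": "Observer"},
    "Observer": {"must_reference": None},
}
class_model = {"WeatherData": {"references": {"Display"}},
               "Display":     {"references": set()}}
binding = {"Subject": "WeatherData", "Observer": "Display"}

def binding_valid(roles, classes, bind):
    for role, constraint in roles.items():
        target_role = constraint["must_reference"]
        if target_role is not None:
            source, target = bind[role], bind[target_role]
            if target not in classes[source]["references"]:
                return False        # required association not realised
    return True

assert binding_valid(pattern_roles, class_model, binding)
```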

32 citations


Journal ArticleDOI
TL;DR: The formal specification of the physical behaviour of devices ‘unplugged’ from their digital effects is explored to better understand the nature of physical interaction and the way this can be exploited to improve the design of hybrid devices with both physical and digital features.
Abstract: This paper explores the formal specification of the physical behaviour of devices ‘unplugged’ from their digital effects. By doing this we seek to better understand the nature of physical interaction and the way this can be exploited to improve the design of hybrid devices with both physical and digital features. We use modified state transition networks of the physical behaviour, which we call physigrams, and link these to parallel diagrams of the digital state. These are used to describe a number of features of physical interaction exposed by previous work, with relevant properties expressed using a formal semantics of the diagrams. As well as being an analytic tool, the physigrams have been used in a case study where product designers used and adapted them as part of the design process.
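
The paired physical/digital networks can be sketched as two linked transition tables (our Python illustration; physigrams themselves are diagrammatic):

```python
# A toggle button: the physical network (up/down under press/release)
# is 'unplugged' from the digital state it drives.
physical = {("up", "press"): "down", ("down", "release"): "up"}
digital  = {("off", "down"): "on",  ("on", "down"): "off"}

def step(p_state, d_state, action):
    p_next = physical.get((p_state, action), p_state)
    # The digital effect is triggered by the physical transition
    # (entering 'down'), not by the user's action directly:
    if p_next != p_state:
        d_state = digital.get((d_state, p_next), d_state)
    return p_next, d_state

s = ("up", "off")
for a in ["press", "release", "press"]:
    s = step(*s, a)
assert s == ("down", "off")   # second press toggled the light back off
```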

28 citations


Journal ArticleDOI
TL;DR: This paper surveys recent results linking the relational model of refinement to process algebraic models, and derives relational refinement rules for specifications containing both internal operations and outputs that correspond to failures–divergences refinement.
Abstract: Two styles of description arise naturally in formal specification: state-based and behavioural. In state-based notations, a system is characterised by a collection of variables, and their values determine which actions may occur throughout a system history. Behavioural specifications describe the chronologies of actions—interactions between a system and its environment. The exact nature of such interactions is captured in a variety of semantic models with corresponding notions of refinement; refinement in state-based systems is based on the semantics of sequential programs and is modelled relationally. Acknowledging that these viewpoints are complementary, substantial research has gone into combining the paradigms. The purpose of this paper is to do three things. First, we survey recent results linking the relational model of refinement to the process algebraic models. Specifically, we detail how variations in the relational framework lead to relational data refinement being in correspondence with traces–divergences, singleton failures and failures–divergences refinement in a process semantics. Second, we generalise these results by providing a general flexible scheme for incorporating the two main “erroneous” concurrent behaviours, deadlock and divergence, into relational refinement. This is shown to subsume previous characterisations. In doing this we derive relational refinement rules for specifications containing both internal operations and outputs that correspond to failures–divergences refinement. Third, the theory has been formally specified and verified using the interactive theorem prover KIV.
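
For orientation, the relational rules being generalised are the standard downward (forward) simulation conditions; in one common formulation, with retrieve relation R and applicability/finalisation details omitted:

```latex
% Downward (forward) simulation relating abstract AInit/AOp to
% concrete CInit/COp through a retrieve relation R:
CInit \subseteq AInit \mathbin{;} R            % initialisation
R \mathbin{;} COp \subseteq AOp \mathbin{;} R  % correctness
```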

27 citations


Journal ArticleDOI
TL;DR: This paper describes how far contracts can take us in verifying interesting properties of concurrent systems using modular Hoare rules, and shows how theorem-proving methods developed for sequential Eiffel can be extended to the concurrent case.
Abstract: SCOOP is a concurrent programming language with a new semantics for contracts that applies equally well in concurrent and sequential contexts. SCOOP eliminates race conditions and atomicity violations by construction. However, it is still vulnerable to deadlocks. In this paper we describe how far contracts can take us in verifying interesting properties of concurrent systems using modular Hoare rules and show how theorem proving methods developed for sequential Eiffel can be extended to the concurrent case. However, some safety and liveness properties depend upon the environment and cannot be proved using the Hoare rules. To deal with such system properties, we outline a SCOOP Virtual Machine (SVM) as a fair transition system. The SVM makes it feasible to use model-checking and theorem proving methods for checking global temporal logic properties of SCOOP programs. The SVM uses the Hoare rules where applicable to reduce the number of steps in a computation.

24 citations


Journal ArticleDOI
TL;DR: A calculus of object-oriented refinement is developed, as an extension to the classical theory of data refinement, in which the refinement rules are classified into four categories according to their natures and uses in object-oriented software design.
Abstract: An object-oriented program consists of a section of class declarations and a main method. The class declaration section represents the structure of an object-oriented program, that is, the data, the classes, and the relations among them. The execution of the main method realizes the application by invoking methods of objects of the classes defined in the class declarations. Class declarations define the general properties of objects and how they collaborate with each other in realizing the application task programmed as the main method. Note that for one class declaration section, different main methods can be programmed for different applications, and this is an important feature of reuse in object-oriented programming. On the other hand, different class declaration sections may support the same applications, but these different class declaration sections can make a significant difference with regard to understanding, reuse and maintainability of the applications. With a UML-like modeling language, the class declaration section of a program is represented as a class diagram, and the instances of the class diagram are represented by object diagrams, which form the state space of the program. In this paper, we define a class diagram and its object diagrams as directed labeled graphs, and investigate what changes in the class structure maintain the capability of providing functionalities (or services). We formalize such a structure change by the notion of structure refinement. A structure refinement is a transformation from one graph to another that preserves the capability of providing services, that is, the resulting class graph should be able to provide at least as many, and as good, services (in terms of functional refinement) as the original graph. We then develop a calculus of object-oriented refinement, as an extension to the classical theory of data refinement, in which the refinement rules are classified into four categories according to their natures and uses in object-oriented software design. The soundness of the calculus is proved and the completeness of the refinement rules of each category is established with regard to normal forms defined for object-oriented programs. These completeness results show the power of the simple refinement rules. The normal forms and the completeness results together capture the essence of polymorphism, dynamic method binding and object sharing by references in object-oriented computation.
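
The graph view can be made concrete with a toy encoding (our sketch, hypothetical names): a class declaration section as a directed labelled graph, and one classical structure refinement that transforms it while preserving the services provided.

```python
# Class structure as a directed labelled graph: nodes are classes,
# labelled edges record attributes ('attr:...') and inheritance ('isa').
g1 = {("Dog", "attr:name"): "String", ("Cat", "attr:name"): "String"}

def pull_up_attribute(g, subclasses, superclass, attr, typ):
    """One classical structure refinement: introduce a common superclass
    and move a duplicated attribute up; provided services are preserved."""
    g2 = {k: v for k, v in g.items()
          if k[1] != f"attr:{attr}" or k[0] not in subclasses}
    for c in subclasses:
        g2[(c, "isa")] = superclass
    g2[(superclass, f"attr:{attr}")] = typ
    return g2

g2 = pull_up_attribute(g1, ["Dog", "Cat"], "Animal", "name", "String")
```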

Journal ArticleDOI
TL;DR: It is argued that new modelling techniques from computer science can also be employed in computational modelling of the mind, being specifically targeted at psychological level modelling, in which it is advantageous to directly represent the distribution of control.
Abstract: Previous research has developed a formal methods-based (cognitive-level) model of the Interacting Cognitive Subsystems central engine, with which we have simulated attentional capture in the context of Barnard’s key-distractor Attentional Blink task. This model captures core aspects of the allocation of human attention over time and as such should be applicable across a range of practical settings when human attentional limitations come into play. In addition, this model simulates human electrophysiological data, such as electroencephalogram recordings, which can be compared to real electrophysiological data recorded from human participants. We have used this model to evaluate the performance trade-offs that would arise from varying key parameters and applying either a constructive or a reactive approach to improving interactive systems in a stimulus rich environment. A strength of formal methods is that they are abstract and the resulting specifications of the operator are general purpose, ensuring that our findings are broadly applicable. Thus, we argue that new modelling techniques from computer science can also be employed in computational modelling of the mind. These would complement existing techniques, being specifically targeted at psychological level modelling, in which it is advantageous to directly represent the distribution of control.

Journal ArticleDOI
TL;DR: This article overcomes the limitation of Basuki’s approach by incorporating many aspects of user behaviour into a single user model, and introduces a more natural approach to model human–computer interaction.
Abstract: This article describes a framework to formally model and analyse human behaviour. This is shown by a simple case study of a chocolate vending machine, which represents many aspects of human behaviour. The case study is modelled and analysed using the Maude rewrite system. This work extends previous work by Basuki, which attempts to model interactions between human and machine and analyse the possibility of errors occurring in the interactions. By redesigning the interface, it can be shown that certain kinds of error can be avoided for some users. This article overcomes the limitation of Basuki’s approach by incorporating many aspects of user behaviour into a single user model, and introduces a more natural approach to modelling human–computer interaction.

Journal ArticleDOI
TL;DR: This paper gives an overview of the RISC ProofNavigator, an interactive proving assistant for the area of program verification that combines the user-guided top-down decomposition of proofs with the automatic simplification and closing of proof states by an external satisfiability solver.
Abstract: This paper gives an overview of the RISC ProofNavigator, an interactive proving assistant for the area of program verification. The assistant combines the user-guided top-down decomposition of proofs with the automatic simplification and closing of proof states by an external satisfiability solver. The software exhibits a modern graphical user interface which has been developed with a focus on simplicity in order to make the software suitable for educational scenarios. Nevertheless, some use cases of a certain level of complexity demonstrate that it may also be appropriate for various realistic applications.

Journal ArticleDOI
TL;DR: The elimination stack algorithm is shown to be linearisable: any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack.
Abstract: We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is based on the scalable stack algorithm of Hendler et al. (Proceedings of the sixteenth annual ACM symposium on parallel algorithms, 27–30 June 2004, Barcelona, Spain, pp 206–215), which allows push and pop operations to be paired off and eliminated without affecting the central stack, thus reducing contention on the stack, and allowing multiple pairs of push and pop operations to be performed in parallel. Our algorithm uses a simpler data structure than Hendler, Shavit and Yerushalmi’s, and avoids an ABA problem. We first derive a simple lock-free stack algorithm using a linked-list implementation, and discuss issues related to memory management and the ABA problem. We then add an abstract model of the elimination process, from which we derive our elimination algorithm. This allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. Each step in the derivation is either a data refinement which preserves the level of atomicity, an operational refinement which may alter the level of atomicity, or a refactoring step which alters the structure of the system resulting from the preceding derivation. We verify our refinements using an extension of Lipton’s reduction method, allowing concurrent and non-concurrent aspects to be considered separately.
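
The shape of the underlying central stack can be sketched as follows (our Python illustration: a lock stands in for the hardware compare-and-swap primitive, and the elimination layer is omitted):

```python
import threading

class Node:
    def __init__(self, value, nxt):
        self.value, self.next = value, nxt

class LockFreeStack:
    """Central stack in the Treiber style: each operation retries a
    compare-and-swap on the head pointer until it succeeds."""
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()   # stand-in for hardware CAS

    def _cas(self, expected, new):
        with self._cas_lock:
            if self.head is expected:
                self.head = new
                return True
            return False

    def push(self, value):
        while True:
            h = self.head
            if self._cas(h, Node(value, h)):
                return

    def pop(self):
        while True:
            h = self.head
            if h is None:
                return None   # empty; the full algorithm may retry or eliminate
            if self._cas(h, h.next):
                return h.value
```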

Journal ArticleDOI
TL;DR: The design and delivery of two courses that aim to develop skills of use to students in their subsequent professional practice, whether or not they apply formal methods directly are described.
Abstract: We describe the design and delivery of two courses that aim to develop skills of use to students in their subsequent professional practice, whether or not they apply formal methods directly. Both courses emphasise skills in model construction and analysis by testing rather than formal verification. The accessibility of the formalism is enhanced by the use of established notations (VDM-SL and VDM++). Motivation is improved by using credible examples drawn from industrial projects, and by using an industrial-strength tool set. We present examples from the courses and discuss student evaluation and examination performance. We stress the need for exercises and tests to support the development of abstraction skills.

Journal ArticleDOI
TL;DR: Four tools are compared regarding their suitability for teaching formal software verification, namely the Frege Program Prover, the KeY system, Perfect Developer, and the Prototype Verification System (PVS).
Abstract: We compare four tools regarding their suitability for teaching formal software verification, namely the Frege Program Prover, the KeY system, Perfect Developer, and the Prototype Verification System (PVS). We evaluate them on a suite of small programs, which are typical of courses dealing with Hoare-style verification, weakest preconditions, or dynamic logic. Finally we report our experiences with using Perfect Developer in class.
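
The suite of small programs referred to is at roughly this scale (a generic textbook example, not drawn from the paper): compute a weakest precondition and discharge the resulting verification condition.

```latex
% Weakest precondition of an assignment, then a trivial VC:
wp(x := x + 1,\; x > 0) \;=\; (x + 1 > 0) \;=\; (x > -1)
% hence the Hoare triple below is valid, since x >= 0 implies x > -1:
\{x \geq 0\}\; x := x + 1\; \{x > 0\}
```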

Journal ArticleDOI
TL;DR: A semi-automated framework to describe and analyse business processes is presented, and it is suggested how all the issues can be embedded in a unifying computer tool.
Abstract: Both specification and verification of business processes are gaining more and more attention in the field. Most existing work in recent years deals with important, yet very specialized, issues. Among these are compensation constructs to cope with exceptions generated by long-running business transactions, fully programmable fault and compensation handling mechanisms, web services, scope-based compensation, and shared labels for synchronization. The main purpose of this paper is to present a semi-automated framework to describe and analyse business processes. Business analysts can now use a simple specification language (e.g., BPMN [Obj06]) to describe any type of activity in a company, in a concurrent and modular fashion. The associated programs (e.g., BPDs [Obj06]) have to be executed in an appropriate language (e.g., BPEL4WS [ACD+03]). Moreover, they have to be confirmed to be sound, via some prescribed (a priori) conditions. We suggest how all these issues can be embedded in a unifying computer tool. We link our work with similar approaches and we justify our particular choices (besides BPMN and BPD): the TLA+ language for expressing the imposed behavioural conditions and Petri Nets ([EB87], [EB88]) to describe an intermediate semantics. In fact, we want to manage the general relationship diagram (Fig. 1) in an appropriate way. Examples and case studies are provided.
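
The intermediate Petri-net semantics rests on the standard token game, which is compact enough to sketch in Python (our generic illustration, hypothetical transition names):

```python
# A Petri net as (pre, post) maps from transitions to place multisets;
# a marking maps places to token counts.
pre  = {"approve": {"submitted": 1}, "archive": {"approved": 1}}
post = {"approve": {"approved": 1},  "archive": {"done": 1}}

def enabled(t, marking):
    return all(marking.get(p, 0) >= n for p, n in pre[t].items())

def fire(t, marking):
    assert enabled(t, marking), f"{t} not enabled"
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

m = fire("archive", fire("approve", {"submitted": 1}))
assert m == {"submitted": 0, "approved": 0, "done": 1}
```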

Journal ArticleDOI
TL;DR: The results show that while the new model provides increased parallelism, this comes with the price of increased overhead due to lock management.
Abstract: We present a new concurrency model for the Eiffel programming language. The model is motivated by describing a number of semantic problems with the leading concurrency model for Eiffel, namely SCOOP. Our alternative model aims to preserve the existing behaviour of sequential programs and libraries wherever possible. Comparison with the SCOOP model is made. The concurrency aspects of the alternative model are presented in CSP along with a model of exceptions. The results show that while the new model provides increased parallelism, this comes with the price of increased overhead due to lock management.

Journal ArticleDOI
TL;DR: This paper presents an alternative solution to the problem of incomplete treatment of successful process termination, which is closer to the original semantic model and provides greater flexibility over the type of parallel termination semantics available in CSP.
Abstract: In the original failure-divergence semantic model for Communicating Sequential Processes (CSP), the incomplete treatment of successful process termination, and in particular parallel termination, permitted unnatural processes to be defined. In response to these problems, a number of different solutions have been proposed by various authors since the original failure-divergence model was developed by Hoare, Brookes and Roscoe. This paper presents an alternative solution to this problem, which is both closer to the original semantic model and provides greater flexibility over the type of parallel termination semantics available in CSP.

Journal ArticleDOI
TL;DR: A unifying framework for understanding and developing SAT-based decision procedures for Satisfiability Modulo Theories (SMT), based on a reduction of the decision problem to propositional logic by means of a deductive system is presented.
Abstract: We present a unifying framework for understanding and developing SAT-based decision procedures for Satisfiability Modulo Theories (SMT). The framework is based on a reduction of the decision problem to propositional logic by means of a deductive system. The two commonly used techniques, eager encodings (a direct reduction to propositional logic) and lazy encodings (a family of techniques based on an interplay between a SAT solver and a decision procedure) are identified as special cases. This framework offers the first generic approach for eager encodings, and a simple generalization of various lazy techniques that are found in the literature.
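
The lazy schema can be sketched end-to-end with toy components (our illustration, not any production solver's API): a SAT solver enumerates Boolean models of the propositional abstraction, a theory solver checks each model for consistency, and inconsistent models are excluded by learned blocking clauses.

```python
from itertools import product

# Boolean abstraction: b1 <-> (x < 3), b2 <-> (x > 5); formula: b1 AND b2.
atoms = {"b1": ("<", 3), "b2": (">", 5)}
clauses = [[("b1", True)], [("b2", True)]]

def sat_models(clauses, variables):
    """Toy SAT 'solver': enumerate satisfying Boolean assignments."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[v] == pol for v, pol in c) for c in clauses):
            yield a

def theory_consistent(assignment):
    """Toy theory solver: bounds on a single integer variable x."""
    lo, hi = float("-inf"), float("inf")
    for var, val in assignment.items():
        op, k = atoms[var]
        if op == "<" and val:
            hi = min(hi, k - 1)
        elif op == ">" and val:
            lo = max(lo, k + 1)
        elif op == "<":
            lo = max(lo, k)        # not (x < k)  means  x >= k
        else:
            hi = min(hi, k)        # not (x > k)  means  x <= k
    return lo <= hi

def lazy_smt(clauses):
    variables = sorted(atoms)
    while True:
        model = next(sat_models(clauses, variables), None)
        if model is None:
            return "unsat"
        if theory_consistent(model):
            return "sat", model
        # Learn a blocking clause ruling out this Boolean model only.
        clauses = clauses + [[(v, not val) for v, val in model.items()]]

print(lazy_smt(clauses))   # 'unsat': x < 3 and x > 5 cannot both hold
```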

Journal ArticleDOI
TL;DR: A graphical interactive tool, named GOAL, is introduced that can assist the user in understanding Büchi automata, linear temporal logic, and their relation; its equivalence test is essential for checking whether a hand-drawn automaton is correct in the sense that it is equivalent to some intended temporal formula or reference automaton.
Abstract: We introduce a graphical interactive tool, named GOAL, that can assist the user in understanding Büchi automata, linear temporal logic, and their relation. Büchi automata and linear temporal logic are closely related and have long served as fundamental building blocks of linear-time model checking. Understanding their relation is instrumental in discovering algorithmic solutions to model checking problems or simply in using those solutions, e.g., specifying a temporal property directly by an automaton rather than a temporal formula so that the property can be verified by an algorithm that operates on automata. One main function of the GOAL tool is translation of a temporal formula into an equivalent Büchi automaton that can be further manipulated visually. The user may edit the resulting automaton, attempting to optimize it, or simply run the automaton on some inputs to get a basic understanding of how it operates. GOAL includes a large number of translation algorithms, most of which support past temporal operators. With the option of viewing the intermediate steps of a translation, the user can quickly grasp how a translation algorithm works. The tool also provides various standard operations and tests on Büchi automata, in particular the equivalence test which is essential for checking if a hand-drawn automaton is correct in the sense that it is equivalent to some intended temporal formula or reference automaton. Several use cases are elaborated to show how these GOAL functions may be combined to facilitate the learning and teaching of Büchi automata and linear temporal logic.
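
A minimal taste of the objects GOAL manipulates (our sketch, independent of GOAL's implementation): a Büchi automaton for the LTL formula GF p ("infinitely often p"), simulated on a finite prefix of an input word.

```python
# Deterministic Büchi automaton for 'GF p' over alphabet {p, q}:
# the accepting state 1 is entered exactly on reading p.
delta = {(0, "p"): 1, (0, "q"): 0, (1, "p"): 1, (1, "q"): 0}
initial, accepting = 0, {1}

def run_prefix(word):
    """Simulate on a finite prefix; an infinite word is accepted iff the
    run visits an accepting state infinitely often (here: p recurs forever)."""
    state, visits = initial, 0
    for letter in word:
        state = delta[(state, letter)]
        visits += state in accepting
    return state, visits

# On the ultimately periodic word q (p q)^omega, the accepting state
# recurs once per period, suggesting acceptance:
print(run_prefix(["q"] + ["p", "q"] * 5))   # (0, 5)
```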

Journal ArticleDOI
TL;DR: An alternative approach is suggested: defining a refinement process for user interfaces that allows us to maintain the same rigorous standards applied to the rest of the system when implementing user interface designs.
Abstract: Formal approaches to software development require that we correctly describe (or specify) systems in order to prove properties about our proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation to ensure that the properties we have proved are retained. Different transformation, or refinement, methods exist for different formal methods, but they all seek to ensure that we can guide the transformation in a way which preserves the desired properties of the system. Refinement methods also allow us to subsequently compare two systems to see if a refinement relation exists between the two. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? Are they correct in terms of the overall system specification? However, when we come to implement our interface designs we do not have a defined process to follow which ensures that we maintain these properties as we transform the design into code. Instead, we rely on our judgement, the belief that we are doing the right thing, and subsequent user testing to ensure that our final solution remains usable and satisfactory. We suggest an alternative approach, which is to define a refinement process for user interfaces which will allow us to maintain the same rigorous standards we apply to the rest of the system when we implement our user interface designs.

Journal ArticleDOI
Piotr Nienaltowski
TL;DR: Two refinements of the access control policy for SCOOP are proposed: a type-based mechanism to specify which arguments of a routine call should be locked, and a lock passing mechanism for safe handling of complex synchronisation scenarios with mutual locking of several separate objects.
Abstract: The SCOOP model extends Eiffel to support the construction of concurrent applications with little more effort than sequential ones. The model provides strong safety guarantees: mutual exclusion and atomicity at the routine level, and FIFO scheduling of clients’ calls. Unfortunately, in the original proposal of the model (SCOOP_97) these guarantees come at a high price: they entail locking all the arguments of a feature call, even if the corresponding objects are never used by the feature. In most cases, the amount of locking is higher than necessary. Additionally, a client that holds a lock on a given processor cannot relinquish it temporarily when the lock is needed by one of its suppliers. This increases the likelihood of deadlocks; additionally, some interesting synchronisation scenarios, e.g. separate callbacks, cannot be implemented. We propose two refinements of the access control policy for SCOOP: a type-based mechanism to specify which arguments of a routine call should be locked, and a lock passing mechanism for safe handling of complex synchronisation scenarios with mutual locking of several separate objects. When combined, these refinements increase the expressive power of the model, give programmers more control over the computation, and enable more potential parallelism, thus reducing the risk of deadlock.

Journal ArticleDOI
TL;DR: This paper introduces an automatic approach for verifying action system refinements utilising standard CTL model checking, encoding each of the simulation conditions as a simulation machine, a Kripke structure on which the proof obligation can be discharged by checking that an associated CTL property holds.
Abstract: Action systems provide a formal approach to modelling parallel and reactive systems. They have a well established theory of refinement supported by simulation-based proof rules. This paper introduces an automatic approach for verifying action system refinements utilising standard CTL model checking. To do this, we encode each of the simulation conditions as a simulation machine, a Kripke structure on which the proof obligation can be discharged by checking that an associated CTL property holds. This procedure transforms each simulation condition into a model checking problem. Each simulation condition can then be model checked in isolation, or, if desired, together with the other simulation conditions by combining the simulation machines and the CTL properties.
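
The flavour of the resulting model checking problem is easy to sketch: the simplest CTL obligation, an invariance property AG p, reduces to a reachability check on the Kripke structure (our generic Python illustration, not the paper's simulation-machine construction).

```python
def check_AG(kripke, labels, initial, p):
    """Explicit-state check of the CTL property AG p: p holds on every
    state reachable from the initial states."""
    stack, seen = list(initial), set(initial)
    while stack:
        s = stack.pop()
        if p not in labels[s]:
            return False, s            # counterexample state
        for t in kripke[s]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return True, None

# Tiny Kripke structure: 'safe' labels every reachable state.
kripke = {0: [1], 1: [0, 2], 2: [2]}
labels = {0: {"safe"}, 1: {"safe"}, 2: {"safe"}}
assert check_AG(kripke, labels, [0], "safe") == (True, None)
```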