
Showing papers in "Software and Systems Modeling in 2005"


Journal ArticleDOI
TL;DR: It is postulated here that two core relations (representation and conformance) are associated with this principle, as inheritance and instantiation were associated with the object unification principle in the class-based languages of the 80’s.
Abstract: In November 2000, the OMG made public the MDA™ initiative, a particular variant of a new global trend called MDE (Model Driven Engineering). The basic ideas of MDA are germane to many other approaches such as generative programming, domain specific languages, model-integrated computing, generic model management, software factories, etc. MDA may be defined as the realization of MDE principles around a set of OMG standards like MOF, XMI, OCL, UML, CWM, SPEM, etc. MDE is presently making several promises about the potential benefits that could be reaped from a move from code-centric to model-based practices. When we observe these claims, we may wonder when they may be satisfied: in the short, medium or long term, or perhaps never for some of them. This paper tries to propose a vision of the development of MDE based on some lessons learnt in the past 30 years in the development of object technology. The main message is that a basic principle ("Everything is an object") was most helpful in driving the technology in the direction of simplicity, generality and power of integration. Similarly in MDE, the basic principle that "Everything is a model" has many interesting properties, among others the capacity to generate a realistic research agenda. We postulate here that two core relations (representation and conformance) are associated with this principle, as inheritance and instantiation were associated with the object unification principle in the class-based languages of the 80's. We suggest that this may be most useful in understanding many questions about MDE in general and the MDA approach in particular. We provide some illustrative examples. The personal position taken in this paper would be useful if it could generate a critical debate on the research directions in MDE.

873 citations


Journal ArticleDOI
TL;DR: A system-level testing technique that combines test generation based on finite state machines with constraints with the goal of reducing the state space explosion otherwise inherent in using FSMs is proposed.
Abstract: Researchers and practitioners are still trying to find effective ways to model and test Web applications. This paper proposes a system-level testing technique that combines test generation based on finite state machines with constraints. We use a hierarchical approach to model potentially large Web applications. The approach builds hierarchies of Finite State Machines (FSMs) that model subsystems of the Web applications, and then generates test requirements as subsequences of states in the FSMs. These subsequences are then combined and refined to form complete executable tests. The constraints are used to select a reduced set of inputs with the goal of reducing the state space explosion otherwise inherent in using FSMs. The paper illustrates the technique with a running example of a Web-based course student information system and introduces a prototype implementation to support the technique.

368 citations


Journal ArticleDOI
TL;DR: KeY is a tool that provides facilities for formal specification and verification of programs within a commercial platform for UML based software development and provides a state-of-the-art theorem prover for interactive and automated verification.
Abstract: KeY is a tool that provides facilities for formal specification and verification of programs within a commercial platform for UML based software development. Using the KeY tool, formal methods and object-oriented development techniques are applied in an integrated manner. Formal specification is performed using the Object Constraint Language (OCL), which is part of the UML standard. KeY provides support for the authoring and formal analysis of OCL constraints. The target language of KeY based development is Java Card DL, a proper subset of Java for smart card applications and embedded systems. KeY uses a dynamic logic for Java Card DL to express proof obligations, and provides a state-of-the-art theorem prover for interactive and automated verification. Apart from its integration into UML based software development, a characteristic feature of KeY is that formal specification and verification can be introduced incrementally.
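As a rough illustration of what an OCL-style constraint expresses, consider a class invariant and a runtime check of it. The class and invariant below are invented for illustration and are not from KeY:

```python
# OCL (illustrative): context Account inv nonNegative: self.balance >= 0
# A Python analogue: the invariant becomes a predicate checked over objects.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def invariant(self):
        # Mirrors the OCL constraint "self.balance >= 0"
        return self.balance >= 0

def violations(objects):
    """Return the objects whose invariant does not hold."""
    return [o for o in objects if not o.invariant()]
```

KeY goes much further than such runtime checking: it translates OCL constraints into proof obligations in a dynamic logic for Java Card DL and discharges them with a theorem prover, establishing the property for all executions rather than sampled ones.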

279 citations


Journal ArticleDOI
TL;DR: The UML was at first an attempt to unify various object-oriented modeling languages, and it seemed that its target applications were primarily business systems; it is now being used to model applications and concepts in a variety of domains, including embedded systems and business workflows.
Abstract: Looking at other engineering disciplines, it is evident that modeling is a vital and important part of the development of complex artifacts. Good modeling techniques provide support for the separation of concerns principle, rigorous analysis of designs, and for structuring construction activities. Models can play a similar role in the development of software-based systems. Furthermore, the software development activity has a characteristic not present in physical engineering: deliverable products (software systems) can be generated from models (an observation made by Bran Selic, IBM/Rational at an MDA summer school). This characteristic can and should be exploited in our quest to make software development an engineering activity. The current landscape of modeling languages is highly diverse. Graphical languages, such as Petri Nets and Statecharts have successfully been used for years. Standardized languages such as the SDL have had a good base of tool support but have seen their use in industry diminish over time. Languages can be specific to an application domain (e.g., SDL was developed to support modeling of telecommunication systems), to a development phase (e.g., SCR method uses a language designed specifically for modeling requirements of reactive systems), or they can be general-purpose (e.g., the UML). Recently, much attention has been focused on the development of domain-specific languages (DSLs). Proponents of DSLs claim that their use can help bridge the gap between a domain expert’s view of a software system and its implementation. The domains covered by DSLs can range from highly individual application areas, such as “railroad planning applications” to broader domains, such as the “embedded system domain”. The UML was at first an attempt to unify various object-oriented modeling languages, and it seemed that its target applications were primarily business systems. 
The UML is now being used to model applications and concepts in a variety of domains, including embedded systems and business workflows. While this has broadened the scope of the UML, it has made it difficult to develop semantics that can be used to support its application in a number of domains. This has led to the realization that a single, consistent semantics that supports the use of the UML may not be possible, and to the view of the UML as a family of languages. There are currently a number of semantic variation points in the UML to support this notion. Developing a semantic framework for the UML that takes into consideration its numerous variation points is proving to be an extremely difficult task – and the important issues are not all technical! It should not be surprising, then, that in specific domains the use of existing “domain specific languages” persists. A DSL, be it a programming or a modeling language, has several advantages. Among them:

257 citations


Journal ArticleDOI
TL;DR: The testing and certification of UML and OCL models as supported by the validation tool USE is studied by introducing a language for defining properties of desired snapshots and by showing how such snapshots are generated.
Abstract: We study the testing and certification of UML and OCL models as supported by the validation tool USE. We extend the available USE features by introducing a language for defining properties of desired snapshots and by showing how such snapshots are generated. Within the approach, it is possible to treat test cases and validation cases. Test cases show that snapshots having desired properties can be constructed. Validation cases show that given properties are consequences of the original UML and OCL model.

200 citations


Journal ArticleDOI
TL;DR: STAIRS assigns a precise interpretation to the various steps in incremental system development based on an approach to refinement known from the field of formal methods and provides thereby a foundation for compositional analysis.
Abstract: The paper presents STAIRS (1), an approach to the compositional development of UML interactions supporting the specification of mandatory as well as potential behavior. STAIRS has been designed to facilitate the use of interactions for requirement capture as well as test specification. STAIRS assigns a precise interpretation to the various steps in incremental system development based on an approach to refinement known from the field of formal methods and provides thereby a foundation for compositional analysis. An interaction may characterize three main kinds of traces. A trace may be (1) positive in the sense that it is valid, legal or desirable, (2) negative meaning that it is invalid, illegal or undesirable, or (3) inconclusive meaning that it is considered irrelevant for the interaction in question. The basic increments in system development proposed by STAIRS are structured into three main kinds referred to as supplementing, narrowing and detailing. Supplementing categorizes inconclusive traces as either positive or negative. Narrowing reduces the set of positive traces to capture new design decisions or to match the problem more adequately. Detailing involves introducing a more detailed description without significantly altering the externally observable behavior.

145 citations


Journal ArticleDOI
TL;DR: The need for dedicated approaches to model transformations, particularly for the data involved in tool integration, is motivated, the challenges involved are outlined, and a number of technologies and techniques are presented which allow the construction of flexible, powerful and practical model transformations.
Abstract: Model transformations are increasingly recognised as being of significant importance to many areas of software development and integration. Recent attention on model transformations has particularly focused on the OMG's Queries/Views/Transformations (QVT) Request for Proposals (RFP). In this paper I motivate the need for dedicated approaches to model transformations, particularly for the data involved in tool integration, outline the challenges involved, and then present a number of technologies and techniques which allow the construction of flexible, powerful and practical model transformations.

123 citations


Journal ArticleDOI
TL;DR: A way to disambiguate common flow modeling constructs, by expressing their semantics as constraints on runtime sequences of behavior execution, and shows that reduced ambiguity enables more powerful modeling abstractions, such as partial behavior specifications.
Abstract: Flow models underlie popular programming languages and many graphical behavior specification tools. However, their semantics is typically ambiguous, causing miscommunication between modelers and unexpected implementation results. This article introduces a way to disambiguate common flow modeling constructs, by expressing their semantics as constraints on runtime sequences of behavior execution. It also shows that reduced ambiguity enables more powerful modeling abstractions, such as partial behavior specifications. The runtime representation considered in this paper uses the Process Specification Language (PSL), which is defined in first-order logic, making it amenable to automated reasoning. The activity diagrams of the Unified Modeling Language are used for example flow models.
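The core idea, reading a flow edge as a constraint on execution traces rather than as an operational rule, can be sketched as follows. This is a deliberate simplification in Python, not the paper's first-order PSL axioms:

```python
def satisfies(trace, edges):
    """Check an execution trace against flow constraints: an edge (a, b)
    is read as "b may only execute after some earlier occurrence of a"."""
    for a, b in edges:
        for i, step in enumerate(trace):
            if step == b and a not in trace[:i]:
                return False
    return True
```

Under this reading, a partial behavior specification is simply a weaker set of constraints: it restricts only the steps it mentions and leaves all other orderings open, which is what makes the abstraction power the article describes possible.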

112 citations


Journal ArticleDOI
TL;DR: A framework for understanding Problem Frames is presented that locates them within the Requirements Engineering model of Zave and Jackson, and its subsequent formalization in the Reference Model of Gunter et al., and allows the relationship between problem frames, context diagrams and problem diagrams to be formally defined.
Abstract: This paper presents a framework for understanding Problem Frames that locates them within the Requirements Engineering model of Zave and Jackson, and its subsequent formalization in the Reference Model of Gunter et al. It distinguishes between problem frames, context diagrams and problem diagrams, and allows us to formally define the relationship between them as assumed in the Problem Frames framework. The semantics of a problem diagram is given in terms of `challenges', a notion that we also introduce. The notion of a challenge is interesting in its own right for two reasons: its proof theoretic derivation leads us to consider a challenge calculus that might underpin the Problem Frame operations of decomposition and recomposition; and it promises to extend the notion of formal refinement from software development to requirements engineering. In addition, the semantics supports a textual representation of the diagrams in which Problem Frames capture problems and their relationship to solutions. This could open the way for graphical Problem Frames tools.

70 citations


Journal ArticleDOI
TL;DR: Two large-grain, architectural design patterns that solve specific design tool integration problems that have been implemented and used in real-life engineering processes are described and compared.
Abstract: Design tool integration is a highly relevant area of software engineering that can greatly improve the efficiency of development processes. Design patterns have been widely recognized as important contributors to the success of software systems. This paper describes and compares two large-grain, architectural design patterns that solve specific design tool integration problems. Both patterns have been implemented and used in real-life engineering processes.

68 citations


Journal ArticleDOI
TL;DR: UML 2.0 defines the dynamic semantics of UML quite generally, covering a wide range of possible specializations; the fragmented organization of the specification has led many (including several of the author's co-panellists) to conclude that “UML has no semantics”.
Abstract: Abstraction is the process of removing or hiding nonessential detail from view so that the principal characteristics of interest and their relationships are more clearly visible and, thus, more easily understood. This obviously depends on the area of concern, which means that abstraction is always done from a particular viewpoint. With the exception of the very earliest ones, third-generation programming languages provided two fundamental and clearly separated viewpoints for specifying software: the structural and behavioural. This basic partitioning, although quite useful, is not sophisticated enough for today’s complex needs. In a modern object-oriented programming language, for instance, one can clearly see which objects are declared in a program (the structure), but it is very difficult to determine from looking at the code how their mutual interactions combine to realize an overall end-to-end use case (the behaviour). This is because these languages do not provide constructs that allow direct specification of interaction sequences – a common failing of all “modern” object-oriented programming languages. Instead, you specify how each component responds to individual inputs and then cross your fingers and hope that these increments add up to the desired end-to-end sequence. Unfortunately, quite often they do not. When that happens, it is very difficult to find out why, because it is not easy to discern the high-level sequence out of the mass of fragmented behaviour code. In contrast, UML provides a number of high-level viewpoints through a series of diagrams. This gives users the ability to specify systems using any convenient combination of diagrams. Since the semantic relationships between these viewpoints are specified by the underlying UML metamodel, it is often possible to automatically detect inconsistencies in the specifications – “often”, but not always. 
As Steve Mellor points out in his discussion, there are examples where one would like to be able to detect inconsistencies but where UML does not help. (What he fails to point out, however, is that the problem of determining this type of inter-viewpoint consistency is still an unresolved theoretical problem in the general case, particularly when concurrency is involved.) Still, UML 2.0 has done much to allow detection of such inconsistencies. In particular, it has done a much more thorough job of defining the dynamic (run-time) semantics of UML. At the most fundamental level, these semantics consist of an explicit specification of the essential structural entities involved at run time – elements such as objects, attributes, variables, and links – and the effects that individual actions have on these entities. This is complemented by a trace-based model that defines the semantics of inter-object actions (interactions). At the next level up are the semantics of higher-level behavioural formalisms, such as state machines and activities. These are layered on top of the semantics of actions and interactions. Although these semantics are not defined using a mathematical formalism, they are certainly open to such formalization and at least one major research project is currently under way to do just that. Of course, in the ideal situation, a formal model of these semantics should have been included in the standard, but this is hardly ever the case in practice (after all, how many modern programming languages have a formal semantics included in their definition?). The dynamic semantics of UML are defined quite generally and cover a wide range of possible specializations. This is because UML is intended to cover a relatively wide range of diverse domains. For example, the same semantics can be specialized to support either a synchronous worldview (i.e., a view in which all events occur at discrete intervals according to the beat of some global clock) or an asynchronous one. 
The organization of the UML 2.0 specification document is such that the sections describing semantics are not gathered in one place but are, instead, fragmented and scattered throughout the text – this is due to the nature of a specification document, which is not necessarily optimized for readability. Unfortunately, this has led many (including, obviously, several of my co-panellists) to conclude that “UML has no semantics”. However, a closer examination of the spec would clearly reveal that this is not the case. While it is fair to argue that the semantics of UML might be incomplete, imprecise and, perhaps, inconsistent, and it is fair to complain about the organization of the document itself, it is definitely incorrect and unfair to claim that UML has no semantics. As noted, some of the latitude in the semantics of UML is intentional, to allow a range of different domain-specific specializations. Still, UML was never intended as a universal base that covers all possible domains. Where the semantics of UML or its syntax are inappropriate, users have the option to use the Meta-Object Facility (MOF) to define a language that is independent of UML. However, in those cases where UML is suitable, it provides a very rich catalogue of pre-packaged, expertly designed modelling capabilities that are likely to be shared by many modelling languages. These include the foundational run-time semantics model described above as well as facilities for modelling system architectures, event-driven behaviour, high-level end-to-end interactions and complex hierarchical procedures. Clearly, this capability is valuable to language designers. However, much more significant is the benefit to users who may already have knowledge of general UML. 
It is useful to recall that, since its adoption as a standard in 1997, UML has been taught and used more than any other modelling language and, consequently, has a significant and growing base of users. This large and growing community is able to reuse all of that knowledge when working with domain-specific variants. On top of such semantic reuse there is also the potential for tool reuse because a tool that supports standard “general” UML can, in principle, also support any profile of UML. Since there are very many commercial tools that support standard UML, the benefit of this should not be underestimated. It is much more likely that such tools can be used to support domain-specific variants than custom tools for domain-specific languages. In many ways, the success of model-driven development in general is predicated on the availability of sophisticated and powerful tools, comparable to the kinds of sophisticated tools for supporting current programming languages (e.g., configuration management tools, compilation tools, build tools, debugging tools etc.). If a whole new array of such tools has to be developed for each new domain-specific language, it is unlikely that tool vendors would ever be able to keep up. Model-driven development would look a lot less attractive to its potential beneficiaries under such circumstances. Another distinguishing and highly useful characteristic of UML 2.0 is its support for constructing higher-level abstractions. This includes concepts for specifying complex interconnected object structures, concepts for specifying complex functions (UML activities) and concepts for specifying complex object interactions (UML interactions). Because most of these capabilities can be applied recursively, it becomes possible to define custom abstractions at practically any conceptual level. The net result of this is that UML 2.0 scales up much more easily to allow modelling of very complex systems of different kinds. 
Inevitably, one of the drawbacks of supporting multiple domains is a relatively large number of language features. This leads to another often-repeated complaint about UML: that the language is just “too big and unwieldy”. But this ignores the fact that UML has been carefully modularized into a set of sub-languages, many of which are independent of each other. Hence, it is not an “all or nothing” proposition. Just as one does not need to know all of the English language to use it effectively, users of UML can pick and use only those parts that are of use in solving their problem. The rest can be safely ignored. To summarize, the support for abstraction in UML 2.0 is based on a proven set of capabilities that seem to be shared across many different domains. For such domains, this makes it a much more suitable foundation for constructing domain-specific languages.

Support for automation

Automation is, by far, the most effective way to boost reliability and productivity. The objective is to mechanize repetitive and uncreative tasks, where human fallibility causes problems. There are a number of ways in which automation can be applied in model-driven development. Perhaps the most obvious one is automatic code generation or, to put it differently, model compilation. While various historical and other incidental aspects may force us to compromise and use partial code generation that is supplemented by manual programming, fully automated code generation from models is clearly the ultimate objective. There are many examples of industrial model-based systems that use full automatic code generation, so this is not merely a promise for the future but state of the art. Another way in which automation can be applied to models is in formal verification and validation. This has been a long-standing objective in software development, but has been thwarted in the past by the highly complex and non-linear nature of current programming languages. 
A single misaligned pointer can spell doom for a multimillion-line program and its users. Because of this, it is very difficult to mathematically formalize these languages in a way that accurately reflects their semantics. Therefore, results of formal analyses of modern software are often inaccurate and untrustworthy. The advantage that modelling languages provide is that their con

Journal ArticleDOI
TL;DR: The concepts described in the paper have been implemented in the Netsilon tool and operational model-driven Web information systems have been successfully deployed by translation from abstract models to platform specific models.
Abstract: This paper discusses platform independent Web application modeling and development in the context of model-driven engineering. A specific metamodel (and associated notation) is introduced and motivated for the modeling of dynamic Web specific concerns. Web applications are represented via three independent but related models (business, hypertext and presentation). A kind of action language (based on OCL and Java) is used all over these models to write methods and actions, specify constraints and express conditions. The concepts described in the paper have been implemented in the Netsilon tool and operational model-driven Web information systems have been successfully deployed by translation from abstract models to platform specific models.

Journal ArticleDOI
TL;DR: A new approach for automated pattern search based on minimal key structures is presented, able to detect all patterns described by the GOF, based on positive and negative search criteria for structures and is prototypically implemented using Rational Rose and Together.
Abstract: For the maintenance of software systems, developers have to completely understand the existing system. The usage of design patterns leads to benefits for new and young developers by enabling them to reuse the knowledge of their experienced colleagues. Design patterns can support a faster and better understanding of software systems. There are different approaches for supporting pattern recognition in existing systems by tools. They are evaluated by the Information Retrieval criteria precision and recall. An automated search based on structures has a highly positive influence on the manual validation of the results by developers. This validation of graphical structures is the most intuitive technique. In this paper a new approach for automated pattern search based on minimal key structures is presented. It is able to detect all patterns described by the GOF [15]. This approach is based on positive and negative search criteria for structures and is prototypically implemented using Rational Rose and Together.

Journal ArticleDOI
TL;DR: An object-oriented extension of Circus, an integration of Z, CSP, and Morgan’s refinement calculus, with a semantics based on the unifying theories of programming called OhCircus is presented.
Abstract: Previously, we presented Circus, an integration of Z, CSP, and Morgan's refinement calculus, with a semantics based on the unifying theories of programming. Circus provides a basis for development of state-rich concurrent systems; it has a formal semantics, a refinement theory, and a development strategy. The design of Circus is our solution to combining data and behavioural specifications. Here, we further explore this issue in the context of object-oriented features. Concretely, we present an object-oriented extension of Circus called OhCircus. We present its syntax, describe its semantics, explain the formalisation of method calls, and discuss our approach to refinement.

Journal ArticleDOI
TL;DR: A methodology based on a careful normalization and analysis of operation contracts and transition guards written with the Object Constraint Language (OCL) is proposed, illustrated by one case study that exemplifies the steps of the methodology and provides a first evaluation of its applicability.
Abstract: Many statechart-based testing strategies result in specifying a set of paths to be executed through a (flattened) statechart. These techniques can usually be easily automated so that the tester does not have to go through the tedious procedure of deriving paths manually to comply with a coverage criterion. The next step is then to take each test path individually and derive test requirements leading to fully specified test cases. This requires that we determine the system state required for each event/transition that is part of the path to be tested and the input parameter values for all events and actions associated with the transitions. We propose here a methodology towards the automation of this procedure, which is based on a careful normalization and analysis of operation contracts and transition guards written with the Object Constraint Language (OCL). It is illustrated by one case study that exemplifies the steps of our methodology and provides a first evaluation of its applicability.

Journal ArticleDOI
TL;DR: This work has developed an approach to tool integration which puts strong emphasis on software architecture and model-driven development, and considerably leverage the problem of composing a tightly integrated development environment from a set of heterogeneous engineering tools.
Abstract: A-posteriori integration of heterogeneous engineering tools supplied by different vendors constitutes a challenging task. In particular, this statement applies to incremental development processes where small changes have to be propagated --- potentially bidirectionally --- through a set of inter-dependent design documents which have to be kept consistent with each other. Responding to these challenges, we have developed an approach to tool integration which puts strong emphasis on software architecture and model-driven development. Starting from an abstract description of a software architecture, the architecture is gradually refined down to an implementation level. To integrate heterogeneous engineering tools, wrappers are constructed for abstracting from technical details and for providing homogenized data access. On top of these wrappers, incremental integration tools provide for inter-document consistency. These tools are based on graph models of the respective document classes and graph transformation rules for maintaining inter-document consistency. Altogether, the collection of support tools and the respective infrastructure considerably leverage the problem of composing a tightly integrated development environment from a set of heterogeneous engineering tools.

Journal ArticleDOI
TL;DR: In this paper, the authors apply the Whittle & Schumann synthesis algorithm to a component of an air traffic advisory system under development at NASA Ames Research Center and show how to generate code from the generated state machines using existing commercial code generation tools.
Abstract: There has been much recent interest in synthesis algorithms that generate finite state machines from scenarios of intended system behavior. One of the uses of such algorithms is in the transition from requirements scenarios to design. Despite much theoretical work on the nature of these algorithms, there has been very little work on applying the algorithms to practical applications. In this paper, we apply the Whittle & Schumann synthesis algorithm [32] to a component of an air traffic advisory system under development at NASA Ames Research Center. We not only apply the algorithm to generate state machine designs from scenarios but also show how to generate code from the generated state machines using existing commercial code generation tools. The results demonstrate the possibility of generating application code directly from scenarios of system behavior.
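The flavor of scenario-to-state-machine synthesis can be conveyed by a deliberately simple prefix-merging construction. This is a toy stand-in for illustration, not the Whittle & Schumann algorithm used in the paper:

```python
def synthesize(scenarios):
    """Merge event-sequence scenarios into one state machine by sharing
    common prefixes. States are numbered; transitions map events to states."""
    fsm = {0: {}}          # state -> {event: next_state}
    next_state = 1
    for scenario in scenarios:
        state = 0
        for event in scenario:
            if event not in fsm[state]:
                fsm[state][event] = next_state
                fsm[next_state] = {}
                next_state += 1
            state = fsm[state][event]
    return fsm
```

Real synthesis algorithms additionally merge states across scenarios so that the machine generalizes beyond the given traces; the resulting state machines can then be fed to code generators, as the paper demonstrates with commercial tools.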

Journal ArticleDOI
TL;DR: This paper discusses how to define and execute model refactorings as rule-based transformations in the context of the UML and MOF standards and presents an experimental tool to execute this kind of transformation.
Abstract: A rule-based update transformation is a model transformation where a single model is transformed in place. A model refactoring is a model transformation that improves the design described in the model. A refactoring should only affect a previously chosen subset of the original model. In this paper, we discuss how to define and execute model refactorings as rule-based transformations in the context of the UML and MOF standards. We also present an experimental tool to execute this kind of transformation.
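A rule-based update transformation of this kind can be sketched concretely. The model encoding and the "pull up attribute" rule below are hypothetical illustrations (the paper works on UML/MOF models, not Python dicts), but they show the two defining properties: the model is edited in place, and the rule only affects a chosen subset of it.

```python
# Hedged sketch of a model refactoring as an in-place, rule-based transformation.
# Rule: "pull up attribute" -- if every subclass of a class declares the same
# attribute, move that attribute to the superclass.

model = {
    "Vehicle": {"super": None, "attrs": set()},
    "Car":     {"super": "Vehicle", "attrs": {"wheels", "colour"}},
    "Truck":   {"super": "Vehicle", "attrs": {"wheels", "payload"}},
}

def pull_up_attribute(model, superclass):
    """Apply the rule to one superclass only; the rest of the model is untouched."""
    subs = [c for c in model.values() if c["super"] == superclass]
    common = set.intersection(*(c["attrs"] for c in subs)) if subs else set()
    for attr in common:                  # in-place update of the model
        model[superclass]["attrs"].add(attr)
        for c in subs:
            c["attrs"].discard(attr)

pull_up_attribute(model, "Vehicle")
# "wheels" moves up to Vehicle; class-specific attributes stay where they were.
```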

Journal ArticleDOI
TL;DR: An abstract model of a “video rental service” illustrates how a visual formalism, constraint diagrams, may be used to specify such systems precisely.
Abstract: We develop an abstract model for our case-study: software to support a "video rental service." This illustrates how a visual formalism, constraint diagrams, may be used in order to specify such systems precisely.

Journal ArticleDOI
TL;DR: In this article, the authors present dETI, the next generation of the Electronic Tool Integration (ETI) platform, an open platform for the interactive experimentation with and coordination of heterogeneous software tools via the internet.
Abstract: In this paper we present dETI, the next generation of the Electronic Tool Integration (ETI) platform, an open platform for the interactive experimentation with and coordination of heterogeneous software tools via the internet. Our redesign, which is based on the experience gained while running the ETI platform since 1997, focuses on the tool integration process, which clearly marked the bottleneck for the wide acceptance of the ETI platform on the side of an important group of users: the tool providers. The new integration approach makes use of standard Web Services technology, which fits well into the overall ETI architecture. Our approach realizes a clear separation of concerns, which overcomes all the previously observed obstacles by (i) decoupling the integration tasks of the tool providers and the ETI team, (ii) pulling the ETI team out of the upgrading and maintenance loop and (iii) handing the upgrading and access control over to the tool providers. This guarantees scalability in the number of tools available within ETI, and addresses the flexibility concerns of the tool providers.

Journal ArticleDOI
TL;DR: A generic framework for process-oriented software development organizations is presented and the respective way of managing the process model, and the instantiation of their processes with the Rational Unified Process disciplines, whenever they are available, or with other kind of processes.
Abstract: In this paper, a generic framework for process-oriented software development organizations is proposed. We also suggest how to manage the corresponding process model and how to instantiate its processes with Rational Unified Process (RUP) disciplines, where available, or with other kinds of processes. The proposals made here were consolidated with experience from real projects, and we report the main results from one of those projects.

Journal ArticleDOI
TL;DR: B and eb3 are complementary: the former is better at expressing complex ordering and static data integrity constraints, whereas the latter provides a simpler, modular, explicit representation of dynamic constraints that is closer to the user's point of view, while providing loosely coupled definitions of data attributes.
Abstract: This paper compares two formal methods, B and eb3, for specifying information systems. These two methods are chosen as examples of the state-based paradigm and the event-based paradigm, respectively. The paper considers four viewpoints: functional behavior expression, validation, verification, and evolution. Issues in expressing event ordering constraints, data integrity constraints, and modularity are thereby considered. A simple case study, a library management system, is used to illustrate the comparison, and two equivalent specifications are presented, one in each method. The paper concludes that B and eb3 are complementary. The former is better at expressing complex ordering and static data integrity constraints, whereas the latter provides a simpler, modular, explicit representation of dynamic constraints that is closer to the user's point of view, while providing loosely coupled definitions of data attributes. The generality of these results from the perspective of the state-based and event-based paradigms is discussed.
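The event-ordering side of the comparison can be made concrete with a small sketch. The encoding below is an assumption: eb3 expresses valid traces with process-algebraic expressions over events, whereas here a plain successor table stands in for such an expression, using invented library events.

```python
# Hedged sketch of an event-ordering constraint for a library system:
# a book must be acquired first, then lend/return alternate, and a
# discarded book accepts no further events.

VALID_NEXT = {
    "start":   {"acquire"},
    "acquire": {"lend", "discard"},
    "lend":    {"return"},
    "return":  {"lend", "discard"},
}

def trace_ok(events):
    """Check whether a trace of events respects the ordering constraint."""
    state = "start"
    for e in events:
        if e not in VALID_NEXT.get(state, set()):
            return False                # e.g. "lend" before "acquire"
        state = e
    return True
```

In an event-based specification the set of valid traces is the primary artefact; a state-based method like B would instead encode the same rule as operations guarded by preconditions over state variables.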

Journal ArticleDOI
TL;DR: In this paper, the authors propose a model-driven runtime (MDR) environment that is able to execute a platform-independent model for a specific purpose instead of transforming it, and explain the concepts of an MDR that interprets OCL-annotated class diagrams and state machines to realize Web applications.
Abstract: A large part of software development these days deals with building so-called Web applications. Many of these applications are database-powered and exhibit a page layout and navigational structure that is close to the class structure of the entities being managed by the system. Also, there is often only limited application-specific business logic. This makes the usual three-tier architectural approach unappealing, because it results in a lot of unnecessary overhead. One possible solution to this problem is the use of model-driven architecture (MDA). A simple platform-independent domain model describing only the entity structure of interest could be transformed into a platform-specific model that incorporates a persistence mechanism and a user interface. Yet, this raises a number of additional problems caused by the one-way, multi-transformational nature of the MDA process. To cope with these problems, the authors propose the notion of a model-driven runtime (MDR) environment that is able to execute a platform-independent model for a specific purpose instead of transforming it. The paper explains the concepts of an MDR that interprets OCL-annotated class diagrams and state machines to realize Web applications. It shows the authors' implementation of the approach, the Infolayer system, which is already used by a number of applications. Experiences from these applications are described, and the approach is compared to others.
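The "interpret instead of transform" idea can be sketched in miniature. Everything below is hypothetical (it is not the Infolayer API): a class description plays the role of the platform-independent model, and a Python predicate stands in for an OCL invariant that the runtime evaluates directly.

```python
# Hedged sketch of a model-driven runtime: instances are created and validated
# by interpreting a class model at run time, with no generated code.

class_model = {
    "Rental": {
        "attributes": ["customer", "days"],
        "invariant": lambda obj: obj["days"] > 0,  # stand-in for an OCL constraint
    }
}

def new_instance(model, class_name, **values):
    """Interpret the class description: build the object, then check its invariant."""
    meta = model[class_name]
    obj = {a: values.get(a) for a in meta["attributes"]}
    if not meta["invariant"](obj):
        raise ValueError(f"invariant of {class_name} violated")
    return obj

r = new_instance(class_model, "Rental", customer="Ada", days=3)
```

Because the model is the running artefact, a change to it takes effect immediately, sidestepping the regeneration and re-merging problems of a one-way transformation chain.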

Journal ArticleDOI
TL;DR: This paper presents an approach to model distributed systems based on a goal-oriented requirements acquisition, which is supported by a modeling language, the ANote, which presents views that capture the most important modeling aspects according to the concept currently under consideration.
Abstract: An important issue in getting agent technology into mainstream software development is the development of appropriate methodologies for developing agent-oriented systems. This paper presents an approach to model distributed systems based on goal-oriented requirements acquisition. These models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures a concept such as, e.g., goal, action, agent, or scenario, and where the edges capture semantic links between such abstractions. This approach is supported by a modeling language, the ANote, which presents views that capture the most important modeling aspects according to the concept currently under consideration.

Journal ArticleDOI
Cindy Eisner1
TL;DR: This work describes the experience of modeling and formally verifying a software cache algorithm using the model checker RuleBase, and uses a highly detailed model created directly from the C code itself, rather than a high-level abstract model.
Abstract: We describe the experience of modeling and formally verifying a software cache algorithm using the model checker RuleBase. Contrary to prevailing wisdom, we used a highly detailed model created directly from the C code itself, rather than a high-level abstract model.

Journal ArticleDOI
TL;DR: The aim is to demonstrate that it is possible to integrate two well established formal methods whilst maintaining their individual advantages, using the combination of CSP and B.
Abstract: In this paper, a file transmission protocol specification is developed using the combination of two formal methods: CSP and B. The aim is to demonstrate that it is possible to integrate two well-established formal methods whilst maintaining their individual advantages. We discuss how to compositionally verify the specification and ensure that it preserves some abstract properties. We also discuss how the structure of the specification follows a particular style which may be generally applicable when modelling other protocols using this combination.

Journal ArticleDOI
TL;DR: In this article, the authors introduce proper concepts for modelling business rules and specify their semantics, and introduce proper models for modeling business rules in object-oriented analysis languages such as OCaml.
Abstract: A major purpose of analysis is to represent precisely all relevant facts, as they are observed in the external world. A substantial problem in object-oriented analysis is that most modelling languages are more suitable to build computational models than to develop conceptual models. It is a rather blind assumption that concepts that are convenient for design can also be applied during analysis. Preconditions, postconditions and invariants are typical examples of concepts with blurred semantics. At the level of analysis they are most often used to specify business rules. This paper introduces proper concepts for modelling business rules and specifies their semantics.
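The blurred semantics the paper criticizes can be made tangible with a sketch. The rule, names, and encoding below are invented for illustration: the same business rule ("a member may borrow at most 3 books") can be read either as a precondition on one operation or as an invariant that must hold after every change, and the two readings constrain a model differently.

```python
# Hedged illustration: one business rule, two different semantics.

MAX_LOANS = 3

def borrow(member, book):
    # Precondition reading: the rule guards this single operation.
    if len(member["loans"]) >= MAX_LOANS:
        raise ValueError("precondition violated: loan limit reached")
    member["loans"].append(book)

def invariant_holds(member):
    # Invariant reading: the rule must hold in every state, however reached.
    return len(member["loans"]) <= MAX_LOANS

member = {"loans": []}
for b in ["b1", "b2", "b3"]:
    borrow(member, b)
    assert invariant_holds(member)   # holds after each operation

try:
    borrow(member, "b4")             # the fourth loan is rejected
except ValueError:
    pass
```

The precondition reading only protects states reached through `borrow`; the invariant reading also flags states produced by any other operation, which is closer to what an analyst usually means by a business rule.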