
Showing papers in "Software and Systems Modeling in 2010"


Journal ArticleDOI
TL;DR: The two-step process mining approach, implemented in the context of ProM, overcomes many of the limitations of traditional approaches and enables the user to control the balance between “overfitting” and “underfitting”.
Abstract: Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting” (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.

369 citations
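
To make the two-step idea concrete, here is a minimal Python sketch (mine, not the authors' ProM implementation) of the first step: deriving a transition system from an event log, where a configurable state abstraction controls the overfitting/underfitting balance. The function name and the log are illustrative; the paper's configurable abstractions include sets, multisets, and bounded sequences of observed activities.

    # Step one of the two-step approach, reduced to its essence: states are
    # abstractions of trace prefixes; coarser abstractions generalize more.
    def build_transition_system(log, abstraction=frozenset):
        """log: list of traces (lists of activity names).
        abstraction: maps a trace prefix to a state; frozenset ignores
        order (more generalization), tuple would preserve it (less)."""
        transitions = set()
        for trace in log:
            for i, activity in enumerate(trace):
                source = abstraction(trace[:i])
                target = abstraction(trace[:i + 1])
                transitions.add((source, activity, target))
        return transitions

    log = [["register", "check", "pay"], ["register", "pay", "check"]]
    for t in sorted(build_transition_system(log), key=str):
        print(t)

The second step, synthesizing a Petri net from this transition system via the theory of regions, is beyond a short sketch.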


Journal ArticleDOI
TL;DR: An overview of the current state of traceability research and practice in requirements engineering and model-driven development is provided, identifying commonalities and differences between these areas and uncovering several unresolved challenges which affect both domains.
Abstract: Traceability, the ability to follow the life of software artifacts, is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.

321 citations


Journal ArticleDOI
TL;DR: It is shown that any transformation language sufficient to the needs of model-driven development would have to be able to express non-bijective transformations.
Abstract: We consider the OMG’s queries, views and transformations standard as applied to the specification of bidirectional transformations between models. We discuss what is meant by bidirectional transformations, and the model-driven development scenarios in which they are needed. We analyse the fundamental requirements on tools which support such transformations, and discuss some semantic issues which arise. In particular, we show that any transformation language sufficient to the needs of model-driven development would have to be able to express non-bijective transformations. We argue that a considerable amount of basic research is needed before suitable tools will be fully realisable, and suggest directions for this future research.

312 citations
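
The need for non-bijective transformations is easy to demonstrate with a toy Python sketch (mine, not from the paper): a forward transformation that discards information cannot be inverted, so the backward direction must consult the old source model. All names and model shapes here are illustrative.

    def forward(clazz):
        """Class model -> simplified view; visibility is discarded, so
        forward is non-injective and hence has no inverse."""
        return {"name": clazz["name"],
                "attrs": [a["name"] for a in clazz["attrs"]]}

    def backward(view, old_clazz):
        """Propagate view edits back, restoring visibilities from the old
        source; attributes new to the view get a default."""
        old_vis = {a["name"]: a["visibility"] for a in old_clazz["attrs"]}
        return {"name": view["name"],
                "attrs": [{"name": n, "visibility": old_vis.get(n, "private")}
                          for n in view["attrs"]]}

    m = {"name": "Account",
         "attrs": [{"name": "balance", "visibility": "private"}]}
    v = forward(m)
    v["attrs"].append("owner")        # edit the view
    print(backward(v, m))             # old info preserved, new attr defaulted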


Journal ArticleDOI
TL;DR: This paper reports on the challenges of the model transformation from UML class diagrams and OCL to Alloy, draws on the lessons learnt, and presents a UML profile for Alloy to better represent Alloy concepts in the UML.
Abstract: The Unified Modeling Language (UML) is the de facto language used in the industry for software specifications. Once an application has been specified, Model Driven Architecture (MDA) techniques can be applied to generate code from such specifications. Since implementing a system based on a faulty design requires additional cost and effort, it is important to analyse the UML models at earlier stages of the software development lifecycle. This paper focuses on utilizing MDA techniques to deal with the analysis of UML models and identify design faults within a specification. Specifically, we show how UML models can be automatically transformed into Alloy which, in turn, can be automatically analysed by the Alloy Analyzer. The proposed approach relies on MDA techniques to transform UML models to Alloy. This paper reports on the challenges of the model transformation from UML class diagrams and OCL to Alloy; these challenges are caused by fundamental differences in the design philosophy of UML and Alloy. To better represent Alloy concepts in the UML, the paper draws on the lessons learnt and presents a UML profile for Alloy.

209 citations
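
As a flavor of the transformation, here is a deliberately simplified Python sketch (mine; the paper's tooling is far more complete) that maps class-diagram-like data to Alloy signatures. The input format is invented for illustration.

    def to_alloy(classes):
        """Each class becomes an Alloy 'sig'; single-valued references
        become 'one' fields; generalization becomes 'extends'."""
        lines = []
        for c in classes:
            ext = f" extends {c['super']}" if c.get("super") else ""
            fields = ", ".join(f"{n}: one {t}"
                               for n, t in c.get("fields", {}).items())
            lines.append(f"sig {c['name']}{ext} {{ {fields} }}")
        return "\n".join(lines)

    print(to_alloy([
        {"name": "Customer"},
        {"name": "Account", "fields": {"owner": "Customer"}},
        {"name": "Savings", "super": "Account"},
    ]))

The generated text could then be handed to the Alloy Analyzer to search for instances or counterexamples; translating OCL invariants into Alloy facts is where most of the challenges reported in the paper arise.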


Journal ArticleDOI
TL;DR: This paper identifies four orthogonal traceability dimensions in SPL development, one of which is an extension of what is often considered "traceability of variability"; this identification constitutes one of the two contributions of the paper.
Abstract: Software product line (SPL) engineering is a recent approach to software development where a set of software products is derived for a well defined target application domain, from a common set of core assets using analogous means of production (for instance, through Model Driven Engineering). Such a family of products is thus built through reuse, instead of each product being developed individually from scratch. SPLs promise to lower the costs of development, increase the quality of software, give clients more flexibility and reduce time to market. These benefits come with a set of new problems and can make some older problems more complex. One of these problems is traceability management. In the European AMPLE project we are creating a common traceability framework across the various activities of SPL development. We identified four orthogonal traceability dimensions in SPL development, one of which is an extension of what is often considered as "traceability of variability". This constitutes one of the two contributions of this paper. The second contribution is the specification of a metamodel for a repository of traceability links in the context of SPL and the implementation of a respective traceability framework. This framework enables fundamental traceability management operations, such as trace import and export, modification, query and visualization. The power of our framework is highlighted with an example scenario.

127 citations
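
A bare-bones Python sketch (hypothetical; the AMPLE metamodel is much richer) of a traceability-link repository supporting the fundamental operations the paper names. The link attributes and dimension names are illustrative only.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TraceLink:
        source: str       # e.g. a feature-model element
        target: str       # e.g. a design or code artifact
        dimension: str    # e.g. "refinement" or "variability"

    class TraceRepository:
        def __init__(self):
            self.links = set()

        def add(self, link):               # import / modification
            self.links.add(link)

        def query(self, dimension=None, source=None):
            return [l for l in self.links
                    if (dimension is None or l.dimension == dimension)
                    and (source is None or l.source == source)]

    repo = TraceRepository()
    repo.add(TraceLink("Feature:Encryption", "Class:CipherSuite", "variability"))
    print(repo.query(dimension="variability"))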


Journal ArticleDOI
TL;DR: This paper refines the earlier description of code generation by model transformation with an improved architecture for the composition of model-to-model normalization rules, solving the problem of combining type analysis and transformation.
Abstract: The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific languages. This paper identifies techniques for improving separation of concerns in the implementation of generators. The core technique is code generation by model transformation, that is, the generation of a structured representation (model) of the target program instead of plain text. This approach enables the transformation of code after generation, which in turn enables the extension of the target language with features that allow better modularity in code generation rules. The technique can also be applied to ‘internal code generation’ for the translation of high-level extensions of a DSL to lower-level constructs within the same DSL using model-to-model transformations. This paper refines our earlier description of code generation by model transformation with an improved architecture for the composition of model-to-model normalization rules, solving the problem of combining type analysis and transformation. Instead of coarse-grained stages that alternate between normalization and type analysis, we have developed a new style of type analysis that can be integrated with normalizing transformations in a fine-grained manner. The normalization strategy has a simple extension interface and integrates non-local, context-sensitive transformation rules. We have applied the techniques in a realistic case study of domain-specific language engineering, i.e. the code generator for WebDSL, using Stratego, a high-level transformation language that integrates model-to-model, model-to-code, and code-to-code transformations.

95 citations
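
The core idea, generating a model of the target program rather than flat text so that it can still be transformed afterwards, can be illustrated with a small Python sketch (mine; the paper uses Stratego and WebDSL). The AST encoding and the normalization rule are invented for illustration.

    # Target-program model: ("seq", [stmts]) | ("assign", var, expr) | ("log", msg)
    def generate(entity):
        """Code generation by model transformation: emit an AST, not text."""
        return ("seq", [("assign", f"{entity}_table", f"create_table('{entity}')"),
                        ("log", f"created {entity}")])

    def strip_logging(node):
        """A post-generation model-to-model transformation, possible only
        because the generated program is still structured."""
        if node[0] == "seq":
            return ("seq", [strip_logging(s) for s in node[1] if s[0] != "log"])
        return node

    def pretty(node):
        """Only at the very end is the model flattened to text."""
        if node[0] == "seq":
            return "\n".join(pretty(s) for s in node[1])
        if node[0] == "assign":
            return f"{node[1]} = {node[2]}"
        return f"log({node[1]!r})"

    print(pretty(strip_logging(generate("User"))))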


Journal ArticleDOI
TL;DR: It turns out that TGGs and declarative QVT have many concepts in common, and QVT-Core can be implemented by transforming QVT-Core mappings to TGG rules, which can be executed by a TGG transformation engine that performs the actual QVT transformation.
Abstract: The Model Driven Architecture (MDA) is an approach to develop software based on different models. There are separate models for the business logic and for platform specific details. Moreover, code can be generated automatically from these models. This makes transformations a core technology for MDA and for model-based software engineering approaches in general. Query/View/Transformation (QVT) is the transformation technology recently proposed for this purpose by the OMG. Triple Graph Grammars (TGGs) are another transformation technology proposed in the mid-nineties, used for example in the FUJABA CASE tool. In contrast to many other transformation technologies, both QVT and TGGs declaratively define the relation between two models. With this definition, a transformation engine can execute a transformation in either direction and, based on the same definition, can also propagate changes from one model to the other. In this paper, we compare the concepts of the declarative languages of QVT and TGGs. It turns out that TGGs and declarative QVT have many concepts in common. In fact, QVT-Core can be mapped to TGGs. We show that QVT-Core can be implemented by transforming QVT-Core mappings to TGG rules, which can then be executed by a TGG transformation engine that performs the actual QVT transformation. Furthermore, we discuss an approach for mapping QVT-Relations to TGGs. Based on the semantics of TGGs, we clarify semantic gaps that we identified in the declarative languages of QVT and, furthermore, we show how TGGs can benefit from the concepts of QVT.

84 citations
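
Schematically, a TGG rule extends source, correspondence, and target graphs in one step, and the same declarative rule can then be operationalized in either direction. The following Python sketch (mine, far removed from a real TGG engine) shows only the forward operationalization; the rule format is invented.

    def apply_forward(rule, source_elem, correspondence, target):
        """Source element already exists; create the corresponding target
        element plus a correspondence link recording the relation."""
        tgt = rule["make_target"](source_elem)
        target.append(tgt)
        correspondence.append((source_elem, tgt))

    # A QVT-Core-style 'class to table' mapping expressed as a TGG-like rule:
    class2table = {"make_target": lambda cls: {"table": cls["name"].lower() + "s"}}

    corr, tables = [], []
    apply_forward(class2table, {"name": "Order"}, corr, tables)
    print(tables, corr)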


Journal ArticleDOI
TL;DR: This paper proposes module superimposition, a composition technique that allows for extending and overriding rules in transformation modules, and provides executable semantics as well as a concise and scalable implementation of it based on ATL.
Abstract: As the application of model transformation becomes increasingly commonplace, the focus is shifting from model transformation languages to the model transformations themselves. The properties of model transformations, such as scalability, maintainability and reusability, have become important. Composition of model transformations allows for the creation of smaller, maintainable and reusable transformation definitions that together perform a larger transformation. This paper focuses on composition for two rule-based model transformation languages: the ATLAS Transformation Language (ATL) and the QVT Relations language. We propose a composition technique called module superimposition that allows for extending and overriding rules in transformation modules. We provide executable semantics as well as a concise and scalable implementation of module superimposition based on ATL.

80 citations
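
The semantics of module superimposition is essentially a union of rule sets in which same-named rules from the superimposed module override those of the base module. A compact Python analogy (mine, not the ATL implementation; rules as plain functions keyed by name):

    def superimpose(base, *modules):
        composed = dict(base)
        for m in modules:
            composed.update(m)    # override on name clash, extend otherwise
        return composed

    base_module = {"Class2Table": lambda c: f"TABLE {c}",
                   "Attr2Column": lambda a: f"COL {a}"}
    audit_module = {"Class2Table": lambda c: f"TABLE {c} WITH AUDIT",  # override
                    "Class2View":  lambda c: f"VIEW {c}"}             # extension

    rules = superimpose(base_module, audit_module)
    print(rules["Class2Table"]("Order"))   # TABLE Order WITH AUDIT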


Journal ArticleDOI
TL;DR: The approach presented here is applied in the context of the ReDSeeDS project (Requirements Driven Software Development System), which aims at requirements-based software reuse and makes use of traceability information to determine potentially reusable architectures, design, or code artifacts based on a given set of reusable requirements.
Abstract: In recent years, traceability has been globally accepted as being a key success factor of software development projects. However, the multitude of different, poorly integrated taxonomies, approaches and technologies impedes the application of traceability techniques in practice. This paper presents a comprehensive view on traceability, pertaining to the whole software development process. Based on the state of the art, the field is structured according to six specific activities related to traceability as follows: definition, recording, identification, maintenance, retrieval, and utilization. Using graph technology, a comprehensive and seamless approach for supporting these activities is derived, combining them in one single conceptual framework. This approach supports the definition of metamodels for traceability information, recording of traceability information in graph-based repositories, identification and maintenance of traceability relationships using transformations, as well as retrieval and utilization of traceability information using a graph query language. The approach presented here is applied in the context of the ReDSeeDS project (Requirements Driven Software Development System) that aims at requirements-based software reuse. ReDSeeDS makes use of traceability information to determine potentially reusable architectures, design, or code artifacts based on a given set of reusable requirements. The project provides case studies from different domains for the validation of the approach.

70 citations
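
Representing traceability information as a graph makes the retrieval and utilization activities natural graph queries. A tiny Python sketch (mine; ReDSeeDS uses a graph repository and a dedicated graph query language) of reachability-based retrieval:

    from collections import defaultdict

    links = [("Req:Login", "Arch:AuthService"),
             ("Arch:AuthService", "Code:auth.py"),
             ("Req:Login", "Design:LoginFlow")]

    graph = defaultdict(list)
    for src, tgt in links:
        graph[src].append(tgt)

    def reachable(node):
        """All artifacts transitively traceable from `node`; in reuse terms,
        candidates to reuse together with a matched requirement."""
        seen, stack = set(), [node]
        while stack:
            for nxt in graph[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(reachable("Req:Login"))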


Journal ArticleDOI
TL;DR: A comprehensive traceability approach is presented that combines classical traceability approaches for MDE with global model management in the form of dynamic hierarchical megamodels; the approach is further outlined by means of an industrial case study and an implementation of the concepts in the form of a prototype.
Abstract: In the world of model-driven engineering (MDE), support for traceability and maintenance of traceability information is essential. On the one hand, classical traceability approaches for MDE address this need by supporting automated creation of traceability information on the model element level. On the other hand, global model management approaches manually capture traceability information on the model level. However, there is currently no approach that supports comprehensive traceability, comprising traceability information on both levels, together with efficient maintenance of traceability information, which requires a high degree of automation and scalability. In this article, we present a comprehensive traceability approach that combines classical traceability approaches for MDE with global model management in the form of dynamic hierarchical megamodels. We further integrate efficient maintenance of traceability information on top of dynamic hierarchical megamodels. The proposed approach is illustrated by an industrial case study and an implementation of the concepts in the form of a prototype.

57 citations


Journal ArticleDOI
TL;DR: This paper presents a new specification language QML/CS that can be used to model non-functional product properties of components and component-based software systems, and discusses semantic concepts for the specification of non- functional properties, taking into account the specific needs of a component market.
Abstract: Component-based software engineering (CBSE) is viewed as an opportunity to deal with the increasing complexity of modern-day software. Along with CBSE comes the notion of component markets, where more or less generic pieces of software are traded, to be combined into applications by third-party application developers. For such a component market to work successfully, all relevant properties of components must be precisely and formally described. This is especially true for non-functional properties, such as performance, memory footprint, or security. While the specification of functional properties is well understood, non-functional properties are only beginning to become a research focus. This paper discusses semantic concepts for the specification of non-functional properties, taking into account the specific needs of a component market. Based on these semantic concepts, we present a new specification language QML/CS that can be used to model non-functional product properties of components and component-based software systems.

Journal ArticleDOI
TL;DR: This paper proposes a mapping model that allows the definition of arbitrarily complex mappings between elements of the abstract and concrete syntax, introduces a novel architecture for DSM environments which enables these concepts, and provides an overview of the tool support.
Abstract: Modern domain-specific modeling (DSM) frameworks provide refined techniques for developing new languages based on the clear separation of conceptual elements of the language (called abstract syntax) and their graphical visual representation (called concrete syntax). This separation is usually achieved by recording traceability information between the abstract and concrete syntax using mapping models. However, state-of-the-art DSM frameworks impose severe restrictions on traceability links between elements of the abstract syntax and the concrete syntax. In the current paper, we propose a mapping model which allows the definition of arbitrarily complex mappings between elements of the abstract and concrete syntax. Moreover, we demonstrate how live model transformations can complement mapping models in providing bidirectional synchronization and implicit traceability between models of the abstract and the concrete syntax. In addition, we introduce a novel architecture for DSM environments which enables these concepts, and provide an overview of the tool support.
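
The interplay of a mapping model with live transformations can be pictured with a small Python sketch (mine; actual DSM frameworks are of course far more elaborate). The mapping here relates abstract-syntax element kinds to rendering functions, and a change notification immediately updates the concrete syntax, giving implicit traceability via shared identifiers.

    class LiveDiagram:
        def __init__(self, mapping):
            self.mapping = mapping   # abstract element kind -> render function
            self.shapes = {}         # concrete syntax, keyed by element id

        def notify(self, elem):
            """Live transformation: an abstract-syntax change is propagated
            to the mapped concrete-syntax element at once."""
            self.shapes[elem["id"]] = self.mapping[elem["kind"]](elem)

    mapping = {"state":      lambda e: f"ellipse '{e['name']}'",
               "transition": lambda e: f"arrow {e['src']}->{e['dst']}"}
    d = LiveDiagram(mapping)
    d.notify({"id": 1, "kind": "state", "name": "Idle"})
    print(d.shapes)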

Journal ArticleDOI
TL;DR: A new technique is presented that utilizes antipatterns as a mechanism for remedying quality problems in UC models; the results indicate that applying the technique improves the overall quality and clarity of UC models.
Abstract: Use case (UC) modeling is a popular requirements modeling technique. While these models are simple to create and read, this simplicity is often misconceived, leading practitioners to believe that creating high quality models is straightforward. Therefore, many low quality models are produced that are inconsistent, incorrect, contain premature restrictive design decisions, or contain ambiguous information. To combat this problem of creating low quality UC models, this paper presents a new technique that utilizes antipatterns as a mechanism for remedying quality problems in UC models. The technique, supported by the tool ARBIUM, provides a framework for developers to define antipatterns. The feasibility of the approach is demonstrated by applying it to a real-world system. The results indicate that applying the technique improves the overall quality and clarity of UC models.
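
A toy Python sketch (mine, far simpler than ARBIUM) of one such antipattern check: flagging use-case steps whose wording embeds premature design decisions. The word list and use-case encoding are illustrative.

    DESIGN_WORDS = {"database", "button", "click", "table", "server"}

    def design_decision_antipattern(use_case):
        """Return the steps that mention solution-level vocabulary."""
        return [step for step in use_case["steps"]
                if DESIGN_WORDS & set(step.lower().split())]

    uc = {"name": "Withdraw Cash",
          "steps": ["User requests withdrawal",
                    "System stores the request in the database"]}
    print(design_decision_antipattern(uc))   # flags the second step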

Journal ArticleDOI
TL;DR: A new way to model web applications is presented, based on software couplings that are new to web applications: dynamic flow of control, distributed integration, and partial dynamic web application development. The model is based on the notion of atomic sections, which allow analysis tools to build the analog of a control flow graph for web applications.
Abstract: Web software applications have become complex, sophisticated programs that are based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment—most of the software is never delivered to customers’ computers, but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability that users have to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution, thus the possible control flows cannot be determined statically, prohibiting several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partial dynamic web application development. This model is based on the notion of atomic sections, which allow analysis tools to build the analog of a control flow graph for web applications. The atomic section model has numerous applications in web applications; this paper applies the model to the problem of testing web applications.
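
The atomic-section model can be approximated in a few lines of Python (mine, a drastic simplification of the paper's model): each statically fixed HTML fragment is a node, each user- or server-triggered transition an edge, and the result is the analog of a control flow graph from which, for example, test paths can be enumerated.

    atomic_sections = {                       # node -> possible successors
        "login_form":  ["welcome", "login_error"],
        "login_error": ["login_form"],
        "welcome":     ["logout"],
        "logout":      ["login_form"],
    }

    def all_paths(cfg, start, end, path=()):
        """Enumerate simple paths; a tester could derive test cases from them."""
        path = path + (start,)
        if start == end:
            yield path
            return
        for nxt in cfg.get(start, []):
            if nxt not in path:               # keep paths acyclic
                yield from all_paths(cfg, nxt, end, path)

    for p in all_paths(atomic_sections, "login_form", "logout"):
        print(" -> ".join(p))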

Journal ArticleDOI
TL;DR: A heuristic approach is presented for efficiently analyzing constraint specifications built from constraint patterns; it exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial time, and a consistency checker implementing these ideas is introduced.
Abstract: Precision and consistency are important prerequisites for class models to conform to their intended domain semantics. Precision can be achieved by augmenting models with design constraints and consistency can be achieved by avoiding contradictory constraints. However, there are different views of what constitutes a contradiction for design constraints. Moreover, state-of-the-art analysis approaches for proving constrained models consistent either scale poorly or require the use of interactive theorem proving. In this paper, we present a heuristic approach for efficiently analyzing constraint specifications built from constraint patterns. This analysis is based on precise notions of consistency for constrained class models and exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial time. We introduce a consistency checker implementing these ideas and we report on case studies in applying our approach to analyze industrial-scale models. These studies show that pattern-based constraint development supports the creation of concise specifications and provides immediate feedback on model consistency.
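
Because each constraint pattern has known semantics, contradictions can be detected by syntactic, pairwise comparison instead of theorem proving. A miniature Python sketch (mine, not the paper's checker; the pattern encoding is invented) for one pattern family:

    def inconsistent(p, q):
        """Two AttributeRange(attr, lo, hi) patterns on the same attribute
        contradict each other iff their intervals do not overlap."""
        if p[0] == q[0] == "AttributeRange" and p[1] == q[1]:
            (_, _, lo1, hi1), (_, _, lo2, hi2) = p, q
            return max(lo1, lo2) > min(hi1, hi2)
        return False

    patterns = [("AttributeRange", "Account.balance", 0, 100),
                ("AttributeRange", "Account.balance", 500, 1000)]

    clashes = [(p, q) for i, p in enumerate(patterns)
               for q in patterns[i + 1:] if inconsistent(p, q)]
    print(clashes)   # non-empty: no valid instance can exist

Pairwise comparison of n patterns takes O(n^2) checks, which is what makes such an analysis polynomial-time.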

Journal ArticleDOI
TL;DR: This paper presents an approach for the analysis of graph transformation rules that translates different rule semantics into an intermediate OCL representation, together with the properties of interest (like rule applicability, conflicts or independence), and is especially useful for analysing the operational semantics of Domain Specific Visual Languages.
Abstract: In this paper we present an approach for the analysis of graph transformation rules based on an intermediate OCL representation. We translate different rule semantics into OCL, together with the properties of interest (like rule applicability, conflicts or independence). The intermediate representation serves three purposes: (1) it allows the seamless integration of graph transformation rules with the MOF and OCL standards, and enables taking the meta-model and its OCL constraints (i.e. well-formedness rules) into account when verifying the correctness of the rules; (2) it permits the interoperability of graph transformation concepts with a number of standards-based model-driven development tools; and (3) it makes available a plethora of OCL tools to actually perform the rule analysis. This approach is especially useful to analyse the operational semantics of Domain Specific Visual Languages. We have automated these ideas by providing designers with tools for the graphical specification and analysis of graph transformation rules, including a back-annotation mechanism that presents the analysis results in terms of the original language notation.
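
The gist of the translation is that a rule's applicability becomes a constraint evaluable against a model. A compressed sketch (mine, with a Python predicate standing in for the generated OCL):

    # Rule: applicable if some Place has no incoming arcs. The OCL analogue
    # would be: Place.allInstances()->exists(p | p.incoming->isEmpty())
    def rule_applicable(model):
        return any(not p["incoming"] for p in model["places"])

    net = {"places": [{"name": "p1", "incoming": []},
                      {"name": "p2", "incoming": ["t1"]}]}
    print(rule_applicable(net))   # True: the rule can fire on this model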

Journal ArticleDOI
TL;DR: The aim of this paper is to present a strategy for the automatic generation of a basic behavior schema that facilitates the definition of the behavioral specification and ensures its quality, resulting in an improved code generation phase.
Abstract: The specification of a software system must include all relevant static and dynamic aspects of the domain. Dynamic aspects are usually specified by means of a behavioral schema consisting of a set of system operations that the user may execute to update the system state. To be useful, such a set must be complete (i.e. through these operations, users should be able to modify the population of all elements in the class diagram) and executable (i.e. for each operation, there must exist a system state over which the operation can be successfully applied). A manual specification of these operations is an error-prone and time-consuming activity. Therefore, the aim of this paper is to present a strategy for the automatic generation of a basic behavior schema. Operations in the schema are drawn from the static aspects of the domain as defined in the UML class diagram and take into account possible dependencies among them to ensure the completeness and executability of the operations. We believe our approach is especially useful in a Model-Driven Development setting, where the full implementation of the system is derived from its specification. In this context, our approach facilitates the definition of the behavioral specification and ensures its quality, resulting in an improved code generation phase.
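
The generation strategy can be caricatured in Python (mine; the paper additionally analyzes dependencies, e.g. a mandatory association forces object creation and linking to happen together). Classes yield create/delete operations and associations yield link/unlink operations, making the schema complete over the class diagram by construction.

    def generate_operations(classes, associations):
        ops = []
        for c in classes:
            ops += [f"create{c}()", f"delete{c}()"]
        for name, (src, tgt) in associations.items():
            ops += [f"link{name}({src}, {tgt})", f"unlink{name}({src}, {tgt})"]
        return ops

    print(generate_operations(["Customer", "Account"],
                              {"Owns": ("Customer", "Account")}))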

Journal ArticleDOI
Ivan Kurtev
TL;DR: The possibilities and benefits of introducing and using reflection in a rule-based model transformation language are studied, and language abstractions to achieve structural and behavioral reflection are identified.
Abstract: Computational reflection is a well-known technique applied in many existing programming languages ranging from functional to object-oriented languages. In this paper we study the possibilities and benefits of introducing and using reflection in a rule-based model transformation language. The paper identifies some language abstractions to achieve structural and behavioral reflection. Reflective features are motivated by examples of problems derived from the experience with currently used transformation languages. Example solutions are given by using an experimental language with reflective capabilities. The paper also outlines possible implementation strategies for adding reflection to a language and discusses their advantages and disadvantages.
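
Python's built-in reflection gives a loose analogy (mine, not the paper's experimental language) of what reflective abstractions buy a transformation language: rules are first-class, so a transformation can enumerate its own rule set (structural reflection) and rewrite rule behavior at run time (behavioral reflection), here by wrapping every rule with tracing.

    class Transformer:
        def rule_class(self, e): return {"table": e["name"]}
        def rule_attr(self, e):  return {"column": e["name"]}

        def trace_all_rules(self):
            """Enumerate own rules, then replace each with an instrumented
            variant; the rule set itself is data."""
            for name in [n for n in dir(self) if n.startswith("rule_")]:
                original = getattr(self, name)
                def traced(e, original=original, name=name):
                    print(f"applying {name} to {e}")
                    return original(e)
                setattr(self, name, traced)

    t = Transformer()
    t.trace_all_rules()
    print(t.rule_class({"name": "Order"}))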

Journal ArticleDOI
Jon Whittle
TL;DR: A formal semantics is defined for rich constructs available in activity diagrams, such as interruptible regions, activity groups, concurrent node executions, and flow final nodes, and their integration into interaction overview diagrams is motivated by a NASA air traffic control subsystem.
Abstract: UML 2.0 introduced interaction overview diagrams (IODs) as a way of specifying relationships between UML interactions. IODs are a variant of activity diagrams that show control flow between a set of interactions. The nodes in an IOD are either inline interactions or references to an interaction. A number of recent papers have defined a formal semantics for IODs. These are restricted, however, to interactions that can be specified using basic sequence diagrams. This excludes the many rich modeling constructs available in activity diagrams such as interruptible regions, activity groups, concurrent node executions, and flow final nodes. It is non-trivial to allow such constructs in IODs because their meaning has to be interpreted in the context of interaction sequences rather than activities. In this paper, we consider how some of these activity diagram constructs can be used practically in IODs. We motivate the integration of these constructs into IODs using a NASA air traffic control subsystem and define a formal semantics for these constructs that builds on an existing semantics definition for IODs.

Journal ArticleDOI
TL;DR: While companies such as Amazon, Google, and Force.com are providing services for and from the cloud, there are aspects of cloud computing that can benefit from research in the model-driven software development area, including enabling interoperability across cloud computing environments and integrating mobile and cloud-based applications.
Abstract: Cloud computing is poised to become a major driving force behind European and American businesses. Long-standing projects like the SETI@Home project and facilities such as SourceForge leverage third party distributed storage and computational resources to deliver services. Companies are seeking to commercialize this approach to service delivery through the use of cloud computing technologies. Cloud computing commerce can take several forms: customers can rent an infrastructure, a platform, or predefined services. While predefined cloud-based services for email, blogs, wikis, and media storage are well known, more complex business oriented applications like customer relationship management are starting to appear. While companies such as Amazon, Google, and Force.com are providing services for and from the cloud, there are aspects of cloud computing that can benefit from research in the model-driven software development area. For example, software and system modeling research can yield results that address problems related to the safety and integrity of data (where the user does not control the physical location of the storage anymore), efficiency of storage and retrieval, and decoupling of applications from underlying operating systems and other computing platforms and infrastructures. Software and system modeling research can also produce results that head off future problems related to migration of services to new cloud computing environments that will inevitably arise as technologies evolve. For example, enabling interoperability across cloud computing environments, and integrating mobile and cloud-based applications are challenging problems that will arise. As is the case for many new

Journal ArticleDOI
TL;DR: This paper presents the design and implementation of a transformational model of a product line of scalable vector graphics and JavaScript applications and explains how it was simplified by lifting selected features and their compositions from the original product line to another product line.
Abstract: Model driven engineering (MDE) of software product lines (SPLs) merges two increasingly important paradigms that synthesize programs by transformation. MDE creates programs by transforming models, and SPLs elaborate programs by applying transformations called features. In this paper, we present the design and implementation of a transformational model of a product line of scalable vector graphics and JavaScript applications. We explain how we simplified our implementation by lifting selected features and their compositions from our original product line (whose implementations were complex) to features and their compositions of another product line (whose specifications were simple). We used operators to map higher-level features and their compositions to their lower-level counterparts. Doing so exposed commuting relationships among feature compositions in both product lines that helped validate our model and implementation.
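
Features as transformations, and the commuting relationships used for validation, fit into a few lines of Python (mine; the actual product line targets SVG and JavaScript artifacts, and the feature names below are invented):

    base = {"shapes": [], "scripts": []}

    def feat_circles(p): return {**p, "shapes": p["shapes"] + ["circle"]}
    def feat_squares(p): return {**p, "shapes": p["shapes"] + ["square"]}
    def feat_zoom(p):    return {**p, "scripts": p["scripts"] + ["zoom.js"]}

    def compose(*features):
        def product(p):
            for f in features:
                p = f(p)
            return p
        return product

    # zoom touches only scripts, so it commutes with the shape features:
    print(compose(feat_circles, feat_zoom)(base) ==
          compose(feat_zoom, feat_circles)(base))      # True
    # two shape features do not commute (order shows up in the artifact):
    print(compose(feat_circles, feat_squares)(base) ==
          compose(feat_squares, feat_circles)(base))   # False

Checking which compositions commute, as in the last two lines, is exactly the kind of property the authors exploit to validate their model against its implementation.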

Journal ArticleDOI
TL;DR: A family of new BDA metrics is defined, as extensions to the basic DI metric, based on different weighting mechanisms, and it is shown that they can be used to predict more realistic dependency information.
Abstract: Behavioral dependency analysis (BDA) and the visualization of dependency information have been identified as a high priority in industrial software systems (specifically, distributed systems). BDA determines the extent to which the functionality of one system entity (e.g., an object or a node) depends on other entities. Among many uses, a BDA is used to perform risk analysis and assessment, load planning, and fault tolerance and redundancy provisions in distributed systems. Traditionally, most BDA techniques are based on source code or execution traces of a system. However, as model-driven development is gaining more popularity, there is a need for model-based BDA techniques. To address this need, we proposed in a previous work a metric, referred to as dependency index (DI), for the BDA of distributed objects and nodes based on UML behavioral models (sequence diagrams). In that work, for simplicity, it was assumed that all messages are equivalent in terms of the dependencies they entail. However, to perform a more realistic BDA on real-world systems, messages must be weighted: certain messages may be more critical (or important) than others, and thus entail more intensive dependency. To address this need, we define in this article a family of new BDA metrics, as extensions to our basic DI metric, based on different weighting mechanisms. Through an example application of the proposed metrics, we show that they can be used to predict more realistic dependency information. Furthermore, we derive from our dependency analysis observations that would influence practical decisions and could not easily have been obtained without it; for example, we suggest installing more reliable data-transmission network links between two nodes to ensure reliable communication on links with intensive dependencies.
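
One plausible member of such a metric family, simplified by me from the article's general idea rather than copied from it, weights each message kind and measures the share of a node's outgoing weighted messages that go to a given target:

    def dependency_index(messages, weights, source, target):
        """messages: (sender, receiver, kind) triples from sequence diagrams.
        weights: importance per message kind; all weights equal to 1.0
        recovers an unweighted dependency index."""
        sent = [m for m in messages if m[0] == source]
        if not sent:
            return 0.0
        to_target = sum(weights[k] for (_, r, k) in sent if r == target)
        total = sum(weights[k] for (_, _, k) in sent)
        return to_target / total

    msgs = [("NodeA", "NodeB", "critical"), ("NodeA", "NodeC", "routine"),
            ("NodeA", "NodeB", "routine")]
    w = {"critical": 3.0, "routine": 1.0}
    print(dependency_index(msgs, w, "NodeA", "NodeB"))   # 0.8

A high value like 0.8 would mark the NodeA-NodeB link as a candidate for the more reliable network connection the authors suggest.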