
Showing papers in "Software and Systems Modeling in 2016"


Journal ArticleDOI
TL;DR: This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges based on a pick and place machine that solves a distributed version of the Towers of Hanoi puzzle.
Abstract: Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) configuring an ensemble of systems online, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. The examples are illustrated based on a pick-and-place machine that solves a distributed version of the Towers of Hanoi puzzle. The system includes a physical environment, a wireless network, concurrent computing resources, and computational functionality such as service arbitration, various forms of control, and processing of streaming video. The pick-and-place machine is of medium complexity. It is representative of issues occurring in industrial systems that are coming online. The entire study is provided at a computational model level, with the intent to contribute to the model-based research agenda in terms of design methods and implementation technologies necessary to make the next generation of systems a reality.

175 citations


Journal ArticleDOI
TL;DR: A framework for the description of model transformation intents is defined, which includes a description of properties a model transformation has to satisfy to qualify as a suitable realization of an intent.
Abstract: The notion of model transformation intent is proposed to capture the purpose of a transformation. In this paper, a framework for the description of model transformation intents is defined, which includes, for instance, a description of properties a model transformation has to satisfy to qualify as a suitable realization of an intent. Several common model transformation intents are identified, and the framework is used to describe six of them in detail. A case study from the automotive industry is used to demonstrate the usefulness of the proposed framework for identifying crucial properties of model transformations with different intents and to illustrate the wide variety of model transformation intents that an industrial model-driven software development process typically encompasses.

103 citations


Journal ArticleDOI
TL;DR: A comprehensive compliance management framework with a main focus on design-time compliance management as a first step towards a preventive lifetime compliance support, which enables the automation of compliance-related activities that are amenable to automation, and therefore can significantly reduce the expenditures spent on compliance.
Abstract: Today's enterprises demand a high degree of compliance of business processes to meet diverse regulations and legislations. Several industrial studies have shown that compliance management is a daunting task, and organizations are still struggling and spending billions of dollars annually to ensure and prove their compliance. In this paper, we introduce a comprehensive compliance management framework with a main focus on design-time compliance management as a first step towards preventive lifetime compliance support. The framework enables the automation of compliance-related activities that are amenable to automation, and can therefore significantly reduce the expenditure on compliance. It can help experts to carry out their work more efficiently, cut the time spent on tedious manual activities, and reduce potential human errors. An evident candidate compliance activity for automation is compliance checking, which can be achieved by utilizing formal reasoning and verification techniques. However, formal languages are well known for their complexity, as only users versed in mathematical theories and formal logics are able to use and understand them; this is generally not the case with business and compliance practitioners. Therefore, at the heart of the compliance management framework, we introduce the Compliance Request Language (CRL), which is formally grounded in temporal logic and enables the abstract pattern-based specification of compliance requirements. CRL constitutes a series of compliance patterns that spans three structural facets of business processes: control flow, employed resources, and temporal perspectives. Furthermore, CRL supports the specification of compensations and non-monotonic requirements, which permit the relaxation of some compliance requirements to handle exceptional situations.
An integrated tool suite has been developed as an instantiation artefact, and the validation of the approach is undertaken in several directions, which includes internal validity, controlled experiments, and functional testing.
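As a rough illustration of the pattern-based style of specification that CRL enables (the event names and the checking code below are invented for this sketch; CRL itself is a temporal-logic pattern language, not a Python API), a "response" control-flow pattern can be checked over a finite execution trace like this:

```python
# Hypothetical sketch: a "response" compliance pattern (every occurrence
# of a trigger event is eventually followed by a response event),
# checked over a finite execution trace.

def holds_response(trace, trigger, response):
    """True iff every `trigger` in the trace is later followed by `response`."""
    pending = False               # is there an unanswered trigger so far?
    for event in trace:
        if event == trigger:
            pending = True
        elif event == response:
            pending = False
    return not pending

# Invented process: a claim must eventually be approved after assessment.
trace = ["receive_claim", "assess_claim", "approve_claim", "archive"]
print(holds_response(trace, "assess_claim", "approve_claim"))  # True
print(holds_response(["assess_claim", "archive"], "assess_claim", "approve_claim"))  # False
```

In a real framework such patterns are translated to temporal-logic formulas and handed to a verifier rather than checked by ad hoc trace scans.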

100 citations


Journal ArticleDOI
TL;DR: Clafer is presented, a class modeling language with first-class support for feature modeling and a formal semantics built in a structurally explicit way that explains the meaning of hierarchical models whereby properties can be arbitrarily nested in the presence of inheritance and feature modeling constructs.
Abstract: We present Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. We designed Clafer as a concise notation for meta-models, feature models, mixtures of meta- and feature models (such as components with options), and models that couple feature models and meta-models via constraints (such as mapping feature configurations to component configurations or model templates). Clafer allows arranging models into multiple specialization and extension layers via constraints and inheritance. We identify several key mechanisms allowing a meta-modeling language to express feature models concisely. Clafer unifies basic modeling constructs, such as class, association, and property, into a single construct, called clafer. We provide the language with a formal semantics built in a structurally explicit way. The resulting semantics explains the meaning of hierarchical models whereby properties can be arbitrarily nested in the presence of inheritance and feature modeling constructs. The semantics also enables building consistent automated reasoning support for the language: To date, we implemented three reasoners for Clafer based on Alloy, Z3 SMT, and Choco3 CSP solvers. We show that Clafer meets its design objectives using examples and by comparing to other languages.

99 citations


Journal ArticleDOI
TL;DR: This article analyzes the usage of models at runtime in the existing research literature using the Systematic Literature Review (SLR) research method to provide an overview and classification of current research approaches using models at runtime and to identify research areas not covered by models at runtime so far.
Abstract: In the context of software development, models provide an abstract representation of a software system or a part of it. In the software development process, they are primarily used for documentation and communication purposes in analysis, design, and implementation activities. Model-Driven Engineering (MDE) further increases the importance of models, as in MDE models are not only used for documentation and communication, but as central artefacts of the software development process. Various recent research approaches take the idea of using models as central artefacts one step further by using models at runtime to cope with dynamic aspects of ever-changing software and its environment. In this article, we analyze the usage of models at runtime in the existing research literature using the Systematic Literature Review (SLR) research method. The main goals of our SLR are building a common classification and surveying the existing approaches in terms of objectives, techniques, architectures, and kinds of models used in these approaches. The contribution of this article is to provide an overview and classification of current research approaches using models at runtime and to identify research areas not covered by models at runtime so far.

99 citations


Journal ArticleDOI
TL;DR: This invited paper briefly overviews the evolution of the VIATRA/IncQuery family by highlighting key features and illustrating main transformation concepts along an open case study influenced by an industrial project.
Abstract: The current release of VIATRA provides open-source tool support for an event-driven, reactive model transformation engine built on top of highly scalable incremental graph queries for models with millions of elements, and advanced features such as rule-based design space exploration, complex event processing, and model obfuscation. However, the history of the VIATRA model transformation framework dates back over 16 years. Starting as an early academic research prototype developed as part of the M.Sc. project of the first author, it first evolved into a Prolog-based engine, followed by a family of open-source projects, which by now have matured into a component integrated into various industrial and open-source tools and deployed over multiple technologies. This invited paper briefly overviews the evolution of the VIATRA/IncQuery family by highlighting key features and illustrating main transformation concepts along an open case study influenced by an industrial project.

96 citations


Journal ArticleDOI
TL;DR: The needs and challenges for designing and operating CPS are identified along with corresponding technologies to address the challenges and their potential impact, and select key enablers for a new type of system integration are discussed.
Abstract: Embedding computing power in a physical environment has provided the functional flexibility and performance necessary in modern products such as automobiles, aircraft, smartphones, and more. As product features came to increasingly rely on software, a network infrastructure helped factor out common hardware and offered shared functionality for further innovation. A logical consequence was the need for system integration. Even in the case of a single original end manufacturer who is responsible for the final product, system integration is quite a challenge. More recently, systems have been coming online that must perform system integration even after deployment, that is, during operation. This has given rise to the cyber-physical systems (CPS) paradigm. In this paper, select key enablers for a new type of system integration are discussed. The needs and challenges for designing and operating CPS are identified along with corresponding technologies to address the challenges and their potential impact. The intent is to contribute to a model-based research agenda in terms of design methods, implementation technologies, and organization challenges necessary to bring the next-generation systems online.

84 citations


Journal ArticleDOI
TL;DR: This paper clarifies and visualizes the space of design choices for bidirectional transformations from an MDE point of view, in the form of a feature model, and characterizes a selected list of existing approaches by mapping them to the feature model.
Abstract: Bidirectional model transformation is a key technology in model-driven engineering (MDE), when two models that can change over time have to be kept constantly consistent with each other. While several model transformation tools include at least partial support for bidirectionality, it is not clear how these bidirectional capabilities relate to each other and to similar classical problems in computer science, from the view update problem in databases to bidirectional graph transformations. This paper tries to clarify and visualize the space of design choices for bidirectional transformations from an MDE point of view, in the form of a feature model. A selected list of existing approaches is characterized by mapping them to the feature model. Then, the feature model is used to highlight some unexplored research lines in bidirectional transformations.

66 citations


Journal ArticleDOI
TL;DR: A SysML profile designed for modelling the safety-related concerns of a system allows for greater consistency between safety information and system design information and can aid in communicating that information to stakeholders.
Abstract: Communication both between development teams and between individual developers is a common source of safety-related faults in safety-critical system design. Communication between experts in different fields can be particularly challenging due to gaps in assumed knowledge, vocabulary and understanding. Faults caused by communication failures must be removed once found, which can be expensive if they are found late in the development process. Aiding communication earlier in development can reduce faults and costs. Modelling languages for design have been shown through practical experience to improve communication through better information presentation and increased information consistency. In this paper, we describe a SysML profile designed for modelling the safety-related concerns of a system. The profile models common safety concepts from safety standards and safety analysis techniques integrated with system design information. We demonstrate that the profile is capable of modelling the concepts through examples. We also show the use of supporting tools to aid the application of the profile through analysis of the model and generation of reports presenting safety information in formats appropriate to the target reader. Through increased traceability and integration, the profile allows for greater consistency between safety information and system design information and can aid in communicating that information to stakeholders.

63 citations


Journal ArticleDOI
TL;DR: A broad view is provided about the difficulties that are encountered during the model checking process applied at the verification phase of PLC software production and can be used to provide guidance for the scholars and practitioners planning to integrate model checking to PLC-based software verification activities.
Abstract: Programmable logic controllers (PLCs) are heavily used in industrial control systems because of their capacity for simultaneous input/output processing. Characteristically, PLC systems are used in mission-critical systems, and PLC software needs to conform to real-time constraints in order to work properly. Since PLC programming requires mastering low-level instructions or assembly-like languages, an important step in PLC software production is modelling using a formal approach like Petri nets or automata. Afterward, PLC software is produced semiautomatically from the model and refined iteratively. Model checking, on the other hand, is a well-known software verification approach, where typically a set of timed properties is verified by exploring the transition system produced from the software model at hand. Naturally, model checking is applied in a variety of ways to verify the correctness of PLC-based software. In this paper, we provide a broad view of the difficulties that are encountered during the model checking process applied at the verification phase of PLC software production. We classify the approaches from two different perspectives: first, the model checking approach/tool used in the verification process, and second, the software model/source code and its transformation to the model checker's specification language. In a nutshell, we have mainly examined SPIN, SMV, and UPPAAL-based model checking activities and model construction using Instruction Lists (and the like), Function Block Diagrams, and Petri nets/automata-based model construction activities. As a result of our studies, we provide a comparison among the studies in the literature regarding various aspects like their application areas, performance considerations, and model checking processes. Our survey can be used to provide guidance for the scholars and practitioners planning to integrate model checking to PLC-based software verification activities.
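The core loop that model checkers such as SPIN, SMV, and UPPAAL industrialize can be sketched in miniature (illustrative only: the toy "PLC" below and all names are invented, and real tools add temporal logics, symbolic representations, and clock constraints on top of this idea): exhaustive exploration of reachable states while checking a safety property.

```python
# Illustrative explicit-state safety check: breadth-first exploration of
# all reachable states of a (toy) transition system, returning a state
# that violates the property, or None if the property holds everywhere.

from collections import deque

def check_safety(initial, successors, prop):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not prop(state):
            return state          # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                   # property holds in every reachable state

# Toy "PLC" state: (motor_on, guard_open). This controller (wrongly)
# allows any toggle, so the unsafe combination is reachable.
def successors(state):
    motor, guard = state
    return {(True, guard), (False, guard), (motor, not guard)}

prop = lambda s: not (s[0] and s[1])   # motor must never run with guard open
print(check_safety((False, False), successors, prop))  # (True, True)
```

Production model checkers avoid enumerating states one by one through partial-order reduction, symbolic encodings, and abstraction, which is exactly where the difficulties surveyed in the paper arise.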

61 citations


Journal ArticleDOI
TL;DR: This article proposes a QVT-R tool that supports meta-models enriched with OCL constraints and proposes an alternative enforcement semantics that works according to the simple and predictable “principle of least change.”
Abstract: QVT Relations (QVT-R) is the standard language proposed by the OMG to specify bidirectional model transformations. Unfortunately, in part due to ambiguities and omissions in the original semantics, acceptance and development of effective tool support have been slow. Recently, the checking semantics of QVT-R has been clarified and formalized. In this article, we propose a QVT-R tool that complies to such semantics. Unlike any other existing tool, it also supports meta-models enriched with OCL constraints (thus avoiding returning ill-formed models) and proposes an alternative enforcement semantics that works according to the simple and predictable "principle of least change." The implementation is based on an embedding of both QVT-R transformations and UML class diagrams (annotated with OCL) in Alloy, a lightweight formal specification language with support for automatic model finding via SAT solving. We also show how this technique can be applied to bidirectionalize ATL, a popular (but unidirectional) model transformation language.

Journal ArticleDOI
TL;DR: The ModelJoin approach for the rapid creation of views is presented and the textual DSL is validated in a case study using the Palladio Component Model.
Abstract: Fragmentation of information across instances of different metamodels poses a significant problem for software developers and leads to a major increase in effort of transformation development. Moreover, compositions of metamodels tend to be incomplete, imprecise, and erroneous, making them impossible to present to users or to use directly as input for applications. Customized views satisfy information needs by focusing on a particular concern, and filtering out information that is not relevant to this concern. For a broad establishment of view-based approaches, an automated solution to deal with separate metamodels and the high complexity of model transformations is necessary. In this paper, we present the ModelJoin approach for the rapid creation of views. Using a human-readable textual DSL, developers can define custom views declaratively without having to write model transformations or define a bridging metamodel. Instead, a metamodel generator and higher-order transformations create annotated target metamodels and the appropriate transformations on-the-fly. The resulting views, which are based on these metamodels, contain joined instances and can effectively express concerns unforeseen during metamodel design. We have applied the ModelJoin approach and validated the textual DSL in a case study using the Palladio Component Model.
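The kind of view ModelJoin produces can be approximated by a plain join over instance data (a loose analogy only; ModelJoin's actual DSL, metamodel generator, and higher-order transformations are not shown, and all attribute names below are invented):

```python
# Loose analogy for a ModelJoin-style view: instances of two separate
# "metamodels" joined on a shared attribute into a read-only view that
# focuses on a single concern.

components = [{"name": "Scheduler", "version": "2.1"},
              {"name": "Logger", "version": "1.0"}]
deployments = [{"component": "Scheduler", "node": "server-a"},
               {"component": "Scheduler", "node": "server-b"}]

# A natural join on the component name yields the custom view.
view = [{"name": c["name"], "version": c["version"], "node": d["node"]}
        for c in components
        for d in deployments
        if c["name"] == d["component"]]

for row in view:
    print(row)
# {'name': 'Scheduler', 'version': '2.1', 'node': 'server-a'}
# {'name': 'Scheduler', 'version': '2.1', 'node': 'server-b'}
```

The point of ModelJoin is that the view definition is declarative and the joining transformations are generated, rather than hand-written as above.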

Journal ArticleDOI
TL;DR: The semantics is novel in several ways: it deals with aspects of UML-RT that have not been formalized before, such as thread allocation, service provision points, and service access points; and it supports an action language.
Abstract: We propose a formal semantics for UML-RT, a UML profile for real-time and embedded systems. The formal semantics is given by mapping UML-RT models into a language called kiltera, a real-time extension of the π-calculus. Previous attempts to formalize the semantics of UML-RT have fallen short by considering only a very small subset of the language and providing fundamentally incomplete semantics based on incorrect assumptions, such as a one-to-one correspondence between "capsules" and threads. Our semantics is novel in several ways: (1) it deals with both state machine diagrams and capsule diagrams; (2) it deals with aspects of UML-RT that have not been formalized before, such as thread allocation, service provision points, and service access points; (3) it supports an action language; and (4) the translation has been implemented in the form of a transformation from UML-RT models created with IBM's RSA-RTE tool, into kiltera code. To our knowledge, this is the most comprehensive formal semantics for UML-RT to date.

Journal ArticleDOI
TL;DR: The approach presented in this article builds on four process-supported documentation techniques which can be selected, composed and applied to design an organization-specific documentation process and builds on a meta-model for EA documentation, which is implemented in an EA-repository prototype that supports the configuration and execution of the documentation techniques.
Abstract: The business capabilities of modern enterprises crucially rely on the enterprises' information systems and underlying IT infrastructure. Hence, optimization of the business-IT alignment is a key objective of Enterprise Architecture Management (EAM). To achieve this objective, EAM creates, maintains and analyzes a model of the current state of the Enterprise Architecture. This model covers different concepts reflecting both the business and the IT perspective and has to be constantly maintained in response to ongoing transformations of the enterprise. In practice, EA models grow large and are difficult to maintain, since many stakeholders from various backgrounds have to contribute architecture-relevant information. EAM literature and two practitioner surveys conducted by the authors indicate that EA model maintenance, in particular the manual documentation activities, poses one of the biggest challenges to EAM in practice. Current research approaches target the automation of the EA documentation based on specific data sources. These approaches, as our systematic literature review showed, do not consider enterprise specificity of the documentation context or the variability of the data sources from organization to organization. The approach presented in this article specifically accounts for these factors and presents a situational method for EA documentation. It builds on four process-supported documentation techniques which can be selected, composed and applied to design an organization-specific documentation process. The techniques build on a meta-model for EA documentation, which is implemented in an EA-repository prototype that supports the configuration and execution of the documentation techniques. We applied our documentation method assembly process at a German insurance company and report the findings from this case study in particular regarding practical applicability and usability of our approach.

Journal ArticleDOI
TL;DR: In this paper, a front-end inference algorithm that extracts abstract behavioral descriptions of methods, called contracts, which retain resource dependency information is integrated with a number of possible different back-ends that analyze contracts and derive deadlock information.
Abstract: We present a framework for statically detecting deadlocks in a concurrent object-oriented language with asynchronous method calls and cooperative scheduling of method activations. Since this language features recursion and dynamic resource creation, deadlock detection is extremely complex and state-of-the-art solutions either give imprecise answers or do not scale. In order to augment precision and scalability, we propose a modular framework that allows several techniques to be combined. The basic component of the framework is a front-end inference algorithm that extracts abstract behavioral descriptions of methods, called contracts, which retain resource dependency information. This component is integrated with a number of possible different back-ends that analyze contracts and derive deadlock information. As a proof-of-concept, we discuss two such back-ends: (1) an evaluator that computes a fixpoint semantics and (2) an evaluator using abstract model checking.
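The back-end idea of deriving deadlock information from dependency data can be sketched as a cycle check on a wait-for graph (a strong simplification: the paper's contracts capture far richer behavioural information, and the task names here are invented):

```python
# Simplified back-end sketch: deadlock as a cycle in a wait-for graph.
# `waits_for` maps each task to the tasks whose results it awaits.

def has_deadlock(waits_for):
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on DFS stack / done
    colour = {t: WHITE for t in waits_for}

    def dfs(task):
        colour[task] = GREY
        for nxt in waits_for.get(task, ()):
            if colour.get(nxt) == GREY:
                return True           # back edge: circular wait detected
            if colour.get(nxt) == WHITE and dfs(nxt):
                return True
        colour[task] = BLACK
        return False

    return any(colour[t] == WHITE and dfs(t) for t in waits_for)

# Two activations each awaiting the other's future: a circular wait.
print(has_deadlock({"task_a": ["task_b"], "task_b": ["task_a"]}))  # True
print(has_deadlock({"task_a": ["task_b"], "task_b": []}))          # False
```

The hard part the paper addresses is producing sound dependency information in the presence of recursion and dynamic resource creation, where a static graph like this is not directly available.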

Journal ArticleDOI
TL;DR: An efficient procedure based on heuristic search for checking well-known bisimulation equivalences for concurrent systems specified through process algebras, which tries to improve both the memory occupation and the time required for proving the equivalence of systems.
Abstract: Equivalence checking plays a crucial role in formal verification since it is a natural relation for expressing the matching of a system implementation against its specification. In this paper, we present an efficient procedure, based on heuristic search, for checking well-known bisimulation equivalences for concurrent systems specified through process algebras. The method aims to improve on other solutions in both memory occupation and the time required for proving the equivalence of systems. A prototype has been developed to evaluate the approach on several examples of concurrent system specifications.
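For intuition, a naive (non-heuristic) strong-bisimulation check by partition refinement looks as follows; the paper's contribution is precisely to do better than such baseline procedures, and the example transition systems below are invented:

```python
# Naive strong-bisimulation check by partition refinement: repeatedly
# split blocks of states until, for every label, all states in a block
# can reach exactly the same set of blocks.

def bisimilar(states, trans, s0, t0):
    labels = {a for (_, a, _) in trans}
    blocks = [set(states)]            # start with one block holding everything
    changed = True
    while changed:
        changed = False
        for a in labels:
            for block in list(blocks):
                # Signature: which blocks can a state reach via an `a`-step?
                def signature(s):
                    return frozenset(i for i, b in enumerate(blocks)
                                     for (p, l, q) in trans
                                     if p == s and l == a and q in b)
                groups = {}
                for s in block:
                    groups.setdefault(signature(s), set()).add(s)
                if len(groups) > 1:   # block is unstable: split it
                    blocks.remove(block)
                    blocks.extend(groups.values())
                    changed = True
    return any(s0 in b and t0 in b for b in blocks)

# p and q both do a single `a`-step; r can additionally do a `b`-step.
states = {"p", "p1", "q", "q1", "r", "r1", "r2"}
trans = {("p", "a", "p1"), ("q", "a", "q1"), ("r", "a", "r1"), ("r", "b", "r2")}
print(bisimilar(states, trans, "p", "q"))  # True
print(bisimilar(states, trans, "p", "r"))  # False
```

This exhaustive refinement over the full state space is exactly what heuristic search tries to avoid on large systems.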

Journal ArticleDOI
TL;DR: A description logic (DL)-based approach is proposed to represent both models and their relations in a common DL knowledge base and applies reasoning to detect inconsistencies in the variability of goal and feature models.
Abstract: Goal models represent requirements and intentions of a software system. They play an important role in the development life cycle of software product lines (SPLs). In the domain engineering phase, goal models guide the development of variability in SPLs by providing the rationale for the variability, while they are used for the configuration of SPLs in the application engineering phase. However, variability in SPLs, which is represented by feature models, usually has design and implementation-induced constraints. When those constraints are not aligned with variability in goal models, the configuration with goal models becomes error prone. To remedy this problem, we propose a description logic (DL)-based approach to represent both models and their relations in a common DL knowledge base. Moreover, we apply reasoning to detect inconsistencies in the variability of goal and feature models. A formal proof is provided to demonstrate the correctness of the reasoning approach. An empirical evaluation shows computational tractability of the inconsistency detection.

Journal ArticleDOI
TL;DR: This paper builds on a list of generic types of work-arounds found in practice, explores whether and how they can be detected by process mining techniques, and reports results obtained for four work-around types in five real-life processes.
Abstract: Business process work-arounds are specific forms of incompliant behavior, where employees intentionally decide to deviate from the required procedures although they are aware of them. Detecting and understanding the work-arounds performed can guide organizations in redesigning and improving their processes and support systems. Existing process mining techniques for compliance checking and diagnosis of incompliant behavior rely on the available information in event logs and emphasize technological capabilities for analyzing this information. They do not distinguish intentional incompliance and do not address the sources of this behavior. In contrast, the paper builds on a list of generic types of work-arounds found in practice and explores whether and how they can be detected by process mining techniques. Results obtained for four work-around types in five real-life processes are reported. The remaining two types are not reflected in event logs and cannot currently be detected by process mining. The detected work-around data are further analyzed for identifying correlations between the frequency of specific work-around types and properties of the processes and of specific activities. The analysis results promote the understanding of work-around situations and sources.

Journal ArticleDOI
TL;DR: A novel approach called ReqAligner is introduced that aids analysts to spot signs of duplication in use cases in an automated fashion and is applied to five real-world specifications, achieving promising results and identifying many sources of duplications in the use cases.
Abstract: Developing high-quality requirements specifications often demands a thoughtful analysis and an adequate level of expertise from analysts. Although requirements modeling techniques provide mechanisms for abstraction and clarity, fostering the reuse of shared functionality (e.g., via UML relationships for use cases), they are seldom employed in practice. A particular quality problem of textual requirements, such as use cases, is that of having duplicate pieces of functionality scattered across the specifications. Duplicate functionality can sometimes improve readability for end users, but hinders development-related tasks such as effort estimation, feature prioritization, and maintenance, among others. Unfortunately, inspecting textual requirements by hand in order to deal with redundant functionality can be an arduous, time-consuming, and error-prone activity for analysts. In this context, we introduce a novel approach called ReqAligner that aids analysts to spot signs of duplication in use cases in an automated fashion. To do so, ReqAligner combines several text processing techniques, such as a use case-aware classifier and a customized algorithm for sequence alignment. Essentially, the classifier converts the use cases into an abstract representation that consists of sequences of semantic actions, and then these sequences are compared pairwise in order to identify action matches, which become possible duplications. We have applied our technique to five real-world specifications, achieving promising results and identifying many sources of duplication in the use cases.
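The alignment step can be approximated with a stock sequence matcher (ReqAligner uses a customized alignment algorithm and a use case-aware classifier, neither of which is shown here; the action names are invented):

```python
# Sketch of the pairwise comparison step using Python's stdlib matcher:
# use cases abstracted to sequences of semantic actions, with matching
# runs flagged as possible duplicated functionality.

from difflib import SequenceMatcher

use_case_a = ["validate_user", "load_account", "update_balance", "log_event"]
use_case_b = ["validate_user", "load_account", "send_report"]

matcher = SequenceMatcher(a=use_case_a, b=use_case_b)
for match in matcher.get_matching_blocks():
    if match.size:                    # skip the zero-length sentinel block
        run = use_case_a[match.a:match.a + match.size]
        print("possible duplication:", run)
# possible duplication: ['validate_user', 'load_account']
```

Working on abstracted action sequences rather than raw text is what lets duplications survive superficial wording differences between use cases.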

Journal ArticleDOI
TL;DR: The past year of SoSyM was characterized by several changes: the community remains very sad about the loss of dear friend Robert France in February 2015, and Jeff Gray started the demanding job as new Editor-In-Chief for SoSyM in April.
Abstract: With the inception of SoSyM in 2001, we will this year celebrate its 15th anniversary! Over the past 14 volumes, SoSyM has published a total of 512 different articles and editorials (145 regular papers, 159 special section papers, 76 theme section papers, 45 editorials, 46 guest editorials, 34 expert voices, 1 industry voice, 4 discussion/overview papers, 2 errata). The journal is doing very well and recently received an Impact Factor of 1.408. In 2015, SoSyM published 79 articles and editorials (26 regular papers, 42 special section papers, 5 editorials, 6 guest editorials), which included 1580 pages (issues 1 and 2 were doubled in size to reduce the backlog of papers in the online pipeline). For the issues of 2016, Springer has again increased the number of pages per issue, from 256 pages in 2015 to 294 pages, so that we will publish at least 1176 pages this year. The number of paper downloads from the Springer SoSyM site continues to be higher than in the first decade of publication. There were 198 submissions to SoSyM during the 2015 calendar year. The submissions included 147 regular papers, 12 special section papers, 36 theme section papers, 2 industry voice submissions, and 1 overview submission. The acceptance rate over the past 12 months has been 21.8%. There were 21 desk rejects (all from regular submissions), which were returned within 4 days of submission. The average time to final decision (accept and reject) was 145 days. The past year of SoSyM was characterized by several changes. We are still very sad about the loss of our dear friend Robert France in February 2015, who was the founder of SoSyM and remained the SoSyM Editor-In-Chief until his passing. Jeff Gray started the demanding job as new Editor-In-Chief for SoSyM in April. Several Editors completed their term of service (Jeff Offutt and Franck Barbier); we appreciate their help!
We were also excited to announce that Timothy Lethbridge was added to the Editorial Board in 2015, joining our other new Editors, Esther Guerra and Yves Le Traon, who started their Editorial Board service in late 2014. Throughout the 2015 publication cycle, numerous actions were defined and completed as a result of discussions at the 2014 Editorial Board meeting held at MODELS 2014 in Valencia. SoSyM established several new awards, which were presented at MODELS 2015. For the first time, a journal-first option was available in collaboration …

Journal ArticleDOI
TL;DR: This work presents a static analysis-based technique for reverse engineering finite state machine models from a large subset of sequential Java programs that enumerates all feasible program paths in a class using symbolic execution and records an execution summary for each path.
Abstract: We present a static analysis-based technique for reverse engineering finite state machine models from a large subset of sequential Java programs. Our approach enumerates all feasible program paths in a class using symbolic execution and records an execution summary for each path. Subsequently, it creates states and transitions by analyzing symbolic execution summaries. Our approach also detects any unhandled exceptions.
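The synthesis idea behind the abstract can be illustrated with a deliberately tiny sketch (our own encoding and names, not the paper's implementation): each feasible path contributes a summary with a symbolic pre- and post-state predicate over the object's fields, and the state machine is assembled by making one state per distinct predicate and one transition per summary.

```python
# Hypothetical sketch: deriving a finite state machine from per-path
# symbolic execution summaries of a class. Summaries and predicates
# are hand-written toy data for a bounded stack with field `size`.

from collections import defaultdict

# (method, symbolic pre-state, symbolic post-state) per feasible path.
summaries = [
    ("push", "size==0", "size==1"),
    ("push", "size==1", "size==2"),
    ("pop",  "size==1", "size==0"),
    ("pop",  "size==2", "size==1"),
]

def synthesize_fsm(summaries):
    """One FSM state per distinct symbolic predicate, one transition
    per feasible path summary."""
    states = sorted({s for _, pre, post in summaries for s in (pre, post)})
    transitions = defaultdict(set)
    for method, pre, post in summaries:
        transitions[pre].add((method, post))
    return states, dict(transitions)

states, transitions = synthesize_fsm(summaries)
print(states)                        # three states for size 0, 1, 2
print(sorted(transitions["size==1"]))
```

The real technique derives the predicates from symbolic execution rather than taking them as given; the sketch only shows the grouping step from path summaries to states and transitions.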

Journal ArticleDOI
TL;DR: This paper shows how SPLs can be modelled in an incremental, modular fashion using Feature Nets, provides a Feature Nets variant that supports modelling dynamic SPLs, and proposes an analysis method for SPLs modelled as Feature Nets.
Abstract: Software product lines (SPLs) are diverse systems that are developed using a dual engineering process: (a) family engineering defines the commonality and variability among all members of the SPL, and (b) application engineering derives specific products based on the common foundation combined with a variable selection of features. The number of derivable products in an SPL can thus be exponential in the number of features. This inherent complexity poses two main challenges when it comes to modelling: firstly, the formalism used for modelling SPLs needs to be modular and scalable. Secondly, it should ensure that all products behave correctly by providing the ability to analyse and verify complex models efficiently. In this paper, we propose to integrate an established modelling formalism (Petri nets) with the domain of software product line engineering. To this end, we extend Petri nets to Feature Nets. While Petri nets provide a framework for formally modelling and verifying single software systems, Feature Nets offer the same sort of benefits for software product lines. We show how SPLs can be modelled in an incremental, modular fashion using Feature Nets, provide a Feature Nets variant that supports modelling dynamic SPLs, and propose an analysis method for SPLs modelled as Feature Nets. By facilitating the construction of a single model that includes the various behaviours exhibited by the products in an SPL, we make a significant step towards efficient and practical quality assurance methods for software product lines.
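The central mechanism, transitions annotated with conditions over features so that one net captures all product behaviours, can be sketched in a few lines (our own toy encoding, not the paper's formal definition): projecting the net onto a feature selection keeps exactly the transitions whose condition holds, yielding an ordinary Petri net for one product.

```python
# Illustrative Feature Net sketch: each transition is
# (name, input places, output places, application condition over features).
net = [
    ("brew",     {"idle"},    {"brewing"}, lambda f: True),
    ("add_milk", {"brewing"}, {"milky"},   lambda f: "Milk" in f),
    ("serve",    {"brewing"}, {"idle"},    lambda f: "Milk" not in f),
]

def project(net, features):
    """Derive one product's Petri net: keep only the transitions whose
    application condition is satisfied by the feature selection."""
    return [(n, i, o) for n, i, o, cond in net if cond(features)]

plain = project(net, set())
with_milk = project(net, {"Milk"})
print([t[0] for t in plain])       # transitions of the product without Milk
print([t[0] for t in with_milk])   # transitions of the product with Milk
```

The feature names and coffee-machine example are made up for illustration; the paper additionally handles dynamic feature selections and analysis over the combined net rather than per-product projections.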

Journal ArticleDOI
TL;DR: This paper extends an existing language evaluation framework in order to evaluate the support for choreographies in BPMN 2.0 and gives potential solutions that may be taken into account in future extensions or improvements to BPMN 2.0.
Abstract: The concept of choreography has emerged over the past years as a foundational concept for capturing and managing collaborative business processes. The latest version of the Business Process Modeling Notation (BPMN 2.0) adopts choreography as a first-class citizen. However, it remains an open question whether BPMN 2.0 is actually appropriate for capturing this concept. In this paper, we extend an existing language evaluation framework in order to evaluate the support for choreographies in BPMN 2.0. The framework provides a means of identifying the strengths and weaknesses of BPMN 2.0 for choreographies. We also give potential solutions that may be taken into account in future extensions or improvements to BPMN 2.0.

Journal ArticleDOI
TL;DR: This work proposes to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources, which also allows the chaining of soft links that is useful for modular applications.
Abstract: Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.
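The contrast between hard links and on-demand soft links can be illustrated with dictionaries standing in for model resources (our own naive sketch; the paper uses EMF with EMF-IncQuery and incremental query evaluation rather than this lookup-on-access scheme):

```python
# Two model fragments in separate resources; the reference is a plain
# identifier (a "soft link"), not a storage-specific hard link.
resource_a = {"funcA": {"allocatedTo": "cpu1"}}   # software function model
resource_b = {}                                   # hardware model, loaded later

def allocated_component(func):
    """Derived feature: resolve the soft link on demand; a temporarily
    dangling reference yields None instead of a load-time error."""
    return resource_b.get(resource_a[func]["allocatedTo"])

before = allocated_component("funcA")             # target resource not loaded yet
resource_b["cpu1"] = {"kind": "HardwareComponent"}
after = allocated_component("funcA")              # now resolves gracefully
print(before, after["kind"])
```

The point of the sketch is only the graceful handling of unresolved references; the efficiency claims in the abstract rest on the incremental maintenance of such links, which this toy does not model.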

Journal ArticleDOI
TL;DR: Invariant-based techniques are presented that are capable of verifying safety properties and deadlock-freedom of sub-systems of the functional level of the DALA autonomous robot; this work goes far beyond the capacity of existing monolithic verification tools.
Abstract: We propose invariant-based techniques for the efficient verification of safety and deadlock-freedom properties of component-based systems. Components and their interactions are described in the BIP language. Global invariants of composite components are obtained by combining local invariants of their constituent components with interaction invariants that take interactions into account. We study new techniques for computing interaction invariants. Some of these techniques are incremental, i.e., interaction invariants of a composite hierarchically structured component are computed by reusing invariants of its constituents. We formalize incremental construction of components in the BIP language as the process of building progressively complex components by adding interactions (synchronization constraints) to atomic components. We provide sufficient conditions ensuring preservation of invariants when new interactions are added. When these conditions are not satisfied, we propose methods for generating new invariants in an incremental manner by reusing existing invariants from the constituents in the incremental construction. The reuse of existing invariants reduces considerably the overall verification effort. The techniques have been implemented in the D-Finder toolset. Among the experiments conducted, we have been capable of verifying safety properties and deadlock-freedom of sub-systems of the functional level of the DALA autonomous robot. This work goes far beyond the capacity of existing monolithic verification tools.
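The core idea, a global invariant assembled as the conjunction of local component invariants and an interaction invariant, which then implies a safety property without exploring the full product state space, can be shown on a deliberately tiny finite example (our own encoding, not the BIP/D-Finder machinery):

```python
from itertools import product

# Two components, each with states 0..2.
local1 = lambda s1: s1 in {0, 1}        # local invariant of component 1
local2 = lambda s2: s2 in {0, 1}        # local invariant of component 2
interact = lambda s1, s2: s1 == s2      # interaction invariant from the
                                        # synchronization "advance together"

def global_invariant(s1, s2):
    """Conjunction of the local invariants and the interaction invariant."""
    return local1(s1) and local2(s2) and interact(s1, s2)

# Safety property: the bad configuration (0, 1) must be unreachable.
bad = lambda s1, s2: s1 == 0 and s2 == 1
safe = all(not bad(s1, s2)
           for s1, s2 in product(range(3), repeat=2)
           if global_invariant(s1, s2))
print(safe)   # the global invariant already excludes the bad state
```

In the paper the invariants are computed (and incrementally reused) rather than given, and the check is symbolic; the sketch only demonstrates why a conjunction of cheap invariants can discharge a safety property.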

Journal ArticleDOI
TL;DR: This paper defines elicitation axes for the documentation of BI-specific requirements, gives a list of six BI entities that should be accounted for to operationalize business monitoring, and provides notations for the modeling of these entities.
Abstract: Business intelligence (BI) is perceived as a critical activity for organizations and is increasingly discussed in requirements engineering (RE). RE can contribute to the successful implementation of BI systems by assisting the identification and analysis of such systems' requirements and the production of the specification of the system to be. Within RE for BI systems, we focus in this paper on the following questions: (i) how the expectations of a BI system's stakeholders can be translated into accurate BI requirements, and (ii) how do we operationalize specifically these requirements in a system specification? In response, we define elicitation axes for the documentation of BI-specific requirements, give a list of six BI entities that we argue should be accounted for to operationalize business monitoring, and provide notations for the modeling of these entities. We survey important contributions of BI to define elicitation axes, adapt existing BI notations issued from RE literature, and complement them with new BI-specific notations. Using the i* framework, we illustrate the application of our proposal using a real-world case study.

Journal ArticleDOI
TL;DR: This paper presents an automated approach for synthesizing a UML state machine that models the life cycle of an object occurring in different states in a UML activity diagram, making life cycles that were modeled implicitly in activity diagrams explicit.
Abstract: Unified modeling language (UML) activity diagrams can model the flow of stateful business objects among activities, implicitly specifying the life cycles of those objects. The actual object life cycles are typically expressed in UML state machines. The implicit life cycles in UML activity diagrams need to be discovered in order to derive the actual object life cycles or to check the consistency with an existing life cycle. This paper presents an automated approach for synthesizing a UML state machine modeling the life cycle of an object that occurs in different states in a UML activity diagram. The generated state machines can contain parallelism, loops, and cross-synchronization. The approach makes life cycles that have been modeled implicitly in activity diagrams explicit. The synthesis approach has been implemented using a graph transformation tool and has been applied in several case studies.
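A minimal sketch of the synthesis idea (our own simplification, ignoring the parallelism, loops, and cross-synchronization the paper handles): actions in the activity diagram consume and produce a business object in named states, and each such action induces one transition of the object's explicit state machine.

```python
# (action, object state consumed, object state produced) for object "Order";
# None marks the object's creation. All names are made up for illustration.
activity = [
    ("receive", None,       "created"),
    ("approve", "created",  "approved"),
    ("reject",  "created",  "rejected"),
    ("ship",    "approved", "shipped"),
]

def synthesize_life_cycle(activity):
    """Make the implicit object life cycle explicit: states are the
    object-node annotations, transitions are labelled by actions."""
    states = {s for _, pre, post in activity for s in (pre, post) if s}
    transitions = [(pre or "INITIAL", act, post) for act, pre, post in activity]
    return states, transitions

states, transitions = synthesize_life_cycle(activity)
print(sorted(states))
print(transitions)
```

The graph-transformation-based implementation in the paper operates on actual UML models; this sketch only conveys the mapping from object-flow annotations to state-machine transitions.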

Journal ArticleDOI
TL;DR: The idea of using a Web browser as an editing platform, coupled with the storage options available within the cloud, provides powerful new capabilities that have transformed the way the authors interact with colleagues to design and create documents, as well as all other sorts of artifacts.
Abstract: In March 2006, Google purchased Upstartle to gain access to their browser-based word processor called Writely [1]. This acquisition from over a decade ago led to what we now know as Google Docs, which ushered in a new form of collaborative authoring tools. The idea of using a Web browser as an editing platform, coupled with the storage options available within the cloud, provides powerful new capabilities that have transformed the way we interact with colleagues to design and create documents, as well as all other sorts of artifacts. Specialized text processing solutions, like the LaTeX-focused Overleaf environment [2], bring a fresh new approach to collaboration using long-standing traditional tools. Furthermore, browser- and cloud-based authoring tools have penetrated many domains. For example, in computer science education, tools such as Scratch help new programmers learn block-based coding in a browser, where programs are stored in the cloud with a large repository (over 13.M shared Scratch programs are available at the time of this writing) of user-shared examples [3]. There are multiple benefits of combining browser-based authoring environments with a cloud service. An obvious advantage is the platform independence that can be achieved through a browser, allowing the tool implementers to focus more on the core tool features, instead of the implementation morass of reproducing the same tool functionality across different platforms (browser incompatibility issues notwithstanding). Cloud services not only allow resources

Journal ArticleDOI
TL;DR: In the presence of increasing complexity, simulation has become a vital tool for obtaining additional information about a subset of the world, such as the interactions of all entities, processes, life forms, and other key components that aid in understanding a particular context.
Abstract: In the presence of increasing complexity, simulation has become a vital tool for obtaining additional information about a subset of the world, such as the interactions of all entities, processes, life forms, and other key components that aid in understanding a particular context. A key advantage of simulation is the Principle of Substitution, which is perhaps best summarized by a quote from Marvin Minsky:

Journal ArticleDOI
TL;DR: The verification approach uses a translation of Simulink models to sequential programs that can be verified using traditional software verification techniques, and detailed discussions about the correctness of each step in the verification process are provided.
Abstract: This paper presents an approach to modular contract-based verification of discrete-time multi-rate Simulink models. The verification approach uses a translation of Simulink models to sequential programs that can then be verified using traditional software verification techniques. Automatic generation of the proof obligations needed for verification of correctness with respect to contracts, and automatic proofs are also discussed. Furthermore, the paper provides detailed discussions about the correctness of each step in the verification process. The verification approach is demonstrated on a case study involving control software for prevention of pressure peaks in hydraulics systems.
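As a rough illustration of the translation step (a made-up controller, not the paper's hydraulics case study or toolchain), a discrete-time feedback loop can be rendered as a sequential step function with the contract expressed as executable pre- and postconditions:

```python
def controller_step(pressure, state):
    """One sample of a hypothetical pressure limiter, translated block by
    block into straight-line code: unit delay + gain, then saturation."""
    assert pressure >= 0.0                      # contract: precondition
    filtered = 0.5 * state + 0.5 * pressure     # first-order filter blocks
    output = min(filtered, 100.0)               # saturation block
    assert output <= 100.0                      # contract: postcondition
    return output, filtered                     # filtered feeds the unit delay

# Simulate three samples; the state variable carries the unit-delay value.
state = 0.0
for p in [40.0, 250.0, 250.0]:
    out, state = controller_step(p, state)
print(round(out, 1))   # -> 100.0 (output saturated at the contract bound)
```

In the verification approach described, such sequential renderings are analyzed statically (proof obligations discharged by a prover) rather than checked at runtime; the runtime asserts here merely stand in for the contract.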