
Showing papers presented at "International Conference on Conceptual Modeling in 2016"


Book ChapterDOI
14 Nov 2016
TL;DR: This article describes a mapping from UML/OCL conceptual schemas to Blueprints, an abstraction layer on top of a variety of graph databases, and Gremlin, a graph traversal language, via an intermediate Graph metamodel.
Abstract: The need to store and manipulate large volumes of (unstructured) data has led to the development of several NoSQL databases for better scalability. Graph databases are a particular kind of NoSQL database that have proven their efficiency in storing and querying highly interconnected data, and they have become a promising solution for multiple applications. While the mapping of conceptual schemas to relational databases is a well-studied field of research, only a few solutions target conceptual modeling for NoSQL databases, and even fewer focus on graph databases. This is especially true when dealing with the mapping of business rules and constraints in the conceptual schema. In this article we describe a mapping from UML/OCL conceptual schemas to Blueprints, an abstraction layer on top of a variety of graph databases, and Gremlin, a graph traversal language, via an intermediate Graph metamodel. Tool support is fully available.
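To give a flavor of such a mapping, here is a hedged sketch (not the paper's actual transformation; the UmlClass shape, helper names, and the OCL-to-Gremlin lowering are invented for illustration): a UML class becomes a vertex label, its attributes become vertex properties, and a size-bounded association yields a Gremlin traversal that finds invariant violations.

```python
# Illustrative only: lowering a toy UML class and one OCL-style size
# invariant (self.employees->size() <= 50) to Gremlin text.
from dataclasses import dataclass, field

@dataclass
class UmlClass:
    name: str                                         # becomes the vertex label
    attributes: list = field(default_factory=list)    # become vertex properties
    associations: list = field(default_factory=list)  # become edge labels

def vertex_creation(cls: UmlClass) -> str:
    """Emit a Gremlin statement creating a vertex for one instance."""
    props = "".join(f".property('{a}', {a})" for a in cls.attributes)
    return f"g.addV('{cls.name}'){props}"

def size_invariant(cls: UmlClass, assoc: str, upper: int) -> str:
    """Emit a Gremlin traversal returning vertices that violate the bound."""
    return f"g.V().hasLabel('{cls.name}').where(out('{assoc}').count().is(gt({upper})))"

dept = UmlClass("Department", ["name"], ["employees"])
print(vertex_creation(dept))
print(size_invariant(dept, "employees", 50))
```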

67 citations


Book ChapterDOI
14 Nov 2016
TL;DR: This paper explains the evolution of a Conceptual Schema of the Human Genome (CSHG), which seeks to provide a clear and precise understanding of the human genome, and shows how the model has evolved over time as better forms of representation were discovered.
Abstract: The objective of this work is to present the benefits of applying Conceptual Modeling (CM) in complex domains, such as genomics. This paper explains the evolution of a Conceptual Schema of the Human Genome (CSHG), which seeks to provide a clear and precise understanding of the human genome. We want to highlight the advantages of applying CM in a complex domain such as Genomic Information Systems (GeIS). We show how this model has evolved over time as we discovered better forms of representation. As we advanced in exploring the domain, we understood that we should extend the model, incorporating the newly detected concepts. Here we present and discuss the evolution leading to the current version (CSHG v2). A solution based on conceptual models gives a clear definition of the domain, with direct implications for the medical context.

50 citations


Book ChapterDOI
14 Nov 2016
TL;DR: In this article, the authors propose an alternative approach that separates the concern of producing accurate models from that of ensuring their structuredness: a well-known heuristic first discovers more accurate but sometimes unstructured (and even unsound) process models, and the resulting model is then transformed into a structured one.
Abstract: This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
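As a sketch of the first phase only (the structuring phase is the paper's contribution and is not reproduced; the toy log below is invented), a Heuristics-Miner-style dependency measure can be computed from directly-follows counts in an event log:

```python
# Classic dependency measure: (|a>b| - |b>a|) / (|a>b| + |b>a| + 1),
# where |a>b| counts how often activity a is directly followed by b.
from collections import Counter

def dependency_measures(log):
    """log: list of traces, each a list of activity names."""
    follows = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    dep = {}
    for (a, b), n_ab in follows.items():
        n_ba = follows.get((b, a), 0)
        # Values close to 1 mean a reliably precedes b; near 0 means no clear order.
        dep[(a, b)] = (n_ab - n_ba) / (n_ab + n_ba + 1)
    return dep

log = [list("abcd"), list("acbd"), list("abcd"), list("aed")]
for pair, score in sorted(dependency_measures(log).items()):
    print(pair, round(score, 2))
```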

48 citations


Book ChapterDOI
14 Nov 2016
TL;DR: In this article, the authors propose a fully automated and scalable method for online detection of process drift from event streams, which performs statistical tests over distributions of behavioral relations between events, as observed in two adjacent windows of adaptive size, sliding along with the stream.
Abstract: Existing business process drift detection methods do not work with event streams. As such, they are designed to detect inter-trace drifts only, i.e. drifts that occur between complete process executions (traces), as recorded in event logs. However, process drift may also occur during the execution of a process, and may impact ongoing executions. Existing methods either do not detect such intra-trace drifts, or detect them with a long delay. Moreover, they do not perform well with unpredictable processes, i.e. processes whose logs exhibit a high ratio of distinct executions to the total number of executions. We address these two issues by proposing a fully automated and scalable method for online detection of process drift from event streams. We perform statistical tests over distributions of behavioral relations between events, as observed in two adjacent windows of adaptive size, sliding along with the stream. An extensive evaluation on synthetic and real-life logs shows that our method is fast and accurate in the detection of typical change patterns, and performs significantly better than the state of the art.
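A minimal sketch of the statistical core follows (hedged: windows here are fixed lists of directly-follows observations and the test is a plain chi-square, whereas the paper adapts window size along the stream and uses its own choice of behavioral relations):

```python
# Compare the distribution of directly-follows relations in two adjacent
# windows; a small p-value signals a candidate drift point.
from collections import Counter
from scipy.stats import chi2_contingency

def drift_pvalue(window_ref, window_det):
    """Each window is a list of (activity, next_activity) observations."""
    ref, det = Counter(window_ref), Counter(window_det)
    relations = sorted(set(ref) | set(det))
    # Contingency table: rows = windows, columns = behavioral relations.
    table = [[ref.get(r, 0) + 1 for r in relations],   # +1 smoothing avoids zero cells
             [det.get(r, 0) + 1 for r in relations]]
    _, p, _, _ = chi2_contingency(table)
    return p

before = [("a", "b"), ("b", "c")] * 50
after  = [("a", "c"), ("c", "b")] * 50
print(drift_pvalue(before, before))  # high p-value: no drift
print(drift_pvalue(before, after))   # low p-value: drift suspected
```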

47 citations


Book ChapterDOI
14 Nov 2016
TL;DR: This paper critically surveys the existing literature on ontology-driven conceptual modeling in order to identify the kind of research that has been performed over the years and to establish the current state of the art by describing the use and application of ontologies.
Abstract: In this paper, we critically survey the existing literature on ontology-driven conceptual modeling in order to identify the kind of research that has been performed over the years and to establish its current state of the art by describing the use and application of ontologies in mapping phenomena to models. We are interested in whether there are connections between representing certain kinds of phenomena and the choice of ontologies and conceptual modeling languages. To understand and identify gaps and research opportunities, our literature study is conducted in the form of a systematic mapping review, which aims at structuring and classifying the area under investigation. Our results indicate several research gaps that should be addressed, which we translate into future research opportunities.

45 citations


Book ChapterDOI
14 Nov 2016
TL;DR: This paper proposes a novel design method that adopts the classical 3-phase design used for relational databases while considering the new features brought by NOSQL, encompassing relational and co-relational design altogether.
Abstract: Big Data has recently gained popularity and has strongly questioned relational databases as universal storage systems, especially in the presence of analytical workloads. As a result, co-relational alternatives, commonly known as NOSQL (Not Only SQL) databases, are extensively used for Big Data. As the primary focus of NOSQL is on performance, NOSQL databases are designed directly at the physical level, and consequently the resulting schema is tailored to the dataset and access patterns of the problem at hand. However, we believe that NOSQL design can also benefit from traditional design approaches. In this paper we present a method to design databases for analytical workloads. Starting from the conceptual model and adopting the classical 3-phase design used for relational databases, we propose a novel design method that considers the new features brought by NOSQL and encompasses relational and co-relational design altogether.
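As an illustrative sketch of one logical-design decision such a method must make (the entity shapes and the embed rule below are invented, not the paper's algorithm), a one-to-many association that the workload always reads together with its owner can be folded into a single aggregate, following common co-relational design practice:

```python
# Derive a document-style aggregate from a conceptual one-to-many association.
def to_aggregate(entity, nested, embed: bool):
    """entity/nested: (name, [attributes]); embed is decided from the workload."""
    name, attrs = entity
    n_name, n_attrs = nested
    doc = {a: "..." for a in attrs}
    if embed:   # read together with the owner -> embed as an array of sub-documents
        doc[n_name + "s"] = [{a: "..." for a in n_attrs}]
    else:       # accessed independently -> keep references instead
        doc[n_name + "_ids"] = ["<id>"]
    return {name: doc}

print(to_aggregate(("Order", ["date", "total"]), ("LineItem", ["sku", "qty"]), embed=True))
```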

40 citations


Book ChapterDOI
14 Nov 2016
TL;DR: The RationalGRL framework is developed, in which argument diagrams can be mapped to goal models and the results of evaluating arguments and their counterarguments are integrated with GRL initial satisfaction values.
Abstract: Goal modeling languages, such as i* and the Goal-oriented Requirements Language (GRL), capture and analyze high-level goals and their relationships with lower level goals and tasks. However, in such models, the rationalization behind these goals and tasks and the selection of alternatives are usually left implicit. To better integrate goal models and their rationalization, we develop the RationalGRL framework, in which argument diagrams can be mapped to goal models. Moreover, we integrate the result of the evaluation of arguments and their counterarguments with GRL initial satisfaction values. We develop an interface between the argument web tools OVA and TOAST and the Eclipse-based tool for GRL called jUCMNav. We demonstrate our methodology with a case study from the Schiphol Group.
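A hedged sketch of the integration idea follows (assumptions: the acceptance status of each argument has already been computed by an argumentation tool, and the -100..100 scale and mapping are illustrative, not the RationalGRL definitions):

```python
# Map argument acceptance to initial GRL satisfaction values on the
# goal-model elements the arguments are about.
def grl_initial_values(arguments):
    """arguments: list of (goal_element, accepted: bool) pairs."""
    values = {}
    for element, accepted in arguments:
        # An accepted argument supports the element; a defeated one denies it.
        values[element] = 100 if accepted else -100
    return values

print(grl_initial_values([("Goal: fast check-in", True),
                          ("Task: manual screening", False)]))
```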

35 citations


Book ChapterDOI
14 Nov 2016
TL;DR: This paper proposes to transform specific elements of a US set into a Use-Case Diagram using the granularity information obtained through tagging; in practice, the transformation involves continuous round-tripping between the US and UC views.
Abstract: User Stories (US) are mostly used as the basis for representing requirements in agile development. Written in a direct manner, US fail to produce a visual representation of the main system-to-be functions. A Use-Case Diagram (UCD), on the other hand, intends to provide such a view. Approaches that map US sets to a UCD have been proposed; they, however, consider every US as a Use Case (UC). Nevertheless, a valid UC should not be an atomic task or a sub-process but should instead enclose an entire scenario of the system's use. A unified model of US templates for tagging US sets was previously built; within functional elements, it notably distinguishes granularity levels. In this paper, we propose to transform specific elements of a US set into a UCD using the granularity information obtained through tagging. In practice, such a transformation involves continuous round-tripping between the US and UC views; a CASE tool supports this.
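To make the transformation idea concrete, here is a minimal sketch under assumed tags (the granularity labels "scenario" and "atomic" and the data shape are invented, not the unified-model vocabulary of the paper): only stories tagged at scenario granularity become use cases, while atomic ones are attached as steps of their parent scenario.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    actor: str
    action: str
    granularity: str   # "scenario" or "atomic"
    parent: str = ""   # scenario this atomic step belongs to

def to_use_case_diagram(stories):
    # Scenario-level stories become use cases; atomic ones become their steps.
    use_cases = {s.action: {"actor": s.actor, "steps": []}
                 for s in stories if s.granularity == "scenario"}
    for s in stories:
        if s.granularity == "atomic" and s.parent in use_cases:
            use_cases[s.parent]["steps"].append(s.action)
    return use_cases

stories = [UserStory("customer", "withdraw cash", "scenario"),
           UserStory("customer", "insert card", "atomic", "withdraw cash"),
           UserStory("customer", "enter PIN", "atomic", "withdraw cash")]
print(to_use_case_diagram(stories))
```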

35 citations


Book ChapterDOI
14 Nov 2016
TL;DR: This work revisits one of the earliest such approaches to multi-level modeling, Telos, and investigates what needs to be added to its axioms to get a true multi-level modeling language.
Abstract: Multi-level modeling aims to reduce redundancy in data models by defining properties at the right abstraction level and inheriting them to more specific levels. We revisit one of the earliest such approaches, Telos, and investigate what needs to be added to its axioms to get a true multi-level modeling language. Unlike previous approaches, we define levels not with numeric potencies but with hierarchies of so-called most general instances.

31 citations


Book ChapterDOI
14 Nov 2016
TL;DR: A novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity is introduced and evaluated experimentally.
Abstract: The majority of practitioners express software requirements using natural text notations such as user stories. Despite the readability of text, it is hard for people to build an accurate mental image of the most relevant entities and relationships. Even converting requirements to conceptual models is not sufficient: as the number of requirements and concepts grows, obtaining a holistic view of the requirements becomes increasingly difficult and, eventually, practically impossible. In this paper, we introduce and experiment with a novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity. We build on two pillars: (i) clustering techniques for grouping elements into coherent sets so that a simplified overview of the concepts can be created, and (ii) state-of-the-art, corpus-based semantic relatedness algorithms between words to measure the extent to which two concepts are related. We build a proof-of-concept tool and evaluate our approach by applying it to requirements from four real-world data sets.
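A sketch of the two pillars under simplifying assumptions (the relatedness function below is a token-overlap stand-in for the corpus-based algorithms the paper uses, and the concept list and threshold are invented): pairwise relatedness feeds an agglomerative clustering that groups concepts into a coarse overview.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def relatedness(a: str, b: str) -> float:
    """Stand-in for a corpus-based semantic relatedness measure (0..1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

concepts = ["payment method", "payment provider", "user account", "account settings"]
n = len(concepts)
# Distance = 1 - relatedness; symmetric with zero diagonal.
dist = np.array([[1.0 - relatedness(concepts[i], concepts[j]) for j in range(n)]
                 for i in range(n)])
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist), method="average"), t=0.8, criterion="distance")
for concept, cluster in zip(concepts, labels):
    print(cluster, concept)
```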

23 citations


Book ChapterDOI
14 Nov 2016
TL;DR: A UML stereotype for component diagrams that allows for representing ontologies as a set of interconnected Ontology Design Patterns, aimed at supporting the communication between domain experts and ontology engineers is presented.
Abstract: The contribution of this paper is twofold: (i) a UML stereotype for component diagrams that allows for representing ontologies as a set of interconnected Ontology Design Patterns, aimed at supporting the communication between domain experts and ontology engineers; (ii) an analysis of possible approaches to ontology reuse and the definition of four methods according to their impact on the sustainability and stability of the resulting ontologies and knowledge bases. To conceptually prove the effectiveness of our proposals, we present two real LOD projects.

Book ChapterDOI
14 Nov 2016
TL;DR: This keynote explores the argument by delimiting the notion and scope of conceptual modeling, and by introducing and discussing two possible scenarios of fruitful application to better understand why conceptual modeling can help to manage the social challenges of the world of the emerging information era.
Abstract: Our strong capability for conceptualization makes us, human beings, different from any other species on our planet. We, as conceptual modelers, should apply this fascinating capability in the right direction to make it play an essential role in the design of the world to come. What that “right direction” means requires a challenging discussion. Halfway between the need for a sound philosophical characterization and an effective, practical computer science application, conceptual modeling emerges as the ideal discipline for understanding life and improving our life style. This keynote explores this argument by delimiting the notion and scope of conceptual modeling, and by introducing and discussing two possible scenarios of fruitful application. The first is oriented to better understanding why conceptual modeling can help to manage the social challenges of the emerging information era, and how the world to come could benefit from it. The second focuses on how understanding the human genome can open new ways to go beyond what we can consider “traditional Homo sapiens capabilities”, with special implications in the health domain and the new precision medicine.

Book ChapterDOI
14 Nov 2016
TL;DR: The aim of this paper is to highlight the problems that bioinformaticians have to face when they integrate information from different genomic databases, and to identify and characterize those problems in order to understand which ones hinder the research process.
Abstract: Due to the complexity of genomic information and the vast amount of data produced every day, the genomic information accessible on the web has become very difficult to integrate, which hinders the research process. Using knowledge from the Data Quality field and after a specific study of a set of genomic databases, we have found problems related to six Data Quality dimensions. The aim of this paper is to highlight the problems that bioinformaticians face when they integrate information from different genomic databases. The contribution of this paper is to identify and characterize those problems in order to understand which ones hinder the research process and increase the time researchers waste on this task.

Book ChapterDOI
14 Nov 2016
TL;DR: This paper proposes a modeling framework (a set of metamodels and a set of design catalogues) for requirements analysis of data analytics systems that consists of three complementary modeling views: business view, analytics design view, and data preparation view.
Abstract: Data analytics is an essential element for success in modern enterprises. Nonetheless, to effectively design and implement analytics systems is a non-trivial task. This paper proposes a modeling framework (a set of metamodels and a set of design catalogues) for requirements analysis of data analytics systems. It consists of three complementary modeling views: business view, analytics design view, and data preparation view. These views are linked together and act as a bridge between enterprise strategies, analytics algorithms, and data preparation activities. The framework comes with a set of catalogues that codify and represent an organized body of business analytics design knowledge. The framework has been applied to three real-world case studies and findings are discussed.

Book ChapterDOI
14 Nov 2016
TL;DR: A novel evaluation method is proposed that builds on the assessment of multiple annotators to define probabilistic notions of precision and recall for process model matching techniques; it assigns different ranks to the matching techniques from the contest and yields more detailed insights into their performance.
Abstract: Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to evaluate the performance of process model matching techniques. Often, not even humans can agree on a set of correct correspondences. Current evaluation methods, however, require a binary gold standard, which clearly defines which correspondences are correct. The disadvantage of this evaluation method is that it does not take the true complexity of the matching problem into account and does not fairly assess the capabilities of a matching technique. In this paper, we propose a novel evaluation method for process model matching techniques. In particular, we build on the assessment of multiple annotators to define probabilistic notions of precision and recall. We use the dataset and the results of the Process Model Matching Contest 2015 to assess and compare our evaluation method. We find that our probabilistic evaluation method assigns different ranks to the matching techniques from the contest and allows us to gain more detailed insights into their performance.
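A hedged reconstruction of the idea (the paper's exact definitions may differ; the vote data below is invented): each candidate correspondence gets a support equal to the fraction of annotators who approve it, and precision and recall are computed against that probabilistic gold standard instead of a binary one.

```python
def support(votes):
    """votes: dict correspondence -> list of annotator booleans."""
    return {c: sum(v) / len(v) for c, v in votes.items()}

def prob_precision_recall(matches, votes):
    s = support(votes)
    matched = [s.get(c, 0.0) for c in matches]
    # Precision: average support of what the matcher returned.
    precision = sum(matched) / len(matches) if matches else 0.0
    # Recall: retrieved support relative to the total support available.
    recall = sum(matched) / sum(s.values()) if s else 0.0
    return precision, recall

votes = {("Check order", "Verify order"): [True, True, True],
         ("Ship goods", "Send invoice"): [True, False, False]}
matches = [("Check order", "Verify order"), ("Ship goods", "Send invoice")]
print(prob_precision_recall(matches, votes))  # -> (0.666..., 1.0)
```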

Book ChapterDOI
14 Nov 2016
TL;DR: This paper introduces personalization for Walmart online grocery, and presents a multi-level basket recommendation system that incorporates both the customers’ general and subtle preferences into decisions.
Abstract: Food is so personal. Each individual has her own shopping characteristics. In this paper, we introduce personalization for Walmart online grocery. Our contribution is twofold. First, we study the shopping behaviors of Walmart online grocery customers. In contrast to traditional online shopping, grocery shopping demonstrates more repeated and frequent purchases with large orders. Secondly, we present a multi-level basket recommendation system. In this system, unlike typical recommender systems which usually concentrate on single item or bundle recommendations, we analyze a customer’s shopping basket holistically to understand her shopping tasks. We then use multi-level co-bought models to recommend items for each of these purposes. At the stage of selecting particular items, we incorporate both the customers’ general and subtle preferences into decisions. We finally recommend the customer a series of items at checkout. Offline experiments show our system can reach an 11% item hit rate, a 40% subcategory hit rate and a 70% category hit rate. Online tests show it can reach more than a 25% order hit rate.
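An illustrative co-bought sketch, not the production system (the basket data is invented, and only the item level is shown; the paper applies the same idea at subcategory and category levels): count how often item pairs appear in the same basket, then recommend, for each item in the current basket, the items most frequently bought together with it.

```python
from collections import Counter
from itertools import combinations

def cobought_counts(baskets):
    """Symmetric co-occurrence counts over historical baskets."""
    counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(basket, counts, k=2):
    """Score items co-bought with anything in the basket, excluding the basket itself."""
    scores = Counter()
    for item in basket:
        for (a, b), n in counts.items():
            if a == item and b not in basket:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

history = [["milk", "bread", "eggs"], ["milk", "bread"], ["bread", "butter"]]
print(recommend(["bread"], cobought_counts(history)))  # e.g. ['milk', 'eggs']
```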

Book ChapterDOI
14 Nov 2016
TL;DR: This work presents a semi-automated treatment of regulatory texts by automating, in unison, the key steps in fact-orientation and relation extraction, and it utilizes domain models in learning to identify rules from the text.
Abstract: Modern enterprises need to treat regulatory compliance in a holistic and maximally automated manner, given the stakes and complexity involved. The ability to derive models of regulations in a given domain from natural language texts is vital in such a treatment. Existing approaches automate regulatory rule extraction with a restricted use of domain models, counting on the knowledge and efforts of domain experts. We present a semi-automated treatment of regulatory texts by automating, in unison, the key steps in fact-orientation and relation extraction. In addition, we utilize the domain models in learning to identify rules from the text. The key benefit of our approach is that it can be applied to any legal text with a considerably reduced burden on domain experts. Early results are encouraging and pave the way for further explorations.

Book ChapterDOI
14 Nov 2016
TL;DR: It is shown how a number of well-formedness conditions concerning an assignment of referring expressions can be efficiently diagnosed, and how the above types attached to classes allow a concrete relational schema and SQL queries over it to be derived from a combination of the conceptual schema and queries over it.
Abstract: We apply recent work on referring expression types to the issue of identification in Conceptual Modelling. In particular, we consider how such types yield a separation of concerns in a setting where an Information System based on a conceptual schema is to be mapped to a relational schema plus SQL queries. We start from a simple object-centered representation (as in semantic data models), where naming is not an issue because everything is self-identified (possibly using surrogates). We then allow the analyst to attach to every class a preferred “referring expression type”, and to specify uniqueness constraints in the form of generalized functional dependencies. We show (1) how a number of well-formedness conditions concerning an assignment of referring expressions can be efficiently diagnosed, and (2) how the above types attached to classes allow a concrete relational schema and SQL queries over it to be derived from a combination of the conceptual schema and queries over it.
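A sketch of the derivation idea under assumed inputs (the class, attributes, and SQL shapes below are illustrative, not the paper's algorithm): a class with a preferred referring expression type, here a list of attributes, yields a table with a surrogate key plus a UNIQUE constraint over that expression, and object lookups become WHERE clauses over the same attributes.

```python
def derive_table(cls, attrs, ref_expr):
    """Emit a table with a surrogate key and a uniqueness constraint
    enforcing the class's referring expression type."""
    cols = ",\n  ".join(f"{a} TEXT NOT NULL" for a in attrs)
    return (f"CREATE TABLE {cls} (\n  id INTEGER PRIMARY KEY,  -- surrogate\n"
            f"  {cols},\n  UNIQUE ({', '.join(ref_expr)})\n);")

def lookup_query(cls, ref_expr):
    """Emit the SQL that identifies an object by its referring expression."""
    where = " AND ".join(f"{a} = ?" for a in ref_expr)
    return f"SELECT * FROM {cls} WHERE {where};"

print(derive_table("Student", ["name", "university", "email"], ["name", "university"]))
print(lookup_query("Student", ["name", "university"]))
```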

Book ChapterDOI
14 Nov 2016
TL;DR: An experiment comparatively evaluating Azzurra and BPMN suggests that Azzurra is better suited than BPMN for representing unstructured business processes.
Abstract: Imperative process languages, such as BPMN, describe business processes in terms of collections of activities and control flows among them. Despite their popularity, such languages remain useful mostly for structured processes whose flow of activities is well-known and does not vary greatly. For unstructured processes, on the other hand, the verdict is still out as to the best way to represent them. In our previous work, we have proposed Azzurra, a specification language for business processes founded on social concepts, such as roles, agents and commitments. In this paper, we present the results of an experiment that comparatively evaluates Azzurra and BPMN in terms of their ability to represent structured and unstructured processes. Our results suggest that Azzurra is better suited than BPMN for unstructured business processes.

Book ChapterDOI
14 Nov 2016
TL;DR: This work explores the applicability of goal models to conceptualize strategic business problems and capture viable alternatives in support of formal strategic decision-making, and shows through a comparative study how analysis can be conducted on a realistic case adopted from the literature using existing goal modeling techniques.
Abstract: Business strategies aim to operationalize an enterprise’s mission and visions by defining initiatives and choosing among alternative courses of action through some form of strategic analysis. However, existing analysis techniques (e.g., SWOT analysis, Five Forces Model) are invariably informal and sketchy, in sharp contrast to the formal and algorithmic counterparts developed in Conceptual Modeling and Software Engineering. Among such techniques, goal models and goal reasoning have become very prominent over the past twenty years, helping to model stakeholder requirements and the alternative ways these can be fulfilled. In this work we explore the applicability of goal models to conceptualize strategic business problems and capture viable alternatives in support of formal strategic decision-making. We show through a comparative study how analysis can be conducted on a realistic case adopted from the literature using existing goal modeling techniques, and identify their weaknesses and limitations that need to be addressed in order to accommodate strategic business analysis.

Book ChapterDOI
14 Nov 2016
TL;DR: This paper presents the NomosT architecture and the process by which a user constructs a model of law semi-automatically, first annotating the text of a law and then generating a model from it; the evaluation suggests that tool-supported generation of models of law substantially reduces human effort without affecting the quality of the output.
Abstract: Laws and regulations impact the design of software systems, as they introduce new requirements and constrain existing ones. The analysis of a software system and the degree to which it complies with applicable laws can be greatly facilitated by models of applicable laws. However, laws are inherently voluminous, often consisting of hundreds of pages of text, and so are their models, consisting of thousands of concepts and relationships. This paper studies the possibility of building models of law semi-automatically by using the NomosT tool. Specifically, we present the NomosT architecture and the process by which a user constructs a model of law semi-automatically, by first annotating the text of a law and then generating a model from it. We then evaluate the performance of the tool relative to building a model of a piece of law manually. In addition, we offer statistics on the quality of the final output which suggest that tool-supported generation of models of law substantially reduces human effort without affecting the quality of the output.

Book ChapterDOI
14 Nov 2016
TL;DR: A pattern-based ontology engineering approach, grounded in the Unified Foundational Ontology, is employed for the development of a sound Core Value Ontology.
Abstract: The creation of value is an important concern in organizations. However, current Enterprise Modeling languages all interpret value differently, which has a negative impact on the semantic quality of the model instantiations. This issue needs to be solved to increase the relevance of these instantiations for business stakeholders. Therefore, the goal of this paper is the development of a sound Core Value Ontology. To that end, we employ a pattern-based ontology engineering approach grounded in the Unified Foundational Ontology.

Book ChapterDOI
14 Nov 2016
TL;DR: An approach for explicitly selecting KPIs and Key Result Indicators, together with an iterative process that guides the discovery and definition of indicators, is proposed and applied to a real case study on water management.
Abstract: Key Performance Indicators (KPIs) operationalize ambiguous enterprise goals into quantified variables with clear thresholds. Their usefulness has been established in multiple domains, yet it remains a difficult and error-prone task to find suitable KPIs for a given strategic goal. A careful analysis of the literature on strategic modeling, planning and management reveals that this difficulty is due to a number of factors. Firstly, there is a general lack of adequate conceptualizations that capture the subtle yet important differences between performance and result indicators. Secondly, there is a lack of integration between modeling and data analysis techniques that would interleave analysis with the modeling process. In order to tackle these deficiencies, we propose an approach for explicitly selecting KPIs and Key Result Indicators (KRIs). Our approach is comprised of (i) a novel modeling language that exploits the essential elements of indicators, covering KPIs, KRIs and measures, (ii) a data mining-based analysis technique for providing data-driven information about the elements in the model, thereby enabling domain experts to validate the KPIs selected, and (iii) an iterative process that guides the discovery and definition of indicators. In order to validate our approach, we apply our proposal to a real case study on water management.
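A hedged sketch of the data-driven validation step only (the series, names, and threshold below are invented; the paper's mining technique is richer than a plain correlation): candidate indicator time series are scored by their correlation with the measured outcome of the strategic goal, so domain experts can review the strongest candidates.

```python
import numpy as np

def rank_indicators(candidates, outcome, threshold=0.6):
    """candidates: dict name -> series; outcome: series of goal results."""
    ranked = []
    for name, series in candidates.items():
        r = float(np.corrcoef(series, outcome)[0, 1])  # Pearson correlation
        if abs(r) >= threshold:
            ranked.append((name, round(r, 2)))
    return sorted(ranked, key=lambda x: -abs(x[1]))

outcome = [10, 12, 15, 14, 18, 21]                # e.g., water delivered on target
candidates = {"pipe_inspections": [1, 2, 4, 3, 5, 6],
              "office_visits":    [9, 1, 7, 2, 8, 1]}
print(rank_indicators(candidates, outcome))       # keeps only strong correlates
```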

Book ChapterDOI
14 Nov 2016
TL;DR: This paper presents a goal modeling language which is integrated with a method for multi-perspective enterprise modeling, such that context-enriched models of goal systems can be constructed.
Abstract: Conceptual models of goal systems promise to provide an apt basis for planning, analyzing, monitoring, and (re-)considering goals as part of management processes in the organization. But although a great number of conceptual goal modeling languages are available, they take only limited account of the organizational dimension of goals, including authorization rights, responsibilities, resources, and, in particular, related decision processes. This paper presents a goal modeling language which is integrated with a method for multi-perspective enterprise modeling, such that context-enriched models of goal systems can be constructed. Aside from organizational aspects, particular emphasis is placed on conceptualizations that clearly distinguish different (meta) levels of abstraction.

Book ChapterDOI
14 Nov 2016
TL;DR: A new modeling language is presented that extends ArchiMate, proposing a set of core elements for modeling OT components, based on existing OT standards and ontologies, and making it possible to associate these components with business and IT elements.
Abstract: Enterprise Modeling is used to analyze and improve IT, as well as to make IT more suitable to the needs of the business. However, asset-intensive organizations have an ample set of operational technologies (OT) that Enterprise Modeling does not account for. When trying to model such enterprises, there is no accurate way of showing components that belong to the world of OT, nor is there a way to bridge the divide between OT and IT. Existing languages fall short due to their limited focus, which does not consider modeling operational technologies, let alone relating them to the IT and Business dimensions. To address these issues, in this paper we present a new modeling language which extends ArchiMate. This language proposes a set of core elements for modeling OT components, based on existing OT standards and ontologies, and makes it possible to associate these components with business and IT elements. Introducing this language makes it possible to apply existing modeling and analysis techniques from Enterprise Modeling in settings that cover Business, OT and IT.

Book ChapterDOI
Mikio Aoyama1
14 Nov 2016
TL;DR: This article reviews the bodies of knowledge of business analysis, BPM, and business architecture, and proposes a unified knowledge framework across these disciplines.
Abstract: Several similar but distinct disciplines have evolved in the arena of requirements engineering and business analysis, including business analysis, BPM (Business Process Management), and business architecture. Each discipline is forming its own body of knowledge, and each has its raison d'être. However, the diversity of these bodies of knowledge causes confusion. This article reviews the bodies of knowledge and proposes a unified knowledge framework across the disciplines.

Book ChapterDOI
14 Nov 2016
TL;DR: In this article, the authors use constrained goal models (CGMs) to represent the requirements of a system, and capture requirements changes in terms of incremental operations on a goal model.
Abstract: We are interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionality or performance enhancements to cope with a growing user population (changing requirements). Alternatively, vendors of a system may want to minimize costs in implementing requirements changes (evolution requirements). We propose to use Constrained Goal Models (CGMs) to represent the requirements of a system, and capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or maximize customer value. We can then exploit reasoning techniques to derive optimal new specifications for an evolving software system. CGMs offer an expressive language for modelling goals that comes with scalable solvers that can solve hybrid constraint and optimization problems using a combination of Satisfiability Modulo Theories (SMT) and Optimization Modulo Theories (OMT) techniques. We evaluate our proposal by modeling and reasoning with a goal model for the meeting scheduling exemplar.
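To illustrate the reasoning style, here is a minimal sketch using z3's Optimize (the goal names, refinements, and costs below are invented, and this is far simpler than the paper's CGM language): a root goal is OR/AND-refined into alternatives with costs, and the solver picks the cheapest realization.

```python
from z3 import And, Bool, If, Implies, Optimize, Or, Sum, is_true, sat

root, by_email, by_app, notify, log = (Bool(n) for n in
    ["ScheduleMeeting", "ByEmail", "ByApp", "Notify", "Log"])

opt = Optimize()
opt.add(root)                                  # the root goal is mandatory
opt.add(Implies(root, Or(by_email, by_app)))   # OR-refinement: pick an alternative
opt.add(Implies(by_app, And(notify, log)))     # AND-refinement of the app alternative
cost = Sum(If(by_email, 5, 0), If(by_app, 2, 0), If(notify, 1, 0), If(log, 1, 0))
opt.minimize(cost)                             # evolution requirement: cheapest realization

assert opt.check() == sat
model = opt.model()
print([str(g) for g in (by_email, by_app, notify, log)
       if is_true(model.eval(g, model_completion=True))])  # -> ['ByApp', 'Notify', 'Log']
```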

Book ChapterDOI
14 Nov 2016
TL;DR: This work investigates the more challenging real-world case in which both types of constraints co-occur, and characterizes the associated implication problem axiomatically and algorithmically in time linear in the input.
Abstract: Cardinality constraints and functional dependencies together can express many semantic properties for applications in which data is certain. However, modern applications need to process large volumes of uncertain data. So far, cardinality constraints and functional dependencies have only been studied in isolation over uncertain data. We investigate the more challenging real-world case in which both types of constraints co-occur. While more expressive constraints could easily be defined, they would not enjoy the computational properties we show to hold for our combined class. Indeed, we characterize the associated implication problem axiomatically and algorithmically in time linear in the input. We also show how to summarize any given set of our constraints as an Armstrong instance. These instances help data analysts consolidate meaningful degrees of certainty by which our constraints hold in the underlying application domain.
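For orientation, here is a sketch of the classical certain-data baseline only (FDs alone, ignoring cardinality constraints and uncertainty, which are the paper's actual subject): the attribute-set closure decides implication. The simple fixpoint loop below is quadratic; the linear-time variant maintains counters per dependency.

```python
def closure(attrs, fds):
    """fds: list of (lhs_set, rhs_set); returns the closure of attrs under fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left-hand side is covered, absorb the right-hand side.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def implies(fds, lhs, rhs):
    """Does the FD set imply lhs -> rhs?"""
    return rhs <= closure(lhs, fds)

fds = [({"emp"}, {"dept"}), ({"dept"}, {"mgr"})]
print(implies(fds, {"emp"}, {"mgr"}))   # True: emp -> dept -> mgr
```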

Book ChapterDOI
14 Nov 2016
TL;DR: This paper proposes different views aimed at highlighting information that is relevant for a particular stakeholder, helping them query requirements artifacts, and offers three kinds of visualizations capturing language and domain elements while providing a gradual model overview.
Abstract: Requirements documents and models need to be used by many stakeholders with different technological proficiency during software development. Each stakeholder may need to understand the entire (or simply part of the) requirements artifacts. To empower these stakeholders, views of the requirements should be configurable to their particular needs. This paper uses information visualization techniques to help in this process. It proposes different views aimed at highlighting information that is relevant for a particular stakeholder, helping them query requirements artifacts. We offer three kinds of visualizations capturing language and domain elements, while providing a gradual model overview: the big picture view, the syntax-based view, and the concern-based view. We instantiate these views with i* models and introduce an implementation prototype in the iStarLab tool.

Book ChapterDOI
14 Nov 2016
TL;DR: An approach for context pattern discovery, a context-aware workflow execution engine for instantiating and executing context-based workflow instances, and a framework for a context-aware e-contract enactment system are provided.
Abstract: An e-contract is a contract that is specified, modeled and executed by a software system. E-contract business processes are modeled using workflows, and their enactment is mostly dependent on the execution context. Existing e-contract systems lack context-awareness, and thus often face difficulties in enacting when the context and requirements of e-contracts change at run-time. In this paper, we (a) present an approach for context pattern discovery and build a context-aware workflow execution engine, (b) develop an approach for context-aware workflow execution to instantiate and execute context-based workflow instances, and (c) provide a framework for a context-aware e-contract enactment system. We also demonstrate the viability of our approach using a government contract.
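As an illustrative sketch of one ingredient, context-based workflow selection (the context attributes, patterns, and workflow names below are invented; pattern discovery and full enactment are out of scope here): each workflow variant carries a context pattern, and the engine instantiates the first variant whose pattern matches the current execution context.

```python
def matches(pattern, context):
    """A pattern matches if every constrained attribute has the required value."""
    return all(context.get(k) == v for k, v in pattern.items())

def select_variant(variants, context):
    """variants: list of (context_pattern, workflow_name), most specific first."""
    for pattern, workflow in variants:
        if matches(pattern, context):
            return workflow
    return "default-workflow"

variants = [({"region": "EU", "urgency": "high"}, "expedited-eu-contract"),
            ({"region": "EU"}, "standard-eu-contract")]
print(select_variant(variants, {"region": "EU", "urgency": "high", "value": 10}))
```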