Journal ArticleDOI

Business Process Model Merging: An Approach to Business Process Consolidation

TL;DR: The article presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model; tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes.
Abstract: This article addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The article considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models -- typically representing variants of the same underlying process -- with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The article presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
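The union/intersection intuition behind merged models and digests can be pictured on toy process graphs. The sketch below is a minimal illustration under assumed representations (a model as a set of labelled edges, provenance tracked per edge); it is not the paper's actual merging algorithm, which matches and merges similar fragments across variants, but it shows how a merged model subsumes its inputs while a digest keeps only the recurring parts.

```python
# Minimal sketch: merged model = union of variant edges with provenance,
# digest = edges shared by enough variants. Toy representation only; the
# paper's algorithm additionally matches similar fragments across variants.

def merge(variants):
    """Union of all edges, annotated with the variants each edge comes from."""
    merged = {}
    for name, edges in variants.items():
        for edge in edges:
            merged.setdefault(edge, set()).add(name)
    return merged

def digest(merged, min_variants):
    """Keep only edges that occur in at least `min_variants` variants."""
    return {e for e, origin in merged.items() if len(origin) >= min_variants}

variants = {
    "V1": {("Receive claim", "Check policy"), ("Check policy", "Pay")},
    "V2": {("Receive claim", "Check policy"), ("Check policy", "Reject")},
}
merged = merge(variants)                # union, with provenance per edge
common = digest(merged, len(variants))  # fragments recurring in all variants
print(merged)
print(common)   # {('Receive claim', 'Check policy')}
```
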
Citations
Journal ArticleDOI
TL;DR: The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey and an overview of the state-of-the-art in BPM.
Abstract: Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provide an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.

739 citations


Cites background from "Business Process Model Merging: An ..."

  • ...In [227] three requirements are listed for model merging....

  • ...In [228, 229] an approach is presented that does not produce a configurable model and does not aim to address the three requirements listed in [227]....

Journal ArticleDOI
TL;DR: This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art.
Abstract: It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.
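One way to picture a customizable process model is as a superset model whose fragments are guarded by configuration options, with a concrete variant obtained by keeping only the fragments whose guards are satisfied. The sketch below is a deliberately simplified illustration of that idea; the fragment/option encoding is an assumption and does not correspond to any particular language covered by the survey (e.g., C-EPC or Provop are far richer).

```python
# Toy illustration of deriving a variant from a customizable process model.
# Each fragment carries an optional configuration guard; deriving a variant
# keeps mandatory fragments and those whose option is enabled.

CUSTOMIZABLE_MODEL = [
    ("Receive order", None),            # None = mandatory fragment
    ("Check credit", "credit_check"),
    ("Ship domestic", "domestic"),
    ("Ship international", "international"),
    ("Send invoice", None),
]

def derive_variant(model, enabled_options):
    return [frag for frag, opt in model if opt is None or opt in enabled_options]

print(derive_variant(CUSTOMIZABLE_MODEL, {"credit_check", "domestic"}))
# ['Receive order', 'Check credit', 'Ship domestic', 'Send invoice']
```
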

358 citations

Journal ArticleDOI
TL;DR: An overview of the management techniques that currently exist, as well as the open research challenges they pose, can be found in this paper, together with an overview of the authors' own work.

157 citations

DOI
01 Jan 2014
TL;DR: The Evolutionary Tree Miner framework, implemented as a plug-in for the process mining toolkit ProM, is presented; it balances different quality metrics and can produce (a collection of) process models with a specific balance of these quality dimensions, as specified by the user.
Abstract: Process mining automatically produces a process model while considering only an organization’s records of its operational processes. Over the last decade, many process discovery techniques have been developed, and many authors have compared these techniques by focusing on the properties of the models produced. However, none of the current techniques guarantee to produce sound (i.e., syntactically correct) process models. Furthermore, none of the current techniques provide insights into the trade-offs between the different quality dimensions of process models. In this thesis we present the Evolutionary Tree Miner (ETM) framework. Its main feature is the guarantee that the discovered process models are sound. Another feature is that the ETM framework also incorporates all four well-known quality dimensions in process discovery (replay fitness, precision, generalization and simplicity). Additional quality metrics can be easily added to the Evolutionary Tree Miner. The Evolutionary Tree Miner framework is able to balance these different quality metrics and is able to produce (a collection of) process models that have a specific balance of these quality dimensions, as specified by the user. The third main feature of the Evolutionary Tree Miner is that it is easily extensible. In this thesis we discuss extensions for the discovery of a collection of process models with different quality trade-offs, the discovery of (a collection of) process models using a given process model, and the discovery of a configurable process model that describes multiple event-logs. The Evolutionary Tree Miner is implemented as a plug-in for the process mining toolkit ProM. The Evolutionary Tree Miner and all of its extensions are evaluated using both artificial and real-life data sets.
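The core loop of an evolutionary discovery approach such as the ETM can be pictured as a standard genetic algorithm: generate candidate models, score each on a user-weighted combination of the quality dimensions, keep the best, and mutate them. The skeleton below is a generic sketch under assumed interfaces (random_model, mutate, and the scoring functions are placeholders), not the actual ETM implementation in ProM.

```python
import random

# Generic evolutionary-discovery skeleton: candidates are scored on a
# user-weighted sum of quality dimensions and evolved by mutation.
# `random_model`, `mutate`, and the metric functions are placeholders for
# whatever model representation (e.g. process trees) and metrics are used.

WEIGHTS = {"replay_fitness": 10.0, "precision": 1.0,
           "generalization": 1.0, "simplicity": 1.0}   # illustrative balance

def score(model, log, metrics):
    # metrics: dict mapping dimension name -> function(model, log) -> [0, 1]
    return sum(WEIGHTS[name] * metric(model, log) for name, metric in metrics.items())

def evolve(log, metrics, random_model, mutate, pop_size=20, generations=100):
    population = [random_model() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda m: score(m, log, metrics), reverse=True)
        elite = population[: pop_size // 4]                      # keep best quarter
        offspring = [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=lambda m: score(m, log, metrics))
```
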

117 citations


Cites background or methods from "Business Process Model Merging: An ..."

  • ...Most approaches [63, 104, 118] provide a similarity metric based on the minimum number of edit operations required to transform one model into another model....

  • ...[118] describe an alternative approach that allows merging process models into a configurable process model, even if the input process models are in different formalisms....

References
Journal ArticleDOI
01 Dec 2001
TL;DR: A taxonomy is presented that distinguishes between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers; it is intended to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component.
Abstract: Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing, and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches, and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component.

3,693 citations

Book
01 Dec 2006
TL;DR: Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and nonspecialists.
Abstract: Data compression is one of the most important fields and tools in modern computing. From archiving data, to CD ROMs, and from coding theory to image analysis, many facets of modern computing rely upon data compression. Data Compression provides a comprehensive reference for the many different types and methods of compression. Included are a detailed and helpful taxonomy, analysis of most common methods, and discussions on the use and comparative benefits of methods and description of "how to" use them. The presentation is organized into the main branches of the field of data compression: run length encoding, statistical methods, dictionary-based methods, image compression, audio compression, and video compression. Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and nonspecialists. Topics and features: coverage of video compression, including MPEG-1 and H.261; thorough coverage of wavelet methods, including CWT, DWT, EZW and the new Lifting Scheme technique; complete audio compression; the QM coder used in JPEG and JBIG, including the new JPEG 2000 standard; image transformations and detailed coverage of the discrete cosine transform and Haar transform; coverage of the EIDAC method for compressing simple images; prefix image compression; ACB and FHM curve compression; geometric compression and the edgebreaker technique. Data Compression provides an invaluable reference and guide for all computer scientists, computer engineers, electrical engineers, signal/image processing engineers and other scientists needing a comprehensive compilation for a broad range of compression methods.

1,745 citations


"Business Process Model Merging: An ..." refers methods in this paper

  • ...Next, we merged each of these model pairs and calculated the compression ratio [Salomon 2006], which in our context is the ratio between the size of the merged model and the size of the input models, that is, CR(G1, G2) = |CG| / (|G1| + |G2|), where CG = Merge(G1, G2)....

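To make the quoted compression ratio concrete, the following sketch computes CR for two toy models, taking |G| as the number of nodes plus edges; that particular size measure is an illustrative assumption.

```python
# Compression ratio of a merge, as quoted above: CR = |CG| / (|G1| + |G2|),
# where CG = Merge(G1, G2). Sizes are taken here as nodes + edges (an
# illustrative choice). Values below 1 mean the merged model is more compact
# than keeping both variants side by side.

def size(model):
    nodes, edges = model
    return len(nodes) + len(edges)

def compression_ratio(g1, g2, merged):
    return size(merged) / (size(g1) + size(g2))

g1 = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})                       # size 5
g2 = ({"a", "b", "d"}, {("a", "b"), ("b", "d")})                       # size 5
merged = ({"a", "b", "c", "d"}, {("a", "b"), ("b", "c"), ("b", "d")})  # size 7
print(compression_ratio(g1, g2, merged))                               # 0.7
```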

Proceedings ArticleDOI
26 Feb 2002
TL;DR: This paper presents a matching algorithm based on a fixpoint computation that is usable across different scenarios and conducts a user study, in which the accuracy metric was used to estimate the labor savings that the users could obtain by utilizing the algorithm to obtain an initial matching.
Abstract: Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. As a matter of fact, we evaluate the 'accuracy' of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings.
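The fixpoint idea behind this matcher (Similarity Flooding) can be sketched in a few lines: start from an initial similarity for each pair of nodes, then repeatedly let each pair absorb similarity from its neighbouring pairs until the values stabilise. The sketch below is a heavily simplified variant, not the published algorithm; the graph encoding, the initial similarities, and the propagation/normalisation scheme are all illustrative assumptions.

```python
# Simplified fixpoint matching in the spirit of Similarity Flooding: node-pair
# similarities are repeatedly reinforced by the similarities of neighbouring
# pairs (successor and predecessor pairs), then normalised. The real algorithm
# uses a propagation graph with edge-label-dependent coefficients.

def invert(g):
    """Reverse adjacency of a {node: [successors]} dict."""
    rev = {n: [] for n in g}
    for n, succs in g.items():
        for s in succs:
            rev.setdefault(s, []).append(n)
    return rev

def match(g1, g2, init, rounds=20, alpha=0.5):
    """g1, g2: adjacency dicts. init: {(n1, n2): initial similarity}."""
    r1, r2 = invert(g1), invert(g2)
    sim = dict(init)
    for _ in range(rounds):
        new = {}
        for (a, b), s in sim.items():
            neigh = ([(x, y) for x in g1.get(a, []) for y in g2.get(b, [])] +
                     [(x, y) for x in r1.get(a, []) for y in r2.get(b, [])])
            new[(a, b)] = s + alpha * sum(sim.get(p, 0.0) for p in neigh)
        top = max(new.values()) or 1.0          # normalise to keep values bounded
        sim = {p: v / top for p, v in new.items()}
    return sim

g1 = {"A": ["B"], "B": []}
g2 = {"X": ["Y"], "Y": []}
init = {(a, b): 0.5 for a in g1 for b in g2}    # e.g. from label similarity
print(sorted(match(g1, g2, init).items(), key=lambda kv: -kv[1]))
# (A, X) and (B, Y) converge to the highest scores
```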

1,613 citations


"Business Process Model Merging: An ..." refers methods in this paper

  • ...This difference is relevant because schema matching techniques rely heavily on semantic information captured in the edge labels [Do and Rahm 2002; Melnik et al. 2002]....

Proceedings ArticleDOI
02 May 2004
TL;DR: WordNet::Similarity, as mentioned in this paper, is a Perl package that makes it possible to measure the semantic similarity and relatedness between a pair of concepts (or synsets) using WordNet.
Abstract: WordNet::Similarity is a freely available software package that makes it possible to measure the semantic similarity and relatedness between a pair of concepts (or synsets). It provides six measures of similarity, and three measures of relatedness, all of which are based on the lexical database WordNet. These measures are implemented as Perl modules which take as input two concepts, and return a numeric value that represents the degree to which they are similar or related.
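The same kinds of WordNet-based measures are available outside Perl; the sketch below uses Python's NLTK WordNet interface (a different implementation from the WordNet::Similarity package described here) to compute path-based and Wu-Palmer similarity between two concepts. It assumes NLTK is installed and the WordNet corpus has been downloaded.

```python
# WordNet-based similarity between two concepts, illustrated with NLTK's
# WordNet interface rather than the Perl WordNet::Similarity package.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

invoice = wn.synsets("invoice", pos=wn.NOUN)[0]   # first noun sense
bill = wn.synsets("bill", pos=wn.NOUN)[0]

print(invoice.path_similarity(bill))   # shortest-path-based similarity in (0, 1]
print(invoice.wup_similarity(bill))    # Wu-Palmer similarity (depth of common ancestor)
```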

1,608 citations