
Showing papers on "Process modeling published in 2016"


Journal ArticleDOI
11 Feb 2016
TL;DR: The state of the art in process monitoring and control for metal selective laser melting (SLM) processes is reviewed in this paper.
Abstract: Additive manufacturing and specifically metal selective laser melting (SLM) processes are rapidly being industrialized. In order for this technology to see more widespread use as a production modality, especially in heavily regulated industries such as aerospace and medical device manufacturing, there is a need for robust process monitoring and control capabilities to be developed that reduce process variation and ensure quality. The current state of the art of such process monitoring technology is reviewed in this paper. The SLM process itself presents significant challenges as over 50 different process input variables impact the characteristics of the finished part. Understanding the impact of feed powder characteristics remains a challenge. Though many powder characterization techniques have been developed, there is a need for standardization of methods most relevant to additive manufacturing. In-process sensing technologies have primarily focused on monitoring melt pool signatures, either from a Lagrangian reference frame that follows the focal point of the laser or from a fixed Eulerian reference frame. Correlations between process measurements, process parameter settings, and quality metrics to date have been primarily qualitative. Some simple, first-generation process control strategies have also been demonstrated based on these measures. There remains a need for connecting process measurements to process models to enable robust model-based control.

364 citations
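The "first-generation process control strategies" mentioned in this abstract can be pictured as a simple feedback loop on a melt-pool signature. The sketch below is purely illustrative: the linear plant model, gains, and units are invented, not taken from any real SLM system.

```python
def control_step(power, intensity, setpoint, kp=0.5):
    """Proportional correction of laser power from a melt-pool intensity reading."""
    error = setpoint - intensity
    return power + kp * error

def toy_melt_pool(power, disturbance=0.0):
    """Invented linear plant: intensity responds proportionally to laser power."""
    return 0.8 * power + disturbance

power, setpoint = 100.0, 90.0
for _ in range(50):                                  # closed-loop iterations
    intensity = toy_melt_pool(power, disturbance=5.0)
    power = control_step(power, intensity, setpoint)
print(round(toy_melt_pool(power, disturbance=5.0), 2))  # settles at the setpoint, 90.0
```

The loop rejects the constant disturbance and drives the measured intensity to the setpoint; model-based control, as the review notes, would replace the fixed gain with a process model.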


Journal ArticleDOI
TL;DR: The proposed framework unifies a number of approaches to correlation analysis from the literature, offering a general solution that can perform those analyses and many more; it has been implemented in ProM and combines process and data mining techniques.

212 citations


Journal ArticleDOI
24 Feb 2016
TL;DR: In this article, the authors discuss models required to span the scope of additive manufacturing processes, with a particular focus on predicting as-built material characteristics and residual stresses of the final build.
Abstract: Additive manufacturing (AM), widely known as 3D printing, is a direct digital manufacturing process, where a component can be produced layer by layer from 3D digital data with no or minimal use of machining, molding, or casting. AM has developed rapidly in the last 10 years and has demonstrated significant potential in cost reduction of performance-critical components. This can be realized through improved design freedom, reduced material waste, and reduced post-processing steps. Modeling AM processes not only provides important insight into competing physical phenomena that lead to final material properties and product quality but also provides the means to exploit the design space towards functional products and materials. The length- and timescales required to model AM processes and to predict the final workpiece characteristics are very challenging. Models must span length scales resolving powder particle diameters, the build chamber dimensions, and several hundreds or thousands of meters of heat source trajectories. Depending on the scan speed, the heat source interaction time with feedstock can be as short as a few microseconds, whereas the build time can span several hours or days depending on the size of the workpiece and the AM process used. Models also have to deal with multiple physical aspects such as heat transfer and phase changes as well as the evolution of the material properties and residual stresses throughout the build time. The modeling task is therefore a multi-scale, multi-physics endeavor calling for a complex interaction of multiple algorithms. This paper discusses models required to span the scope of AM processes with a particular focus on predicting as-built material characteristics and residual stresses of the final build. Verification and validation examples are presented; the overarching goal is to provide an overview of currently available modeling tools and how they can contribute to maturing additive manufacturing.

206 citations
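The smallest building block of the thermal models this abstract describes is transient heat conduction under a moving source. The sketch below is a deliberately minimal 1-D explicit finite-difference stand-in for such models; every parameter is illustrative and not calibrated to any AM process.

```python
import numpy as np

nx, dx, dt, alpha = 200, 1e-4, 1e-4, 1e-5    # cells, spacing (m), step (s), diffusivity (m^2/s)
r = alpha * dt / dx**2                       # explicit scheme is stable for r <= 0.5
assert r <= 0.5

T = np.zeros(nx)                             # temperature rise above ambient (arbitrary units)
speed = 0.01                                 # scan speed of the heat source (m/s)
for step in range(500):
    src = 10 + int(speed * step * dt / dx)   # interior cell currently under the source
    T[src] += 50.0                           # deposit heat at the source location
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])   # diffuse one explicit step

print(T.max() > 0, np.isfinite(T).all())     # True True: heat deposited, solution stable
```

Real AM models couple many such solvers across scales and add phase change, material evolution, and mechanics, which is exactly the multi-physics interaction the paper surveys.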


Journal ArticleDOI
TL;DR: In this article, the authors discuss primary detrimental hurdles that have plagued effective modeling of additive manufacturing methods for metallic materials while also providing logical speculation into preferable research directions for overcoming these hurdles, including high performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.
Abstract: Additive manufacturing (AM) methods for rapid prototyping of 3D materials (3D printing) have become increasingly popular with a particular recent emphasis on those methods used for metallic materials. These processes typically involve an accumulation of cyclic phase changes. The widespread interest in these methods is largely stimulated by their unique ability to create components of considerable complexity. However, modeling such processes is exceedingly difficult due to the highly localized and drastic material evolution that often occurs over the course of the manufacture time of each component. Final product characterization and validation are currently driven primarily by experimental means as a result of the lack of robust modeling procedures. In the present work, the authors discuss primary detrimental hurdles that have plagued effective modeling of AM methods for metallic materials while also providing logical speculation into preferable research directions for overcoming these hurdles. The primary focus of this work encompasses the specific areas of high-performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.

194 citations


Journal ArticleDOI
14 Sep 2016
TL;DR: Landlab exposes a standardized model interoperability interface and can couple to third-party models and software; it also offers tools for creating cellular automata and allows native coupling of such models to more traditional continuous differential-equation-based modules.
Abstract: The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures – including both regular and irregular grids – to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.

152 citations
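The component-coupling principle described above can be sketched generically: independent process components read and write shared fields on one grid and advance in lockstep. This is NOT Landlab's actual API; the class and field names below are invented to show only the pattern.

```python
class Grid:
    """Holds named fields that all components read and write."""
    def __init__(self, n):
        self.fields = {"elevation": [100.0 - i for i in range(n)],
                       "water_depth": [0.0] * n}

class Rainfall:
    def run_one_step(self, grid, rate=1.0):
        grid.fields["water_depth"] = [d + rate for d in grid.fields["water_depth"]]

class Erosion:
    def run_one_step(self, grid, k=0.01):
        z, h = grid.fields["elevation"], grid.fields["water_depth"]
        grid.fields["elevation"] = [zi - k * hi for zi, hi in zip(z, h)]

grid = Grid(5)
components = [Rainfall(), Erosion()]
for _ in range(10):              # components advance in lockstep on one shared grid
    for c in components:
        c.run_one_step(grid)
print(round(grid.fields["elevation"][0], 2))   # 99.45: erosion responds to accumulating water
```

Because components only touch named grid fields, new processes can be swapped in or combined without changing the driver loop, which is the interoperability idea the abstract emphasizes.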


Journal ArticleDOI
TL;DR: An approach for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare, is presented; it has been implemented in the process mining tool ProM and evaluated using artificial and real-life event logs.
Abstract: Highlights: We introduce a semantics for Multi-Perspective Declare (MP-Declare). We introduce an abstract syntax for MP-Declare. We provide a set of algorithms for conformance checking based on MP-Declare. The approach has been implemented in the process mining tool ProM. The approach has been demonstrated with real-life data. Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behavior provided in the form of a process model. Recently, techniques for conformance checking based on declarative specifications have been developed. Such specifications are suitable to describe processes characterized by high variability. However, an open challenge in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. This means that declarative models used for conformance checking should not only describe the process behavior from the control-flow point of view, but also from other perspectives like data or time. In this paper, we close this gap by presenting an approach for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated using artificial and real-life event logs.

128 citations
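The core idea of a multi-perspective constraint can be shown in a few lines: a Declare-style "response" rule whose activation also depends on a data condition. This is a heavily simplified sketch of the concept, not the ProM implementation; activity names and the payload format are invented.

```python
def response_holds(trace, trigger, target, data_cond):
    """Every trigger event satisfying data_cond must eventually be followed by target."""
    for i, (act, data) in enumerate(trace):
        if act == trigger and data_cond(data):
            if not any(a == target for a, _ in trace[i + 1:]):
                return False
    return True

# each event is (activity, data payload)
trace_ok  = [("submit", {"amount": 500}), ("review", {}), ("approve", {})]
trace_bad = [("submit", {"amount": 500}), ("review", {})]
cond = lambda d: d.get("amount", 0) > 100          # the data perspective of the rule

print(response_holds(trace_ok, "submit", "approve", cond))   # True
print(response_holds(trace_bad, "submit", "approve", cond))  # False: violation detected
```

A pure control-flow checker would treat both submits identically; the data condition is what makes the constraint multi-perspective.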


Book ChapterDOI
18 Sep 2016
TL;DR: This paper describes an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event, which is both a novel method in process prediction and a novel application of deep learning methods.
Abstract: Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.

98 citations
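The prediction task itself is easy to state concretely. The toy below uses a simple bigram frequency model as a stand-in for the paper's recurrent neural network, only to illustrate what "predict the next process event from the running prefix" means; the log and activity names are invented.

```python
from collections import Counter, defaultdict

def train(traces):
    """Count how often each activity is directly followed by another."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, last_event):
    """Predict the most frequent successor of the last observed event."""
    return counts[last_event].most_common(1)[0][0] if counts[last_event] else None

log = [["register", "check", "approve"],
       ["register", "check", "reject"],
       ["register", "check", "approve"]]
model = train(log)
print(predict_next(model, "check"))   # approve (seen in 2 of 3 traces)
```

An RNN replaces the frequency table with a learned hidden state, letting the prediction depend on the whole prefix rather than only the last event.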


Journal ArticleDOI
TL;DR: In this article, the authors review and discuss findings obtained by different research groups worldwide on post-combustion Ca-Looping process modelling, including particle reaction and reactor models for the carbonation and calcination steps, and assess how the conditions used for their determination affect their reactivity predictions.

94 citations


Proceedings ArticleDOI
13 Jul 2016
TL;DR: This paper presents an Industry 4.0 process modeling language (I4PML) that is an extension (a UML profile with stereotypes) of OMG's BPMN (Business Process Model and Notation) standard, and describes a method for the specification of Industry 4.0 applications using UML and I4PML.
Abstract: The term Industry 4.0 derives from the new (fourth) industrial revolution enabling suppliers and manufacturers to leverage new technological concepts like the Internet of Things, Big Data, and Cloud Computing: new or enhanced products and services can be created, costs reduced, and productivity increased. Similar terms are Smart Factory or Smart Manufacturing. The ideas, concepts, and technologies are not hype anymore — they are at least partly reality, but many software specification and development aspects are still not sufficiently covered, e.g. standardization, specification, and modeling languages. This paper presents an Industry 4.0 process modeling language (I4PML) that is an extension (a UML profile with stereotypes) of OMG's BPMN (Business Process Model and Notation) standard. We also describe a method for the specification of Industry 4.0 applications using UML and I4PML.

88 citations


Journal ArticleDOI
TL;DR: BPMN Miner is able to detect and filter out noise in the event log arising, for example, from data entry errors, missing event records, or infrequent behavior; the discovered models are more accurate and less complex than those derived with flat process discovery techniques.

86 citations


Journal ArticleDOI
TL;DR: The proposed uBPMN not only allows for modeling ubiquitous business processes but also lays the groundwork for potentially deploying a variety of ubiquitous computing technologies.
Abstract: Context: Business Process Model and Notation (BPMN) is the de facto standard for business process modeling. It was developed by the Object Management Group with support of the major organizations in the fields of software engineering and information systems. Despite its wide use, when it comes to representing ubiquitous business processes, this business process modeling language is lacking. Objective: To address BPMN's deficiency in representing ubiquitous business processes, we extend it and present uBPMN (or ubiquitous BPMN). Method: First, we analyze the modeling requirements for representing ubiquitous business processes. Based on the requirements, we conservatively extend the Meta-Object Facility meta-model and the XML Schema Definition of BPMN as well as extend the notation. The extension, which we call uBPMN, follows the same outline as set by the Object Management Group for BPMN. Results: The proposed uBPMN not only allows for modeling ubiquitous business processes but also lays the groundwork for potentially deploying a variety of ubiquitous computing technologies. We illustrate all of uBPMN's capabilities and benefits with real-life examples. Conclusion: uBPMN extends BPMN v2.0 with new capabilities to deal with ubiquitous computing technologies.

Journal ArticleDOI
TL;DR: It is argued that there are still substantial challenges to be addressed along the lines of model structure selection, identifiability, experiment design, nonlinear parameter estimation, model validation, model improvement, online model adaptation, model portability, modeling of complex systems, numerical methods, software environments, and implementation aspects.
Abstract: This position paper gives an overview of the discussion that took place at FIPSE 2 at Aldemar Resort, east of Heraklion, Crete, on June 21–23, 2014. This is the second conference in the series "Future Innovation in Process Systems Engineering" (http://fi-in-pse.org), which takes place every other year in Greece, with the objective to discuss open research challenges in three topics in Process Systems Engineering. One of the topics of FIPSE 2 was "Linking Models and Experiments", which is described in this publication. Process models have been used extensively in academia and industry for several decades. Yet, this paper argues that there are still substantial challenges to be addressed along the lines of model structure selection, identifiability, experiment design, nonlinear parameter estimation, model validation, model improvement, online model adaptation, model portability, modeling of complex systems, numerical methods, software environments, and implementation aspects.
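One of the challenges named above, nonlinear parameter estimation, can be illustrated in miniature: fit the rate constant k of a first-order model y = exp(-k t) to measured data by least squares. The model, the data, and the crude grid search (in place of a proper Gauss-Newton or Levenberg-Marquardt step) are all illustrative assumptions.

```python
import math

t_data = [0.0, 1.0, 2.0, 3.0]
y_data = [1.00, 0.61, 0.37, 0.22]          # roughly exp(-0.5 t) with small noise

def sse(k):
    """Sum of squared errors between the model and the data for rate constant k."""
    return sum((y - math.exp(-k * t)) ** 2 for t, y in zip(t_data, y_data))

# brute-force 1-D search over candidate rate constants
k_best = min((k / 1000 for k in range(1, 2000)), key=sse)
print(round(k_best, 2))   # ≈ 0.5, recovering the rate constant behind the data
```

Even this trivial case touches the paper's themes: the estimate is only as good as the model structure, and a single parameter is identifiable here only because the experiment sampled the decay at informative times.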

Journal ArticleDOI
01 May 2016
TL;DR: An empirical investigation consisting of an exploratory study and a follow-up study focusing on the system analysts’ sense-making of declarative process models that are specified in Declare indicates that two main strategies for reading Declare models exist and suggests that single constraints can be handled well by most subjects, while combinations of constraints pose significant challenges.
Abstract: Declarative approaches to business process modeling are regarded as well suited for highly volatile environments, as they enable a high degree of flexibility. However, problems in understanding and maintaining declarative process models often impede their adoption. Likewise, little research has been conducted into the understanding of declarative process models. This paper takes a first step toward addressing this fundamental question and reports on an empirical investigation consisting of an exploratory study and a follow-up study focusing on the system analysts' sense-making of declarative process models that are specified in Declare. For this purpose, we distributed real-world Declare models to the participating subjects and asked them to describe the illustrated process and to perform a series of sense-making tasks. The results of our studies indicate that two main strategies for reading Declare models exist: either considering the execution order of the activities in the process model, or orienting by the layout of the process model. In addition, the results indicate that single constraints can be handled well by most subjects, while combinations of constraints pose significant challenges. Moreover, the study revealed that aspects that are similar in both imperative and declarative process modeling languages at a graphical level, while having different semantics, cause considerable trouble. This research not only helps guide the future development of tools for supporting system analysts, but also gives advice on the design of declarative process modeling notations and points out typical pitfalls to teachers and educators of future systems analysts.

01 Jan 2016
TL;DR: In this article, a technique for generating natural language texts from business process models is proposed; the generated texts are superior to manually created process descriptions in terms of completeness, structure, and linguistic complexity.
Abstract: The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
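The basic idea of verbalization, mapping model elements to sentences, can be sketched with naive templates. The actual technique above is far more linguistically sophisticated; the model structure and activity names below are invented for illustration.

```python
def verbalize(model):
    """Render a toy process model (start, sequence, optional choice) as text."""
    sentences = [f"The process starts with '{model['start']}'."]
    for step in model["sequence"]:
        sentences.append(f"Afterwards, '{step}' is performed.")
    if model.get("choice"):
        opts = "' or '".join(model["choice"])
        sentences.append(f"Then, either '{opts}' is executed.")
    return " ".join(sentences)

model = {"start": "receive order",
         "sequence": ["check stock"],
         "choice": ["ship order", "reject order"]}
print(verbalize(model))
# The process starts with 'receive order'. Afterwards, 'check stock' is performed.
# Then, either 'ship order' or 'reject order' is executed.
```

Template filling like this breaks down quickly on nested gateways and loops, which is why the paper's technique needs real text planning and sentence aggregation.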

Journal ArticleDOI
TL;DR: It is demonstrated through a real-life case study that mining local patterns allows us to get insights into processes where regular start-to-end process discovery techniques are only able to learn unstructured, flower-like models.
Abstract: In this paper we describe a method to discover frequent behavioral patterns in event logs. We express these patterns as local process models. Local process model mining can be positioned in between process discovery and episode / sequential pattern mining. The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice, and loops, as in process mining. However, we do not look at start-to-end models, which distinguishes our approach from process discovery and creates a link to episode / sequential pattern mining. We propose an incremental procedure for building local process models capturing frequent patterns based on so-called process trees. We propose five quality dimensions and corresponding metrics for local process models, given an event log. We show monotonicity properties for some quality dimensions, enabling a speedup of local process model discovery through pruning. We demonstrate through a real-life case study that mining local patterns allows us to get insights into processes where regular start-to-end process discovery techniques are only able to learn unstructured, flower-like models.
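The shift from start-to-end models to frequent local fragments can be shown in its simplest form: count how many traces contain a given directly-follows pair. The real technique mines process trees with choice, concurrency, and loops; this sketch, with an invented log, captures only the "frequent fragment" idea.

```python
from collections import Counter

def frequent_fragments(log, min_support):
    """Return directly-follows pairs occurring in at least min_support traces."""
    pairs = Counter()
    for trace in log:
        for pair in set(zip(trace, trace[1:])):   # count each pair once per trace
            pairs[pair] += 1
    return {p for p, n in pairs.items() if n >= min_support}

log = [["a", "b", "x", "c"],
       ["a", "b", "y"],
       ["z", "a", "b"]]
print(frequent_fragments(log, min_support=3))   # {('a', 'b')}: the only recurring fragment
```

Note that no start-to-end model would capture this regularity cleanly, since the three traces share almost nothing else; that is precisely the niche local process models fill.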

Journal ArticleDOI
TL;DR: In this article, the authors describe a method to discover frequent behavioral patterns in event logs and express these patterns as local process models, which can be positioned in between process discovery and episode/sequential pattern mining.

Journal ArticleDOI
TL;DR: This paper investigates a multiple-view-aware approach to trace clustering, based on a co-training strategy, and shows that the presented algorithm is able to discover a clustering pattern of the log such that related traces are appropriately clustered.
Abstract: Process mining refers to the discovery, conformance, and enhancement of process models from event logs currently produced by several information systems (e.g. workflow management systems). By tightly coupling event logs and process models, process mining makes it possible to detect deviations, predict delays, support decision making, and recommend process redesigns. Event logs are data sets containing the executions (called traces) of a business process. Several process mining algorithms have been defined to mine event logs and deliver valuable models (e.g. Petri nets) of how logged processes are being executed. However, they often generate spaghetti-like process models, which can be hard to understand. This is caused by the inherent complexity of real-life processes, which tend to be less structured and more flexible than what the stakeholders typically expect. In particular, spaghetti-like process models are discovered when all possible behaviors are shown in a single model as a result of considering the set of traces in the event log all at once. To minimize this problem, trace clustering can be used as a preprocessing step. It splits up an event log into clusters of similar traces, so as to handle variability in the recorded behavior and facilitate process model discovery. In this paper, we investigate a multiple-view-aware approach to trace clustering, based on a co-training strategy. In an assessment using benchmark event logs, we show that the presented algorithm is able to discover a clustering pattern of the log, such that related traces are appropriately clustered. We evaluate the significance of the formed clusters using established machine learning and process mining metrics.
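The preprocessing idea, splitting a log into groups of similar traces before discovery, can be sketched with a single-pass Jaccard grouping on activity sets. This stands in for the paper's co-training strategy only to show the effect; the log and the threshold are invented.

```python
def jaccard(a, b):
    """Similarity of two activity sets: shared activities over all activities."""
    return len(a & b) / len(a | b)

def cluster_traces(log, threshold=0.5):
    clusters = []                       # each cluster: (representative activity set, traces)
    for trace in log:
        acts = set(trace)
        for rep, members in clusters:
            if jaccard(acts, rep) >= threshold:
                members.append(trace)   # join the first sufficiently similar cluster
                break
        else:
            clusters.append((acts, [trace]))
    return [members for _, members in clusters]

log = [["a", "b", "c"], ["a", "b"], ["x", "y"], ["x", "y", "z"]]
print(len(cluster_traces(log)))   # 2: the {a,b,...} traces and the {x,y,...} traces separate
```

Discovering one model per cluster then yields two small, readable models instead of one spaghetti model mixing both behaviors.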

Book ChapterDOI
10 Oct 2016
TL;DR: This paper presents a full-fledged approach for the discovery of multi-perspective declarative process models from event logs that allows the user to discover declarative models taking into consideration all the information an event log can provide.
Abstract: Process discovery is one of the main branches of process mining that allows the user to build a process model representing the process behavior as recorded in the logs. Standard process discovery techniques produce as output a procedural process model (e.g., a Petri net). Recently, several approaches have been developed to derive declarative process models from logs and have been proven to be more suitable to analyze processes working in environments that are less stable and predictable. However, a large part of these techniques are focused on the analysis of the control flow perspective of a business process. Therefore, one of the challenges still open in this field is the development of techniques for the analysis of business processes also from other perspectives, like data, time, and resources. In this paper, we present a full-fledged approach for the discovery of multi-perspective declarative process models from event logs that allows the user to discover declarative models taking into consideration all the information an event log can provide. The approach has been implemented and experimented in real-life case studies.
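The discovery direction can be sketched in its control-flow core: test which Declare "response" constraints (activity a is eventually followed by b) hold in every trace. Real multi-perspective discovery additionally conditions constraints on data, time, and resources; the log below is invented.

```python
from itertools import permutations

def response_holds(trace, a, b):
    """Check: every occurrence of a is eventually followed by b in this trace."""
    for i, act in enumerate(trace):
        if act == a and b not in trace[i + 1:]:
            return False
    return True

def discover_responses(log):
    """Return all (a, b) response constraints satisfied by every trace."""
    activities = {act for trace in log for act in trace}
    return {(a, b) for a, b in permutations(activities, 2)
            if all(response_holds(t, a, b) for t in log)}

log = [["register", "check", "pay"],
       ["register", "pay"],
       ["register", "check", "check", "pay"]]
print(sorted(discover_responses(log)))   # [('check', 'pay'), ('register', 'pay')]
```

Note that (register, check) is correctly rejected because the second trace skips the check, while both discovered constraints hold vacuously or directly in every trace.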

Journal ArticleDOI
01 Sep 2016
TL;DR: An efficient and effective process mining framework that provides extensive support for the discovery of patterns related to resource assignment is developed and is validated in terms of performance and applicability.
Abstract: Process mining aims at discovering processes by extracting knowledge from event logs. Such knowledge may refer to different business process perspectives. The organisational perspective deals, among other things, with the assignment of human resources to process activities. Information about the resources that are involved in process activities can be mined from event logs in order to discover resource assignment conditions, which is valuable for process analysis and redesign. Prior process mining approaches in this context present one of the following issues: (i) they are limited to discovering a restricted set of resource assignment conditions; (ii) they do not aim at providing efficient solutions; or (iii) the discovered process models are difficult to read due to the number of assignment conditions included. In this paper we address these problems and develop an efficient and effective process mining framework that provides extensive support for the discovery of patterns related to resource assignment. The framework is validated in terms of performance and applicability. Highlights: A process mining approach for the organisational perspective is proposed. It supports the discovery of resource assignment patterns and how involvement of resources influences the control-flow. The framework consists of an event log pre-processing phase to increase efficiency. A model post-processing phase improves effectiveness by removing redundant rules.
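A minimal instance of organisational-perspective mining is deriving conditions of the form "activity X is only ever performed by resource R" from a log. The framework above discovers far richer assignment patterns; the log, activities, and resources here are invented.

```python
from collections import defaultdict

def mine_assignments(log):
    """Return activities performed by exactly one resource across the whole log."""
    performers = defaultdict(set)
    for trace in log:
        for activity, resource in trace:
            performers[activity].add(resource)
    return {a: next(iter(r)) for a, r in performers.items() if len(r) == 1}

# each event is (activity, performing resource)
log = [[("register", "clerk"),  ("approve", "manager")],
       [("register", "clerk"),  ("approve", "manager")],
       [("register", "intern"), ("approve", "manager")]]
print(mine_assignments(log))   # {'approve': 'manager'}: register has no unique performer
```

Redundant-rule removal, as in the paper's post-processing phase, would matter once such conditions are generalized to roles, experience, or combinations of attributes.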

Book
12 Feb 2016
TL;DR: This volume offers a definitive guide to the use of patterns, which synthesize the wide range of approaches to modeling business processes and describes three major pattern catalogs, presented from control-flow, data, and resource perspectives.
Abstract: The study of business processes has emerged as a highly effective approach to coordinating an organization's complex service- and knowledge-based activities. The growing field of business process management (BPM) focuses on methods and tools for designing, enacting, and analyzing business processes. This volume offers a definitive guide to the use of patterns, which synthesize the wide range of approaches to modeling business processes. It provides a unique and comprehensive introduction to the well-known workflow patterns collection -- recurrent, generic constructs describing common business process modeling and execution scenarios, presented in the form of problem-solution dialectics. The underlying principles of the patterns approach ensure that they are independent of any specific enabling technology, representational formalism, or modeling approach, and thus broadly applicable across the business process modeling and business process technology domains. The authors, drawing on extensive research done by the Workflow Patterns Initiative, offer a detailed introduction to the fundamentals of business process modeling and management; describe three major pattern catalogs, presented from control-flow, data, and resource perspectives; and survey related BPM patterns. The book, a companion to the authoritative Workflow Patterns website, will be an essential resource for both academics and practitioners working in business process modeling and business process management.

Journal ArticleDOI
TL;DR: An overview of the prevailing approaches to design a business process architecture is provided; the evaluations showed that practitioners have a preference for approaches that are based on reference models and approaches that are based on the identification of business functions or business objects.
Abstract: With the uptake of business process modelling in practice, the demand grows for guidelines that lead to consistent and integrated collections of process models. The notion of a business process architecture has been explicitly proposed to address this. This paper provides an overview of the prevailing approaches to design a business process architecture. Furthermore, it includes evaluations of the usability and use of the identified approaches. Finally, it presents a framework for business process architecture design that can be used to develop a concrete architecture. The use and usability were evaluated in two ways. First, a survey was conducted among 39 practitioners, in which the opinion of the practitioners on the use and usefulness of the approaches was evaluated. Second, four case studies were conducted, in which process architectures from practice were analysed to determine the approaches or elements of approaches that were used in their design. Both evaluations showed that practitioners have a preference for using approaches that are based on reference models and approaches that are based on the identification of business functions or business objects. At the same time, the evaluations showed that practitioners use these approaches in combination, rather than selecting a single approach.

Journal ArticleDOI
TL;DR: In this paper, the preference for different process representation forms depending on the task setting and cognitive style of the user was examined empirically, and it was found that users consistently prefer diagrams over other representation formats.
Abstract: Process models describe someone's understanding of processes. Processes can be described using unstructured, semi-formal or diagrammatic representation forms. These representations are used in a variety of task settings, ranging from understanding processes to executing or improving processes, with the implicit assumption that the chosen representation form will be appropriate for all task settings. We explore the validity of this assumption by examining empirically the preference for different process representation forms depending on the task setting and cognitive style of the user. Based on data collected from 120 business school students, we show that preferences for process representation formats vary dependent on application purpose and cognitive styles of the participants. However, users consistently prefer diagrams over other representation formats. Our research informs a broader research agenda on task-specific applications of process modeling. We offer several recommendations for further research in this area.

Journal ArticleDOI
TL;DR: A Sobol-set-assisted artificial neural network replaces the computationally expensive kinetic model of long-chain-branched poly(vinyl acetate) as a fast and efficient surrogate model; the work also introduces a logical way of designing ANN architectures, in which the outperformance of multiple-layer networks justifies eliminating the heuristic approach.
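The surrogate-modeling workflow behind this TL;DR is: sample the expensive model at well-spread design points, then fit a cheap approximation. In the sketch below, a golden-ratio additive-recurrence sequence stands in for a Sobol set, and a closed-form linear least-squares fit stands in for the neural network; the "expensive" model is a placeholder.

```python
import math

def quasi_random(n):
    """Low-discrepancy points in [0, 1) via additive recurrence (a Sobol stand-in)."""
    phi = (math.sqrt(5) - 1) / 2
    return [(i * phi) % 1.0 for i in range(1, n + 1)]

def expensive_model(x):
    """Placeholder for a costly kinetic model evaluation."""
    return 3.0 * x + 1.0

xs = quasi_random(50)
ys = [expensive_model(x) for x in xs]

# fit the surrogate: simple linear regression in closed form
n, sx, sy = len(xs), sum(xs), sum(ys)
sxx, sxy = sum(x * x for x in xs), sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(round(slope, 3), round(intercept, 3))   # 3.0 1.0: the surrogate recovers the model
```

Because the placeholder is exactly linear, the fit is exact; the paper's point is that the same sampling-then-fitting pipeline, with an ANN as the surrogate, works for genuinely nonlinear kinetics.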

Journal ArticleDOI
TL;DR: The proposed framework is independent of any existing formalism, and provides a conceptually rich and exhaustive ontology and semantics of norms needed for business process compliance checking, and can be used to compare different compliance management frameworks (CMFs).
Abstract: By definition, regulatory rules (in a legal context called norms) intend to achieve specific behaviour from business processes, and might be relevant to the whole or part of a business process. They can impose conditions on different aspects of process models, e.g., control flow, data, and resources. Based on these rule sets, norms can be classified into various classes and sub-classes according to their effects. This paper presents an abstract framework consisting of a list of norms and a generic compliance checking approach based on the idea of (possible) executions of processes. The proposed framework is independent of any existing formalism, and provides a conceptually rich and exhaustive ontology and semantics of norms needed for business process compliance checking. Among other uses, the proposed framework can be used to compare different compliance management frameworks (CMFs).

Book ChapterDOI
10 Oct 2016
TL;DR: DIME is an integrated solution for the rigorous model-driven development of sophisticated web applications based on the Dynamic Web Application (DyWA) framework that is designed to accelerate the realization of requirements in agile development environments.
Abstract: We present DIME, an integrated solution for the rigorous model-driven development of sophisticated web applications based on the Dynamic Web Application (DyWA) framework, which is designed to accelerate the realization of requirements in agile development environments. DIME provides a family of Graphical Domain-Specific Languages (GDSLs), each of which is tailored towards a specific aspect of typical web applications, including persistent entities (i.e., a data model), business logic in the form of various types of process models, the structure of the user interface, and access control. They are modeled on a high level of abstraction in a simplicity-driven fashion that focuses on describing what application is sought instead of how it is realized. The choice of platform, programming language, and frameworks is moved to the corresponding (full) code generator.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: BPMN4CPS is proposed, which provides a set of extensions for BPMN to properly and accurately model CPS processes, and a case study of an ambulance drone system is presented.
Abstract: Modeling is one of the most important topics in the domain of Cyber-Physical Systems (CPS). In the field of process modelling, Business Process Model and Notation (BPMN) is the most widely used standard. However, BPMN remains too limited to cater for the specific characteristics and properties of CPS, such as real-world properties. In this paper, we propose BPMN4CPS, which provides a set of extensions for BPMN to properly and accurately model CPS processes. In order to illustrate the applicability of BPMN4CPS, we present a case study of an ambulance drone system.

Journal ArticleDOI
TL;DR: In this article, the authors propose alternative approaches to speed up process model repair, dramatically reducing the number of alignment computations between process models and logs while still returning near-optimal repairs.
Abstract: The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.
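The cost-based alignment underlying the fitness computation can be illustrated with a minimal dynamic program over a single trace and one candidate model run. This is a hedged sketch, not the ProM implementation: `alignment_cost`, `trace_fitness`, the unit move costs, and the normalization are all illustrative assumptions.

```python
def alignment_cost(trace, run, move_cost=1):
    """Minimal cost of aligning a log trace with one model run.

    Synchronous moves (matching activities) are free; a log move
    (event with no model counterpart) or a model move (activity
    with no matching event) each cost `move_cost`.
    """
    n, m = len(trace), len(run)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * move_cost              # only log moves remain
    for j in range(1, m + 1):
        D[0][j] = j * move_cost              # only model moves remain
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = D[i - 1][j - 1] if trace[i - 1] == run[j - 1] else float("inf")
            D[i][j] = min(sync,
                          D[i - 1][j] + move_cost,   # log move
                          D[i][j - 1] + move_cost)   # model move
    return D[n][m]

def trace_fitness(trace, model_runs):
    """1.0 when some run replays the trace exactly; lower otherwise."""
    best = min(alignment_cost(trace, run) for run in model_runs)
    worst = len(trace) + min(len(run) for run in model_runs)
    return 1.0 - best / worst
```

For example, `alignment_cost("abcd", "abd")` is 1 (one log move for the unmatched `c`). Repair then amounts to searching, within a bounded number of insert/skip edits to the model, for the variant whose runs minimize this cost over the whole log, which is why reducing the number of alignment computations matters.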

Journal ArticleDOI
01 Aug 2016
TL;DR: Behavioral Process Mining is proposed as an alternative approach to highlight relevant subprocesses representing meaningful collaborative work practices, based on applying hierarchical graph clustering to the set of instance graphs generated by a process.
Abstract: Real-world applications provide many examples of unstructured processes, where process execution is mainly driven by contingent decisions taken by the actors, so that the process is rarely repeated in exactly the same way. In these cases, traditional Process Discovery techniques, aimed at extracting complete process models from event logs, reveal their limits: when applied to logs of unstructured processes, they usually return complex, "spaghetti-like" models that provide limited support to analysts. As a remedy, in the present work we propose Behavioral Process Mining as an alternative approach that highlights relevant subprocesses representing meaningful collaborative work practices. The approach is based on applying hierarchical graph clustering to the set of instance graphs generated by a process. We also describe a technique for building instance graphs from traces. We assess the advantages and limits of the approach on a set of synthetic and real-world experiments.
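A minimal version of the clustering step can be sketched by treating each instance graph as a set of directed edges (pairs of activity names) and merging clusters by single-linkage agglomeration under a Jaccard distance threshold. The function names, the distance, and the threshold are illustrative assumptions, not the authors' technique in detail.

```python
def jaccard_distance(g1, g2):
    """Distance between two instance graphs given as sets of directed edges."""
    union = len(g1 | g2)
    return 1.0 - len(g1 & g2) / union if union else 0.0

def cluster_instance_graphs(graphs, threshold=0.5):
    """Single-linkage agglomerative clustering over instance graphs.

    Repeatedly merges the closest pair of clusters whose single-linkage
    distance is at most `threshold`; each surviving cluster groups
    instance graphs that share a common work practice.
    """
    clusters = [[g] for g in graphs]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(jaccard_distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d <= threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
```

With a threshold of 0.5, two executions sharing most of their activity edges end up in the same cluster, while an unrelated execution remains a singleton.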

Journal Article
TL;DR: In this paper, the authors analyzed 585 BPMN 2.0 process models from six companies and found that split and join representations, message flow, the lack of proper model decomposition, and labeling related to quality issues.
Abstract: Many organizations use business process models to document business operations and formalize business requirements in software-engineering projects. The Business Process Model and Notation (BPMN), a specification by the Object Management Group, has evolved into the leading standard for process modeling. One challenge is BPMN's complexity: it offers a huge variety of elements and often several representational choices for the same semantics. This raises the question of how well modelers can deal with these choices. Empirical insights into BPMN use from the practitioners' perspective are still missing. To close this gap, researchers analyzed 585 BPMN 2.0 process models from six companies. They found that split and join representations, message flow, the lack of proper model decomposition, and labeling were related to quality issues. They give five specific recommendations on how to avoid these issues.

Book ChapterDOI
14 Nov 2016
TL;DR: In this article, the authors propose an alternative approach that separates the concern of producing accurate models with ensuring their structuredness, sometimes sacrificing the former to ensure the latter, and apply a well-known heuristic that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one.
Abstract: This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our "discover and structure" approach outperforms traditional "discover structured" approaches with respect to a range of accuracy and complexity measures.