Author

Artem Polyvyanyy

Bio: Artem Polyvyanyy is an academic researcher from the University of Melbourne. The author has contributed to research in the topics of process modeling and process mining, has an h-index of 28, and has co-authored 118 publications receiving 2,234 citations. Previous affiliations of Artem Polyvyanyy include Queensland University of Technology and the University of Potsdam.


Papers
Book Chapter
16 Sep 2010
TL;DR: This paper provides two improvements to the Refined Process Structure Tree (RPST), and extends the applicability of the RPST to arbitrary directed graphs such that every node is on a path from some source to some sink.
Abstract: A business process is often modeled using some kind of a directed flow graph, which we call a workflow graph. The Refined Process Structure Tree (RPST) is a technique for workflow graph parsing, i.e., for discovering the structure of a workflow graph, which has various applications. In this paper, we provide two improvements to the RPST. First, we propose an alternative way to compute the RPST that is simpler than the one developed originally. In particular, the computation reduces to constructing the tree of the triconnected components of a workflow graph in the special case when every node has at most one incoming or at most one outgoing edge. Such graphs occur frequently in applications. Secondly, we extend the applicability of the RPST. Originally, the RPST was applicable only to graphs with a single source and single sink such that the completed version of the graph is biconnected. We lift both restrictions. Therefore, the RPST is then applicable to arbitrary directed graphs such that every node is on a path from some source to some sink. This includes graphs with multiple sources and/or sinks and disconnected graphs.
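
The extended applicability condition lends itself to a quick mechanical check. The sketch below is an editorial illustration (an assumption, not code from the paper): it tests whether every node of a directed workflow graph lies on a path from some source to some sink, which is the precondition under which the extended RPST is defined.

from collections import defaultdict

def on_source_sink_path(edges):
    # Check the extended applicability condition: every node lies on a path
    # from some source (no incoming edges) to some sink (no outgoing edges).
    # This sketches only the precondition, not the RPST computation itself.
    succ, pred = defaultdict(set), defaultdict(set)
    nodes = set()
    for u, v in edges:
        succ[u].add(v)
        pred[v].add(u)
        nodes.update((u, v))

    def reachable(starts, adj):
        seen, stack = set(starts), list(starts)
        while stack:
            n = stack.pop()
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    sources = {n for n in nodes if not pred[n]}
    sinks = {n for n in nodes if not succ[n]}
    forward = reachable(sources, succ)   # nodes reachable from some source
    backward = reachable(sinks, pred)    # nodes from which some sink is reachable
    return nodes <= (forward & backward)

# A small workflow graph with source 's', sink 'e', and a split at 'a'
edges = [("s", "a"), ("a", "b"), ("a", "c"), ("b", "e"), ("c", "e")]
print(on_source_sink_path(edges))  # True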

126 citations

Journal Article
TL;DR: Split Miner as discussed by the authors combines a novel approach to filter the directly-follows graph induced by an event log, with an approach to identify combinations of split gateways that accurately capture the concurrency, conflict and causal relations between neighbors in the graph.
Abstract: The problem of automated discovery of process models from event logs has been intensively researched in the past two decades. Despite a rich field of proposals, state-of-the-art automated process discovery methods suffer from two recurrent deficiencies when applied to real-life logs: (i) they produce large and spaghetti-like models; and (ii) they produce models that either poorly fit the event log (low fitness) or over-generalize it (low precision). Striking a trade-off between these quality dimensions in a robust and scalable manner has proved elusive. This paper presents an automated process discovery method, namely Split Miner, which produces simple process models with low branching complexity and consistently high and balanced fitness and precision, while achieving considerably faster execution times than state-of-the-art methods, measured on a benchmark covering twelve real-life event logs. Split Miner combines a novel approach to filter the directly-follows graph induced by an event log, with an approach to identify combinations of split gateways that accurately capture the concurrency, conflict and causal relations between neighbors in the directly-follows graph. Split Miner is also the first automated process discovery method that is guaranteed to produce deadlock-free process models with concurrency, while not being restricted to producing block-structured process models.
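
The directly-follows graph that Split Miner filters is straightforward to derive from an event log. The sketch below is a hedged illustration; the function names and the naive frequency threshold are assumptions, and Split Miner's actual filter is percentile-based and preserves connectivity rather than simply dropping infrequent arcs.

from collections import Counter

def directly_follows(log):
    # Count how often activity a is immediately followed by activity b in a trace.
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

def filter_dfg(dfg, threshold):
    # Naive frequency filter, only to illustrate the idea of pruning rare arcs.
    return {arc: n for arc, n in dfg.items() if n >= threshold}

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["a", "b", "c", "d"]]
print(filter_dfg(directly_follows(log), 2))  # keeps the frequent directly-follows arcs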

115 citations

Journal Article
TL;DR: It is argued that a behavioural abstraction may be leveraged to measure the compliance of a process log, i.e., a collection of cases, by utilising causal behavioural profiles, which capture the behavioural characteristics of process models and cases and can be computed efficiently.

102 citations

Journal Article
TL;DR: In this article, a behavioural abstraction is leveraged to measure the compliance of a process log, i.e., a collection of cases, and different compliance measures based on causal behavioural profiles are proposed.
Abstract: Process compliance measurement is getting increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processing, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded in state-based techniques and the trace equivalence criterion in particular. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log – a collection of cases. To this end, we utilise causal behavioural profiles that capture the behavioural characteristics of process models and cases, and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on findings of applying our approach in a case study with an international service provider.
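
To make the profile-based idea concrete, the following sketch (an editorial assumption, not the paper's exact definitions) derives a weak-order relation from a case and scores it against a set of exclusiveness constraints taken from a model.

def weak_order(case):
    # (a, b) is in the relation if a occurs before b somewhere in the case.
    rel = set()
    for i, a in enumerate(case):
        for b in case[i + 1:]:
            rel.add((a, b))
    return rel

def profile_compliance(case, exclusive_pairs):
    # Illustrative measure: share of observed orderings that do not violate
    # the model's exclusiveness constraints (a hypothetical simplification).
    observed = weak_order(case)
    violations = {(a, b) for (a, b) in observed
                  if (a, b) in exclusive_pairs or (b, a) in exclusive_pairs}
    return 1.0 if not observed else 1 - len(violations) / len(observed)

# The model declares 'c' and 'd' mutually exclusive, but the case executes both.
print(profile_compliance(["a", "b", "c", "d"], {("c", "d")}))  # ~0.83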

100 citations

Journal Article
TL;DR: This paper proposes an automated approach for querying a business process model repository for structurally and semantically relevant models and provides a business process model search engine implementation for evaluating the proposed approach.
Abstract: Determining similarity between business process models has recently gained interest in the business process management community. So far, similarity has been addressed separately at either the semantic or the structural level of process models. Moreover, most contributions that measure the similarity of process models assume an ideal case in which process models are enriched with semantics - a description of the meaning of process model elements. In real life, however, this entails a pre-processing phase that consumes heavy human effort and is often not feasible. In this paper we propose an automated approach for querying a business process model repository for structurally and semantically relevant models. Similar to search on the Internet, a user formulates a BPMN-Q query and, as a result, receives a list of process models ordered by relevance to the query. We provide a business process model search engine implementation for evaluating the proposed approach.
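
For a rough sense of query-by-relevance over a model repository, the sketch below ranks models by label overlap with a query. This is an assumed, much simpler relevance notion than BPMN-Q's structural and semantic matching; the repository and labels are made up for illustration.

def jaccard(a, b):
    # Jaccard similarity of two activity-label sets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_models(query_labels, repository):
    # Order repository models by decreasing label overlap with the query.
    scored = [(name, jaccard(query_labels, labels)) for name, labels in repository.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

repository = {
    "order-to-cash": {"receive order", "check credit", "ship goods", "send invoice"},
    "hiring": {"post job", "screen CV", "interview", "send offer"},
}
print(rank_models({"receive order", "send invoice"}, repository))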

81 citations


Cited by
Posted Content
01 Jan 2012
TL;DR: The 2008 crash has left all the established economic doctrines - equilibrium models, real business cycles, disequilibria models - in disarray, as discussed by the authors; a good viewpoint from which to take bearings anew lies in comparing the post-Great Depression institutions with those emerging from Thatcher and Reagan's economic policies: deregulation, exogenous vs. endogenous money, shadow banking vs. Volcker's Rule.
Abstract: The 2008 crash has left all the established economic doctrines - equilibrium models, real business cycles, disequilibria models - in disarray. Part of the problem is due to Smith's "veil of ignorance": individuals unknowingly pursue society's interest and, as a result, have no clue as to the macroeconomic effects of their actions: witness the Keynes and Leontief multipliers, the concept of value added, fiat money, Engel's law and technical progress, to name but a few of the macrofoundations of microeconomics. A good viewpoint from which to take bearings anew lies in comparing the post-Great Depression institutions with those emerging from Thatcher and Reagan's economic policies: deregulation, exogenous vs. endogenous money, shadow banking vs. Volcker's Rule. Very simply, the banks, whose lending determined deposits after Roosevelt and which were a public service, became private enterprises whose deposits determine lending. These changes underlay the Great Moderation preceding 2006, and the subsequent crash.

3,447 citations

Journal Article
TL;DR: When I started out as a newly hatched PhD student, one of the first articles I read and understood was Ray Reiter’s classic article on default logic, and I became fascinated by both default logic and, more generally, non-monotonic logics.
Abstract: When I started out as a newly hatched PhD student, back in the day, one of the first articles I read and understood (or at least thought that I understood) was Ray Reiter's classic article on default logic (Reiter, 1980). This was some years after the famous 'non-monotonic logic' issue of Artificial Intelligence in which that article appeared, but default logic was still one of the leading approaches, a tribute to the simplicity and power of the theory. As a result of reading the article, I became fascinated by both default logic and, more generally, non-monotonic logics. However, despite my fascination, these approaches never seemed terribly useful for the kinds of problem that I was supposed to be studying—problems like those in medical decision making—and so I eventually lost interest. In fact, non-monotonic logics seemed to me, and to many people at the time I think, not to be terribly useful for anything. They were interesting, and clearly relevant to the long-term goals of Artificial Intelligence as a discipline, but not of any immediate practical importance. This verdict, delivered at the end of the 1980s, continued, I think, to be true for the next few years while researchers working in non-monotonic logics studied problems that to outsiders seemed to be ever more obscure. However, by the end of the 1990s, it was becoming clear, even to folk as short-sighted as I, that non-monotonic logics were getting to the point at which they could be used to solve practical problems. Knowledge in action shows quite how far these techniques have come. The reason that non-monotonic logics were invented was, of course, in order to use logic to reason about the world. Our knowledge of the world is typically incomplete, and so, in order to reason about it, one has to make assumptions about things one does not know. This, in turn, requires mechanisms for both making assumptions and then retracting them if and when they turn out not to be true. Non-monotonic logics are intended to handle this kind of assumption making and retracting, providing a mechanism that has the clean semantics of logic, but which has a non-monotonic set of conclusions. Much of the early work on non-monotonic logics was concerned with theoretical reasoning, that is reasoning about the beliefs of an agent—what the agent believes to be true. Theoretical reasoning is the domain of all those famous examples like 'Typically birds fly. Tweety is a bird, so does Tweety fly?', and the fact that so much of non-monotonic reasoning seemed to focus on theoretical reasoning was why I lost interest in it. I became much more concerned with practical reasoning—that is reasoning about what an agent should do—and non-monotonic reasoning seemed to me to have nothing interesting to say about practical reasoning. Of course I was wrong. When one tries to formulate any kind of description of the world as the basis for planning, one immediately runs into applications of non-monotonic logics, for example in keeping track of the state of a changing world. It is this use of non-monotonic logic that is at the heart of Knowledge in action. Building on McCarthy's situation calculus, Knowledge in action constructs a theory of action that encompasses a very large part of what an agent requires to reason about the world. As Reiter says in the final chapter,
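
The make-assumptions-then-retract behaviour the review describes can be seen in a toy rendering of Reiter-style normal defaults. The sketch below is an editorial assumption for illustration, not material from the review or from Knowledge in Action.

def applies(default, facts):
    # A normal default (prerequisite : justification / conclusion) fires when the
    # prerequisite is known and the justification's negation is not known.
    prereq, justification, conclusion = default
    return prereq in facts and ("not " + justification) not in facts

def extend(facts, defaults):
    # Repeatedly apply defaults to a fixpoint; a toy single-extension computation
    # that ignores the subtleties of multiple extensions in full default logic.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for d in defaults:
            if applies(d, facts) and d[2] not in facts:
                facts.add(d[2])
                changed = True
    return facts

# "Typically birds fly": bird(tweety) : flies(tweety) / flies(tweety)
defaults = [("bird(tweety)", "flies(tweety)", "flies(tweety)")]
print(extend({"bird(tweety)"}, defaults))                          # concludes flies(tweety)
print(extend({"bird(tweety)", "not flies(tweety)"}, defaults))     # the default is blocked

Adding the fact "not flies(tweety)" removes the earlier conclusion, which is exactly the non-monotonic behaviour the review refers to.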

899 citations

01 Jan 2007
TL;DR: A translation apparatus is provided which comprises an inputting section for inputting a source document in a natural language and a layout analyzing section for analyzing layout information.
Abstract: A translation apparatus is provided which comprises: an inputting section for inputting a source document in a natural language; a layout analyzing section for analyzing layout information including cascade information, itemization information, numbered itemization information, labeled itemization information and separator line information in the source document inputted by the inputting section and specifying a translation range on the basis of the layout information; a translation processing section for translating a source document text in the specified translation range into a second language; and an outputting section for outputting a translated text provided by the translation processing section.
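
Read as an architecture, the patent describes a small pipeline: input, layout analysis to delimit the translation range, translation, output. The sketch below is a minimal stand-in under assumed behaviour; the lexicon lookup and the layout rule are placeholders, not the patented method.

def analyze_layout(document):
    # Stand-in for the layout analysis: keep non-empty lines as the translation range.
    return [line for line in document.splitlines() if line.strip()]

def translate_text(text, lexicon):
    # Placeholder word-for-word translation via dictionary lookup.
    return " ".join(lexicon.get(word, word) for word in text.split())

def translation_apparatus(document, lexicon):
    # Pipeline: input -> layout analysis -> translation -> output.
    return "\n".join(translate_text(line, lexicon) for line in analyze_layout(document))

lexicon = {"guten": "good", "Tag": "day", "danke": "thanks"}
print(translation_apparatus("guten Tag\n\n- danke", lexicon))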

740 citations

Journal Article
TL;DR: The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey and an overview of the state-of-the-art in BPM.
Abstract: Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provide an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.

739 citations

Journal Article
TL;DR: In this paper, the authors focus on the importance of maintaining a proper alignment between event logs and process models and elaborate on the realization of such alignments and their application to conformance checking and performance analysis.
Abstract: Process mining techniques use event data to discover process models, to check the conformance of predefined process models, and to extend such models with information about bottlenecks, decisions, and resource usage. These techniques are driven by observed events rather than hand-made models. Event logs are used to learn and enrich process models. By replaying history using the model, it is possible to establish a precise relationship between events and model elements. This relationship can be used to check conformance and to analyze performance. For example, it is possible to diagnose deviations from the modeled behavior. The severity of each deviation can be quantified. Moreover, the relationship established during replay and the timestamps in the event log can be combined to show bottlenecks. These examples illustrate the importance of maintaining a proper alignment between event log and process model. Therefore, we elaborate on the realization of such alignments and their application to conformance checking and performance analysis. © 2012 Wiley Periodicals, Inc.
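
Replay-based conformance can be illustrated with a heavily simplified sketch: walk a trace through a model given as a state-transition table and report the fraction of events the model can mimic. This is an assumed toy stand-in, far coarser than the alignment-based approach the paper elaborates on.

def replay_fitness(trace, model_moves):
    # Naive replay: count the events that the model can mimic from its current
    # state; fitness is the fraction of replayable events. Alignment-based
    # conformance checking instead searches for an optimal alignment between
    # log moves and model moves.
    state = "start"
    replayed = 0
    for event in trace:
        next_state = model_moves.get((state, event))
        if next_state is not None:
            state = next_state
            replayed += 1
        # On a deviation, stay in the current state and count the event as a misstep.
    return replayed / len(trace) if trace else 1.0

# A toy strictly sequential model: a -> b -> c
model_moves = {("start", "a"): "s1", ("s1", "b"): "s2", ("s2", "c"): "end"}
print(replay_fitness(["a", "b", "c"], model_moves))  # 1.0, fully conforming
print(replay_fitness(["a", "c", "b"], model_moves))  # deviations lower the fitness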

632 citations