Process Querying: Enabling Business Intelligence
through Query-Based Process Analytics
Artem Polyvyanyy (a,*), Chun Ouyang (a), Alistair Barros (a), Wil M. P. van der Aalst (b,a)
(a) Queensland University of Technology, Brisbane, Australia
(b) Eindhoven University of Technology, Eindhoven, The Netherlands
Abstract
The volume of process-related data is growing rapidly: more and more business operations are being supported and monitored by information
systems. Industry 4.0 and the corresponding industrial Internet of Things are about to generate new waves of process-related data, next to the
abundance of event data already present in enterprise systems. However, organizations often fail to convert such data into strategic and tactical
intelligence. This is due to the lack of dedicated technologies that are tailored to effectively manage the information on processes encoded in
process models and process execution records. Process-related information is a core organizational asset which requires dedicated analytics to
unlock its full potential. This paper proposes a framework for devising process querying methods, i.e., techniques for the (automated) management
of repositories of designed and executed processes, as well as models that describe relationships between processes. The framework is composed
of generic components that can be configured to create a range of process querying methods. The motivation for the framework stems from use
cases in the field of Business Process Management. The design of the framework is informed by and validated via a systematic literature review.
The framework structures the state of the art and points to gaps in existing research. Process querying methods need to address these gaps to better
support strategic decision-making and provide the next generation of Business Intelligence platforms.
Keywords: Process querying, process management, process analytics, process intelligence, process science, business intelligence
1. Introduction
Business Process Management (BPM) is the discipline that
combines approaches for the design, execution, control, mea-
surement, and optimization of business processes. Most larger
organizations have adopted BPM principles (e.g., designing
processes explicitly). A growing, but still limited, number of
organizations use explicit BPM systems, i.e., information sys-
tems directly driven and controlled by explicit process mod-
els. Business Intelligence (BI) systems focus on the dissemina-
tion of business-related data without considering process mod-
els. Hence, one can easily witness the gap between data-driven
BI approaches and process-centric BPM approaches. Process
mining approaches aim to bridge this gap [1]. Like other BPM
approaches, process mining is process-centric. However, unlike
most BPM approaches, it is driven by factual event data rather
than hand-made models.
Process mining is closely related to the term process ana-
lytics [2, 3] which refers to approaches, techniques, and tools to
provide process participants, decision makers, and other stake-
holders with insights about the efficiency and effectiveness of
operational processes. The search, correlation, aggregation,
analysis, and visualization of process events can support insights
and improvements in performance, quality, compliance, fore-
casting and planning, of processes operating in dynamic com-
mercial settings.

[Footnote: Corresponding author. Email addresses:
artem.polyvyanyy@qut.edu.au (Artem Polyvyanyy),
c.ouyang@qut.edu.au (Chun Ouyang),
alistair.barros@qut.edu.au (Alistair Barros),
w.m.p.v.d.aalst@tue.nl (Wil M. P. van der Aalst)]

Most commercial tools, e.g., Splunk, SAP Business Process
Improvement, Pentaho, and Adobe Analytics, focus on
purely structural associations in organizational
information, where process execution is measured via coarse-
grained events (e.g., start and end of process execution) in line
with classical performance-oriented business intelligence anal-
ysis of organizational units, resources, products, services, etc.
This is in stark contrast with process mining approaches that
provide fact-based insights to support process improvements
[1]. Process discovery techniques can be used to learn pro-
cess models from event logs. However, process mining extends
far beyond process discovery and includes topics like confor-
mance checking, bottleneck analysis, decision mining, organi-
zational mining, predictive process analytics, etc. All of these
process mining approaches have in common that they seek the
confrontation between event data (i.e., observed behavior) and
process models (hand-made or discovered automatically).
Through these developments, at least three broad contexts
for process analytics can be identified to frame further devel-
opment of supportive techniques. Firstly, temporal contexts are
important where past and present process data are retrieved and
the future behavior of processes can be projected. Secondly,
process behavior needs to be understood in different organiza-
tional contexts, not only the operational level, but also at strate-
gic and tactical levels, given reflections of processes in higher-
level architecture models. Thirdly, productivity contexts nowa-
days focus not only on transactional considerations, through
policy and performance compliance checks, but also on trans-
formational opportunities, whereby insights into how processes
can be standardized, reused, and rapidly adapted, are crucial.
Process querying studies (automated) methods for manag-
ing, e.g., filtering or manipulating, repositories of models that
describe observed and/or envisioned processes, and relation-
ships between the processes. A process querying method is a
technique that, given a process repository and a process query,

Polyvyanyy et al. / Decision Support Systems, The Authors’ Version (2017) 1–18 2
[Figure 1. DSRM process for the process querying framework:
Identify Problem and Motivate (lack of consensus in research on
methods for process querying); Define Objectives of a Solution
(develop a framework for process querying methods); Design and
Develop (the process querying framework); Demonstrate and
Evaluate (validation of the framework via a systematic literature
review); Communicate (the paper at hand).]
systematically implements the query in the repository, where
a process query is a (formal) instruction to manage a process
repository. The paper addresses major limitations of techniques
for process querying, which often analyze business processes
on a single model scope and ignore process semantics aspects.
Note that a recent survey demonstrates the lack of, and the need
for, dedicated precise process querying methods grounded in
execution semantics rather than the structure of business pro-
cess models [4].
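To make this notion concrete, the following sketch treats a process querying method as a function that takes a process repository and a process query and implements the query in the repository. All names and data structures below are illustrative assumptions, not artifacts of the paper; the formal definitions appear in Section 3.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessRepository:
    """Illustrative repository of process models and execution logs."""
    models: dict = field(default_factory=dict)  # model id -> model
    logs: dict = field(default_factory=dict)    # case id -> event trace

@dataclass
class ProcessQuery:
    """A (formal) instruction to manage a process repository."""
    intent: str          # e.g., "read", "create", "update", "delete"
    condition: callable  # predicate over repository entries

def execute(repo: ProcessRepository, query: ProcessQuery):
    """A process querying method: systematically implements the
    query in the repository. Only the 'read' intent is sketched."""
    if query.intent == "read":
        return {mid: m for mid, m in repo.models.items()
                if query.condition(m)}
    raise NotImplementedError(query.intent)

repo = ProcessRepository(models={"p1": {"tasks": ["a", "b"]},
                                 "p2": {"tasks": ["a", "c"]}})
q = ProcessQuery("read", lambda m: "b" in m["tasks"])
print(execute(repo, q))  # only "p1" contains activity "b"
```

The separation of the query (a declarative instruction) from the method (its systematic implementation) mirrors the framework's component-based design discussed in Section 4.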
Concretely, this paper proposes the Process Querying Frame-
work, which aims to guide development of process querying
methods. Given a process repository and a process query that
specifies a formal instruction to manage the given repository,
the corresponding process querying problem consists of imple-
menting the instruction on the repository. The proposed frame-
work is an abstract system in which components providing gene-
ric functionality can be selectively replaced resulting in a new
process querying method. The framework emphasizes unified
process querying based on searching process structure and be-
havior, which includes the designed and observed behavior. Pro-
cesses often exhibit complex alignments with higher manifes-
tations of processes through strategic and tactical models in the
organizational pyramid. Moreover, results of process querying
methods must be effectively interpreted by stakeholders. The
framework proposed in this paper addresses these concerns.
To develop the framework, we use (an adapted version of)
the Design Science Research Methodology (DSRM) by Peffers
et al. [5] that follows the guidelines by Hevner et al. [6] for the re-
quired elements of design research. The framework is a viable
artifact that is produced as an outcome of this design endeavor
(Guideline 1: Design of an Artifact, refer to [6] for details).
The corresponding DSRM process is depicted in Figure 1. The
process is initiated by the lack of consensus in the design of
methods for process querying. First, we perform a
systematic CRUD (Create, Read, Update, and Delete) analy-
sis over process repositories to identify a list of use cases for
managing process repositories (Guideline 5: Research Rigor).
The obtained use cases justify relevance of the problem (Guide-
line 2: Problem Relevance). The core objective of this work is
the development of a framework for devising process querying
methods. Hence, in the second step, we employ the identified
use cases and CRUD operations to elicit requirements and cat-
egories of process querying problems, which in this paper are
referred to as process query intents. Third, based on the de-
duced requirements, we rigorously define the process query-
ing problem and process querying method, and use these no-
tions as the basis for the design of the framework (Guideline
5: Research Rigor). Fourth, we validate the proposed frame-
work via a systematic literature review (Guideline 3: Design
Evaluation; Guideline 5: Research Rigor). The insights gained
from this evaluation are used to iterate the design of the frame-
work to cater for the features of the state of the art techniques
for managing process repositories while still satisfying the re-
quirements deduced from the use cases (Guideline 6: Design
as a Search Process). The conducted systematic review demon-
strates that the developed framework is consistent with process
querying methods in prior literature. Finally, we document and
discuss all the steps taken to design the framework, which is
also the main contribution of this work (Guideline 4: Research
Contributions), in the paper at hand (Guideline 7: Communica-
tion of Research).
The remainder of the paper is organized as follows. The
next section discusses how processes manifest horizontally
within the Business Process Management (BPM) lifecycle and
vertically at different levels of abstraction of the organizational
pyramid, and looks at use cases for managing process repos-
itories. Based on the insights gained in Section 2, Section 3
gives rigorous definitions of the process querying problem and
process querying method. Section 4 discusses the design of the
process querying framework, which is based on the formal no-
tions proposed in Section 3. Then, Section 5 suggests how the
proposed framework can be positioned in light of the broader
process analytics and BI. Section 6 validates the design of the
framework via an extensive literature review and states the re-
search gaps. Section 7 concludes the paper.
2. Process Querying Requirements
This section provides an exposition of the process querying require-
ments used to develop the process querying framework, which is
proposed in Sections 3 and 4. Section 2.1 provides a contex-
tual background of BPM generally used to understand different
forms of processes, how they relate to each other, and how they
are managed in the BPM lifecycle. The functional requirements
for process querying are then posited along the fundamental
operations relevant to data querying, i.e., Create, Read, Up-
date, and Delete (CRUD), applied to processes managed in pro-
cess repositories (Section 2.2). Non-functional requirements
for high performance query execution are also discussed (Sec-
tion 2.3). The process of requirements elicitation is based on
considering how these operations support the needs of process
management as understood through relevant BPM use cases,
as profiled in a comprehensive BPM survey [7]. The require-
ments, focused on CRUD operations, are referred to as process
query intents, one per process create, read, update, and delete
operation, each of which needs to be supported through the
proposed process querying framework.
2.1. Contexts for Process Management
We begin by considering the contexts in which
processes are managed and, thus, where process querying is
applicable. From a broad, organizational perspective, various
parts of systems, including business processes, can be seen at
different levels of business to IT systems architecture, typi-
cally depicted as a pyramid [8]. Seen from this perspective,

[Figure 2. Processes at different levels of the organizational
pyramid. From top to bottom: Strategic Business Architecture
(business models; strategic planning), Tactical Business
Architecture (business capability maps; tactical planning),
Enterprise Architecture (business-to-IT integrated models;
cross-systems operational planning across business and IT),
Operational Architecture (process architecture, detailed business
process models, resource models, target operating models;
operational planning for supporting business operations), and IT
Solution Architecture and Systems (solution design models,
configurable software architecture, task and workflow models,
system logs; IT planning for developing or procuring IT
solutions).]
a given process does not exist in isolation, but manifests in a
variety of forms and in different systems, at strategic, tactical
and operational levels of an organization. Processes may be
captured through dedicated modeling languages and techniques
and managed through BPM systems, e.g., Petri nets, BPMN,
UML Activity Diagrams, or they may be represented in other
forms, e.g., task lists in task management systems and transac-
tion processes in enterprise systems. Alternatively, they are less
explicit at higher levels of the pyramid. Instead, at these levels,
processes are instrumental to other methods or representations,
used for broader considerations of systems planning and coordi-
nation. Figure 2 shows an organizational pyramid, illustrating a
useful, structural context for process management—stretching
from business strategy down to IT systems.
The highest-level notion of processes plays a vital role in
strategic planning and the high-level representation of organi-
zations, through business models [9]. Represented typically
through strategic value chains (general activity dependencies
with no control flow), processes combine with policies, target
customers, product and service offerings, organizational struc-
tures and partners, to detail business models. Strategic value
chains reflect not so much process flows but value accretion,
together with key interactions with organizational and partner
roles. When linked to processes at lower levels, they allow
lower levels of processes in business and IT systems to be
steered through policies and other strategic considerations of
organizations.
Over the years, enterprise architecture has become an im-
portant bridge between tactical and operational levels, because
it allows further details of systems (e.g., services, processes,
and applications) to be aligned, thus supporting systems plan-
ning and governance from a cross-systems, i.e., enterprise
purview. Enterprise architecture frameworks such as the Zach-
man Framework [10], TOGAF [11], and RM-ODP [12] inte-
grate a number of modeling techniques and languages in sup-
port of this, with processes playing a central role in yielding
architecture coherence. For example, in Archimate [13] used in
TOGAF, processes are defined across business, application and
IT infrastructure layers, and are inter-linked across these while
also anchoring into other aspects such as services, resources
and information. At the operational architecture level, process
models take on a normative role, as opposed to being descrip-
tive at higher levels and executable through IT systems (to use
the broad positioning of processes from [7]). They guide the op-
erations of specific business areas and are developed through in-
dividual projects. Models are captured through multi-level pro-
cess architecture (from operational value chains to detailed pro-
cesses) entailing many-to-many relationships between elements
across levels and, thus, complex alignment challenges [14, 15].
At the lowest level, processes are a core part of IT systems
design and implementations. This involves configurable, so-
lution design models, executable models, and software appli-
cations with coded processes. Executable processes are also
in the form of process or document workflows, tasks lists and
other forms supported by BPM systems such as workflow man-
agement systems and task managers. Software design models
also rely on process concepts to capture and configure soft-
ware component dependencies, e.g., ERP solution maps and
software component interactions (see the exemplar software archi-
tecture of SAP's Business ByDesign [16]). Ultimately, pro-
cess instances are recorded as event sequences in logs. Events
capture timestamped data about executed activities and event
traces are aligned to process conceptions of software interac-
tions, e.g., transaction steps of asynchronously running busi-
ness objects in ERP systems [17].
As we can see, processes are effectively refined across the
architecture levels even if they are captured through different
techniques and languages having either no, partial, or precise,
semantics; correspondingly they are informal (high-level de-
scriptive processes), semi-formal (lower level descriptive pro-
cesses) or formal (normative and executable processes). Ide-
ally, they should be aligned with processes across all levels,
therefore, requiring correlation of processes through query lan-
guages (akin to data correlation support in database query lan-
guages, e.g., SQL joins and correlated sub-queries).
Complementary to this structural context of BPM is a func-
tional context seen through the classical BPM lifecycle [7],
with its comparatively narrower focus: process (re)design, im-
plementation/configuration, and execution/adaptation. The fo-
cus of the BPM lifecycle tends to be on lower levels of architec-

ture involving processes managed through BPM systems. Mod-
els may be (re)designed to capture requirements, refined and
configured as executable models for orchestration through IT
systems or as implementation logic in software code. In the ex-
ecution/adaptation phase, processes are orchestrated using exe-
cution systems and event logs are generated. Through runtime
execution and analysis of event data, processes may be adapted
for “in-situ” improvements and overcoming errors. The exe-
cution/adaptation phase feeds back into the (re)design phase,
whereby event data analysis is used to create long lasting design
improvements of process models. Thus, the BPM lifecycle pro-
vides a broader context for process querying requirements, with
various steps in the lifecycle offering indispensable insights for
how various process create, read, update, and delete operations
are combined in support of complex process management tasks.
2.2. Functional Requirements of Process Querying
To elicit requirements for different classes of management
methods over processes, in this section we perform the CRUD
analysis over an artifact of a process repository; note that the
notion of a process repository is formalized in Section 3. The
need for different CRUD operations over process repositories
is justified by mapping them onto (a subset of) the comprehen-
sive set of BPM use cases described in [7]. These use cases
refer to the creation of process models and data, and their usage
to improve, enact, and manage processes. The BPM use cases
were obtained by identifying interactions between artifacts such
as descriptive, normative, configurable, and executable models,
IT systems, event data, and a range of analysis results. Almost
300 BPM papers were mapped onto these use cases to justify
their importance. The twenty use cases reported in [7] are not
intended to be definitive or complete. Nevertheless, they help
to structure the possible CRUD operations over process reposi-
tories. In the context of process querying, we refer to these op-
erations as process query intents, which can be seen as seman-
tic classes of management instructions for process repositories.
We see process query intents as one of the configuration points
of the devised process querying framework; refer to Section 4
for further details.
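As a rough illustration of intents as a configuration point, the four CRUD-derived process query intents could be modeled as a dispatchable enumeration. This is a hypothetical sketch; the names and dispatch table are illustrative, not part of the framework's formalization.

```python
from enum import Enum

class QueryIntent(Enum):
    """Process query intents: semantic classes of management
    instructions over process repositories (illustrative names)."""
    CREATE_PROCESS = "create"  # insert designed/merged/composed models, logs
    READ_PROCESS = "read"      # retrieve models, fragments, instances, events
    UPDATE_PROCESS = "update"  # refine models, version them, append events
    DELETE_PROCESS = "delete"  # remove entire models selected by conditions

def dispatch(intent: QueryIntent) -> str:
    """A querying method can dispatch on the intent of an incoming query."""
    return {
        QueryIntent.CREATE_PROCESS: "insert result into repository",
        QueryIntent.READ_PROCESS: "select matching repository entries",
        QueryIntent.UPDATE_PROCESS: "rewrite matching entries in place",
        QueryIntent.DELETE_PROCESS: "remove matching entries",
    }[intent]

print(dispatch(QueryIntent.READ_PROCESS))
```

The use cases below each map onto one or more of these intents, possibly in specialized variants.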
Design Model. Process models are produced in a number of
ways, including creating models for the first time via design,
selecting an existing model, and reusing [18] existing mod-
els via either model merge or model composition. The Design
Model use case concerns the creation of models from “scratch”
by humans, capturing a current-state (as-is) or future-state (to-
be) of processes. To support this, a process query intent, Create
Process, should allow the insertion of newly designed models
in a process repository, e.g., when a model being captured is
saved in a process modeling tool. In addition, the intent, Update
Process, should support updates of model designs, where the
model being updated already exists in the repository. Similarly,
Delete Process should support in-situ deletions of mod-
els during design, whereby the entire model is selected through
the query conditions/parameters. Note, the distinction between
deleting an entire model and only deleting parts of a model,
where the latter can be rendered through an update query.
Select Model. The Select Model use case relates to the retrieval
of process models from a repository based on structural or be-
havioral match of processes. Correspondingly, Read Process
should select process models satisfying structural (graph struc-
tures) and behavioral (activity traces) based conditions. Al-
though this use case concerns models, we extend Read Pro-
cess to cover process models, process fragments, process in-
stances, e.g., sequences of events in logs, and individual events.
In terms of the behavior, the ability to select event traces should
cover executed process, simulated behavior (based on selected
workload and resource configurations) and permissible behav-
ior (modeled but not yet executed). For example, it should
be possible to provide as input behavioral activity traces and
retrieve both models (based on permissible behavior) and in-
stances (executed behavior) from a repository describing the
given traces.
Specific details of processes may be projected in query re-
sults, e.g., query results may need to only include start and end
activities of matched processes. Given that processes rarely ex-
ist in isolation but are linked to other processes, e.g., through
use cases, collaborative process, and processes at different lev-
els of abstraction in the systems architecture, the Read Process
intent should support process correlations in queries. An exam-
ple is to find all executable processes linked to a particular part
of an operational value chain, e.g., a stage of a value chain.
In terms of systems architecture, this would be expanded to
finding all executable processes linked to operational processes
which are linked to the corresponding part of the value chain.
Complementary to exact matching, similarity match [19] is also
critical for various management goals of processes, e.g., find-
ing similarity of a set of processes to a given process or finding
processes that are similar. It should be possible to reference
similarity search functions as part of Read Process both prior
to, and after search filters are evaluated (much like aggregate
functions apply in SQL statements).
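The combination of behavioral filters and similarity ranking described above can be sketched as follows. This is a hypothetical illustration: the repository layout, the trace predicate, and the similarity measure are assumptions, with difflib standing in for a process similarity function such as those surveyed in [19].

```python
from difflib import SequenceMatcher

# Hypothetical repository: each model reduced to its activity labels
# (structure) and its set of permissible activity traces (behavior).
repository = {
    "order":   {"labels": {"receive", "check", "ship"},
                "traces": [("receive", "check", "ship")]},
    "invoice": {"labels": {"receive", "approve", "pay"},
                "traces": [("receive", "approve", "pay")]},
}

def read_process(behavioral_filter, similarity_to, top=1):
    """Read Process sketch: evaluate a behavioral filter first, then
    rank surviving models by a similarity function, analogous to
    applying an aggregate after WHERE in an SQL statement."""
    candidates = {name: m for name, m in repository.items()
                  if any(behavioral_filter(t) for t in m["traces"])}
    ranked = sorted(candidates,
                    key=lambda n: SequenceMatcher(
                        None,
                        sorted(repository[n]["labels"]),
                        sorted(similarity_to)).ratio(),
                    reverse=True)
    return ranked[:top]

# Retrieve the model whose behavior starts with "receive" and whose
# labels are most similar to a given activity set:
print(read_process(lambda t: t[0] == "receive",
                   {"receive", "check", "ship"}))
```

Applying the similarity function after the filter corresponds to the post-filter placement mentioned above; placing it inside the filter predicate would correspond to the pre-filter placement.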
The model merging and model composition use cases are variants
of model production from existing models, where the resulting
models rely on the selection of processes from several models.
Merge Models. The Merge Models use case involves the cre-
ation of a new model based on combining parts of different
models. Examples include extending a process model with parts
of other models or taking different models and merging them
into one model. This use case is based on an elementary step of
merging through automated techniques [20], as opposed to the
preparatory and intermediate steps of identifying models, up-
dating them for fitness, etc., which involve update or deletion
of parts of models. Thus, the Merge Model use case, supported
through a specialized form of the Create Process, should take
as input a set of models, allow a merge function to be used on
these models, and insert the resultant model into a repository.
This intent could be practically implemented in a modeling tool
as part of a merge utility, where the merge function automati-
cally generates a create process, which can then be updated and
saved (committed) by a user. Since correctness of the result-
ing model is not guaranteed through merging, the Update Pro-
cess applies to support subsequent refinements of the merged

model, e.g., removal or updated connection of activities to en-
sure correct execution.
Compose Model. The Compose Model use case involves the
creation of a new model based on different, and, typically, reus-
able models. Like merge models, a specific variant of Create
Process, applicable for composition, should take as input a set
of models, allow an algorithmic composition of these models,
and insert the composition into the repository. Unlike merge
model, the composed model's parts can be related to the original
models, and a corresponding correlation should be explicitly
captured. Given the more structured nature of composition, the
subsequent refinement of models for correctness through the
Update Process queries is less likely to be required.
Following on from use cases concerning model production, we
now consider use cases related to process execution. These in-
volve the creation of executable models, typically from non-
executable models, e.g., normative process models designed at
the operational architecture level, the execution of models, the
creation of process instances through events, process monitoring,
and runtime process adaptation.
Refine Model. The need for the detailing of a higher-level
model into an executable model, through the Refine Model use
case, can, in fact, be generalized for model refinement across
different levels of systems architecture, refer to Figure 2. Each
level uses modeling techniques and languages with different de-
grees of semantics, with executable models needing to be pre-
cise and free of errors so that they can be executed. More-
over, platform-specific technical configuration details need to
be present, e.g., message correlations, data object mapping to
required schemas, for execution readiness. As such, Create
Process should support the initial creation (including version-
ing) of a refined model and linking it and specific parts of re-
finement with the parent model(s). To support this use case,
the Update Process applies for interim saves during refinement
steps and for flagging models as being in a verified, error-free
form for execution.
Enact Model. The Enact Model use case, relating to the in-
terpretation of executable models by BPM systems, has some-
what a subtle application for process querying. Model exe-
cution centers on the selection, scheduling, and execution of
activities, through execution engines. While the core execu-
tion components control the reading, scheduling and internal
state management of individual activities, the goals of efficient
memory management and reduced latency require that parts
of models be pre-fetched into memory, ahead of execution, akin
to database query execution strategies. For this, Read Pro-
cess should be used to support sequential “pre-fetch” of process
models in fragments aligned with platform specific constraints,
e.g., memory blocks. Thus, we envisage process querying to
be better exploited by process execution engines, down to low-
level technical concerns, i.e., read queries are generated through
execution components of BPM systems. Note that, because dif-
ferent parts of processes are candidates for downstream execution
through choice constructs in models, activity traces based on
model structure and behavior could be used as part of pre-fetch
optimization strategies, in line with database query execution
optimization strategies [21].
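The pre-fetch idea can be sketched as follows. This is illustrative only: the reduction of a model to a flat activity list and the block-size constraint are simplifying assumptions.

```python
def prefetch_fragments(model_activities, block_size):
    """Read Process sketch for enactment: sequentially 'pre-fetch' a
    process model in fragments aligned with a platform-specific
    constraint (here, a hypothetical memory-block size), akin to
    database query execution strategies."""
    for i in range(0, len(model_activities), block_size):
        yield model_activities[i:i + block_size]

model = ["receive", "validate", "approve", "pack", "ship", "bill"]
fragments = list(prefetch_fragments(model, block_size=2))
print(fragments)  # three fragments of two activities each
```

In a real engine, the fragmentation would follow the model's control flow (e.g., pre-fetching the likeliest branch after a choice) rather than a flat sequence.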
Log Event Data. When processes are executed, the instances
that are generated and run are recorded as events in system logs,
addressed through the Log Event Data use case. This can be
supported through Create Process and Update Process query
intents, which should specify instructions to create event logs,
traces, and events, as well as update traces by inserting fresh
events into traces in tandem with process execution. Times-
tamps and other logistical details provided by execution engines
can be logged in events.
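A minimal sketch of these two intents working in tandem with execution follows. The names are illustrative; real event logs additionally record case attributes, resources, and activity lifecycle transitions.

```python
from datetime import datetime, timezone

log = {}  # case id -> trace (list of events)

def create_trace(case_id):
    """Create Process intent (sketch): start a fresh trace in the log."""
    log[case_id] = []

def append_event(case_id, activity):
    """Update Process intent (sketch): insert a fresh, timestamped
    event into a trace in tandem with process execution."""
    log[case_id].append({
        "activity": activity,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

create_trace("case-1")
append_event("case-1", "receive order")
append_event("case-1", "check stock")
print([e["activity"] for e in log["case-1"]])
```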
Monitor. The Monitor use case, relating to the active reading of
system logs at runtime for evaluating responsiveness, through-
put, resource cost, and other performance indicators, can be
supported via Read Process queries, applied over different pro-
cess instance sets: individual, case based, systems wide, etc.
Adapt While Running. The Adapt While Running use case addresses ad-hoc or permanent model changes required at runtime due to emergent requirements, e.g., making sequentially ordered activities run in parallel in urgent situations. These practices can be supported through Update Process queries. Note that runtime adaptations affect process instances: newly generated events will result from the changed models, yet remain inter-linked with events of the previous models. Updated models therefore need to be versioned, both within the current change release cycle and against all previous changes. A specific challenge is to ensure that updates preserve integrity: models used as input for change must be checked and compared against the intended (to-be) processes to ensure proper continuity of execution, since, e.g., stopping a process in the middle of iterated activities (within a loop) could result in integrity issues.
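The versioning and integrity requirements above can be sketched as follows; the `ModelRepository` class and the `keeps_activities` check are hypothetical illustrations of append-only versioning with an integrity gate, not the paper's design:

```python
class ModelRepository:
    """Versioned model store: an Update Process query appends a new
    version rather than overwriting, so running instances stay linked
    to the model version that produced their events."""

    def __init__(self, initial_model):
        self.versions = [initial_model]

    def update(self, new_model, check=lambda old, new: True):
        # Integrity gate: compare against the intended (to-be) process
        # before accepting the change.
        if not check(self.versions[-1], new_model):
            raise ValueError("update rejected: integrity check failed")
        self.versions.append(new_model)
        return len(self.versions) - 1  # version id for new instances

repo = ModelRepository({"a": ["b"], "b": []})

# Hypothetical check: every activity of the old model must survive,
# so no running instance is stranded on a deleted activity.
keeps_activities = lambda old, new: set(old) <= set(new)

v = repo.update({"a": ["b"], "b": ["c"], "c": []}, check=keeps_activities)
print(v, len(repo.versions))
```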
In addition to being designed, configured, implemented, and ex-
ecuted, processes can be used for non-trivial analysis related to
performance and verification. Of these, we consider the former
for query requirements.
Analyze Performance Based on Model. The Analyze Perfor-
mance Based on Model use case concerns the simulation of
executable process models for performance analysis in terms
of response times, latencies, resource utilization, throughput,
etc. Simulation techniques focus on specialized analysis such as
queueing networks or Markov chains to compute the expected
performance. The results of model simulation, resulting in sim-
ulated process instances, can be stored in repositories through
the use of Create Process and Update Process queries, gen-
erated by simulation tools. In addition, Read Process queries
should be used through these tools to retrieve simulated pro-
cesses for comparing how different systems configurations im-
pact the performance of models. Read Process should also sup-
port diagnosis of individual processes and the aggregate analy-
sis of sets of process instances, e.g., process analytics functions.
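As a concrete example of the expected-performance figures such analyses produce, the standard M/M/1 queueing formulas give utilization, mean population, and mean response time for a single activity with Poisson arrivals and exponential service; the function below simply restates those textbook formulas:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Expected steady-state performance of an M/M/1 station:
    rho = lambda/mu, L = rho/(1-rho), W = 1/(mu-lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: utilization >= 1")
    rho = arrival_rate / service_rate      # utilization
    l = rho / (1 - rho)                    # mean number in system
    w = 1 / (service_rate - arrival_rate)  # mean response time
    return {"utilization": rho, "avg_in_system": l, "avg_response_time": w}

# E.g., 4 cases/hour arriving at an activity that serves 5 cases/hour.
print(mm1_metrics(4, 5))
```

A simulation tool would estimate the same quantities empirically from simulated instances, and the analytical values provide a sanity check.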
Further use cases concerning process analysis involve both pro-
cess models and event data, concerning conformance checking
and performance analysis.

Citations

Industry 4.0: A bibliometric analysis and detailed overview (Journal ArticleDOI)
TL;DR: This paper summarizes the growth structure of Industry 4.0 during the last 5 years and provides a concise background overview of Industry 4.0-related works and various application areas.

Industry 4.0: Emerging themes and future research avenues using a text mining approach (Journal ArticleDOI)
TL;DR: The aim of this research is to identify the main overarching themes discussed in the past and track their evolution over time, and to propose a future research agenda for each overarching theme that considers the multidisciplinary nature of research efforts made on the topic.

Challenges of smart business process management: An introduction to the special issue (Journal ArticleDOI)
TL;DR: A framework is introduced that distinguishes three levels of business process management (multiprocess management, process model management, and process instance management) and identifies major contributions of prior research.

Industry's 4.0 transformation process: how to start, where to aim, what to be aware of (Journal ArticleDOI)
TL;DR: The authors describe how Industry 4.0 has fused digitalisation with traditional industrial processes, bridging the physical and virtual worlds and opening unimagined possibilities for 21st-century business growth.

Monotone Precision and Recall Measures for Comparing Executions and Specifications of Dynamic Systems (Journal ArticleDOI)
TL;DR: A new framework for the definition of behavioral quotients is proposed that can capture precision and recall measures between a collection of recorded executions and a system specification; the application of the quotients for capturing precision and recall is demonstrated.
References

Design science in information systems research (Journal ArticleDOI)
TL;DR: The objective is to describe the performance of design-science research in Information Systems via a concise conceptual framework and clear guidelines for understanding, executing, and evaluating the research.

Analyzing the past to prepare for the future: writing a literature review (Journal Article)
TL;DR: A review of prior, relevant literature is an essential feature of any academic project; it facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed.

A Design Science Research Methodology for Information Systems Research (Journal ArticleDOI)
TL;DR: The designed methodology effectively satisfies the three objectives of design science research methodology and has the potential to aid the acceptance of DS research in the IS discipline.

Principles of Model Checking (Book)
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.

A framework for information systems architecture (Journal ArticleDOI)
TL;DR: Information systems architecture is defined by creating a descriptive framework from disciplines quite independent of information systems and then, by analogy, specifying information systems architecture based upon the neutral, objective framework.
Frequently Asked Questions (13)
Q1. What are the contributions mentioned in the paper "Process querying: enabling business intelligence through query-based process analytics" ?

This paper proposes a framework for devising process querying methods, i. e., techniques for the ( automated ) management of repositories of designed and executed processes, as well as models that describe relationships between processes. 

Some approaches to managing vast collections of processes include the use of symbolic techniques (e.g., binary decision diagrams), manipulations with structural regularities in behavior models, and rigorous abstractions of processes. 

Because queries can formulate elaborate instructions that induce manipulations over large data sets, the user requires support to facilitate understanding of query results. 

The Analyze Performance Based on Model use case concerns the simulation of executable process models for performance analysis in terms of response times, latencies, resource utilization, throughput, etc. Simulation techniques focus on specialized analysis such as queueing networks or Markov chains to compute the expected performance. 

Executable processes also take the form of process or document workflows, task lists, and other forms supported by BPM systems such as workflow management systems and task managers. 

The Analyze Performance Using Event Data use case covers runtime monitoring and post-runtime analysis to check processes for execution characteristics such as response times, latencies, and throughput. 

The Check Conformance Using Event Data use case covers design-time, runtime, and post-runtime analysis to check that processes comply with business rules, business requirements, and model specifications. 

The BPM use cases were obtained by identifying interactions between artifacts such as descriptive, normative, configurable, and executable models, IT systems, event data, and a range of analysis results. 

Process querying studies (automated) methods for managing, e.g., filtering or manipulating, repositories of models that describe observed and/or envisioned processes, and relationships between the processes. 

Most of the approaches convert process logs into graphs and then apply FPSPARQL (an extension of SPARQL) [80] or graph-based search techniques [82, 83] to implement process querying. 
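A minimal sketch of this log-to-graph approach (plain Python, not FPSPARQL; the function names are illustrative): build a directly-follows graph from traces, then answer a simple successor query over it:

```python
from collections import defaultdict

def directly_follows_graph(traces):
    """Turn an event log (list of traces, each a list of activity
    names) into a directly-follows graph: an edge (a, b) with a count
    for every pair of consecutive events."""
    dfg = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dict(dfg)

def query_successors(dfg, activity):
    """Graph-based query: which activities directly follow `activity`?"""
    return {b for (a, b) in dfg if a == activity}

log = [["receive", "check", "approve", "ship"],
       ["receive", "check", "reject"]]
dfg = directly_follows_graph(log)
print(query_successors(dfg, "check"))
```

Graph query languages such as SPARQL generalize this idea to arbitrary path patterns over the same kind of edge-labeled structure.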

Concrete instantiations of the framework may rely on manual, semi-automated, or fully automated components responsible for the formalization of process querying instructions. 

By guiding business analysts and domain experts through preconfigured and intuitive (semi-automated) questionnaire instructions, one can attempt to translate business questions into low-level process query procedures that contribute towards answering the business question. 

As the authors can see, processes are effectively refined across the architecture levels even if they are captured through different techniques and languages having either no, partial, or precise semantics; correspondingly, they are informal (high-level descriptive processes), semi-formal (lower-level descriptive processes), or formal (normative and executable processes).