Fuzzy mining: adaptive process simplification based on multi-perspective metrics
Citations
Discovering block-structured process models from event logs - a constructive approach
Time prediction based on process mining
Process mining in healthcare
Business process analysis in healthcare environments: A methodology based on process mining
Process mining
References
Data clustering: a review
Workflow mining: discovering process models from event logs
Graph Clustering by Flow Simulation
Partitioning sparse matrices with eigenvectors of graphs
Related Papers (5)
Conformance checking of processes based on monitoring real behavior
Process Mining Manifesto
Frequently Asked Questions (15)
Q2. What are the future works mentioned in the paper "Fuzzy mining - adaptive process simplification based on multi-perspective metrics"?
Further work will concentrate on extending the set of metric implementations and improving the simplification algorithm. The success of process mining will depend on whether it is able to balance these conflicting goals sensibly.
Q3. What is the important implementation for binary significance?
As with unary significance, the log-based frequency significance metric is also the most important implementation for binary significance.
Q4. Why is it important to remove edges from the model first?
Removing edges from the model first is important because, due to the less-structured nature of real-life processes and the measurement of long-term relationships, the initial model contains deceptive ordering relations that do not correspond to valid behavior and need to be discarded.
Q5. What are the phases of the process model that remove edges?
The first two phases, conflict resolution and edge filtering, remove edges (i.e., precedence relations) between activity nodes, while the final aggregation and abstraction phase removes and/or clusters less-significant nodes.
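The three phases described above can be sketched as follows. This is a hedged, illustrative outline only: the function names, the edge-set representation, and the heuristics inside each stub are assumptions for the sake of a runnable example, not the paper's actual algorithm (which, for instance, clusters less-significant nodes rather than simply dropping their edges).

```python
# Illustrative three-phase simplification pipeline (assumed structure).
# A model is represented as a set of directed edges (a, b) plus
# significance mappings for edges and nodes.

def resolve_conflicts(edges, sig):
    """Phase 1 (conflict resolution): for each pair of opposing edges
    A->B and B->A, keep only the more significant one. Simplified
    heuristic; ties keep both."""
    keep = set()
    for (a, b) in edges:
        if (b, a) in edges and sig[(b, a)] > sig[(a, b)]:
            continue  # the reverse edge is more significant
        keep.add((a, b))
    return keep

def filter_edges(edges, sig, cutoff):
    """Phase 2 (edge filtering): drop edges whose significance
    falls below a cutoff."""
    return {e for e in edges if sig[e] >= cutoff}

def abstract_nodes(edges, node_sig, cutoff):
    """Phase 3 (aggregation/abstraction): here simply remove edges
    touching less-significant nodes; the paper's clustering of such
    nodes is omitted in this sketch."""
    return {(a, b) for (a, b) in edges
            if node_sig[a] >= cutoff and node_sig[b] >= cutoff}
```

For example, with edges `{A->B, B->A, B->C}` and significances `0.9, 0.3, 0.5`, phase 1 discards the weaker `B->A`, and an edge cutoff of `0.6` in phase 2 then leaves only `A->B`.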
Q6. How long did it take to simplify the model?
Deriving all metrics from the mentioned log was performed in less than ten seconds, while simplifying the resulting model took less than two seconds on a 1.8 GHz dual-core machine.
Q7. What is the reliable method of mining logs?
It has been mined from machine test logs using the Heuristics Miner, one of the traditional process mining techniques that is most resilient to noise in logs [14].
Q8. What is the main concept of the approach to process mining?
Process mining techniques which are suitable for less-structured environments need to be able to provide a high-level view on the process, abstracting from undesired details.
Q9. What is the significance of a relation in a process model?
By dividing the significance of an ordering relation A → B by the sum of all its competing relations' significances, the authors obtain the importance of this relation in its local context.
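The normalization described in this answer can be shown in a few lines. This is a hedged sketch: the `sig` dictionary and the choice of "competing relations" (here, all other edges leaving the same source node) are illustrative assumptions, not the paper's exact definition.

```python
# Relative significance of an ordering relation in its local context
# (illustrative; `sig` maps relations (a, b) to significance values).

def relative_significance(sig, a, b):
    """Significance of a -> b divided by the summed significance of
    its competing relations (assumed here: all edges leaving a)."""
    competing = sum(s for (x, _), s in sig.items() if x == a)
    return sig[(a, b)] / competing if competing else 0.0

sig = {("A", "B"): 0.6, ("A", "C"): 0.2, ("A", "D"): 0.2}
print(relative_significance(sig, "A", "B"))  # 0.6 / 1.0 = 0.6
```

Even a globally weak relation can thus be locally important if its competitors are weaker still.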
Q10. What distinguishes Fuzzy Mining from other mining techniques?
The foundation on multi-perspective metrics (i.e., looking at all aspects of the process at once), its interactive and explorative nature, and the integrated simplification algorithm clearly distinguish Fuzzy Mining from all previous process mining techniques.
Q11. What are the useful tools for analyzing them?
These are notoriously flexible and unstructured environments, and the authors hold their approach to be one of the most useful tools for analyzing them so far.
Q12. How do the authors simplify the process model?
The authors apply three transformation methods to the process model, which successively simplify specific aspects of it.
Q13. What are the popular solutions for supporting processes?
Yet the most popular solutions for supporting processes do not enforce any defined behavior at all, but merely offer functionality like sharing data and passing messages between users and resources.
Q14. What is the funding for this research?
This research is supported by the Technology Foundation STW, applied science division of NWO and the technology programme of the Dutch Ministry of Economic Affairs.
Q15. What is the metric for evaluating event classes?
The data type correlation metric evaluates event classes as highly correlated when subsequent events share a large number of data types (i.e., attribute keys).
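The idea behind this metric can be sketched as an overlap measure on attribute-key sets. This is an assumption-laden illustration: a Jaccard-style ratio is used here for concreteness, which may differ from the paper's exact formula.

```python
# Illustrative data type correlation between two subsequent events,
# each represented by its set of attribute keys (data types).
# Jaccard-style overlap is an assumption, not the paper's definition.

def data_type_correlation(keys_a, keys_b):
    """Fraction of attribute keys shared by two subsequent events."""
    union = keys_a | keys_b
    return len(keys_a & keys_b) / len(union) if union else 0.0

# Two events sharing most of their attribute keys correlate highly:
print(data_type_correlation({"order_id", "amount", "customer"},
                            {"order_id", "amount"}))  # 2/3
```

Events that carry largely the same data types are thus treated as belonging closely together, which feeds into the aggregation of correlated, less-significant activities.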