Author

Dino Alic

Bio: Dino Alic is an academic researcher from the University of Sarajevo. The author has contributed to research in the topics of process mining and process modeling. The author has an h-index of 2 and has co-authored 5 publications receiving 8 citations.

Papers
Proceedings ArticleDOI
01 May 2016
TL;DR: An empirical comparison of functional and object-oriented programming languages, using analogous examples in C#, F#, Haskell, and Java, showed that Java was faster than the other three languages whose performance was measured.
Abstract: The choice of the first programming language and the corresponding programming paradigm is an important part of the software development process. Knowing the advantages and constraints of individual programming paradigms is important, as it can be crucial for successful software implementation. In this paper we conduct an empirical comparison of functional and object-oriented programming languages using analogous examples in C#, F#, Haskell, and Java. Three algorithms were implemented in each language: an algorithm for solving the N-queens problem, an algorithm for generating the n-th left-truncatable prime, and the merge sort algorithm. An overview of programming language efficiency is given by measuring two basic parameters: number of lines of code and program execution speed. System resource usage was also monitored during execution. Limited experiments showed that Java is faster than the other three languages whose performance was measured. Java was surprisingly fast on these problems, which are better suited to functional programming languages. Haskell was less memory-intensive (up to two times less than Java) with similar execution times, while the .NET languages were up to four times slower than Java. The object-oriented languages C# and Java required significantly more lines of code for all three algorithms than the functional language Haskell and the hybrid F#.
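For readers unfamiliar with one of the benchmarks, the following is a minimal illustrative sketch of generating the n-th left-truncatable prime, written in Python for brevity rather than in the paper's C#, F#, Haskell, or Java; it is not the authors' benchmark code.

```python
# Illustrative sketch (not the paper's benchmark code): the n-th
# left-truncatable prime, one of the three benchmarked algorithms.

def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small benchmark inputs."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def nth_left_truncatable_prime(n: int) -> int:
    """Breadth-first generation: prepend digits 1-9 to known truncatable primes."""
    found = []
    layer = [2, 3, 5, 7]          # single-digit primes are the base cases
    magnitude = 10
    while layer:
        found.extend(layer)
        if len(found) >= n:
            return sorted(found)[n - 1]
        # prepend each non-zero digit to every prime of the previous length
        layer = [d * magnitude + p
                 for p in layer
                 for d in range(1, 10)
                 if is_prime(d * magnitude + p)]
        magnitude *= 10
    raise ValueError("fewer than n left-truncatable primes exist")

if __name__ == "__main__":
    print(nth_left_truncatable_prime(10))  # -> 47
```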

7 citations

Book ChapterDOI
20 Jun 2019
TL;DR: The presented rule-based algorithm aims to find events belonging to the same case instance; its performance is analysed for different sizes of event logs and different levels of errors within the log files of a real process.
Abstract: Process mining is a technique for extracting process models from event logs. It can be used to discover, monitor, and improve real business processes by extracting knowledge from the event logs available in process-aware information systems. This paper is concerned with the problem of grouping events into instances and preparing data for process mining analysis. Information systems often do not store a unique identifier of the case instance, or errors occur in the system while events are recorded in the log files. To analyse the process, events must be grouped into case instances. The aim of the presented rule-based algorithm is to find events belonging to the same case instance. The performance of the algorithm has been analysed for different sizes of event logs and different levels of errors within the log files of a real process.
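As an illustration of the kind of rule-based correlation the paper addresses, the following minimal Python sketch groups events into case instances. The event attributes and the two rules used here are assumptions for the example, not the paper's actual rule set.

```python
# Minimal sketch of rule-based event-to-case correlation (illustrative only).
# Assumption: each event carries an activity name, a timestamp, and a
# correlation attribute such as a customer id; a new case starts whenever
# the process's designated start activity is observed.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Event:
    activity: str
    timestamp: datetime
    customer_id: str          # correlation attribute assumed for this sketch

@dataclass
class Case:
    case_id: int
    events: List[Event] = field(default_factory=list)

START_ACTIVITY = "submit request"   # illustrative rule: this activity opens a case

def group_events(events: List[Event]) -> List[Case]:
    """Assign events to case instances using two simple rules:
    1) a start activity opens a new case for that customer;
    2) every other event joins that customer's most recent open case."""
    cases: List[Case] = []
    open_case: Dict[str, Case] = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.activity == START_ACTIVITY or ev.customer_id not in open_case:
            case = Case(case_id=len(cases) + 1)
            cases.append(case)
            open_case[ev.customer_id] = case
        open_case[ev.customer_id].events.append(ev)
    return cases

if __name__ == "__main__":
    log = [
        Event("submit request", datetime(2019, 6, 20, 9, 0), "c1"),
        Event("review", datetime(2019, 6, 20, 9, 30), "c1"),
        Event("submit request", datetime(2019, 6, 20, 9, 45), "c2"),
        Event("approve", datetime(2019, 6, 20, 10, 0), "c1"),
    ]
    for case in group_events(log):
        print(case.case_id, [e.activity for e in case.events])
```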

6 citations

Proceedings ArticleDOI
01 Oct 2016
TL;DR: Improvements and results achieved by the proposed integration of Business Process Management and Electronic Document Management Systems are presented; the resulting model enables monitoring of defined Key Performance Indicators to identify bottlenecks in the process.
Abstract: In today's global business environment, customer service, cost-competitiveness, and quality are key factors in determining an organization's success or failure. Organizations try to optimize their processes to maximize profits and make the process itself faster. Users usually work with documents in the process. Working with documents slows the process down, since documents must be scanned and attached to the form. This work presents the optimization of such processes, achieved by automatic integration of Business Process Management and Electronic Document Management Systems. The improvements and results achieved by the proposed integration are presented in this research. The resulting model enables monitoring of defined Key Performance Indicators to identify bottlenecks in the process. The process can then be optimized by increasing the number of resources on the activities that form a bottleneck. The solution has been tested on the process of opening a bank account.
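A minimal sketch of how such a KPI could flag a bottleneck, assuming activity durations have already been extracted from the integrated BPM/EDMS log; the activities and numbers are hypothetical, not the study's measurements.

```python
# Illustrative KPI sketch (assumptions, not the authors' implementation):
# mean activity duration per activity flags the bottleneck candidate in a
# process such as opening a bank account.

from collections import defaultdict
from statistics import mean

# (case_id, activity, duration_minutes) -- hypothetical measurements
records = [
    ("1", "scan documents", 42), ("1", "verify identity", 12),
    ("2", "scan documents", 55), ("2", "verify identity", 9),
    ("3", "scan documents", 47), ("3", "verify identity", 15),
]

durations = defaultdict(list)
for _case, activity, minutes in records:
    durations[activity].append(minutes)

kpi = {activity: mean(values) for activity, values in durations.items()}
bottleneck = max(kpi, key=kpi.get)
print(kpi)
print("bottleneck activity:", bottleneck)  # candidate for extra resources
```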

3 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: An analysis of the impact of human resource changes in Scrum teams showed a correlation between the quality assurance and development teams: when the development team was over-utilized due to overtime, the quality assurance team's overtime hours increased almost proportionately.
Abstract: This paper presents the results of an analysis of the impact of human resource changes in Scrum teams. Four Scrum teams were tracked (two development and two quality assurance), along with their productivity and performance. The analysis showed that human resource changes have a significant impact on the entire team and its behavior. The teams' effort increased through added overtime hours. At the same time, their performance and effective work decreased, which is reflected in the quantity of work that can be billed to the client. The analysis shows that it takes, on average, three sprints (each lasting fourteen days) for new team members to fully adjust to the team's development process and acquire the business knowledge needed for maximum productivity. Teams whose members have worked together for a longer period and that have more senior members adjust to team shifts more quickly. The analysis also showed a correlation between the quality assurance and development teams: when the development team was over-utilized due to overtime, the quality assurance team's overtime hours increased almost proportionately.
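The reported relationship between development and quality assurance overtime can be expressed as a simple per-sprint correlation; the sketch below uses invented numbers purely to illustrate the calculation, not the study's data.

```python
# Illustrative only: Pearson correlation of per-sprint overtime hours between
# a development team and a quality assurance team (hypothetical values).

from statistics import correlation  # available in Python 3.10+

dev_overtime_hours = [4, 10, 16, 6, 2, 12]   # per 14-day sprint
qa_overtime_hours  = [3,  9, 14, 5, 2, 11]

print(round(correlation(dev_overtime_hours, qa_overtime_hours), 3))
```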

Book ChapterDOI
25 Jun 2018
TL;DR: Two approaches to microservices-based software design are discussed from the perspective of possible failure; the first is more closely tied to real software systems, while the second has a stronger theoretical background.
Abstract: This paper discusses two approaches to microservices-based software design from the perspective of possible failure. The first approach accepts that complex distributed software systems with many communicating components, such as microservices-based software, will fail (when is not important), and focuses on resilient software design. Resilient software design provides strategies and mechanisms for dealing with failures: while a robust system simply continues functioning in the presence of a failure, a resilient system is prepared to adapt itself while continuing to function. The second approach is to try to build ideal software that will never fail. Much of the theory behind behavioral type systems is devoted to this, choreographic programming being one example. Choreographic programming relies on choreographies as global descriptions of system implementations: the behavior of all entities in a system (e.g. microservices) is given in a single program. The first approach is more closely tied to real software systems, while the second has a stronger theoretical background. In this paper the authors discuss the pros and cons of the aforementioned approaches and present ideas for their fusion (e.g. using patterns for microservices).
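As an illustration of the resilience mechanisms the first approach relies on, the following Python sketch shows a simple circuit breaker around a failing downstream call; the class, thresholds, and service name are assumptions for the example, not a design from the paper.

```python
# Minimal sketch of one resilience mechanism: a circuit breaker that rejects
# calls fast after repeated failures instead of waiting on a failing service.
# Thresholds and names are illustrative assumptions.

import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None   # monotonic timestamp when the breaker opened

    def call(self, fn, *args, **kwargs):
        # while open, fail fast instead of calling the unhealthy service
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: skipping call")
            self.opened_at = None        # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                # success resets the failure counter
        return result

def flaky_payment_service():
    raise ConnectionError("service unavailable")   # simulated downstream failure

if __name__ == "__main__":
    breaker = CircuitBreaker()
    for attempt in range(5):
        try:
            breaker.call(flaky_payment_service)
        except Exception as exc:
            print(f"attempt {attempt + 1}: {exc}")
```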

Cited by
Journal Article
TL;DR: In this paper, the authors demystify the acronyms in this domain, describe the state-of-the-art technology, and argue that BPM could benefit from formal methods/languages (cf. Petri nets, process algebras, etc.).
Abstract: Business Process Management (BPM) includes methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. It can be considered as an extension of classical Workflow Management (WFM) systems and approaches. Although the practical relevance of BPM is undisputed, clear definitions of BPM and related acronyms such as BAM, BPA, and STP are missing. Moreover, a clear scientific foundation is missing. In this paper, we try to demystify the acronyms in this domain, describe the state-of-the-art technology, and argue that BPM could benefit from formal methods/languages (cf. Petri nets, process algebras, etc.).
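As a toy illustration of the formal models the paper points to, the sketch below encodes a two-transition Petri net with the standard enabling and firing rule; the net itself is invented for the example and is not taken from the paper.

```python
# Illustrative sketch: a marked Petri net with the usual enabling/firing rule,
# the kind of formal model the paper argues BPM could build on.

from typing import Dict, List, Tuple

# transition -> (input places, output places)
Net = Dict[str, Tuple[List[str], List[str]]]

net: Net = {
    "register": (["start"], ["registered"]),
    "ship":     (["registered"], ["shipped"]),
}
marking = {"start": 1, "registered": 0, "shipped": 0}   # tokens per place

def enabled(t: str, marking: Dict[str, int], net: Net) -> bool:
    inputs, _ = net[t]
    return all(marking[p] >= 1 for p in inputs)

def fire(t: str, marking: Dict[str, int], net: Net) -> Dict[str, int]:
    if not enabled(t, marking, net):
        raise ValueError(f"{t} is not enabled")
    inputs, outputs = net[t]
    new = dict(marking)
    for p in inputs:
        new[p] -= 1          # consume a token from each input place
    for p in outputs:
        new[p] += 1          # produce a token in each output place
    return new

marking = fire("register", marking, net)
marking = fire("ship", marking, net)
print(marking)  # {'start': 0, 'registered': 0, 'shipped': 1}
```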

116 citations

Journal ArticleDOI
TL;DR: This study develops a model for streamlining a hybrid LCI by automating various components of the approach based on the path exchange hybrid analysis method and includes a series of inter-related modules developed using object-oriented programming in Python.
Abstract: Life cycle assessment (LCA) is inherently complex and time consuming. The compilation of life cycle inventories (LCI) using a traditional process analysis typically involves the collection of data for dozens to hundreds of individual processes. More comprehensive LCI methods, such as input-output analysis and hybrid analysis, can include data for billions of individual transactions or transactions/processes, respectively. While these two methods are known to provide a much more comprehensive overview of a product’s supply chain and related environmental flows, they further compound the complex and time-consuming nature of an LCA. This has limited the uptake of more comprehensive LCI methods, potentially leading to ill-informed environmental decision-making. A more accessible approach for compiling a hybrid LCI is needed to facilitate its wider use. This study develops a model for streamlining a hybrid LCI by automating various components of the approach. The model is based on the path exchange hybrid analysis method and includes a series of inter-related modules developed using object-oriented programming in Python. Individual modules have been developed for each task involved in compiling a hybrid LCI, including data processing, structural path analysis and path exchange or hybridisation. The production of plasterboard is used as a case study to demonstrate the application of the automated hybrid model. Australian process and input-output data are used to determine a hybrid embodied greenhouse gas emissions value. Full automation of the node correspondence process, where nodes relating to identical processes across process and input-output data are identified, remains a challenge. This is due to varied dataset coverage, different levels of disaggregation between data sources and lack of detail of activities and coverage for specific processes. However, by automating other aspects of the compilation of a hybrid LCI, the comprehensive supply chain coverage afforded by hybrid analysis can be made more accessible to the broader LCA community. This study shows that it is possible to automate various aspects of a hybrid LCI in order to address traditional barriers to its uptake. The object-oriented approach used enables the data or other aspects of the model to be easily updated to contextualise an analysis and calculate hybrid values for any environmental flow, for any variety of products, in any region of the world. This will improve environmental decision-making, critical for addressing the pressing global environmental issues of our time.
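The input-output building block underlying such a hybrid LCI can be summarised by the total-intensity formula q = f (I - A)^-1; the Python sketch below applies it to a made-up three-sector economy and does not reproduce the paper's path exchange or structural path analysis modules.

```python
# Sketch of the input-output step behind a hybrid LCI: total (direct plus
# indirect) emission intensities from the Leontief inverse, q = f (I - A)^-1.
# The 3-sector matrices below are toy numbers, not the study's data.

import numpy as np

A = np.array([                      # technical coefficients: input per unit output
    [0.10, 0.20, 0.05],
    [0.05, 0.10, 0.10],
    [0.10, 0.05, 0.05],
])
f = np.array([0.30, 0.80, 0.20])    # direct emissions per unit output (kg CO2e/$)

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse: total requirements matrix
q = f @ L                           # total emission intensity per sector
print(np.round(q, 3))
```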

36 citations

Book ChapterDOI
20 Jun 2019
TL;DR: The presented rule-based algorithm aims to find events belonging to the same case instance; its performance is analysed for different sizes of event logs and different levels of errors within the log files of a real process.
Abstract: Process mining is a technique for extracting process models from event logs. It can be used to discover, monitor, and improve real business processes by extracting knowledge from the event logs available in process-aware information systems. This paper is concerned with the problem of grouping events into instances and preparing data for process mining analysis. Information systems often do not store a unique identifier of the case instance, or errors occur in the system while events are recorded in the log files. To analyse the process, events must be grouped into case instances. The aim of the presented rule-based algorithm is to find events belonging to the same case instance. The performance of the algorithm has been analysed for different sizes of event logs and different levels of errors within the log files of a real process.

6 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework combining life cycle assessment and dynamic modeling using a nested systems theory, which provides a more holistic and integrated approach for modeling and improving the environmental performance of built stocks and their occupants.
Abstract: Cities are complex sociotechnical systems, of which buildings and infrastructure assets (built stocks) constitute a critical part. Since built stocks are among the main global users of primary energy and emitters of associated greenhouse gases, measures are needed that can enhance the environmental performance of built stocks in cities and mitigate negative externalities such as pollution and greenhouse gas emissions. To date, most environmental modeling and assessment approaches are fragmented across disciplines and limited in scope, failing to provide a comprehensive evaluation. These approaches tend to focus either on one scale relevant to a discipline (e.g., buildings, roads, parks) or on particular environmental flows (e.g., energy, greenhouse emissions). Here, we present a framework aimed at overcoming many of these limitations. By combining life cycle assessment and dynamic modeling using a nested systems theory, this framework provides a more holistic and integrated approach for modeling and improving the environmental performance of built stocks and their occupants, including material stocks and flows; embodied, operational, and mobility-related environmental flows; cost; and carbon sequestration in materials and green infrastructure. This comprehensive approach enables a very detailed parametrization that supports testing different policy scenarios at the material, element, building, and neighborhood levels, and across different environmental flows. We test parts of our modeling framework on a proof-of-concept case study neighborhood in Melbourne, Australia, demonstrating its breadth. The proposed modeling framework can enable an advanced assessment of built stocks that enhances our capacity to improve the life cycle environmental performance of cities.
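A minimal sketch of the nested parametrization idea, aggregating assumed embodied-emission coefficients from materials up through elements, buildings, and a neighborhood; all names and quantities are invented for the example and are not taken from the case study.

```python
# Illustrative only: nested aggregation of embodied emissions across the
# material -> element -> building -> neighborhood levels (toy numbers).

materials = {"concrete": 0.14, "steel": 1.9}     # kg CO2e per kg (assumed)

elements = {    # element -> {material: kg per element}
    "slab":   {"concrete": 12000, "steel": 800},
    "column": {"concrete": 1500,  "steel": 120},
}

buildings = {   # building -> {element: count}
    "house_a": {"slab": 2, "column": 8},
    "house_b": {"slab": 3, "column": 12},
}

def element_emissions(element: str) -> float:
    return sum(kg * materials[m] for m, kg in elements[element].items())

def building_emissions(building: str) -> float:
    return sum(n * element_emissions(e) for e, n in buildings[building].items())

neighborhood_total = sum(building_emissions(b) for b in buildings)
print(round(neighborhood_total / 1000, 1), "t CO2e embodied (toy numbers)")
```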

6 citations

Proceedings ArticleDOI
18 Aug 2021
TL;DR: In this paper, a framework is proposed to identify event-level clusters in a business process log by decomposing it into several sub-logs based on the similarity of the sequences between events.
Abstract: Process mining techniques extract useful knowledge from event logs to analyse and improve the quality of process execution. However, the size and complexity of real-world event logs make it difficult to apply standard process mining techniques, so process discovery results in spaghetti-like models that are difficult to analyse. Several event abstraction techniques have been developed to group low-level activities into higher-level activities, but abstraction discards critical low-level process details in real-world business scenarios. Trace clustering techniques have also been used extensively in the literature to cluster process executions that are homogeneous in nature, but event-level clustering has not yet been considered for process mining. In this paper, a novel framework is proposed to identify event-level clusters in a business process log by decomposing it into several sub-logs based on the similarity of the sequences between events. Our technique provides clustering of very large, complex event logs without abstraction. The proposed algorithm, Common Events Identifier (CEI), is applied to a real-world telecommunication log and the results are compared with two well-known trace clustering techniques from the literature. Our results achieved high clustering accuracy and improved the quality of the resulting process models given the size and complexity of the event log. We further demonstrate that the proposed technique improves process discovery and conformance results for the given event log.
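To make the event-level idea concrete, the following Python sketch profiles each activity by the activities that directly follow it and groups activities with similar profiles; it is a simplified illustration of sequence-similarity clustering, not the proposed CEI algorithm.

```python
# Simplified illustration (not the CEI algorithm): profile each activity by
# its directly-following activities, then greedily group similar activities.

from collections import Counter, defaultdict
from math import sqrt

traces = [   # toy event log: each trace is the activity sequence of one case
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["e", "f", "d"],
]

# successor profile: how often each activity is directly followed by another
profile = defaultdict(Counter)
for trace in traces:
    for cur, nxt in zip(trace, trace[1:]):
        profile[cur][nxt] += 1

def cosine(p: Counter, q: Counter) -> float:
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# greedy clustering: join an existing cluster if similar enough to its first
# member, otherwise start a new cluster
clusters = []   # list of clusters, each a list of activity names
for act in profile:
    for cluster in clusters:
        if cosine(profile[act], profile[cluster[0]]) >= 0.5:
            cluster.append(act)
            break
    else:
        clusters.append([act])

print(clusters)
```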

4 citations