
Showing papers by "Santonu Sarkar published in 2020"


Journal ArticleDOI
TL;DR: An empirical analysis of three major techniques for mining invariants in cloud-based utility computing systems (clustering, association rules, and decision list) is performed, and a general heuristic for selecting likely invariants from a dataset is proposed.
Abstract: Likely system invariants model properties that hold in operating conditions of a computing system. Invariants may be mined offline from training datasets, or inferred during execution. Scientific work has shown that invariant mining techniques support several activities, including capacity planning and detection of failures, anomalies and violations of Service Level Agreements. However, their practical application by operation engineers is still a challenge. We aim to fill this gap through an empirical analysis of three major techniques for mining invariants in cloud-based utility computing systems: clustering, association rules, and decision list. The experiments use independent datasets from real-world systems: a Google cluster, whose traces are publicly available, and a Software-as-a-Service platform used by various companies worldwide. We assess the techniques in two applications of invariants, namely execution characterization and anomaly detection, using the metrics of coverage, recall and precision. A sensitivity analysis is performed. Experimental results allow inferring practical usage implications, showing that relatively few invariants characterize the majority of operating conditions, that precision and recall may drop significantly when trying to achieve a large coverage, and that the techniques exhibit similar precision, though the supervised one achieves a higher recall. Finally, we propose a general heuristic for selecting likely invariants from a dataset.
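As a rough illustration of one of the three mined techniques, the sketch below shows association-rule-style invariant mining in miniature: attribute-value terms (and pairs of terms) that hold in at least a minimum fraction of training records are kept as likely invariants. The records, attribute names, and support threshold are hypothetical, not taken from the paper's datasets.

```python
# Illustrative sketch (not the paper's implementation): mine likely
# invariants as attribute-value associations whose support across the
# training records meets a threshold `min_support`.
from collections import Counter
from itertools import combinations

def mine_likely_invariants(records, min_support=0.95):
    """Return single (attr, value) terms and pairwise associations
    whose support across `records` is at least `min_support`."""
    n = len(records)
    # Support of single attribute-value terms.
    term_counts = Counter(item for r in records for item in r.items())
    terms = {t for t, c in term_counts.items() if c / n >= min_support}
    # Pairwise associations, with Apriori-style pruning: a frequent
    # pair can only be built from individually frequent terms.
    pair_counts = Counter()
    for r in records:
        frequent_in_r = sorted(t for t in r.items() if t in terms)
        pair_counts.update(combinations(frequent_in_r, 2))
    pairs = {p for p, c in pair_counts.items() if c / n >= min_support}
    return terms, pairs

# Toy "operating condition" records, e.g. bucketed per-task metrics.
records = [
    {"scheduler": "batch", "priority": "low", "status": "ok"},
    {"scheduler": "batch", "priority": "low", "status": "ok"},
    {"scheduler": "batch", "priority": "low", "status": "ok"},
    {"scheduler": "batch", "priority": "high", "status": "ok"},
]
terms, pairs = mine_likely_invariants(records, min_support=0.75)
print(("scheduler", "batch") in terms)  # True: holds in all records
print(("priority", "low") in terms)     # True: holds in 3/4 records
```

Lowering `min_support` yields more invariants at the cost of precision, which mirrors the coverage/precision trade-off the abstract reports.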

7 citations


Proceedings ArticleDOI
11 Oct 2020
TL;DR: This paper considers comprehensive, year-long Fatality Analysis Reporting System (FARS) data to analyze the role of various factors related to humans, weather and physical conditions involved in traffic accidents, and builds an intelligent risk prediction model that can help decision-makers ensure road safety.
Abstract: Among all the transportation systems that people use, public traffic-ways are the most common and the most dangerous, resulting in a significant number of fatalities per day worldwide. Statistics have shown that traffic-accident mortality rates are higher among youth. Although various road safety strategies and rules have been developed by governments and law-enforcement agencies to combat the situation, these methods mainly target the design, operation, and usability of traffic-ways. Most recent data-driven analysis papers model traffic patterns or predict accidents from past data. In this paper, we consider comprehensive, year-long Fatality Analysis Reporting System (FARS) data to analyze the role of various factors related to humans, weather and physical conditions (e.g., road surface, light condition, etc.) involved in traffic accidents. We build an intelligent risk prediction model that can help decision-makers ensure road safety. The proposed model estimates (i) the accident risk over a future time frame, and (ii) the risk associated with the drivers present on the traffic-way based on the driver's behavior, history, environmental conditions and physical conditions related to the traffic-way.
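To make the second kind of estimate concrete, here is a deliberately simple sketch of a driver risk score that combines history, behaviour, and environmental factors. The factor names and weights are hypothetical placeholders, not values derived from FARS or from the paper's model.

```python
# Hypothetical sketch of the kind of risk scoring described: a weighted
# combination of binary risk factors spanning driver history, driver
# behaviour, and road/weather conditions. Weights are illustrative only.
RISK_WEIGHTS = {
    "prior_violations": 30,  # driver history
    "speeding": 25,          # driver behaviour
    "wet_surface": 20,       # road surface condition
    "night": 15,             # light condition
    "rain": 10,              # weather
}

def driver_risk(factors):
    """Sum the weights of active factors, capped at a 0-100 scale."""
    score = sum(w for f, w in RISK_WEIGHTS.items() if factors.get(f))
    return min(score, 100)

safe = driver_risk({"night": False})
risky = driver_risk({"prior_violations": True, "speeding": True, "rain": True})
print(safe)   # 0
print(risky)  # 65
```

A real model would learn such weights from the FARS records rather than fix them by hand; the point here is only the shape of the input and output.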

3 citations


Proceedings ArticleDOI
11 Oct 2020
TL;DR: The learning from converting distributed software to a microservice-based architecture using containers is shared, and a set of recommendations is proposed that practitioners can follow to convert a monolithic application to a containerized architecture.
Abstract: Container-based virtualization is becoming a preferred choice for deploying services, since it is lightweight and supports on-demand scalability as well as availability. The Process Automation Industry has accepted this technology to make its applications service oriented. However, a container-based microservice architecture is effective only when the original software strictly followed modularity principles during its design. In this article, we share our learning from converting distributed software to a microservice-based architecture using containers. Though the existing system has a modular design and is deployed as distributed components, analysis of the current architecture shows that the application is monolithic (though modularized) and the components are strongly coupled in an indirect manner. As a result, it turns out to be impossible to attain a microservice-based architecture without changing the architecture. Next, we propose a microservice-based containerized TO-BE architecture of the application, and demonstrate that this TO-BE architecture does not incur any significant overhead. Finally, we propose a set of recommendations that practitioners can follow to convert a monolithic application to a containerized architecture.
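The "strongly coupled in an indirect manner" finding can be illustrated with a small dependency-graph analysis: taking the transitive closure of direct component dependencies exposes coupling that no single import makes visible. The component names and edges below are hypothetical, not the paper's system.

```python
# Illustrative sketch of the kind of architecture analysis described:
# compute each component's transitive dependencies to reveal indirect
# coupling that blocks a clean microservice split.
def transitive_deps(graph):
    """Map each component to every component it reaches, directly or not."""
    closure = {}
    for start in graph:
        reached, stack = set(), list(graph[start])
        while stack:
            node = stack.pop()
            if node not in reached:
                reached.add(node)
                stack.extend(graph.get(node, []))
        closure[start] = reached
    return closure

# "Modular" components that nonetheless all reach a shared state store.
deps = {
    "ui": ["api"],
    "api": ["engine"],
    "engine": ["shared_state"],
    "reporting": ["shared_state"],
    "shared_state": [],
}
closure = transitive_deps(deps)
# ui never depends on shared_state directly, yet is coupled to it:
print("shared_state" in closure["ui"])  # True
```

In a system like this, extracting `ui` or `reporting` as an independent microservice still drags `shared_state` along, which is why a TO-BE re-architecture is needed before containerization pays off.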

3 citations


01 Jan 2020
TL;DR: This work takes two versions of CPNs, one corresponding to the source program and the other to the target program, and develops a sound method for proving several code-optimizing transformations involving loops.
Abstract: Translation validation is the process of proving semantic equivalence between a source and its translation, i.e., checking the semantic equivalence between the target code (which is a translation of the source program being compiled) and the source code. In this paper, we propose a translation validation technique for Petri net based models of programs which verifies several code-optimizing transformations involving loops. These types of transformation are used in several application domains, such as the scheduling phase of high-level synthesis and high-performance computing. Our Petri net based equivalence checker checks the computational equivalence between two one-safe coloured Petri nets. In this work, we take two versions of CPNs, one corresponding to the source program and the other to the target program. Using a path-based analysis technique, we develop a sound method for proving several code-optimizing transformations involving loops. We also compare our results with other Petri net based equivalence checkers. The experimental results show the efficacy of the method.
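As a toy surrogate for the equivalence being validated, the snippet below compares a source loop against its optimized version (loop-invariant code motion) on sample inputs. The paper's checker establishes this symbolically over paths of one-safe coloured Petri nets; here we only test extensional equality on a finite sample, which illustrates the property but is not a proof.

```python
# Toy illustration of the equivalence claim behind translation
# validation: a source loop and its loop-invariant-code-motion
# transform must compute the same function.
def source(n, a, b):
    total = 0
    for i in range(n):
        c = a * b          # loop-invariant computation inside the loop
        total += c + i
    return total

def optimized(n, a, b):
    c = a * b              # hoisted out of the loop by the transformation
    total = 0
    for i in range(n):
        total += c + i
    return total

samples = [(0, 2, 3), (1, 2, 3), (5, 4, 7), (10, -2, 9)]
equivalent = all(source(*s) == optimized(*s) for s in samples)
print(equivalent)  # True
```

A path-based checker would instead match the data transformations along corresponding paths of the two CPN models, so that the result holds for all inputs rather than a sample.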

1 citation


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, a preliminary power prediction model for building power-efficient GPU applications is proposed; it combines information derived from static analysis of a CUDA program with a machine learning-based model.
Abstract: The Graphics Processing Unit (GPU) has emerged as a popular computing device for achieving exascale performance in High-Performance Computing (HPC) applications. While the power-performance ratio is relatively high for a GPU, it still draws a significant amount of power during computation. In this paper, we propose a preliminary power prediction model which can be used by developers for building power-efficient GPU applications. Using the proposed work, developers can estimate the power consumption of a GPU application during implementation without having to execute it on actual hardware. Our model combines information derived from static analysis of a CUDA program with a machine learning-based model. We have utilised a decision tree technique to validate results across three different GPU architectures: Kepler, Maxwell and Volta. The observed $R^{2}$ score using the decision tree model is 0.8973 for the Volta architecture.
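The modelling idea can be sketched in miniature: a statically derived kernel feature feeds a tiny regression tree (here a depth-1 "stump"), scored with the coefficient of determination $R^2$. The feature values and power numbers below are made up for illustration and have no relation to the paper's measurements or its 0.8973 result.

```python
# Minimal sketch (not the paper's model): fit a depth-1 regression tree
# on one static feature and score it with R^2.
def fit_stump(xs, ys):
    """Pick the split on x minimising squared error; predict the mean
    of y on each side of the split."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def r2_score(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# x: e.g. a static count of memory instructions; y: measured power (W).
xs = [1, 2, 3, 10, 11, 12]
ys = [50.0, 52.0, 51.0, 90.0, 91.0, 92.0]
model = fit_stump(xs, ys)
print(round(r2_score(ys, [model(x) for x in xs]), 3))  # 0.998
```

A full decision tree recurses this split on many static features (instruction mix, memory accesses, occupancy-related counts), which is how a program can be scored before it ever runs on hardware.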

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors proposed a cloud broker service framework that will shield consumers from the vagaries of multiple clouds by creating converged determinism on top of non-uniform and non-deterministic provider clouds.
Abstract: When the cloud computing industry started its commercial acceleration a few years back, a new player entered the computing supply chain - the cloud broker. With the transformation of core business models through hybrid cloud and multi-cloud environments being witnessed across industries today, next-generation cloud brokerage is poised to disrupt cloud adoption. In this paper, we propose a cloud broker service framework that will shield consumers from the vagaries of multiple clouds. To achieve this insulation, cloud brokers will disguise themselves as end-providers by creating and operating a virtual data center construct that we call Meta Cloud, which is assembled on top of participating supplier clouds. The Meta Cloud will bring simplification and unified dependability by creating converged determinism on top of non-uniform and non-deterministic provider clouds. Hand in hand with Meta Clouds is the concept of Meta Services, which realize intelligent frameworks to enable the creation of value-added multi-cloud services. Based on observations from other industries, Meta Cloud brokerage can trigger a dramatic upswing in multi-cloud adoption. This paper contributes to the technical intrinsics of this impending evolution. We identify the research questions that need to be answered to construct Meta Cloud brokerage. We design the architecture of the proposed Meta Cloud broker and model the Meta Cloud and Meta Service constructs to advance classical cloud brokerage to next-generation brokerage. We also present experimental results to demonstrate the validity of our design.
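One way to picture the Meta Cloud's "converged determinism" is as an adapter layer: the broker exposes one uniform call while per-provider adapters absorb each supplier cloud's non-uniform API. The provider classes and method names below are hypothetical, not APIs from the paper or from any real cloud.

```python
# Hedged sketch of the Meta Cloud idea: a broker presents a single
# uniform interface over heterogeneous supplier-cloud APIs.
class ProviderA:
    def start_instance(self, size):   # one provider-specific API shape
        return f"A-vm-{size}"

class ProviderB:
    def boot(self, flavor):           # a different, non-uniform API shape
        return f"B-node-{flavor}"

class MetaCloudBroker:
    """Uniform `provision` call; adapters hide provider differences."""
    def __init__(self):
        self._adapters = {
            "A": lambda size: ProviderA().start_instance(size),
            "B": lambda size: ProviderB().boot(size),
        }

    def provision(self, provider, size):
        return self._adapters[provider](size)

broker = MetaCloudBroker()
print(broker.provision("A", "small"))  # A-vm-small
print(broker.provision("B", "small"))  # B-node-small
```

The real framework layers much more on top (Meta Services, dependability, placement decisions), but the consumer-facing contract stays a single deterministic interface regardless of which supplier cloud serves the request.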

Proceedings ArticleDOI
27 Feb 2020
TL;DR: A pre-organization summary is presented of the workshop to be held on February 27, 2020, at IIIT Jabalpur (India), co-located with the 13th Innovations in Software Engineering Conference (ISEC 2020).
Abstract: With the advancement of technology-driven decision making, software-intensive systems for decisions have become more robust, dynamic, adaptive, context-aware, and dependable. Architectural designs of such systems call for new approaches in which data-driven decision making is incorporated into the solution. Methods for recommendation, prediction of operational failures, handling of unsafe conditions, etc., are becoming part of the solution itself. Integrating such features to conceive an intelligent system that directly influences the business solution is highly valued. This would not have been possible without the direct intervention of Artificial Intelligence, which has been a standard part of the industrial repertoire since the 1980s. The direct impact of AI on social and economic life has been felt mostly in the last decade (since 2007) with the advent of the smartphone, which contributes largely to "big data". The era of "big data" has witnessed the efficacy of Machine Learning, and the need of the hour is to combine data-driven machine intelligence with human intelligence (insights and domain knowledge) to effectively make software development (requirements, design, testing, deployment and operation management) intelligent. The research community has shown a keen interest in this emerging field. In this report, we present a pre-organization summary of the workshop to be held on February 27, 2020, at IIIT Jabalpur (India), co-located with the 13th Innovations in Software Engineering Conference (ISEC 2020).