
Showing papers by "Santonu Sarkar published in 2019"


Journal ArticleDOI
TL;DR: The proposed framework can be generalized to address a class of problems where the overall time and cost complexity of provisioning-time decision making needs to be controlled under a given set of constraints.
Abstract: Containerized deployment of microservices has gained immense traction across industries. To meet demand, traditional cloud providers offer container-as-a-service, where selection of the container and containerization of workloads remain the developer's responsibility. This task is arduous for a developer, since there are many container choices across different cloud providers. Furthermore, no mechanism exists by which one can compare and contrast the capabilities of containers across different providers. In this scenario, we envisage the need for a smart cloud broker that can automatically deploy a chosen IT service into the best-fit container environment mapped to performance requirements, from among the set of available underpinning brokered container hosting systems spread across multiple cloud providers. We propose a novel fitness-aware containerization-as-a-service to achieve this. We show why a best-fit container selection process is operationally complex and time consuming, and how we heuristically prune the associated decision tree in two phases so that it becomes viable to implement this as an on-demand service. We propose a new metric called fitness quotient (FQ) to evaluate containers obtained from heterogeneous providers. We leverage machine learning techniques to inject automation into these two phases: unsupervised K-Means clustering in the first-level build-time phase to accurately classify IaaS cost and performance data, and polynomial regression during the second-level provisioning-time phase to discover relationships between SaaS performance and container strength.
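The build-time phase described above can be illustrated with a minimal K-Means sketch. This is not the paper's implementation; the (cost, performance) data points, the two-cluster setup, and the fixed initial centroids are all assumptions made for the example.

```python
# Minimal K-Means over hypothetical IaaS (cost $/hr, benchmark score) offers,
# illustrating how build-time clustering can group container hosting options.

def kmeans(points, centroids, iters=20):
    """Plain K-Means on 2-D points with fixed initial centroids."""
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Hypothetical offers from two providers: a budget tier and a premium tier.
offers = [(0.10, 30), (0.12, 35), (0.11, 32), (0.50, 90), (0.55, 95), (0.52, 92)]
cents, groups = kmeans(offers, centroids=[(0.1, 30), (0.5, 90)])
```

In the paper's pipeline, clusters like these would then feed the provisioning-time phase, where polynomial regression relates SaaS performance to container strength.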

7 citations


Proceedings ArticleDOI
08 Jul 2019
TL;DR: This paper builds a brokered bare metal supply chain model that identifies impacting variables, their characteristics, and inter-relationships and demonstrates that, given complex inter-variable relationships and environmental uncertainties, simulation runs are needed to complete the model and yield recommendations to optimize inventory and maximize profits.
Abstract: A modern cloud service that is getting popular is the supply of bare metal servers on demand to consumers who need to run high-performance algorithms for short periods of time. However, provisioning-time latencies are unpredictable in bare metal commerce, primarily because suppliers cannot exploit virtualization levers to adjust and optimize capacity. This impacts bare metal service leverage because, in the face of indeterministic fulfillment times, consumers often pre-provision for peak demand, which goes against the fundamental tenets and advantages of cloud adoption. To address this, we advocate that a cloud service broker offers time-sensitive bare metal provisioning services by building and maintaining an inventory of bare metal servers. This, however, leads to inventory optimization and profit maximization problems that resemble what traditional supply chains face, but with characteristics unique to on-demand compute economics. In this paper, we build a brokered bare metal supply chain model that identifies impacting variables, their characteristics, and inter-relationships. We argue and demonstrate that, given complex inter-variable relationships and environmental uncertainties, simulation runs are needed to complete the model and yield recommendations to optimize inventory and maximize profits.
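A toy version of the simulation-driven recommendation idea can be sketched as a Monte Carlo sweep over candidate inventory levels. The demand distribution, margin, and holding cost below are invented for illustration and are not the paper's model.

```python
# Illustrative Monte Carlo sketch (assumed parameters, not the paper's model):
# estimate average daily broker profit for a fixed bare-metal inventory level.
import random

def simulate_profit(inventory, days=1000, margin=5.0, holding_cost=1.0, seed=42):
    """Average daily profit: sold servers earn margin, idle servers cost money.

    Daily demand is drawn uniformly from 0..10 (an assumption); demand beyond
    inventory is lost, and unsold servers incur a holding cost.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(days):
        demand = rng.randint(0, 10)
        sold = min(demand, inventory)
        idle = inventory - sold
        total += sold * margin - idle * holding_cost
    return total / days

# Sweep candidate inventory levels and pick the most profitable one.
best = max(range(0, 12), key=simulate_profit)
```

Real inter-variable relationships (lead times, provisioning latencies, demand correlation) are what make the paper's full model require simulation rather than a closed-form sweep like this one.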

6 citations


Proceedings ArticleDOI
22 Jul 2019
TL;DR: A mechanism is proposed to evaluate a robot program and compute a safety score for each action the robot is about to perform; using the code analyzer and safety score formalism, it is shown that application code can be evaluated for safety, enabling avoidance of potentially unsafe behavior.
Abstract: In recent times, there has been significant interest in collaborative robots, where the tasks performed by a robot are non-repetitive and complex, and humans and robots share an overlapping workspace. In such a case, the robot controller must necessarily be safety-aware. In this paper, we propose a mechanism to evaluate a robot program and compute a safety score for each action that the robot is about to perform. To this end, we have implemented a code analyzer that examines the robot’s Move instructions and assigns a safety score. A subjective logic based approach is used to compute the safety score for each instruction. We have evaluated the approach through ABB’s RobotStudio® simulator. We simulate two scenarios: first, where two robots share a workspace, and second, where a robot moves along a path with an obstacle. The simulations show that, using our code analyzer and safety score formalism, it is possible to evaluate the application code for safety and enable avoidance of potentially unsafe behavior.
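The core of standard subjective logic can be sketched briefly: an opinion is a (belief, disbelief, uncertainty) triple that sums to one, and its projected probability b + a·u (with base rate a) can serve as a score. The opinion values for the two hypothetical Move instructions below are invented; the paper's actual opinion sources and fusion rules are not reproduced here.

```python
# Illustrative sketch of subjective-logic scoring (assumed values, not the
# paper's data): project an opinion onto a single safety score in [0, 1].

def safety_score(belief, disbelief, uncertainty, base_rate=0.5):
    """Projected probability of an opinion: belief + base_rate * uncertainty."""
    # A well-formed opinion's components must sum to 1.
    assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9
    return belief + base_rate * uncertainty

# Hypothetical scores for two Move instructions: one through free space,
# one passing near a shared-workspace region.
free_move = safety_score(belief=0.8, disbelief=0.05, uncertainty=0.15)
risky_move = safety_score(belief=0.3, disbelief=0.5, uncertainty=0.2)
```

A controller comparing such scores against a threshold could flag the second instruction for re-planning before execution.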

2 citations


Proceedings ArticleDOI
14 Feb 2019
TL;DR: This work presents "ThrustHetero", a lightweight framework that provides an abstraction over several devices such as GPUs, Xeon Phis and multicore, yet allows developers to easily leverage the full compute capability of these devices.
Abstract: Heterogeneous compute architectures like multi-core CPUs, CUDA GPUs, and Intel Xeon Phis have become prevalent over the years. While heterogeneity makes architecture-specific features available to the programmer, it also makes application development difficult, as one needs to plan for optimal usage of architectural features, suitable partitioning of the workload, and communication and data transfer among the participating devices. A suitable design abstraction that hides such variabilities of the underlying devices and at the same time exploits their computing capabilities can improve developer productivity. In this work, we present "ThrustHetero", a lightweight framework based on NVIDIA's Thrust, that provides an abstraction over several devices such as GPUs, Xeon Phis and multicore, yet allows developers to easily leverage the full compute capability of these devices. We also demonstrate a novel method for workload distribution in two stages: micro-benchmarking during framework installation to find good proportions, and then using this information during application execution. We consider four classes of applications, categorized by the amount of branching present, since branching determines how an application performs on the various computing architectures. We show that the framework produces good workload distribution proportions for each class of application, and that the framework is scalable and portable. Further, we compare the performance and ease of development when using the framework against the native versions of various benchmarks and obtain favorable results.
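The two-stage idea can be illustrated with a small sketch: install-time micro-benchmark throughputs are turned into per-device workload proportions applied at run time. The device names and throughput figures below are assumptions for the example, not numbers from the paper.

```python
# Illustrative sketch (assumed devices and throughputs): split a workload
# across heterogeneous devices proportionally to measured throughput.

def split_workload(n_items, throughput):
    """Partition n_items across devices in proportion to their throughput."""
    total = sum(throughput.values())
    shares = {dev: int(n_items * t / total) for dev, t in throughput.items()}
    # Assign any integer-rounding remainder to the fastest device.
    fastest = max(throughput, key=throughput.get)
    shares[fastest] += n_items - sum(shares.values())
    return shares

# Hypothetical install-time micro-benchmark results (items/sec per device).
bench = {"gpu": 900.0, "xeon_phi": 600.0, "multicore": 300.0}
plan = split_workload(1800, bench)
```

In a real framework, the measured proportions would also vary per application class (e.g. by branching intensity), which is why the paper benchmarks classes separately.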

1 citation