
Showing papers by "Santonu Sarkar" published in 2022



Proceedings ArticleDOI
17 Oct 2022
TL;DR: In this paper, the authors describe a validator tool that allows a plant designer to define a set of invariants that must be satisfied for an MTP to be deemed fit for integration.
Abstract: Modular plants build a production system by integrating a set of pre-designed modules. Integration of these modules, supplied by different vendors, is performed using various design tools during the engineering phase. The integration process must rigorously validate the correctness of a module specification (Module Type Package, MTP); otherwise, the integration process can fail without providing enough failure details. Such a failure at a later stage can significantly impact implementation, testing, integration, and site acceptance testing (SAT). In this paper, we describe a validator tool that allows a plant designer to define a set of invariants that must be satisfied for an MTP to be deemed fit for integration. We have tested the validator on a set of MTPs and report our findings. We expect that the use of such a validator can significantly reduce the possibility of introducing errors during the engineering phase.
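The paper's abstract does not reproduce the validator's implementation. A minimal sketch of the invariant-checking idea it describes might look like the following, assuming a dict-based stand-in for a parsed MTP and hypothetical invariants (unique tag names, services carrying state machines); none of these names come from the paper.

```python
# Minimal sketch of invariant-based MTP validation (illustrative only;
# the invariant names and the dict-based MTP stand-in are hypothetical).

from typing import Callable

# An invariant maps a parsed MTP to an error message, or None if satisfied.
Invariant = Callable[[dict], str | None]

def unique_tag_names(mtp: dict) -> str | None:
    tags = [t["name"] for t in mtp.get("tags", [])]
    if len(tags) != len(set(tags)):
        return "duplicate tag names found"
    return None

def services_have_state_machines(mtp: dict) -> str | None:
    missing = [s["name"] for s in mtp.get("services", [])
               if "state_machine" not in s]
    if missing:
        return f"services without a state machine: {missing}"
    return None

def validate(mtp: dict, invariants: list[Invariant]) -> list[str]:
    """Run every invariant and collect all violations."""
    return [msg for inv in invariants if (msg := inv(mtp)) is not None]

if __name__ == "__main__":
    mtp = {"tags": [{"name": "T1"}, {"name": "T1"}],
           "services": [{"name": "Dose"}]}
    for violation in validate(mtp, [unique_tag_names,
                                    services_have_state_machines]):
        print("MTP invalid:", violation)
```

Collecting every violation rather than stopping at the first mirrors the paper's motivation: surfacing detailed failure information during the engineering phase instead of at integration time.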

Journal ArticleDOI
TL;DR: A heuristic-based method is designed to solve the intractable RIAP in polynomial time, and it is demonstrated that the algorithm consistently yields a high profit-to-investment ratio for a broker who seeks to operate a commerce of virtual machines with time-sensitive provisioning.
Abstract: Application development paradigms and the composition of technology services are moving decisively in the direction of hybrid and multi-clouds. Enterprises are stitching together new cloud-native business models that leverage containerized multi-tier microservice architectures, the heterogeneity of cloud deployment models, and the diversity of cloud providers. Scalability and resiliency are key components of this new world architecture, but they are also functions of the predictability of provisioning the underpinning compute instances on the cloud. Thus, a major challenge to surmount before complex multicloud-aware applications can be designed is the problem of unpredictable latencies associated with provisioning compute services on the cloud. In the first part of this article, we develop a technique for time-sensitive provisioning of virtual compute on demand, while also allowing deprovisioning on demand. Using the technique we propose, a cloud broker can operate on a pool of reserved instances sourced from cloud providers and multiplex them profitably across cloud customers with associated provisioning-time guarantees, but without usage-commitment restrictions. We articulate this challenge as the Reserved Instance Allocation Problem (RIAP), which we first prove to be NP-hard. We then design a heuristic-based method to solve the intractable RIAP in polynomial time. We evaluate the effectiveness of our heuristic-based mechanism through a combination of deep simulations and practical validation on mainstream public clouds. We demonstrate that our algorithm consistently yields a high profit-to-investment ratio for a broker who seeks to operate a commerce of virtual machines with time-sensitive provisioning. In the second part of this article, we tackle the problem of unpredictable provisioning latencies in bare-metal commerce. We build and evaluate an allocation model that calculates the optimal supporting bare-metal inventory to maximize cost-sensitive fulfillment of bare-metal provisioning requests in a time-sensitive manner.
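The RIAP heuristic itself is not given in the abstract, so the following is only a sketch of the problem shape under assumed names: a broker multiplexing a fixed pool of reserved instances across timed, priced requests, using a simple revenue-first greedy rule. It is not the authors' algorithm.

```python
# Illustrative greedy allocation of reserved instances to timed requests.
# This is NOT the paper's RIAP heuristic, only a sketch of the problem shape.

from dataclasses import dataclass

@dataclass
class Request:
    rid: str
    start: int      # requested start time
    end: int        # requested release time
    price: float    # revenue if served

@dataclass
class Instance:
    iid: str
    busy_until: int = 0   # time at which the instance becomes free

def allocate(requests: list[Request], pool: list[Instance]):
    """Serve highest-revenue requests first; reuse the instance that
    frees up earliest, provided it is free by the request's start."""
    plan, revenue = [], 0.0
    for req in sorted(requests, key=lambda r: r.price, reverse=True):
        candidates = [i for i in pool if i.busy_until <= req.start]
        if candidates:
            inst = min(candidates, key=lambda i: i.busy_until)
            inst.busy_until = req.end
            plan.append((req.rid, inst.iid))
            revenue += req.price
    return plan, revenue

if __name__ == "__main__":
    pool = [Instance("vm-1"), Instance("vm-2")]
    reqs = [Request("r1", 0, 5, 3.0), Request("r2", 1, 4, 5.0),
            Request("r3", 2, 6, 4.0)]
    print(allocate(reqs, pool))
```

A real broker would also have to honor provisioning-time guarantees and replenish the pool; the greedy rule here is purely illustrative of how a fixed reserved-instance pool can be multiplexed across customers.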

Proceedings ArticleDOI
06 Sep 2022
TL;DR: This paper proposes a technique that models the abstract functionality of each module as a state machine, making it possible to perform error-propagation analysis and to generate a set of test cases during the engineering phase.
Abstract: Modular industrial process plants build a production system by integrating a set of predesigned modules supplied by different vendors. The plant process engineers do not have access to the internals of these predesigned modules. Even when a predesigned module is well tested, the composition of a set of modules can be vulnerable to many unforeseen scenarios, resulting in system-level failures. Debugging the cause of a failure becomes extremely difficult since the implementation details are not available to the engineers. This paper proposes a technique that models the abstract functionality of each module as a state machine. We model the plant execution as a set of communicating state machines, making it possible to perform error-propagation analysis and to generate a set of test cases during the engineering phase.
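The abstract does not spell out the formalism, but the idea of communicating state machines with error propagation can be sketched as follows. All module names, states, events, and the plant topology below are hypothetical examples, not taken from the paper.

```python
# Sketch of modules as communicating state machines for error-propagation
# analysis; states, events, and topology are hypothetical examples.

class ModuleFSM:
    def __init__(self, name: str, transitions: dict):
        # transitions: (state, event) -> next_state
        self.name, self.transitions, self.state = name, transitions, "Idle"

    def on_event(self, event: str) -> str | None:
        """Apply an event; return an event emitted to downstream modules."""
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            self.state = "Error"        # unhandled event -> error state
            return "error"              # the error propagates downstream
        self.state = nxt
        return "error" if nxt == "Error" else None

def propagate(modules: dict, links: dict, source: str, event: str):
    """Inject an event at `source` and trace which modules end in Error."""
    queue = [(source, event)]
    while queue:
        name, ev = queue.pop(0)
        emitted = modules[name].on_event(ev)
        if emitted:
            for downstream in links.get(name, []):
                queue.append((downstream, emitted))
    return [m.name for m in modules.values() if m.state == "Error"]

if __name__ == "__main__":
    t = {("Idle", "start"): "Running", ("Running", "error"): "Error"}
    modules = {n: ModuleFSM(n, t) for n in ("Dosing", "Mixing", "Filling")}
    links = {"Dosing": ["Mixing"], "Mixing": ["Filling"]}
    modules["Dosing"].state = "Running"
    print(propagate(modules, links, "Dosing", "sensor_fault"))
```

Because only the abstract state machines are needed, such an analysis works without access to the vendors' module internals, which is exactly the constraint the paper highlights.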

Book ChapterDOI
TL;DR: In this article, the authors present the active ongoing research in optimizing deep learning models for inference at the edge using connection pruning, model quantization, and knowledge distillation, and then describe techniques to train or retrain the models on the resource-constrained edge device using new learning paradigms such as federated learning, weight imprinting, and training smaller models on fewer data.
Abstract: IoT edge devices sense and process data to support real-time decision-making in latency-sensitive and mission-critical applications such as autonomous driving, industry automation, safety compliance, and security-threat monitoring. Running AI at the edge brings the ability to make intelligent real-time decisions on the device. Moreover, on-device AI is vital to preserving data privacy. Hence, edge AI is an active topic for research and engineering at major technology corporations, numerous start-ups, and in academia.

Deep learning neural network models have made tremendous improvements in prediction accuracy, tending to surpass human intelligence on several tasks. Typically, these models are large and hence not suitable for resource-constrained edge devices or real-time inference. It is also challenging to train deep learning models on the edge device, because training requires large amounts of data and compute resources.

We present the active ongoing research in optimizing deep learning models for inference at the edge using connection pruning, model quantization, and knowledge distillation. Then, we describe techniques to train or retrain the deep learning models on the resource-constrained edge device using new learning paradigms such as federated learning, weight imprinting, and training smaller models on fewer data.

Keywords: Edge AI, Model optimization, Model pruning, Model quantization, Knowledge distillation, Federated learning, Model retraining, Weight imprinting
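As toy stand-ins for two of the optimization techniques the chapter surveys, the sketch below shows magnitude-based connection pruning and uniform 8-bit weight quantization in NumPy. The helper names are our own, and real deployments would rely on a framework's tooling rather than hand-rolled routines like these.

```python
# Toy illustrations of two optimization techniques the chapter surveys:
# magnitude-based connection pruning and uniform 8-bit weight quantization.
# (NumPy stand-ins; real pipelines would use a framework's tooling.)

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_uint8(weights: np.ndarray):
    """Affine-quantize float weights to uint8 plus (scale, zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0   # avoid zero scale
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255)
    return q.astype(np.uint8), scale, zero_point

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    print("50% pruned:\n", magnitude_prune(w, 0.5))
    q, scale, zp = quantize_uint8(w)
    # Dequantize and check the reconstruction error (bounded by ~scale/2).
    print("max dequantization error:",
          float(np.abs((q.astype(np.float32) - zp) * scale - w).max()))
```

Both techniques shrink the model's memory and compute footprint for resource-constrained edge devices: pruning removes low-salience connections, while quantization stores weights in 8 bits with an affine mapping back to floats.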