
Showing papers by "Santonu Sarkar published in 2017"


Journal ArticleDOI
TL;DR: In this article, the authors reported an ultrahigh specific discharge capacity (∼342 mA h g−1 at 0.1 A g−1 current rate) and excellent electrochemical performance of a doped ammonium vanadium oxide (NVO) cathode.
Abstract: In sodium-ion battery technology, the existing electrodes and electrolytes are still at an early stage of development, and more intense research is necessary before moving to mass production and application. High-capacity, rate-capable electrodes are a mandate, and very few combinations of cathode, anode, and electrolyte have been reported with excellent full cell performance and a long cycle life. Herein, we report for the first time an ultrahigh specific sodium discharge capacity (∼342 mA h g−1 at a 0.1 A g−1 current rate) and excellent electrochemical performance of a doped ammonium vanadium oxide (NVO) cathode. The cathode exhibited >99% capacity retention after 50 cycles at a very high current rate of 2 A g−1. Furthermore, the present report demonstrates full cell performance with a doped ammonium vanadium oxide (NVO) cathode against a hydrogenated sodium titanium oxide (NTO) anode. The full cell is capable of retaining 94% capacity after 400 cycles and also demonstrated its potential application in an LED-based table lamp. The present study shows a way to produce a rechargeable sodium-ion battery full cell and can be leveraged to facilitate large-scale renewable energy (RE) storage.
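The reported cycling figures can be turned into a rough per-cycle fade estimate. The sketch below assumes geometric (constant-ratio) capacity fade between cycles, which is a back-of-the-envelope simplification, not a model from the paper.

```python
# Rough per-cycle retention implied by the reported cycling figures,
# assuming geometric (constant-ratio) capacity fade between cycles.

def per_cycle_retention(total_retention: float, cycles: int) -> float:
    """Average retention ratio per cycle if fade is geometric."""
    return total_retention ** (1.0 / cycles)

# Full cell: 94% capacity retained after 400 cycles.
r_full = per_cycle_retention(0.94, 400)      # ~0.99985 per cycle
# Cathode half cell: ~99% retained after 50 cycles at 2 A/g.
r_cathode = per_cycle_retention(0.99, 50)    # ~0.99980 per cycle
```

Both numbers imply an average loss of well under 0.02% of capacity per cycle, which is what makes the reported cycle life notable.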

33 citations


Journal ArticleDOI
TL;DR: This paper studies SLA violations of a production SaaS platform, diagnoses the causes, unearths several critical failure modes, and then suggests various solution approaches to increase the availability of the platform as perceived by the end user.
Abstract: A software-as-a-service (SaaS) platform needs to provide its intended service as per its stated service-level agreements (SLAs). While SLA violations in SaaS platforms have been reported, not much work has been done to empirically characterize failures of SaaS. In this paper, we study SLA violations of a production SaaS platform, diagnose the causes, unearth several critical failure modes, and then suggest various solution approaches to increase the availability of the platform as perceived by the end user. Our approach combines field failure data analysis (FFDA) and fault injection. Our study is based on 283 days of operational logs of the platform. During this time, the platform received business workload from 42 customers spread over 22 countries. We first developed a set of home-grown FFDA tools to analyze the logs, then implemented a fault injector to automatically inject several runtime errors into the application code written in .NET/C#, and collated the injection results. We summarize our findings as follows: first, system failures caused 93% of all SLA violations; second, our fault injector was able to recreate a few cases of bursts of SLA violations that could not be diagnosed from the logs; and third, the fault injection mechanism could recreate several error propagation paths leading to data corruptions that the failure data analysis could not reveal. Finally, the paper presents some system-level implications of this study and shows how the joint use of fault injection and log analysis may help in improving the reliability of the measured platform.
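The FFDA step boils down to attributing each logged SLA violation to a failure category and counting. The sketch below shows the shape of such a pass; the log format and category names are hypothetical, since the paper does not publish its log schema.

```python
# Minimal FFDA-style pass over operational logs: attribute SLA
# violations to failure categories and count them.  The log line
# format and cause tags here are invented for illustration.

from collections import Counter

def classify_violations(log_lines):
    """Count SLA violations per failure cause from tagged log lines."""
    counts = Counter()
    for line in log_lines:
        if "SLA_VIOLATION" in line:
            # e.g. "2017-03-01T10:00Z SLA_VIOLATION cause=system_failure"
            cause = line.rsplit("cause=", 1)[-1].strip()
            counts[cause] += 1
    return counts

logs = [
    "2017-03-01T10:00Z SLA_VIOLATION cause=system_failure",
    "2017-03-01T11:00Z SLA_VIOLATION cause=overload",
    "2017-03-02T09:30Z SLA_VIOLATION cause=system_failure",
    "2017-03-02T09:31Z HEARTBEAT ok",
]
counts = classify_violations(logs)
```

With real 283-day logs, the resulting distribution is what supports a claim like "system failures caused 93% of all SLA violations"; fault injection then probes the cases the logs cannot explain.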

19 citations


Proceedings ArticleDOI
01 Dec 2017
TL;DR: This paper analyzes a popular design abstraction framework called "Thrust" from NVIDIA and proposes an extension called Thrust++ that provides abstraction over the memory hierarchy of an NVIDIA GPU, allowing developers to make efficient use of shared memory and providing better overall control over the GPU memory hierarchy.
Abstract: A good design abstraction framework for high performance computing should provide a higher-level programming abstraction that strikes a balance between abstraction and visibility over the hardware, so that the software developer can write portable software without having to understand the hardware nuances, yet exploit the compute power optimally. In this paper, we analyze a popular design abstraction framework called "Thrust" from NVIDIA and propose an extension called Thrust++ that provides abstraction over the memory hierarchy of an NVIDIA GPU. Thrust++ allows developers to make efficient use of shared memory and, overall, provides better control over the GPU memory hierarchy while writing applications in Thrust style for the CUDA backend. We show that when applications are written for the CUDA backend using Thrust++, they have minimal performance degradation compared to their equivalent CUDA versions. Further, Thrust++ provides almost a 4x speedup compared to Thrust for certain compute-intensive kernels that repeatedly use the reduce operation.
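The reported speedup for repeated reductions is consistent with a simple cost argument: caching a block in fast (shared) memory amortizes the expensive global-memory load over many passes. The toy model below illustrates that argument only; the latencies and formulas are illustrative assumptions, not measurements from the paper or the Thrust++ API.

```python
# Toy cost model for why caching an input block in fast (shared)
# memory pays off when a kernel reduces the same data repeatedly.
# Latencies are illustrative units, not measured GPU numbers.

def total_time(passes, t_global, t_fast, cached):
    """Total time for `passes` reduce passes over one data block."""
    if cached:
        # Pay the global-memory load once, then read from fast memory.
        return t_global + passes * t_fast
    # Re-read from global memory on every pass.
    return passes * t_global

uncached = total_time(passes=8, t_global=100.0, t_fast=5.0, cached=False)
cached = total_time(passes=8, t_global=100.0, t_fast=5.0, cached=True)
speedup = uncached / cached  # grows with the number of passes
```

The more often a kernel revisits the same data, the larger the gap, which matches the paper's observation that the gain appears for kernels that repeatedly use reduce.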

8 citations


Book ChapterDOI
03 Oct 2017
TL;DR: This paper presents a translation validation tool for verifying optimizing and parallelizing code transformations by checking equivalence between two PRES+ models, one representing the source code and the other representing its optimized and parallelized version.
Abstract: An application program can go through significant optimizing and parallelizing transformations, both automated and human guided, before being mapped to an architecture. Formal verification of these transformations is crucial to ensure that they preserve the original behavioural specification. The PRES+ model (Petri net based Representation of Embedded Systems), encompassing data processing, is used to model parallel behaviours more vividly. This paper presents a translation validation tool for verifying optimizing and parallelizing code transformations by checking equivalence between two PRES+ models, one representing the source code and the other representing its optimized and/or parallelized version.
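PRES+ builds on ordinary Petri net semantics: a transition is enabled when its input places hold enough tokens, and firing it consumes and produces tokens. The sketch below implements only that standard marking/firing rule; PRES+ additionally attaches data values and functions to tokens, which this simplification omits.

```python
# Minimal Petri net marking/firing semantics (standard definitions).
# PRES+ extends this with token values and transition functions,
# which are omitted here.

def enabled(marking, pre):
    """A transition is enabled iff each input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume `pre` tokens, produce `post` tokens."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# A single transition t moving a token from place p1 to place p2.
m0 = {"p1": 1}
m1 = fire(m0, pre={"p1": 1}, post={"p2": 1})
```

Equivalence checking between two such models then amounts to relating the behaviours reachable by firing sequences in each.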

5 citations


Proceedings ArticleDOI
03 Apr 2017
TL;DR: It is argued that research topics, rather than individual publications, have wider relevance in the research ecosystem, for individuals as well as organizations.
Abstract: Predicting the future is hard, more so in active research areas. In this paper, we customize an established model for citation prediction of research papers and apply it to research topics. We argue that research topics, rather than individual publications, have wider relevance in the research ecosystem, for individuals as well as organizations. In this study, topics are extracted, using natural language processing techniques, from a corpus of software engineering publications covering 55,000+ papers written by more than 70,000 authors across 56 publication venues over a span of 38 years. We demonstrate how critical aspects of the original paper-based prediction model remain valid for a topic-based approach. Our results indicate the customized model is able to predict citations for many of the topics considered in our study with reasonably high accuracy. These results point to the promise of citation prediction for research topics and its utility for individual researchers as well as research groups.
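The abstract does not name the topic-extraction technique; topic models such as LDA are common for corpora of this size. The stdlib-only sketch below uses the simplest possible baseline, frequent terms after stopword removal, purely to show the shape of the extraction step, not the paper's method.

```python
# Simplest-baseline topic extraction from paper titles: most frequent
# non-stopword terms.  Real pipelines use proper topic models (e.g.
# LDA); this only illustrates the pipeline shape.

from collections import Counter

STOPWORDS = {"a", "an", "the", "of", "for", "and", "in", "on", "to"}

def top_terms(titles, k=3):
    """Return the k most frequent non-stopword terms across titles."""
    words = Counter()
    for title in titles:
        for w in title.lower().split():
            if w not in STOPWORDS:
                words[w] += 1
    return [w for w, _ in words.most_common(k)]

titles = [
    "fault injection for cloud reliability",
    "fault localization in cloud systems",
    "reliability of cloud platforms",
]
topics = top_terms(titles)
```

Once each paper is assigned to topics, per-topic citation counts over time form the series the customized prediction model is fitted to.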

4 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: This paper compares the performance of three Thrust applications with their corresponding native versions in the CUDA, OpenMP, Xeon-Phi, and CPP backends and shows quantitatively that, while it is easier to write an application using Thrust, the framework does not provide the programmer any abstraction over the memory hierarchy of the underlying backend.
Abstract: High performance computing applications are far more difficult to write; therefore, practitioners expect well-tuned software to last long and provide optimized performance even when the hardware is upgraded. It may also be necessary to write software using sufficient abstraction over the hardware so that it is capable of running on heterogeneous architectures. Therefore, a proper programming abstraction paradigm is required that strikes a balance between abstraction and visibility over the hardware, so that the programmer can write a program without having to understand the hardware nuances, yet exploit the compute power optimally. In this paper, we analyze the power of design abstraction and the performance of a popular design abstraction framework called Thrust. We show quantitatively that while it is easier to write an application using Thrust than to write the same in the native CUDA or OpenMP backends, the framework does not provide the programmer any abstraction over the memory hierarchy of the underlying backend. We compare the performance of three Thrust applications with their corresponding native versions in the CUDA, OpenMP, Xeon-Phi, and CPP backends and demonstrate that the current Thrust version performs poorly in most cases when the application is compute intensive. However, the framework provides close to native performance for non-compute-intensive applications. We analyze the reasons for this performance and highlight the improvements necessary for the framework.

4 citations


Proceedings ArticleDOI
26 Jun 2017
TL;DR: This paper compares the performance of a Thrust application in the CUDA, OpenMP, and CPP backends with the native versions written for these backends and finds that the current Thrust version performs poorly in most cases.
Abstract: High performance computing applications are far more difficult to write; therefore, practitioners expect well-tuned software to last long and provide optimized performance even when the hardware is upgraded. It may also be necessary to write software using sufficient abstraction over the hardware so that it is capable of running on heterogeneous architectures. A good design abstraction paradigm strikes a balance between abstraction and visibility over the hardware. This allows the programmer to write applications without having to understand the hardware nuances while exploiting the computing power optimally. In this paper, we analyze a popular design abstraction framework called Thrust from both ease-of-programming and performance perspectives. We show that while the Thrust framework describes an algorithm more cleanly than the native CUDA or OpenMP version, it has quite a few design limitations. With respect to CUDA, it does not provide the programmer any abstraction over shared, texture, or constant memory usage. We have compared the performance of a Thrust application in the CUDA, OpenMP, and CPP backends with the native versions (implementing exactly the same algorithm) written for these backends and found that the current Thrust version performs poorly in most cases. While we conclude that the framework is not ready for writing applications that can extract optimal performance from the hardware, we also highlight the improvements necessary for the framework to make its performance comparable.

2 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper presents an end-to-end, fully automated equivalence checker that constructs PRES+ models from high-level language programs and validates optimizing and parallelizing transformations using a state-of-the-art FSMD equivalence checker.
Abstract: Among the various models of computation (MoCs) that have been used to model parallel programs, the Petri net has been one of the most widely adopted. The traditional Petri net model is extended into the PRES+ model, which is specially equipped to precisely represent parallel programs running on heterogeneous and embedded systems. With the inclusion of multicore and multiprocessor systems in the domain of embedded systems, it has become important to validate the optimizing and parallelizing transformations which system specifications go through before deployment. Although PRES+ model based equivalence checkers for validating such transformations already exist, construction of the PRES+ models from the original and the translated programs was carried out manually in these equivalence checkers, leaving scope for inaccurate representation of the programs due to human intervention. Furthermore, the PRES+ model tends to grow more rapidly with program size than other MoCs, such as the FSMD. To alleviate these drawbacks, we propose a method for automated construction of PRES+ models from high-level language programs and use an existing translation scheme to convert PRES+ models to FSMD models, validating the transformations using a state-of-the-art FSMD equivalence checker. Thus, we have composed an end-to-end, fully automated equivalence checker for validating optimizing and parallelizing transformations, as demonstrated by our experimental results.

1 citation


Proceedings ArticleDOI
01 Jun 2017
TL;DR: This paper proposes a framework backed by a model, to automatically determine the HA-enabled solution with the least TCO for a given uptime SLA and slippage penalty, and attempts to establish that this work is best implemented as a brokered service that recommends an uptime-optimized cloud architecture.
Abstract: Enterprise workloads usually call for an uptime service level agreement (SLA), on pain of contractual penalty in the event of slippage. Often, the strategy is to introduce ad hoc HA (high availability) mechanisms in response. Implemented solutions that we surveyed do not mathematically map their availability model to the required uptime SLA or to any expected penalty payout. In most client cases that we observed, this resulted either in an over-engineered solution with more redundancies than required, or in an inadequate solution that could potentially slip on the system uptime SLA stipulated in the contract. In this paper, we propose a framework, backed by a model, to automatically determine the HA-enabled solution with the least TCO (total cost of ownership) for a given uptime SLA and slippage penalty. We attempt to establish that our work is best implemented as a brokered service that recommends an uptime-optimized cloud architecture.
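The core trade-off can be sketched with textbook availability formulas: each added replica raises availability but adds cost, and missing the SLA incurs the penalty. Everything below, the parallel-redundancy formula, the step-function penalty, and all the numbers, is an illustrative assumption in the spirit of the framework, not the paper's actual model.

```python
# Illustrative availability/TCO trade-off: choose the replica count
# with the least total cost for a given uptime SLA and penalty.
# Cost model and numbers are assumptions for this sketch.

def availability(n_replicas, a_single=0.99):
    """Parallel redundancy: down only if every replica is down."""
    return 1.0 - (1.0 - a_single) ** n_replicas

def tco(n_replicas, cost_per_replica, sla, penalty):
    """Infrastructure cost plus penalty if availability misses the SLA."""
    expected_penalty = penalty if availability(n_replicas) < sla else 0.0
    return n_replicas * cost_per_replica + expected_penalty

# Least-TCO configuration for a 99.9% uptime SLA.
best = min(range(1, 5),
           key=lambda n: tco(n, cost_per_replica=100,
                             sla=0.999, penalty=10_000))
```

With these numbers, one replica misses the SLA and eats the penalty, while three or four replicas over-engineer; two replicas minimize TCO, which is exactly the kind of answer the proposed brokered service would recommend.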