
Showing papers by "Santonu Sarkar published in 2014"


Journal ArticleDOI
TL;DR: In this paper, a high-rate and high-energy-density cathode based on intercalated ammonium vanadate is reported for the first time, enabled mainly by the use of an interactive binder such as carboxymethyl cellulose (CMC) or alginate instead of poly(vinylidene difluoride), along with a 2D morphology.

64 citations




Proceedings ArticleDOI
31 May 2014
TL;DR: This paper characterizes operational failures of a production Custom Package Good Software-as-a-Service (SaaS) platform and presents the lessons learned and how the findings and the implemented analysis tool allow platform developers to improve platform code, system settings and customer management.
Abstract: This paper characterizes operational failures of a production Custom Package Good Software-as-a-Service (SaaS) platform. Event logs collected over 283 days of in-field operation are used to characterize platform failures. The characterization is performed by estimating (i) the common failure types of the platform, (ii) the key factors impacting platform failures, (iii) the failure rate, and (iv) how user workload (files submitted for processing) impacts the failure rate. The major findings are: (i) 34.1% of failures are caused by unexpected values in customers' data, (ii) nearly 33% of failures are due to timeouts, and (iii) the failure rate increases as the workload intensity (transactions/second) increases, while there is no statistical evidence that it is influenced by the workload volume (size of users' data). Finally, the paper presents the lessons learned and shows how the findings and the implemented analysis tool allow platform developers to improve platform code, system settings, and customer management.
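The kind of characterization the paper performs on event logs can be sketched as follows. This is a minimal illustration, not the paper's tooling; the event records and field names are invented for the example.

```python
from collections import Counter

# Hypothetical event records: (day_of_operation, failure_type).
# In the paper, such records would be parsed from 283 days of event logs.
events = [
    (1, "bad_customer_data"),
    (1, "timeout"),
    (2, "bad_customer_data"),
    (3, "timeout"),
    (3, "bad_customer_data"),
]

# (i) relative frequency of each failure type
counts = Counter(ftype for _, ftype in events)
total = sum(counts.values())
shares = {ftype: n / total for ftype, n in counts.items()}

# (iii) failure rate: failures per day of observed operation
observed_days = {day for day, _ in events}
failure_rate = total / len(observed_days)

print(shares)        # e.g. customer-data failures dominate in this toy sample
print(failure_rate)  # failures per day
```

Relating the per-day failure counts to per-day workload intensity (transactions/second), as finding (iii) does, would then be a correlation over these aggregates.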

21 citations


Proceedings ArticleDOI
03 Nov 2014
TL;DR: This paper proposes the application of a conceptual clustering technique for filtering alerts and shows the results obtained for seven months of security alerts generated in a real large scale SaaS Cloud system.
Abstract: In response to attacks against corporate and enterprise networks, administrators deploy intrusion detection systems, monitors, vulnerability scanners, and log systems. These systems monitor and record host and network device activities, searching for signs of anomalies and security incidents. In doing so, they generally produce a huge number of alerts that overwhelm security analysts. This paper proposes the application of a conceptual clustering technique for filtering alerts and shows the results obtained over seven months of security alerts generated in a real large-scale SaaS Cloud system. The technique has proven useful in supporting the manual analysis activities conducted by the operations team of the reference Cloud system.
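One simple way to sketch conceptual clustering for alert filtering is to generalize alert attributes up a small taxonomy (e.g. host address to subnet) and merge alerts whose generalized attributes coincide, so analysts see one line per concept instead of one per raw alert. The alerts, attributes, and taxonomy below are illustrative, not from the paper.

```python
from collections import defaultdict

# Each alert is a tuple of categorical attributes (signature, source, target).
alerts = [
    ("port_scan", "10.0.1.5", "web"),
    ("port_scan", "10.0.1.9", "web"),
    ("port_scan", "10.0.2.3", "db"),
    ("brute_force", "10.0.1.5", "ssh"),
]

def generalize_source(ip):
    # Climb one level in a simple attribute taxonomy: host -> /24 subnet.
    return ".".join(ip.split(".")[:3]) + ".0/24"

# Merge alerts that fall under the same generalized concept,
# keeping a count so high-volume concepts stand out.
clusters = defaultdict(int)
for sig, src, dst in alerts:
    clusters[(sig, generalize_source(src), dst)] += 1

for concept, n in sorted(clusters.items()):
    print(concept, n)
```

Four raw alerts collapse to three concepts here; at the paper's scale (months of alerts), this kind of merging is what makes manual review tractable.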

21 citations


Journal ArticleDOI
01 Jan 2014
TL;DR: Empirically studies how the energy efficiency of a map-reduce job varies with increasing parallelism and network bandwidth on an HPC cluster, and suggests strategies for configuring the degree of parallelism, network bandwidth, and power management features in an HPC cluster for energy-efficient execution of map-reduce jobs.
Abstract: The map-reduce programming model is commonly used for efficient scientific computations, as it executes tasks in a parallel and distributed manner on large data volumes. HPC infrastructure can effectively increase the parallelism of map-reduce tasks; however, such an execution incurs high energy and data transmission costs. Here we empirically study how the energy efficiency of a map-reduce job varies with increasing parallelism and network bandwidth on an HPC cluster. We also investigate the effectiveness of power-aware systems in managing the energy consumption of different types of map-reduce jobs. We find that for some jobs the energy efficiency degrades at a high degree of parallelism, while for others it improves at low CPU frequency. Consequently, we suggest strategies for configuring the degree of parallelism, network bandwidth, and power management features in an HPC cluster for energy-efficient execution of map-reduce jobs.
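Energy efficiency here can be read as work done per joule consumed. A minimal sketch of how efficiency can degrade as parallelism grows (because power draw rises faster than runtime falls) — all measurements below are invented for illustration, not the paper's data:

```python
# Hypothetical measurements: degree of parallelism -> (runtime_s, avg_power_w)
runs = {
    8:  (1200.0, 1600.0),
    16: (700.0,  3100.0),
    32: (450.0,  6300.0),
}

records = 1_000_000  # records processed by the job (illustrative)

def energy_efficiency(runtime_s, power_w):
    # Records processed per joule consumed; energy = runtime * average power.
    return records / (runtime_s * power_w)

for p, (t, w) in sorted(runs.items()):
    print(p, round(energy_efficiency(t, w), 6))
```

In this toy data the job keeps getting faster from 8 to 32 workers, yet records-per-joule falls at each step, which is the kind of trade-off the paper's configuration strategies target.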

16 citations


Proceedings ArticleDOI
08 Dec 2014
TL;DR: This paper investigates the use of different text weighting schemes to filter an average volume of 1,000 alerts/day produced by a security information and event management tool in a production SaaS Cloud, and develops a log-entropy scheme to pinpoint relevant information across the mass of daily textual alerts.
Abstract: Security alerts collected under real workload conditions represent a goldmine of information for protecting the integrity and confidentiality of a production Cloud. Nevertheless, the volume of runtime alerts overwhelms operations teams and makes forensics hard and time consuming. This paper investigates the use of different text weighting schemes to filter an average volume of 1,000 alerts/day produced by a security information and event management (SIEM) tool in a production SaaS Cloud. As a result, a filtering approach based on the log-entropy scheme has been developed to pinpoint relevant information across the mass of daily textual alerts. The proposed filter is valuable in supporting the operations team and allowed the identification of real incidents that affected several nodes and required manual response.
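The standard log-entropy global weight (as used in information retrieval; the paper's exact formulation may differ) assigns weights near 0 to terms spread evenly across all documents and near 1 to terms concentrated in a few — which is why it can separate boilerplate alert text from incident-specific tokens. A toy sketch, with an invented alert corpus:

```python
import math

# Toy corpus: each "document" is the token list of one alert message.
docs = [
    ["login", "failed", "root"],
    ["login", "failed", "admin"],
    ["disk", "quota", "exceeded", "root"],
]

n = len(docs)
terms = sorted({t for d in docs for t in d})

def entropy_weight(term):
    # Global log-entropy weight: 1 + sum_j(p_j * ln p_j) / ln n,
    # where p_j is the term's share of its total frequency in doc j.
    freqs = [d.count(term) for d in docs]
    gf = sum(freqs)
    h = 0.0
    for f in freqs:
        if f:
            p = f / gf
            h += p * math.log(p)
    return 1.0 + h / math.log(n)

weights = {t: round(entropy_weight(t), 3) for t in terms}
print(weights)
```

Here "disk" (one alert only) gets weight 1.0, while "login" and "root" (spread across alerts) score lower; multiplying these global weights by a local term frequency yields the filtered ranking.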

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the electrochemical performance of LiFePO4–Li3V2(PO4)3 composites, providing physical insight into the stability of the solid electrolyte interface and the subsequent decrease in charge transfer resistance of the composite materials.
Abstract: Several chemical compositions of (1 – x)LiFePO4–xLi3V2(PO4)3 with a Li3V2(PO4)3-decorated LiFePO4 morphology are synthesized via modified solid-state synthesis. The current study is undertaken to establish a relation between composite formation and the composites' electrochemistry as an excellent cathode material. A detailed physical and structural investigation revealed the formation of a Li3V2(PO4)3-decorated LiFePO4 composite. To investigate the electrochemical performance of the LiFePO4–Li3V2(PO4)3 composites, a series of compositions are prepared with the end members LiFePO4 and Li3V2(PO4)3. The specific composition 0.97LiFePO4–0.03Li3V2(PO4)3 shows a high reversible capacity of ∼163.8 mAh g–1 at a 1 C current rate in the potential window of 2–4.5 V. The present study provides physical insight into the stability of the solid electrolyte interface and the subsequent decrease in charge transfer resistance of the composite materials, which exhibit excellent electrode kinetics and electrochemical stability compared to pr...

14 citations


Journal ArticleDOI
TL;DR: In this article, a carbon-coated submicron LiFePO4 composite was prepared via a solution-based method followed by a carbonization process; the composite exhibits excellent electrochemical properties, including superior high-rate cyclic performance and cyclic stability at relatively high charge–discharge current rates at 20°C.

14 citations


Proceedings ArticleDOI
13 May 2014
TL;DR: This paper proposes a framework and a tool to automatically discover invariants from application logs and to online detect their violation and shows the usefulness of the approach to detect runtime issues from logs in the form of violations of selected invariants.
Abstract: The increasing popularity of Software as a Service (SaaS) stresses the need for solutions to predict failures and avoid service interruptions, which invariably result in SLA violations and severe loss of revenue. A promising approach to continuously monitoring the correct functioning of the system is to check the execution's conformance to a set of invariants, i.e., properties that must hold when the system is deemed to run correctly. In this paper we propose a framework and a tool to automatically discover invariants from application logs and to detect their violation online. The framework has been applied to 9 months of log events from a real-world SaaS application. Results show that the proposed tool is able to automatically select 12 invariants, under a stringent goodness-of-fit criterion, out of more than 500 potential relationships. We also show the usefulness of our approach in detecting runtime issues from logs in the form of violations of the selected invariants, corresponding to silent errors that usually go unnoticed by system maintenance personnel even though they can represent symptoms of upcoming service failures.
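One common family of log invariants is equality of event counts (e.g. every request received should produce exactly one response). A minimal sketch of mining such invariants from historical windows and then checking a new window online — the event names and counts are invented, and the paper's actual invariant forms and goodness-of-fit criteria are richer than this:

```python
# Log event counts per execution window (illustrative training data).
windows = [
    {"req_received": 10, "resp_sent": 10, "cache_miss": 3},
    {"req_received": 7,  "resp_sent": 7,  "cache_miss": 5},
    {"req_received": 12, "resp_sent": 12, "cache_miss": 1},
]

def mine_equality_invariants(windows):
    # Candidate invariant: count(a) == count(b) held in every window seen.
    events = sorted({e for w in windows for e in w})
    found = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if all(w.get(a, 0) == w.get(b, 0) for w in windows):
                found.append((a, b))
    return found

invariants = mine_equality_invariants(windows)

def check(window, invariants):
    # Online detection: return the invariants the new window violates.
    return [(a, b) for a, b in invariants
            if window.get(a, 0) != window.get(b, 0)]

print(invariants)
print(check({"req_received": 9, "resp_sent": 6}, invariants))
```

A violated invariant (here, three requests with no response) is exactly the kind of silent error the abstract describes: nothing crashes, but the logs no longer satisfy a property that always held before.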

11 citations


Proceedings ArticleDOI
03 Nov 2014
TL;DR: The accuracy and the completeness of an anomaly detection system based on invariants is discussed and the rationality of the approach is shown and the impact of the invariant mining strategy on the detection capabilities is discussed.
Abstract: Invariants represent properties of a system that are expected to hold when everything goes well. Thus, the violation of an invariant most likely corresponds to the occurrence of an anomaly in the system. In this paper, we discuss the accuracy and the completeness of an anomaly detection system based on invariants. The case study is a back-end operation of a SaaS platform. Results show the rationality of the approach, and we discuss the impact of the invariant mining strategy on the detection capabilities, both in terms of accuracy and of time to reveal violations.
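Accuracy and completeness of such a detector are typically evaluated as precision and recall against labeled windows. A small sketch with invented labels (the paper's exact metrics and dataset are not reproduced here):

```python
# Hypothetical evaluation: (ground_truth_anomalous, detector_flagged) per window.
results = [
    (True, True), (True, False), (False, False),
    (False, True), (True, True), (False, False),
]

tp = sum(1 for truth, flag in results if truth and flag)
fp = sum(1 for truth, flag in results if not truth and flag)
fn = sum(1 for truth, flag in results if truth and not flag)

precision = tp / (tp + fp)   # accuracy: how many raised violations were real
recall = tp / (tp + fn)      # completeness: how many real anomalies were caught

print(precision, recall)
```

A mining strategy that admits more candidate invariants tends to raise recall at the cost of precision, which is the trade-off the paper examines.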

Proceedings ArticleDOI
09 Oct 2014
TL;DR: This paper proposes a method of analyzing an existing sequential source code that contains data-parallel loops, and gives a reasonably accurate prediction of the extent of speedup possible from this algorithm.
Abstract: Parallelizing an existing sequential application to achieve a good speed-up on a data-parallel infrastructure is a difficult and time-consuming effort. One of the important steps towards this is to assess whether the existing application, in its current form, can be parallelized to get the desired speedup. In this paper, we propose a method for analyzing an existing sequential source code that contains data-parallel loops and giving a reasonably accurate prediction of the extent of speedup achievable. The proposed method performs static and dynamic analysis of the sequential source code to determine the time required by various portions of the code, including the data-parallel portions. Subsequently, it uses a set of novel invariants to calculate the various bottlenecks that exist if the program is to be ported to a GPGPU platform, and predicts the extent of parallelization the GPU must deliver in order to achieve the desired end-to-end speedup. Our approach does not require creating GPU code skeletons of the data-parallel portions of the sequential code, thereby reducing the performance prediction effort. We observed reasonably accurate speedup predictions when we tested our approach on multiple well-known Rodinia benchmark applications, a popular matrix multiplication program, and a fast Walsh transform program.
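The core of any such end-to-end speedup prediction is an Amdahl's-law-style model: the serial fraction caps the speedup no matter how fast the GPU kernels run, and bottlenecks like host-device transfers eat into the rest. The function below is an illustrative simplification with invented parameters, not the paper's invariant-based model:

```python
def predicted_speedup(parallel_fraction, kernel_speedup, transfer_overhead=0.0):
    """Amdahl-style sketch of end-to-end speedup after GPU porting.

    parallel_fraction: share of sequential runtime spent in data-parallel
        loops (obtained by profiling, as in the paper's dynamic analysis).
    kernel_speedup: assumed speedup of those loops on the GPU.
    transfer_overhead: host<->device copy time as a fraction of the
        original total runtime (one of the porting "bottlenecks").
    """
    serial = 1.0 - parallel_fraction
    parallel = parallel_fraction / kernel_speedup
    return 1.0 / (serial + parallel + transfer_overhead)

# e.g. 80% of time in parallel loops, kernels 20x faster, 5% copy cost:
print(round(predicted_speedup(0.8, 20.0, 0.05), 2))
```

Even with 20x-faster kernels, the 20% serial residue plus transfer cost limits the whole program to roughly 3.4x here, which is why assessing parallelizability before porting is worthwhile.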

Patent
23 Sep 2014
TL;DR: In this article, the authors proposed a method for determining co-locatability of a plurality of virtual machines on one or more physical infrastructures, which involves identifying workloads which have high variability from the time series data and determining the workload capacity threshold of identified workloads.
Abstract: This technology relates to a device and method for determining the co-locatability of a plurality of virtual machines on one or more physical infrastructures. The plurality of virtual machines hosts a plurality of workloads. The method involves identifying workloads that have high variability from time series data and determining the workload capacity threshold of the identified workloads. Thereafter, candidate workloads are selected among the identified workloads to co-locate on a virtual machine based on the workload variability. After that, the total capacity required by each candidate workload pair to meet the service requirement is determined based on the workload capacity threshold. Then, the optimal sharing point of each workload of the pair with respect to the other workload of the pair is identified. Further, the percentage compatibility of each workload pair is determined, and finally the candidate workloads are co-located based on the optimal sharing point and percentage compatibility.
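The first two steps — flagging high-variability workloads and sizing a candidate pair — can be sketched as below. The time series, the 0.2 variability cutoff, and the peak-of-sums capacity rule are illustrative assumptions, not the patent's actual thresholds or sharing-point computation:

```python
import statistics

# Hypothetical CPU-demand time series for three workloads.
workloads = {
    "w1": [20, 80, 25, 75, 30, 70],   # high variability
    "w2": [75, 25, 70, 30, 65, 35],   # high variability, peaks opposite to w1
    "w3": [50, 51, 49, 50, 50, 50],   # steady
}

def variability(series):
    # Coefficient of variation as a simple variability measure.
    return statistics.pstdev(series) / statistics.fmean(series)

# Identify high-variability workloads as co-location candidates
# (assumed cutoff of 0.2 for this sketch).
candidates = [w for w, s in workloads.items() if variability(s) > 0.2]

def pair_capacity(a, b):
    # Capacity needed if a and b share a host: peak of the summed demand.
    return max(x + y for x, y in zip(workloads[a], workloads[b]))

print(sorted(candidates))
print(pair_capacity("w1", "w2"))
```

Because w1 and w2 peak at different times, their combined demand never exceeds 105 units, well below the 155 needed to host each at its own peak separately — the saving that makes such pairs attractive to co-locate.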