
Showing papers by "Santonu Sarkar published in 2013"


Journal ArticleDOI
TL;DR: In this article, LiV3O8 nanorods are synthesized using a citrate-assisted sol-gel method followed by a room-temperature quenching technique, and detailed kinetic studies of electrodes comprising different weight percentages of conductive carbon and a fixed binder concentration are pursued by cyclic voltammetry (CV) and in situ electrochemical impedance spectroscopy (EIS) experiments.
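The CV-based kinetic analysis mentioned above typically extracts an apparent lithium-ion diffusion coefficient from the dependence of the peak current on the scan rate. A minimal sketch using the standard Randles-Sevcik relation at 25 °C; the function name and all sample values are hypothetical, not taken from the paper:

```python
import math

def randles_sevcik_diffusion(peak_current_a, n_electrons, area_cm2,
                             conc_mol_cm3, scan_rate_v_s):
    """Apparent diffusion coefficient D (cm^2/s) from the Randles-Sevcik
    equation at 25 C: i_p = 2.69e5 * n^(3/2) * A * C * sqrt(D) * sqrt(v),
    with i_p in A, A in cm^2, C in mol/cm^3, and v in V/s."""
    sqrt_d = peak_current_a / (2.69e5 * n_electrons**1.5 * area_cm2
                               * conc_mol_cm3 * math.sqrt(scan_rate_v_s))
    return sqrt_d**2

# Hypothetical electrode: 0.5 mA peak, 1-electron process, 1 cm^2,
# 5e-3 mol/cm^3 lithium concentration, 0.1 mV/s scan rate
d = randles_sevcik_diffusion(5e-4, 1, 1.0, 5e-3, 1e-4)
```

Repeating this over several scan rates and fitting the peak current against the square root of the scan rate gives a more robust estimate than a single point.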

73 citations


Journal ArticleDOI
30 Jun 2013
TL;DR: Nanodimensional materials such as transition metal oxides, polyanionic based materials and metal fluorides can be used as cathode while metal nanoparticles and alloys are preferred as anode for next-generation lithium ion batteries in order to obtain high reversible capacity, rate capability, safety and longer cycle life as discussed by the authors.
Abstract: Nanodimensional materials such as transition metal oxides, polyanionic materials and metal fluorides can be used as cathodes, while metal nanoparticles, alloys and metal oxides are preferred as anodes for next-generation lithium-ion batteries (LIBs), in order to obtain high reversible capacity, rate capability, safety, and long cycle life. These nanomaterials offer relatively short ionic and electronic pathways, which lead to better transport of both lithium ions and electrons to the particle core. This article emphasizes the effect of nanodimension on the electrochemical performance of cathode and anode materials. Their synthesis processes, electrochemical properties and electrode reaction mechanisms are briefly discussed and summarized. Furthermore, the article highlights recent scientific work and new progress in the field of LIBs, and points out directions for overcoming the existing issues of current lithium storage technology. In the future, we may overcome all the existing issues of LIBs and deliver excellent cathode and anode combinations that achieve maximum practical efficiency with low cost and ultimate safety for high-end applications.

14 citations


Proceedings ArticleDOI
24 Jun 2013
TL;DR: This paper proposes an algorithm that detects proactive triggers for remedial action, selects a VM (for migration) and also suggests a possible target PM, and shows the decrease in the number of SLA violations in a system using the approach over existing approaches that do not trigger migration in response to non-availability related SLA violation.
Abstract: SLA violations are typically viewed as service failures. If a service fails once, it will fail again unless remedial action is taken. In a virtualized environment, a common remedial action is to restart or reboot a virtual machine (VM). In this paper we present a VM live-migration policy that is aware of SLA threshold violations of workload response time, physical machine (PM) and VM utilization, as well as availability violations at the PM and VM. The migration policy takes into account PM failures and VM (software) failures, as well as workload features such as burstiness (coefficient of variation, or CoV, > 1), which calls for caution when selecting the target PM for these workloads. The proposed policy also considers migrating a VM when the utilization of the physical machine hosting it approaches its utilization threshold. We propose an algorithm that detects proactive triggers for remedial action, selects a VM for migration, and suggests a possible target PM. Via discrete event simulation of a relevant case study, we show the efficacy of our approach by plotting the decrease in the number of SLA violations relative to existing approaches that do not trigger migration in response to non-availability-related SLA violations.
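The trigger-detection and VM/target selection described above can be sketched as follows; the thresholds, dictionary fields, and the extra-headroom rule for bursty (CoV > 1) workloads are illustrative assumptions, not the authors' exact algorithm:

```python
def select_migration(vms, pms, util_threshold=0.8, cov_threshold=1.0):
    """Pick a (vm, target_pm) pair when a proactive trigger fires.
    A trigger fires when a VM's host PM utilization reaches its threshold.
    Bursty workloads (CoV > 1) are only placed on lightly loaded targets."""
    for vm in vms:
        host = vm["host"]
        if host["util"] >= util_threshold:              # proactive trigger
            bursty = vm["cov"] > cov_threshold
            candidates = [p for p in pms
                          if p is not host
                          and p["util"] + vm["util"] < util_threshold]
            if bursty:  # extra headroom for bursty workloads
                candidates = [p for p in candidates
                              if p["util"] < 0.5 * util_threshold]
            if candidates:
                target = min(candidates, key=lambda p: p["util"])
                return vm["name"], target["name"]
    return None

pm_a = {"name": "pm-a", "util": 0.85}
pm_b = {"name": "pm-b", "util": 0.20}
vms = [{"name": "vm-1", "host": pm_a, "util": 0.30, "cov": 1.4}]
print(select_migration(vms, [pm_a, pm_b]))  # ('vm-1', 'pm-b')
```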

13 citations


Proceedings ArticleDOI
01 Nov 2013
TL;DR: The Keep It Moving (KIM) software framework for the cloud controller that helps minimize service failures due to SLA violation of availability, utilization and response time in SaaS cloud data centers and formulate the selection of a target PM as a multi-objective optimization problem.
Abstract: Software failures, workload-related failures and job overload conditions bring about SLA violations in software-as-a-service (SaaS) systems. Existing work does not address mitigation of SLA violations completely: (i) none of it addresses mitigation of SLA violations in business-specific scenarios (SaaS, in our case), (ii) while some approaches do not address software and workload-related failures, others do not address the problem of target PM selection for workload migration comprehensively (leaving out vital considerations like workload compatibility checks between the migrating VM and the VMs at the target PM), and (iii) a clear mathematical mapping between workload, resource demand and SLA is lacking. In this paper, we present the Keep It Moving (KIM) software framework for the cloud controller, which helps minimize service failures due to SLA violations of availability, utilization and response time in SaaS cloud data centers. Though we consider migration to be the primary mitigation technique, we also try to mitigate SLA violations without migration. We achieve this by performing a capacity check on the host physical machine (PM) before migration to identify whether enough capacity is available on the current PM to address the upcoming SLA violations by restart/reboot or VM resizing. In certain cases, such as workload-related failures due to corrupt files, we prefer rerouting the workload to a replica VM over migration. We formulate the selection of a target PM as a multi-objective optimization problem. We validate our proposed approach using a trace-based discrete event simulation of a virtualized data center, where failure and workload characteristics are simulated from data extracted from real SaaS business server logs. We found that our approach can reduce SLA violations by 60% while also reducing VM downtime by approximately 10%.
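A weighted-sum scalarization is one common way to turn multi-objective target-PM selection into a single score; the objectives, weights, and the anti-affinity compatibility rule below are illustrative assumptions rather than KIM's actual formulation:

```python
def choose_target_pm(vm, pms, weights=(0.5, 0.3, 0.2)):
    """Score every PM that passes the capacity check and return the best.
    Objectives: post-migration utilization, migration cost (VM memory /
    link bandwidth), and an anti-affinity penalty when the target already
    hosts the same workload type."""
    w_util, w_cost, w_compat = weights
    best, best_score = None, float("inf")
    for pm in pms:
        if pm["free_cpu"] < vm["cpu"] or pm["free_mem"] < vm["mem"]:
            continue  # capacity check: skip PMs that cannot host the VM
        util_after = 1.0 - (pm["free_cpu"] - vm["cpu"]) / pm["cpu_total"]
        cost = vm["mem"] / pm["bandwidth"]
        incompat = 1.0 if vm["workload"] in pm["hosted_workloads"] else 0.0
        score = w_util * util_after + w_cost * cost + w_compat * incompat
        if score < best_score:
            best, best_score = pm["name"], score
    return best

vm = {"cpu": 2, "mem": 4, "workload": "db"}
pms = [
    {"name": "pm-1", "free_cpu": 1, "free_mem": 8, "cpu_total": 8,
     "bandwidth": 10, "hosted_workloads": {"web"}},   # fails capacity check
    {"name": "pm-2", "free_cpu": 4, "free_mem": 8, "cpu_total": 8,
     "bandwidth": 10, "hosted_workloads": {"db"}},    # anti-affinity penalty
    {"name": "pm-3", "free_cpu": 3, "free_mem": 8, "cpu_total": 8,
     "bandwidth": 10, "hosted_workloads": {"web"}},
]
print(choose_target_pm(vm, pms))  # pm-3
```

A Pareto-front search would avoid committing to fixed weights, at the cost of extra computation per migration decision.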

13 citations


Proceedings ArticleDOI
22 Aug 2013
TL;DR: This paper focuses on x86 architectures and study empirically the performance improvements introduced by Intel's VT and PCI-SIG's SR-IOV on a Xen-based hypervisor and indicates that hardware assistance indeed eliminates most overheads, especially those relating to network I/O, but non-negligible CPU overheads still remain.
Abstract: An application's performance can suffer from significant computational overheads when it is moved from a native to a virtualized environment. Adopting virtualization without understanding such overheads in detail can dramatically impact the overall performance of hosted applications. The rapid adoption of virtualization has fueled the development of new hardware technologies that promise to optimize the performance and scalability of processor and network I/O virtualization. However, no comprehensive empirical study of the effectiveness of these hardware assistance technologies is publicly available. In this paper we focus on x86 architectures and empirically study the performance improvements introduced by Intel's VT and PCI-SIG's SR-IOV on a Xen-based hypervisor. Using a range of benchmark programs, we compare benchmark scores and resource utilization between native and virtual environments for two different testbeds, one with hardware assistance and one without. The results indicate that hardware assistance indeed eliminates most overheads, especially those relating to network I/O, but non-negligible CPU overheads still remain. Also, there is no hardware technology that specifically deals with disk I/O virtualization, and significant overheads do arise in workloads requiring intensive disk usage.
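The native-versus-virtual comparison described above reduces to computing a relative overhead per benchmark; the scores below are hypothetical placeholders, not the paper's measurements:

```python
def overhead_pct(native_score, virtual_score, higher_is_better=True):
    """Relative virtualization overhead as a percentage of the native score.
    For throughput-style scores a lower virtual score means overhead; for
    latency-style scores a higher virtual score means overhead."""
    if higher_is_better:
        return 100.0 * (native_score - virtual_score) / native_score
    return 100.0 * (virtual_score - native_score) / native_score

# Hypothetical (native, virtual) benchmark results:
results = {
    "cpu":     (1000.0, 950.0),   # throughput, higher is better
    "net_io":  (940.0, 910.0),    # throughput, higher is better
    "disk_io": (120.0, 168.0),    # latency in ms, lower is better
}
for name, (nat, virt) in results.items():
    hib = name != "disk_io"
    print(f"{name}: {overhead_pct(nat, virt, hib):.1f}% overhead")
```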

7 citations


Proceedings ArticleDOI
02 Dec 2013
TL;DR: This paper provides a meta-analysis of research publications in software engineering to help with research education in SE; it identifies how different factors of publishing relate to the number of papers published and the citations received by a researcher, and how the most successful researchers collaborate and co-cite one another.
Abstract: Research into software engineering (SE) education is largely concentrated on teaching and learning issues in coursework programs. This paper, in contrast, provides a meta-analysis of research publications in software engineering to help with research education in SE. Studying publication patterns in a discipline will help research students and supervisors gain a deeper understanding of how successful research has occurred in the discipline. We present results from a large-scale empirical study covering over three and a half decades of software engineering research publications. We identify how different factors of publishing relate to the number of papers published and the citations received by a researcher, and how the most successful researchers collaborate and co-cite one another. Our results show that authors with high publication rates do not concentrate on a few selected venues; that researchers with high publication rates behave differently from researchers with high citation rates (the latter group co-authoring and citing their peers to a much lesser extent than the former); and that collaborators citing each other's works is not a significant phenomenon in SE research.

2 citations


Proceedings ArticleDOI
22 Aug 2013
TL;DR: This paper analyzes a corpus of 19,000+ papers, written by 21,000+ authors from 16 publication venues between 1975 and 2010, to understand the ideal team size that has produced maximum impact in software engineering research.
Abstract: In the three and a half decades since the inception of organized research publication in software engineering, the discipline has gained significant maturity. This journey to maturity has been guided by the synergy of ideas, individuals and interactions. On this journey, software engineering has evolved into an increasingly empirical discipline. Empirical sciences involve significant collaboration, leading to large teams working on research problems. In this paper we analyze a corpus of 19,000+ papers, written by 21,000+ authors from 16 publication venues between 1975 and 2010, to understand the ideal team size that has produced maximum impact in software engineering research, and whether researchers in software engineering have maintained the same co-authorship relations over long periods of time as a means of achieving research impact.
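Relating team size to impact reduces to grouping papers by author count and averaging citations per group; a toy sketch on a hypothetical corpus (not the 19,000-paper dataset):

```python
from collections import defaultdict

def impact_by_team_size(papers):
    """Average citations per paper, grouped by number of co-authors.
    Each paper is a (author_list, citation_count) pair."""
    totals = defaultdict(lambda: [0, 0])   # size -> [citation_sum, papers]
    for authors, citations in papers:
        bucket = totals[len(authors)]
        bucket[0] += citations
        bucket[1] += 1
    return {size: s / n for size, (s, n) in totals.items()}

# Hypothetical toy corpus:
papers = [
    (["a"], 4),
    (["a", "b"], 10),
    (["c", "d"], 6),
    (["a", "b", "c"], 3),
]
print(impact_by_team_size(papers))  # {1: 4.0, 2: 8.0, 3: 3.0}
```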

1 citation


01 Jan 2013
TL;DR: This paper presents results from an empirical study of 19,000+ SE research papers, written by 21,000+ authors from 16 publication venues between 1975 and 2010, examining four research questions around the ideal team size that has produced maximum impact in software engineering research.
Abstract: Software engineering (SE) as a discipline is in its fifth decade of existence. As is expected, the nature of SE has evolved over the decades, transforming from its origins in theoretical computer science to a more empirical identity. The last few decades have also seen increasing availability and easy accessibility of research publication data. This paper presents results from an empirical study of 19,000+ SE research papers, written by 21,000+ authors from 16 publication venues between 1975 and 2010. We examine four research questions: the ideal team size that has produced maximum impact in software engineering research; whether researchers in software engineering have maintained the same co-authorship relations over long periods of time as a means of achieving research impact; how we can predict whether two researchers will collaborate in the future based on their publication profiles; and whether there are distinct epochs in the publication history of software engineering research. Our results can inform some of the decisions that need to be taken in the ideation, planning, and execution of research agendas at the individual as well as organizational levels.
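For the collaboration-prediction question, a standard baseline (which may differ from the authors' actual method) is a common-neighbors score on the co-authorship graph:

```python
from collections import defaultdict
from itertools import combinations

def coauthor_graph(papers):
    """Build an undirected co-authorship adjacency from author lists."""
    adj = defaultdict(set)
    for authors in papers:
        for u, v in combinations(authors, 2):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def common_neighbors_score(adj, u, v):
    """Common-neighbors link-prediction baseline: the more past
    collaborators two authors share, the more likely they are to
    collaborate in the future."""
    return len(adj[u] & adj[v])

# Hypothetical toy corpus: each inner list is one paper's author list
papers = [["a", "b"], ["b", "c"], ["a", "c"], ["c", "d"]]
adj = coauthor_graph(papers)
print(common_neighbors_score(adj, "a", "d"))  # a and d share neighbor 'c' -> 1
```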