Author

Chandan Kalita

Bio: Chandan Kalita is an academic researcher from the Indian Institute of Technology Guwahati. The author has contributed to research on the topics of memory bus and copy-on-write, has an h-index of 2, and has co-authored 2 publications receiving 5 citations.

Papers
Posted Content
TL;DR: This paper presents the design and implementation of a file system on NVRAM called DurableFS, which provides atomicity and durability of file operations to applications, and shows that there is only a 7% degradation in performance due to providing these guarantees.
Abstract: With the availability of hybrid DRAM-NVRAM memory on the memory bus of CPUs, a number of file systems on NVRAM have been designed and implemented. In this paper we present the design and implementation of a file system on NVRAM called DurableFS, which provides atomicity and durability of file operations to applications. Due to the byte-level random accessibility of memory, it is possible to provide these guarantees without much overhead. We use standard techniques like copy-on-write for data, and a redo log for metadata changes, to build an efficient file system which provides durability and atomicity guarantees at the time a file is closed. Benchmarks on the implementation show that there is only a 7% degradation in performance due to providing these guarantees.

4 citations
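The close-time commit the abstract describes can be pictured with a short sketch: data updates go to fresh copy-on-write blocks, metadata changes go to a redo log, and one persisted commit flag at close makes everything visible atomically. This is a minimal illustration, not DurableFS's actual code; all names here (block_t, redo_entry, nv_flush) are hypothetical.

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096

typedef struct { uint8_t data[BLOCK_SIZE]; } block_t;

typedef struct {
    uint64_t inode;      /* inode whose block pointer changes    */
    uint64_t old_block;  /* block index before the update        */
    uint64_t new_block;  /* freshly written copy-on-write block  */
} redo_entry;

/* Stand-in for persisting a range to NVRAM (e.g. CLWB + SFENCE). */
static void nv_flush(const void *addr, size_t len) { (void)addr; (void)len; }

/* A write never touches the old block: copy, modify, persist, log. */
static void cow_write(block_t *pool, redo_entry *log, size_t *nlog,
                      uint64_t inode, uint64_t old_idx, uint64_t new_idx,
                      const void *buf, size_t off, size_t len) {
    memcpy(&pool[new_idx], &pool[old_idx], BLOCK_SIZE);
    memcpy(pool[new_idx].data + off, buf, len);
    nv_flush(&pool[new_idx], BLOCK_SIZE);
    log[(*nlog)++] = (redo_entry){ inode, old_idx, new_idx };
}

/* At close(), persist the log, then persist one commit flag. A crash
 * before the flag leaves the old blocks intact (atomicity); a crash
 * after it is handled by replaying the redo log (durability). */
static void commit_on_close(redo_entry *log, size_t nlog, uint64_t *committed) {
    nv_flush(log, nlog * sizeof(redo_entry));
    *committed = 1;
    nv_flush(committed, sizeof *committed);
}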

Journal ArticleDOI
TL;DR: This paper presents the design and implementation of a file system on NVRAM called DurableFS, which provides atomicity and durability of file operations to applications, and provides ACID properties to transactions involving multiple files.
Abstract: With the availability of hybrid DRAM and NVRAM memory on the memory bus of CPUs, a number of file systems on NVRAM have been designed and implemented. In this paper we present the design and implementation of a file system on NVRAM called DurableFS, which provides atomicity and durability of file operations to applications. It provides ACID properties to transactions involving multiple files. Due to the byte-level random accessibility of memory, it is possible to provide these guarantees without much overhead. We use standard techniques like copy-on-write for data, and a redo log for metadata changes, to build an efficient file system which provides durability and atomicity guarantees to transactions. Benchmarks on the implementation show that there is only a 7% degradation in performance due to providing these guarantees.

2 citations
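The journal version extends the guarantee from single-file close to ACID transactions spanning multiple files. The abstract does not spell out the programming interface, so the following is only a hypothetical shape such an API could take, reusing the redo-log commit idea from the sketch above; every name in it is illustrative.

#include <stddef.h>
#include <sys/types.h>

/* Hypothetical multi-file transaction interface; illustrative only. */
typedef struct tx tx_t;

tx_t *tx_begin(void);                         /* start a transaction        */
int   tx_write(tx_t *tx, int fd, const void *buf,
               size_t len, off_t off);        /* COW write, recorded in log */
int   tx_commit(tx_t *tx);  /* persist redo log + one commit record, so the
                               updates to all files become visible at once */
void  tx_abort(tx_t *tx);   /* discard COW blocks; originals untouched     */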


Cited by
01 Jan 2016
TL;DR: NOVA, as presented in this paper, is a file system designed to maximize performance on hybrid memory systems while providing strong consistency guarantees for data stored in non-volatile memories (NVMs), keeping separate logs for each inode and storing file data outside the log to minimize log size and reduce garbage collection costs.
Abstract: Fast non-volatile memories (NVMs) will soon appear on the processor memory bus alongside DRAM. The resulting hybrid memory systems will provide software with sub-microsecond, high-bandwidth access to persistent data, but managing, accessing, and maintaining consistency for data stored in NVM raises a host of challenges. Existing file systems built for spinning or solid-state disks introduce software overheads that would obscure the performance that NVMs should provide, but proposed file systems for NVMs either incur similar overheads or fail to provide the strong consistency guarantees that applications require. We present NOVA, a file system designed to maximize performance on hybrid memory systems while providing strong consistency guarantees. NOVA adapts conventional log-structured file system techniques to exploit the fast random access that NVMs provide. In particular, it maintains separate logs for each inode to improve concurrency, and stores file data outside the log to minimize log size and reduce garbage collection costs. NOVA's logs provide metadata, data, and mmap atomicity and focus on simplicity and reliability, keeping complex metadata structures in DRAM to accelerate lookup operations. Experimental results show that in write-intensive workloads, NOVA provides 22% to 216× throughput improvement compared to state-of-the-art file systems, and 3.1× to 13.5× improvement compared to file systems that provide equally strong data consistency guarantees.

9 citations
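The two design points in the abstract, per-inode logs for concurrency and data kept outside the log, can be sketched as data structures. Field names below are illustrative, not NOVA's actual on-media layout.

#include <stdint.h>

/* Each log entry is small: it records where a write landed and points
 * at a data page stored outside the log, so the log stays compact and
 * garbage collection only walks metadata. */
typedef struct log_entry {
    uint64_t file_offset;     /* offset of the write within the file    */
    uint64_t data_page;       /* NVM page holding the data, not the log */
    uint64_t size;
    struct log_entry *next;   /* log is append-only at the tail         */
} log_entry;

/* Each inode owns a private log, so writers to different files never
 * contend on a shared journal. */
typedef struct {
    uint64_t   ino;
    log_entry *head;          /* oldest entry; recovery replays from here */
    log_entry *tail;          /* appends touch only this inode's tail     */
} inode_log;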

Dissertation
01 Sep 2018
TL;DR: From this analysis, a number of alternative storage architectures are proposed and explored, showing that a simpler, more direct path from applications to storage can have a positive impact on efficiency and predictability in such systems.
Abstract: As the speed, size, reliability and power efficiency of non-volatile storage media increases, and the data demands of many application domains grow, operating systems are being put under escalating pressure to provide high-speed access to storage. Traditional models of storage access assume devices to be slow, expecting plenty of slack time in which to process data between requests being serviced, and that all significant variations in timing will be down to the storage device itself. Modern high-speed storage devices break this assumption, causing storage applications to become processor-bound, rather than I/O-bound, in an increasing number of situations. This is especially an issue in real-time embedded systems, where limited processing resources and strict timing and predictability requirements amplify any issues caused by the complexity of the software storage stack. This thesis explores the issues related to accessing high-speed storage from real-time embedded systems, providing a thorough analysis of storage operations based on metrics relevant to the area. From this analysis, a number of alternative storage architectures are proposed and explored, showing that a simpler, more direct path from applications to storage can have a positive impact on efficiency and predictability in such systems.

4 citations
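One well-established example of the "more direct path" from applications to storage that the thesis argues for is direct I/O, which bypasses the kernel page cache. The sketch below is a generic Linux illustration of that idea under that assumption, not the specific architectures proposed in the dissertation.

#define _GNU_SOURCE   /* exposes O_DIRECT in <fcntl.h> on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    void *buf;
    /* O_DIRECT requires sector-aligned buffers, offsets, and sizes. */
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;
    memset(buf, 0x42, 4096);

    /* Needs a file system and device that support direct I/O. */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { free(buf); return 1; }

    ssize_t n = write(fd, buf, 4096); /* goes to the device, not the cache */
    close(fd);
    free(buf);
    return n == 4096 ? 0 : 1;
}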

Proceedings ArticleDOI
28 Mar 2022
TL;DR: This work introduces SafePM, a memory safety mechanism that transparently and comprehensively detects both spatial and temporal memory safety violations for PM-based applications; it is implemented based on the AddressSanitizer compiler pass and integrated with the Persistent Memory Development Kit (PMDK) runtime library.
Abstract: Memory safety violation is a major root cause of reliability and security issues in software systems. Byte-addressable persistent memory (PM), just like its volatile counterpart, is also susceptible to memory safety violations. While there are a couple of decades of work on ensuring memory safety for programs based on volatile memory, the existing approaches are incompatible with PM, since the PM programming model introduces a persistent pointer representation for persistent memory objects and allocators, making it imperative to design a crash-consistent safety mechanism. We introduce SafePM, a memory safety mechanism that transparently and comprehensively detects both spatial and temporal memory safety violations for PM-based applications. SafePM's design builds on a shadow memory approach and augments it with crash-consistent data structures and system operations to ensure memory safety even across system reboots and crashes. We implement SafePM based on the AddressSanitizer compiler pass and integrate it with the Persistent Memory Development Kit (PMDK) runtime library. We evaluate SafePM across three dimensions: overheads, effectiveness, and crash consistency. SafePM overall incurs reasonable overheads while providing comprehensive memory safety, and has uncovered real-world bugs in the widely used PMDK library.

2 citations
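SafePM builds on AddressSanitizer's shadow-memory scheme, in which every 8 bytes of application memory map to one shadow byte describing how many of those bytes are addressable; SafePM's contribution is placing that shadow region on persistent memory and updating it crash-consistently. The sketch below shows only the generic shadow check for a 1-byte access, with illustrative names; the crash-consistency machinery is elided.

#include <stdbool.h>
#include <stdint.h>

#define SHADOW_GRANULARITY 8

/* shadow[i] == 0  -> all 8 bytes of the word are addressable
 * shadow[i] == k  -> only the first k bytes are addressable (0 < k < 8)
 * shadow[i] <  0  -> whole word poisoned (freed memory or redzone)     */
static bool address_ok(const int8_t *shadow, uintptr_t base, uintptr_t addr) {
    uintptr_t off = addr - base;                    /* offset in region */
    int8_t s = shadow[off / SHADOW_GRANULARITY];    /* one byte per 8   */
    if (s == 0) return true;                        /* fully valid      */
    if (s < 0)  return false;                       /* poisoned         */
    return (int8_t)(off % SHADOW_GRANULARITY) < s;  /* partial word     */
}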

Journal ArticleDOI
TL;DR: This paper characterizes the performance of persistent memory devices that use the 3D XPoint technology, in the context of the data acquisition system for one large Particle Physics experiment, DUNE.
Abstract: Emerging high-performance storage technologies are opening up the possibility of designing new distributed data acquisition (DAQ) system architectures, in which the live acquisition of data and their processing are decoupled through a storage element. An example of these technologies is 3D XPoint, which promises to fill the gap between memory and traditional storage and offers unprecedented high throughput for nonvolatile data. In this article, we characterize the performance of persistent memory devices that use the 3D XPoint technology, in the context of the DAQ system for one large Particle Physics experiment, DUNE. This experiment must be capable of storing, upon a specific signal, incoming data for up to 100 s, with a throughput of 1.5 TB/s, for an aggregate size of 150 TB. The modular nature of the apparatus allows splitting the problem into 150 identical units operating in parallel, each at 10 GB/s. The target is to be able to dedicate a single CPU to each of those units for DAQ and storage.

2 citations
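The quoted requirements are internally consistent, as a quick back-of-the-envelope check confirms: 150 units at 10 GB/s each gives the 1.5 TB/s aggregate, and sustaining that for 100 s yields the 150 TB total. A trivial check in C:

#include <stdio.h>

int main(void) {
    const double units = 150.0, per_unit_gb_s = 10.0, window_s = 100.0;
    double aggregate_tb_s = units * per_unit_gb_s / 1000.0; /* = 1.5 TB/s */
    double total_tb = aggregate_tb_s * window_s;            /* = 150 TB   */
    printf("aggregate %.1f TB/s, total %.0f TB\n", aggregate_tb_s, total_tb);
    return 0;
}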

Posted Content
TL;DR: In this article, the authors characterize the performance of persistent memory devices, which use the 3DXPoint technology, in the context of the data acquisition system for one large Particle Physics experiment, DUNE.
Abstract: Emerging high-performance storage technologies are opening up the possibility of designing new distributed data acquisition system architectures, in which the live acquisition of data and their processing are decoupled through a storage element. An example of these technologies is 3DXPoint, which promises to fill the gap between memory and traditional storage and offers unprecedented high throughput for data persistency. In this paper, we characterize the performance of persistent memory devices, which use the 3DXPoint technology, in the context of the data acquisition system for one large Particle Physics experiment, DUNE. This experiment must be capable of storing, upon a specific signal, incoming data for up to 100 seconds, with a throughput of 1.5 TB/s, for an aggregate size of 150 TB. The modular nature of the apparatus allows splitting the problem into 150 identical units operating in parallel, each at 10 GB/s. The target is to be able to dedicate a single CPU to each of those units for data acquisition and storage.