Proceedings ArticleDOI

Fuzzy fairness controller for NVMe SSDs

TL;DR: This work proposes a fuzzy logic-based fairness control mechanism that characterizes the degree of flow intensity of a workload and assigns priorities to the workloads, and observes that the proposed mechanism improves the fairness, weighted speedup, and harmonic speedup of the SSD by 29.84%, 11.24%, and 24.90% on average over the state of the art.
Abstract: Modern NVMe SSDs are widely deployed in diverse domains due to characteristics like high performance, robustness, and energy efficiency. It has been observed that the impact of interference among concurrently running workloads on their overall response time differs significantly in these devices, which leads to unfairness. Workload intensity is a dominant factor influencing the interference. Prior works use a threshold value to characterize a workload as high-intensity or low-intensity; this type of characterization has drawbacks due to a lack of information about the degree of low or high intensity. A data cache in an SSD controller (usually DRAM-based) plays a crucial role in improving device throughput and lifetime. However, the degree of parallelism is limited at this level compared to the SSD back-end consisting of several channels, chips, and planes. Therefore, the impact of interference can be more pronounced at the data cache level. To the best of our knowledge, no prior work has addressed the fairness issue at the data cache level. In this work, we address this issue by proposing a fuzzy logic-based fairness control mechanism. A fuzzy fairness controller characterizes the degree of flow intensity (i.e., the rate at which requests are generated) of a workload and assigns priorities to the workloads. We implement the proposed mechanism in the MQSim framework and observe that our technique improves the fairness, weighted speedup, and harmonic speedup of the SSD by 29.84%, 11.24%, and 24.90%, respectively, on average over the state of the art. The peak gains in fairness, weighted speedup, and harmonic speedup are 2.02x, 29.44%, and 56.30%, respectively.
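
The mechanism itself is implemented inside the MQSim simulator, but the central idea can be illustrated compactly. The Python sketch below is an illustration under invented assumptions, not the authors' implementation: the membership breakpoints, the rule consequents, and all function names are hypothetical. It shows how fuzzy membership captures the degree of flow intensity, in contrast to a hard threshold, and how a small Mamdani-style rule base maps that degree to a scheduling priority.

def triangular(x, a, b, c):
    """Triangular membership function: 0 at a, 1 at b, 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_intensity(reqs_per_ms):
    """Degrees to which a flow counts as low/medium/high intensity
    (breakpoints are invented for illustration)."""
    return {
        "low":    triangular(reqs_per_ms, -1.0, 0.0, 4.0),
        "medium": triangular(reqs_per_ms, 2.0, 6.0, 10.0),
        "high":   triangular(reqs_per_ms, 8.0, 14.0, 100.0),
    }

def priority(reqs_per_ms):
    """Mamdani-style inference with singleton consequents: low-intensity
    flows receive high priority so heavy co-runners cannot starve them;
    defuzzify with a weighted average."""
    m = fuzzify_intensity(reqs_per_ms)
    consequents = {"low": 1.0, "medium": 0.5, "high": 0.1}
    num = sum(m[k] * consequents[k] for k in m)
    den = sum(m.values())
    return num / den if den else 0.0

# A light flow (1 req/ms) keeps full priority; a bursty one (12 req/ms) is throttled.
print(priority(1.0), priority(12.0))  # -> 1.0 0.1

Because a flow near a boundary holds partial membership in two sets at once, its priority degrades gradually rather than flipping at a cutoff, which is precisely the drawback of threshold-based characterization that the abstract points out.
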
Citations
Journal ArticleDOI
TL;DR: In this article, the authors present a methodological survey of cache management policies for the three types of internal caches in SSDs, derive a set of guidelines for a future cache designer, and enumerate a number of future research directions for designing an optimal SSD internal cache management policy.

5 citations

Journal ArticleDOI
TL;DR: This article proposes Justitia, a DRAM-based over-provisioning (OP) cache management mechanism that reduces data cache contention and improves fairness for modern SSDs.
Abstract: Modern NVMe SSDs have been widely deployed in multi-tenant cloud computing environments and multi-programmed systems. When multiple applications concurrently access one SSD, unfairness within the shared SSD can slow down an application significantly and lead to violations of service-level objectives. However, traditional data cache management within SSDs focuses mainly on improving the cache hit ratio, which causes data cache contention and sacrifices fairness among applications. In this paper, we propose a DRAM-based Over-Provisioning (OP) cache management mechanism, named Justitia, to reduce data cache contention and improve fairness for modern SSDs. Justitia consists of two stages: a Static-OP stage and a Dynamic-OP stage. Through the novel OP mechanism in these two stages, Justitia reduces the maximum slowdown by 4.5x on average. At the same time, compared with the traditional shared mechanism, Justitia increases fairness by 20.6x and the buffer hit ratio by 19.6% on average.
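
The two-stage structure lends itself to a compact sketch. The Python below is a hypothetical rendering of the idea rather than Justitia's actual algorithm: the class name, the 20% static reserve, and the 4-slot adjustment step are all invented. It shows a static stage that carves a private reserve per tenant out of the shared cache, and a dynamic stage that shifts reserve capacity toward the tenant suffering the worst slowdown.

class OPCache:
    def __init__(self, total_slots, tenants, static_frac=0.2):
        # Static-OP stage: reserve a fixed private slice per tenant;
        # the remainder stays a shared region.
        reserve = int(total_slots * static_frac / len(tenants))
        self.reserve = {t: reserve for t in tenants}
        self.shared = total_slots - reserve * len(tenants)

    def rebalance(self, slowdown, step=4):
        # Dynamic-OP stage: grow the reserve of the most-slowed tenant
        # at the expense of the least-slowed one, trading a little hit
        # ratio for bounded unfairness.
        victim = max(slowdown, key=slowdown.get)
        donor = min(slowdown, key=slowdown.get)
        moved = min(step, self.reserve[donor])
        self.reserve[donor] -= moved
        self.reserve[victim] += moved

cache = OPCache(total_slots=1024, tenants=["A", "B"])
cache.rebalance({"A": 3.2, "B": 1.1})  # tenant A suffers; enlarge its reserve
print(cache.reserve)  # -> {'A': 106, 'B': 98}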

2 citations

Journal ArticleDOI
TL;DR: The proposed write-related and read-related DRAM allocation strategy inside solid-state drives (SSDs) reduces NAND flash reads/writes more than other methods, thereby improving response time.
Abstract: Although NAND flash memory has the advantages of small size, low power consumption, shock resistance, and fast access speed, it still faces the problems of "out-of-place updates," "garbage collection," and "unbalanced execution time" due to its hardware limitations. Usually, a flash translation layer (FTL) maintains the mapping cache (in limited DRAM space) to store frequently accessed address mappings for "out-of-place updates," and maintains the read/write buffer (in limited DRAM space) to store frequently accessed data for "garbage collection" and "unbalanced execution time." In this article, we propose a write-related and read-related DRAM allocation strategy inside solid-state drives (SSDs). The design idea behind the write-related DRAM allocation method is to calculate the suitable DRAM allocation for the write buffer and the write mapping cache by building a statistical model with a minimum expected number of writes to NAND flash memory. To further reduce reads in NAND flash memory, the design idea behind the read-related DRAM allocation method is to adopt a cost-benefit policy that reallocates the proper DRAM space from the write buffer and the write mapping cache to the read buffer and the read mapping cache, respectively. The experimental results demonstrate that the proposed write-related and read-related DRAM allocation strategy reduces NAND flash reads/writes more than other methods, thereby improving response time.
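
The read-related reallocation step is essentially a cost-benefit loop. The Python sketch below is a hypothetical illustration, not the paper's statistical model: the function names and the toy estimators are invented. It moves DRAM pages from the write side to the read side only while the estimated NAND reads saved by the next page exceed the extra NAND writes that shrinking the write side would cause.

def reallocate(total_pages, extra_writes_per_page, reads_saved_per_page):
    """Greedy cost-benefit split of DRAM pages between the write side
    (write buffer + write mapping cache) and the read side (read buffer +
    read mapping cache)."""
    write_pages, read_pages = total_pages, 0
    while write_pages > 0:
        cost = extra_writes_per_page(write_pages - 1)   # marginal NAND writes added
        benefit = reads_saved_per_page(read_pages + 1)  # marginal NAND reads avoided
        if benefit <= cost:  # the next page no longer pays off; stop
            break
        write_pages -= 1
        read_pages += 1
    return write_pages, read_pages

# Toy estimators: shrinking the write side hurts more as it gets small,
# and the read side shows diminishing returns.
print(reallocate(
    1024,
    extra_writes_per_page=lambda w: 1.0 / max(w, 1),
    reads_saved_per_page=lambda r: 0.5 / r,
))
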
Journal ArticleDOI
TL;DR: In this article, Wang et al. propose a conflict-free (CF) lane to eliminate conflicts by dividing I/O requests into conflict-free PU queues based on physical addresses, which correspond to the PU resources within NVMe SSDs.
References
Proceedings ArticleDOI
01 Dec 2009
TL;DR: Fuzzy logic, a powerful and efficient technique for tackling engineering problems, is used to construct an engine that significantly improves the block replacement decision in contemporary cache systems.
Abstract: As CPUs are becoming significantly faster, it is important that the performance of the memory system keeps pace; otherwise, the speed of the overall system will be compromised by the memory system bottleneck. A key component in bridging such a gap is to reconsider the cache memory. Most existing cache system designs base their replacement decisions on just one parameter: the access recency or frequency. Many efforts have been made to combine such parameters in the most desirable way using mathematical relations. However, as different workloads have different characteristics, it is not possible to express the relation of such parameters with exact mathematical formulas. Fuzzy logic, a very powerful and efficient technique for tackling engineering problems [1], is used to construct an engine which significantly improves the block replacement decision in contemporary cache systems. In this paper, we describe the detailed implementation of a fuzzy logic cache replacement engine. Our inputs to the underlying system are the block access recency, frequency, and block reuse distance, from which a dismissal index is generated indicating the probability of a block being swapped out.
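
A compact sketch of such an engine follows; the two-rule base, the membership breakpoints, and the function names are assumptions made for illustration, not the implementation described in the paper. Each block's recency, frequency, and reuse distance are fuzzified, the rules fire, and the block with the highest dismissal index is evicted.

def grade(x, lo, hi):
    """Linear membership: 0 at lo, rising to 1 at hi (clamped)."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def dismissal_index(recency, frequency, reuse_distance):
    # Fuzzify the three inputs (breakpoints invented for illustration).
    stale    = grade(recency, 10, 100)         # long since last access
    cold     = 1.0 - grade(frequency, 1, 20)   # rarely accessed
    far_next = grade(reuse_distance, 50, 500)  # next reuse is far away
    # Two Mamdani-style rules, combined with max (fuzzy OR):
    #   R1: IF stale AND cold THEN dismiss        (AND = min)
    #   R2: IF reuse distance is far THEN dismiss
    return max(min(stale, cold), far_next)

blocks = {
    "b1": (120, 2, 600),  # stale, cold, far next reuse: prime eviction candidate
    "b2": (5, 18, 40),    # hot block: keep
}
victim = max(blocks, key=lambda b: dismissal_index(*blocks[b]))
print(victim)  # -> b1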

5 citations


"Fuzzy fairness controller for NVMe ..." refers background or methods in this paper

  • ...[T]hey have been used in diverse fields like computer architecture [6], computer networks [21], computer vision [29] and design of medical expert systems [9]....

  • ...We use a Mamdani-based fuzzy system because of its widespread acceptance and intuitiveness [6, 32]....

  • ...In [6], block reuse distance along with access frequency and age are used as inputs of the Mamdani-based fuzzy system....

  • ...[6, 30] have also proposed fuzzy replacement policies for CPU caches....
