Proceedings ArticleDOI

Fuzzy fairness controller for NVMe SSDs

TL;DR: This work proposes a fuzzy logic-based fairness control mechanism that characterizes the degree of flow intensity of a workload and assigns priorities to the workloads accordingly. The proposed mechanism improves the fairness, weighted speedup, and harmonic speedup of the SSD by 29.84%, 11.24%, and 24.90% on average over the state of the art.
Abstract: Modern NVMe SSDs are widely deployed in diverse domains due to characteristics like high performance, robustness, and energy efficiency. It has been observed that the impact of interference among concurrently running workloads on their overall response time differs significantly in these devices, which leads to unfairness. Workload intensity is a dominant factor influencing the interference. Prior works use a threshold value to characterize a workload as high-intensity or low-intensity; this type of characterization has drawbacks because it carries no information about the degree of low or high intensity. A data cache in an SSD controller - usually based on DRAM - plays a crucial role in improving device throughput and lifetime. However, the degree of parallelism is limited at this level compared to the SSD back-end consisting of several channels, chips, and planes. Therefore, the impact of interference can be more pronounced at the data cache level. To the best of our knowledge, no prior work has addressed the fairness issue at the data cache level. In this work, we address this issue by proposing a fuzzy logic-based fairness control mechanism. A fuzzy fairness controller characterizes the degree of flow intensity (i.e., the rate at which requests are generated) of a workload and assigns priorities to the workloads. We implement the proposed mechanism in the MQSim framework and observe that our technique improves the fairness, weighted speedup, and harmonic speedup of the SSD by 29.84%, 11.24%, and 24.90% on average over the state of the art, respectively. The peak gains in fairness, weighted speedup, and harmonic speedup are 2.02x, 29.44%, and 56.30%, respectively.
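
To make the idea concrete, here is a minimal sketch of a fuzzy intensity-to-priority mapping in the spirit of the abstract. It is not the authors' implementation: the membership breakpoints, rule outputs, and Sugeno-style weighted-average defuzzification are all illustrative assumptions.

```python
# Minimal sketch: fuzzify a workload's flow intensity and derive a
# cache priority. All breakpoints and rule outputs are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flow_intensity_priority(reqs_per_ms):
    """Map a workload's request rate to a cache priority in [0, 1].

    Assumed fuzzy sets over intensity:
      low    -> triangle(-1, 0, 50)
      medium -> triangle(25, 50, 75)
      high   -> triangle(50, 100, 101)
    Rule base: low-intensity flows get high priority (shielding them
    from interference); high-intensity flows get low priority.
    """
    low = tri(reqs_per_ms, -1, 0, 50)
    med = tri(reqs_per_ms, 25, 50, 75)
    high = tri(reqs_per_ms, 50, 100, 101)
    # Singleton output level per rule (Sugeno-style, for simplicity).
    rules = [(low, 0.9), (med, 0.5), (high, 0.1)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5

for rate in (5, 50, 95):
    print(f"intensity={rate:3d} req/ms -> priority={flow_intensity_priority(rate):.2f}")
```

A workload whose request rate falls between two sets receives a blended priority, which is precisely the graded characterization that a single high/low threshold cannot express.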
Citations
Journal ArticleDOI
TL;DR: In this article, the authors present a methodological survey of cache management policies for the three types of internal caches in SSDs, derive a set of guidelines for a future cache designer, and enumerate a number of future research directions for designing an optimal SSD internal cache management policy.

5 citations

Journal ArticleDOI
TL;DR: A DRAM-based Over-Provisioning (OP) cache management mechanism, named Justitia, to reduce data cache contention and improve fairness for modern SSDs is proposed.
Abstract: Modern NVMe SSDs have been widely deployed in multi-tenant cloud computing environments and multi-programmed systems. When multiple applications concurrently access one SSD, unfairness within the shared SSD can slow down an application significantly and lead to violations of service-level objectives. However, traditional data cache management within SSDs mainly focuses on improving the cache hit ratio, which causes data cache contention and sacrifices fairness among applications. In this paper, we propose a DRAM-based Over-Provisioning (OP) cache management mechanism, named Justitia, to reduce data cache contention and improve fairness for modern SSDs. Justitia consists of two stages: a Static-OP stage and a Dynamic-OP stage. Through the novel OP mechanism in these two stages, Justitia reduces the maximum slowdown by 4.5x on average. At the same time, Justitia increases fairness by 20.6x and the buffer hit ratio by 19.6% on average, compared with the traditional shared mechanism.

2 citations
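
The two-stage OP mechanism can be illustrated with a toy slot allocator; the reserve and cap fractions below are assumptions for illustration, not Justitia's tuned parameters.

```python
class OPCache:
    """Toy model: 'capacity' cache slots, a static reserve per tenant,
    and a dynamic pool capped per tenant (fractions are assumptions)."""

    def __init__(self, capacity, tenants, static_frac=0.1, dyn_cap_frac=0.5):
        self.reserve = int(capacity * static_frac)          # per-tenant reserve
        self.dyn_pool = capacity - self.reserve * len(tenants)
        self.dyn_cap = int(self.dyn_pool * dyn_cap_frac)    # per-tenant dynamic cap
        self.used = {t: 0 for t in tenants}                 # slots held per tenant
        self.dyn_used = {t: 0 for t in tenants}

    def try_allocate(self, tenant):
        """Grant one cache slot to 'tenant' if the fairness limits allow."""
        if self.used[tenant] < self.reserve:                # static OP stage
            self.used[tenant] += 1
            return True
        if (self.dyn_used[tenant] < self.dyn_cap            # dynamic OP stage
                and sum(self.dyn_used.values()) < self.dyn_pool):
            self.used[tenant] += 1
            self.dyn_used[tenant] += 1
            return True
        return False                                        # must evict own data

cache = OPCache(capacity=100, tenants=["A", "B"])
grants = sum(cache.try_allocate("A") for _ in range(100))
print(f"tenant A could take {grants} of 100 slots")  # capped well below 100
```

The point of the reserve-plus-cap structure is that a bursty tenant can never evict another tenant's statically reserved slots, which is what bounds the maximum slowdown.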

Journal ArticleDOI
TL;DR: The proposed write-related and read-related DRAM allocation strategy inside solid-state drives (SSDs) reduces reads/writes in NAND flash memory more than other methods, thereby improving response time.
Abstract: Although NAND flash memory has the advantages of small size, low power consumption, shock resistance, and fast access speed, it still faces the problems of "out-of-place updates," "garbage collection," and "unbalanced execution time" due to its hardware limitations. Usually, a flash translation layer (FTL) maintains a mapping cache (in limited DRAM space) to store frequently accessed address mappings for "out-of-place updates," and a read/write buffer (in limited DRAM space) to store frequently accessed data for "garbage collection" and "unbalanced execution time." In this article, we propose a write-related and read-related DRAM allocation strategy inside solid-state drives (SSDs). The idea behind the write-related DRAM allocation method is to calculate the suitable DRAM allocation for the write buffer and the write mapping cache by building a statistical model that minimizes the expected number of writes to NAND flash memory. To further reduce reads from NAND flash memory, the idea behind the read-related DRAM allocation method is to adopt a cost-benefit policy that reallocates DRAM space from the write buffer and the write mapping cache to the read buffer and the read mapping cache, respectively. The experimental results demonstrate that the proposed strategy reduces reads/writes in NAND flash memory more than other methods, improving response time.
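
The cost-benefit reallocation can be illustrated with a greedy toy model; the diminishing-returns curves and region names below are assumed numbers for illustration, not the paper's statistical model.

```python
# Greedy sketch: give each DRAM page to the cache region where it
# saves the most flash operations. Benefit curves are assumptions.

def reallocate(regions, total_pages):
    """Assign DRAM pages one at a time to the region with the highest
    marginal benefit (expected flash reads/writes avoided)."""
    alloc = {name: 0 for name in regions}
    for _ in range(total_pages):
        best = max(regions, key=lambda n: regions[n][alloc[n]]
                   if alloc[n] < len(regions[n]) else 0)
        alloc[best] += 1
    return alloc

# Assumed diminishing-returns curves: entry i is the number of flash
# operations saved by giving that region its (i+1)-th DRAM page.
regions = {
    "write_buffer":    [9, 7, 5, 3, 1, 1, 0, 0],
    "write_map_cache": [6, 4, 2, 1, 0, 0, 0, 0],
    "read_buffer":     [8, 8, 6, 4, 2, 1, 0, 0],
    "read_map_cache":  [5, 3, 2, 1, 0, 0, 0, 0],
}
print(reallocate(regions, total_pages=10))
```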
Journal ArticleDOI
TL;DR: Wang et al. propose a conflict-free (CF) lane that eliminates conflicts by dividing I/O requests into conflict-free parallel-unit (PU) queues based on physical addresses, which correspond to the PU resources within NVMe SSDs.
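
The CF-lane idea reduces to routing each request into a per-PU queue keyed by its physical address; the channel/chip geometry and address striping in this sketch are illustrative assumptions.

```python
# Sketch: per-PU queues so requests to different parallel units never
# queue behind one another. Geometry and striping are assumptions.
from collections import deque

CHANNELS, CHIPS_PER_CHANNEL = 4, 2
NUM_PUS = CHANNELS * CHIPS_PER_CHANNEL

pu_queues = [deque() for _ in range(NUM_PUS)]

def pu_of(physical_page):
    """Assumed striping: channel bits first, then chip bits."""
    channel = physical_page % CHANNELS
    chip = (physical_page // CHANNELS) % CHIPS_PER_CHANNEL
    return channel * CHIPS_PER_CHANNEL + chip

def submit(request_id, physical_page):
    pu_queues[pu_of(physical_page)].append(request_id)

for rid, page in enumerate([0, 1, 4, 8, 9]):
    submit(rid, page)
print([len(q) for q in pu_queues])  # requests spread across PU queues
```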
References
01 Jan 2006
TL;DR: A fuzzy algorithm in which the decision parameters are treated as fuzzy variables is proposed and the results are compared with Optimal, LRU and LFU replacement algorithms.
Abstract: Most research concerning uniform caching bases its replacement decision on just one parameter. In some cases this parameter may not perform well because of the workload characteristics. Other approaches use more than one parameter; in that case, finding the relation between these parameters and how to combine them is another problem. A number of algorithms try to combine their decision parameters with mathematical equations, but since different workloads have different characteristics, it is not possible to express the parameters' relation with an exact mathematical formula. In real-world situations, it is often more realistic to find viable compromises between these parameters; for many problems, it makes sense to partially consider each of them. One especially straightforward way to achieve this is to model the parameters through fuzzy logic. This paper proposes a fuzzy algorithm in which the decision parameters are treated as fuzzy variables. A simulation is performed and the results are compared with the Optimal, LRU, and LFU replacement algorithms; the latter two are the most commonly used algorithms for cache-object replacement, and the first is a theoretical optimum. It is concluded that the proposed fuzzy approach is very promising and has the potential to be considered for future research.

16 citations


"Fuzzy fairness controller for NVMe ..." refers background or methods in this paper

  • ...[6, 30] have also proposed fuzzy replacement policies for CPU caches....

    [...]

  • ...A Sugeno-based fuzzy system is employed in [30] to determine the swap priority of a block-based on block reuse distance, age, and access frequency....

    [...]
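
The Sugeno-style scoring described in the excerpt above can be sketched compactly. This is an illustrative reconstruction, not the policy from [30]: every membership breakpoint and rule constant below is an assumption.

```python
# Zero-order Sugeno (TSK) sketch: fuzzify reuse distance, age, and
# access frequency, fire a few rules, and take the weighted average of
# constant outputs. All breakpoints and constants are assumptions.

def up(x, a, b):    # 0 below a, 1 above b, linear in between
    return min(1.0, max(0.0, (x - a) / (b - a)))

def down(x, a, b):  # mirror of 'up'
    return 1.0 - up(x, a, b)

def swap_priority(reuse_distance, age, frequency):
    """Higher value -> better eviction (swap) candidate."""
    far  = up(reuse_distance, 100, 1000)   # unlikely to be reused soon
    old  = up(age, 50, 500)                # resident for a long time
    cold = down(frequency, 2, 20)          # rarely accessed
    hot  = 1.0 - cold
    # Rule base: (firing strength, constant output).
    rules = [
        (min(far, old, cold), 1.0),                         # evict first
        (min(far, hot),       0.6),                         # evict reluctantly
        (min(down(reuse_distance, 100, 1000), hot), 0.1),   # keep
    ]
    den = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / den if den else 0.5

print(f"{swap_priority(1500, 600, 1):.2f}")   # cold, old, far   -> high
print(f"{swap_priority(50, 40, 30):.2f}")     # hot, young, near -> low
```

Keeping the rule base this small is also what the citing paper alludes to at [14] below: fewer rules mean lower inference time.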

Journal ArticleDOI
TL;DR: Simulation with real workload shows that the proposed scheme significantly outperforms the representative SSD buffer management schemes in terms of hit ratio and throughput.
Abstract: As flash memory becomes popular, the flash memory-based solid-state drive (SSD) has become a major storage device. SSDs have numerous merits such as high I/O speed, low energy consumption, strong shock resistance, and small form factor. Meanwhile, some shortcomings still exist, including erase-before-write and different costs for read, write, and erase operations. Aiming at efficient buffer management for SSDs, this paper proposes a novel approach based on the particle swarm optimization (PSO) algorithm. The PSO algorithm is used to estimate the Predict Hot Fitness value of each logical page in the buffer to correctly identify it as either hot or cold by properly reflecting spatial and temporal locality. The pages predicted as hot are kept in the buffer to maximize the hit ratio and utilization of the SSD buffer. Simulation with real workloads shows that the proposed scheme significantly outperforms representative SSD buffer management schemes in terms of hit ratio and throughput.

16 citations


"Fuzzy fairness controller for NVMe ..." refers background in this paper

  • ...Data cache management in SSD: A number of research works on data cache management [11, 17, 37, 38] focus on cache space management to improve SSD throughput and lifetime....

    [...]
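
The PSO-based hot/cold identification above lends itself to a small sketch; the sample data, fitness definition, and swarm parameters are all illustrative assumptions rather than the paper's formulation.

```python
# PSO sketch: tune weights that combine recency and frequency into a
# hotness score, maximizing agreement with observed re-accesses.
import random

random.seed(1)
# Assumed (recency, frequency, was_reaccessed) samples for buffered pages.
samples = [(0.9, 0.8, 1), (0.8, 0.9, 1), (0.2, 0.3, 0),
           (0.1, 0.2, 0), (0.7, 0.2, 1), (0.2, 0.8, 0)]

def fitness(w):
    """Count pages whose hotness score predicts re-access correctly."""
    wr, wf = w
    return sum((wr * r + wf * f > 0.5) == bool(hot)
               for r, f, hot in samples)

def pso(n_particles=10, iters=30, inertia=0.7, c1=1.5, c2=1.5):
    pos = [[random.random(), random.random()] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    gbest = max(pbest, key=fitness)                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=fitness)
    return gbest

w = pso()
print(f"weights={w[0]:.2f},{w[1]:.2f} correct={fitness(w)}/{len(samples)}")
```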

Proceedings Article
09 Jul 2018
TL;DR: This paper proposes a utilitarian performance isolation (UPI) scheme for shared SSD settings that exploits SSD's abundant parallelism to maximize the utility of all tenants while providing performance isolation.
Abstract: This paper proposes a utilitarian performance isolation (UPI) scheme for shared SSD settings. UPI exploits SSD's abundant parallelism to maximize the utility of all tenants while providing performance isolation. Our approach is in contrast to static resource partitioning techniques that bind parallelism, isolation, and capacity altogether. We demonstrate that our proposed scheme reduces the 99th percentile response time by 38.5% for a latency-critical workload, and the average response time by 16.1% for a high-throughput workload compared to the static approaches.

13 citations


"Fuzzy fairness controller for NVMe ..." refers background in this paper

  • ...In [4, 12, 16, 18], partitioning of back-end resources like channel, chip, blocks among the workloads is suggested....

    [...]

  • ...Performance isolation: Researchers have also explored partitioning the SSD resources to provide performance isolation to the concurrent workloads [4, 12, 16, 18, 20]....

    [...]
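
A toy allocator can illustrate the contrast UPI draws with static partitioning; the tenant demands and the unmet-demand utility below are assumptions for illustration only.

```python
# Sketch: static partitioning reserves channels whether needed or not;
# a utility-driven allocator lends idle channels to whoever benefits.

def allocate(channels, demands):
    """Give each tenant its demand up to a fair share, then hand spare
    channels to whoever still has unmet demand (utility = unmet demand)."""
    fair = channels // len(demands)
    alloc = {t: min(d, fair) for t, d in demands.items()}
    spare = channels - sum(alloc.values())
    while spare > 0:
        unmet = {t: demands[t] - alloc[t] for t in demands}
        best = max(unmet, key=unmet.get)
        if unmet[best] <= 0:
            break
        alloc[best] += 1
        spare -= 1
    return alloc

demands = {"latency_critical": 2, "throughput": 7}   # channels wanted now
print("static :", {t: 8 // 2 for t in demands})      # 4 each, 2 idle
print("dynamic:", allocate(8, demands))              # spare goes to throughput
```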

Proceedings ArticleDOI
30 May 1991

13 citations


"Fuzzy fairness controller for NVMe ..." refers background in this paper

  • ...This gives lower inference time as compared to a fuzzy system with a broad rule base [14]....

    [...]

Journal ArticleDOI
TL;DR: This article identifies two types of interference, namely, queuing delay (QD) interference and garbage collection (GC) interference, in a shared SSD and proposes a framework called VSSD, which is effective in eliminating the interference and achieving performance isolation between users.
Abstract: Performance isolation is critical in shared storage systems, a popular storage solution. In a shared storage system, interference between requests from different users can affect the accuracy of I/O cost accounting, resulting in poor performance isolation. Recently, NAND flash-memory-based solid-state drives (SSDs) have been increasingly used in shared storage systems. However, interference in SSD-based shared storage systems has not been addressed. In this article, two types of interference, namely, queuing delay (QD) interference and garbage collection (GC) interference, are identified in a shared SSD. Additionally, a framework called VSSD is proposed to address these types of interference. VSSD is composed of two components: the FACO credit-based I/O scheduler, designed to address QD interference, and the ViSA flash translation layer, designed to address GC interference. The VSSD framework aims to be implemented in the firmware running on an SSD controller. With VSSD, interference in an SSD can be eliminated and performance isolation can be ensured. Both synthetic and application workloads are used to evaluate the effectiveness of the proposed VSSD framework. The performance results show the following. First, QD and GC interference exists and can result in poor performance isolation between users on SSD-based shared storage systems. Second, VSSD is effective in eliminating the interference and achieving performance isolation between users. Third, the overhead of VSSD is insignificant.

11 citations


"Fuzzy fairness controller for NVMe ..." refers background in this paper

  • ...In [4, 12, 16, 18], partitioning of back-end resources like channel, chip, blocks among the workloads is suggested....

    [...]

  • ...Performance isolation: Researchers have also explored partitioning the SSD resources to provide performance isolation to the concurrent workloads [4, 12, 16, 18, 20]....

    [...]
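
A credit-based scheduler in the spirit of FACO can be sketched as follows; the per-request costs, shares, and replenishment budget are illustrative assumptions, and VSSD's actual accounting is more involved.

```python
# Sketch: each user accrues credits in proportion to its share, and a
# request is dispatched only when its user can pay the request's
# estimated cost. All numbers below are assumptions.
from collections import deque

class CreditScheduler:
    def __init__(self, shares):
        self.shares = shares                       # user -> weight
        self.credits = {u: 0.0 for u in shares}
        self.queues = {u: deque() for u in shares}

    def submit(self, user, cost):
        self.queues[user].append(cost)

    def tick(self, budget=10.0):
        """Distribute 'budget' credits by share, then dispatch requests
        whose users can afford them; returns dispatched (user, cost)."""
        total = sum(self.shares.values())
        for u in self.shares:
            self.credits[u] += budget * self.shares[u] / total
        done = []
        for u, q in self.queues.items():
            while q and self.credits[u] >= q[0]:
                cost = q.popleft()
                self.credits[u] -= cost
                done.append((u, cost))
        return done

sched = CreditScheduler({"user_a": 1, "user_b": 1})
for _ in range(4):
    sched.submit("user_a", cost=8.0)   # heavy requests (e.g., GC-prone writes)
    sched.submit("user_b", cost=2.0)   # light reads
for step in range(3):
    print(f"tick {step}:", sched.tick())
```

Because the heavy user must save up credits across ticks while the light user proceeds, queuing-delay interference between them is bounded by the share ratio rather than by arrival order.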