
Showing papers by "Jack K. Wolf published in 2009"


Proceedings ArticleDOI
12 Dec 2009
TL;DR: This work empirically characterizes flash memory technology from five manufacturers by directly measuring performance, power, and reliability, and demonstrates that performance varies significantly across vendors and devices and often deviates from publicly available datasheets.
Abstract: Despite flash memory's promise, it suffers from many idiosyncrasies such as limited durability, data integrity problems, and asymmetry in operation granularity. As architects, we aim to find ways to overcome these idiosyncrasies while exploiting flash memory's useful characteristics. To be successful, we must understand the trade-offs between the performance, cost (in both power and dollars), and reliability of flash memory. In addition, we must understand how different usage patterns affect these characteristics. Flash manufacturers provide only conservative guidelines about these metrics, and this lack of detail makes it difficult to design systems that fully exploit flash memory's capabilities. We have empirically characterized flash memory technology from five manufacturers by directly measuring the performance, power, and reliability. We demonstrate that performance varies significantly across vendors and devices, and often deviates from publicly available datasheets. We also demonstrate and quantify some unexpected device characteristics and show how we can use them to improve the responsiveness and energy consumption of solid state disks by 44% and 13%, respectively, as well as increase flash device lifetime by 5.2x.

483 citations


Proceedings ArticleDOI
28 Jun 2009
TL;DR: The best previously known flash codes achieve a write deficiency of O(qk²), while the best known lower bound is Ω(qk); this paper presents constructions whose deficiency is O(qk log k) when q ≥ log₂ k and at most O(k log² k) otherwise, where k is the number of stored information bits.
Abstract: Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes—known as floating codes or flash codes—have been designed in order to maximize the number of times that information stored in a flash memory can be written (and re-written) prior to incurring a block erasure. An (n, k, t)_q flash code ℂ is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes (where a write is a transition 0 → 1 or 1 → 0 in any one of the k bits) can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q−1), and the write deficiency of ℂ, defined as δ(ℂ) = n(q−1)−t, is a measure of how close the code comes to perfectly utilizing all these transitions. For k > 6 and large n, the best previously known construction of flash codes achieves a write deficiency of O(qk²). On the other hand, the best known lower bound on write deficiency is Ω(qk). In this paper, we present a new construction of flash codes that approaches this lower bound to within a factor logarithmic in k. To this end, we first improve upon the so-called “indexed” flash codes, due to Jiang and Bruck, by eliminating the need for index cells in the Jiang-Bruck construction. Next, we further increase the number of writes by introducing a new multi-stage (recursive) indexing scheme. We then show that the write deficiency of the resulting flash codes is O(qk log k) if q ≥ log₂ k, and at most O(k log² k) otherwise.
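
As a worked illustration of the write-deficiency measure defined above, the following sketch compares a naive code that dedicates one cell per bit against the n(q−1) transitions ideally available. The dedicated-cell scheme and the function names are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of the write-deficiency measure delta(C) = n(q-1) - t.
# The "dedicated cell" scheme (bit i stored as the parity of cell i's level)
# is an illustrative strawman, not the construction in the paper.

def trivial_flash_code_t(q: int) -> int:
    """Guaranteed writes t for the dedicated-cell scheme.

    In the worst case every write toggles the same bit, so only one
    cell's q - 1 level transitions are usable before an erase is forced.
    """
    return q - 1

def write_deficiency(n: int, q: int, t: int) -> int:
    # delta(C) = n(q-1) - t, per the definition in the abstract.
    return n * (q - 1) - t

k, q = 8, 16
n = k                                  # one cell per information bit
t = trivial_flash_code_t(q)
print(write_deficiency(n, q, t))       # (k-1)(q-1) = 105 wasted transitions
```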

56 citations


Posted Content
TL;DR: This article considers the problem of designing multidimensional flash codes, in which k bits are stored using a block of n cells with q levels each, with the goal of maximizing the number of bit writes before an erase operation is required.
Abstract: Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different levels corresponding to the number of electrons it contains. Increasing the cell level is easy; however, reducing a cell level forces all the other cells in the same block to be erased. This erasing operation is undesirable and therefore has to be used as infrequently as possible. We consider the problem of designing codes for this purpose, where k bits are stored using a block of n cells with q levels each. The goal is to maximize the number of bit writes before an erase operation is required. We present an efficient construction of codes that can store an arbitrary number of bits. Our construction can be viewed as an extension to multiple dimensions of the earlier work of Jiang and Bruck, where single-dimensional codes that can store only 2 bits were proposed.
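
To make the single-dimensional 2-bit idea concrete, here is a minimal simulation sketch of a two-ended floating code in the spirit of the Jiang-Bruck codes referenced above: bit-0 writes raise cell levels from the left end of the block and bit-1 writes from the right, and each bit's value is the parity of its write count. The class and its explicit frontier pointers are illustrative Python state; in an actual floating code the state must be decodable from the cell levels alone.

```python
# A toy two-ended floating code for k = 2 bits in n cells with q levels.
class TwoEndedFloatingCode:
    def __init__(self, n: int, q: int):
        self.levels = [0] * n
        self.q = q
        self.left = 0            # frontier cell for bit-0 writes
        self.right = n - 1       # frontier cell for bit-1 writes

    def write(self, bit: int) -> bool:
        """Record one toggle of `bit`; return False if an erase is needed."""
        if bit == 0:
            if self.levels[self.left] == self.q - 1:
                self.left += 1   # frontier cell full, advance rightward
            if self.left >= self.right:
                return False     # fronts met: block must be erased
            self.levels[self.left] += 1
        else:
            if self.levels[self.right] == self.q - 1:
                self.right -= 1  # frontier cell full, advance leftward
            if self.right <= self.left:
                return False
            self.levels[self.right] += 1
        return True

    def read(self) -> tuple:
        # Each bit is the parity of the total level raised from its end.
        bit0 = sum(self.levels[: self.left + 1]) % 2
        bit1 = sum(self.levels[self.right:]) % 2
        return bit0, bit1
```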

38 citations


Proceedings ArticleDOI
30 Sep 2009
TL;DR: The cascaded BSC-BAWGN channel belongs to a family of binary-input, memoryless, symmetric-output channels, called the {CBMSC(p, σ)} family; the performance of LDPC codes over this channel is analyzed by characterizing the decodable region of an ensemble of LDPC codes.
Abstract: We study the performance of LDPC codes over the cascaded BSC-BAWGN channel. This channel belongs to a family of binary-input, memoryless, symmetric-output channels, one that we call the {CBMSC(p, σ)} family. We analyze the belief propagation (BP) decoder over this channel by characterizing the decodable region of an ensemble of LDPC codes. We then give inner and outer bounds for this decodable region based on existing universal bounds on the performance of a BP decoder. We numerically evaluate the decodable region using density evolution. We also propose other message-passing schemes of interest and give their decodable regions. The performance of each proposed decoder over the CBMS channel family is evaluated through simulations. Finally, we explore capacity-approaching LDPC code ensembles for the {CBMSC(p, σ)} family.
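
The cascaded channel itself is easy to state in code. The sketch below (function names are assumptions, not from the paper) passes a BPSK symbol through BSC(p) followed by additive Gaussian noise, and computes the exact channel log-likelihood ratio a BP decoder would take as its input message; marginalizing over the flip makes each conditional density a two-component Gaussian mixture.

```python
# A minimal model of the cascaded BSC-BAWGN channel: a BPSK symbol
# x in {+1, -1} is flipped with probability p, then Gaussian noise of
# standard deviation sigma is added.
import math
import random

def cascaded_channel(x: int, p: float, sigma: float) -> float:
    if random.random() < p:
        x = -x                            # BSC flip
    return x + random.gauss(0.0, sigma)   # BAWGN stage

def cascade_llr(y: float, p: float, sigma: float) -> float:
    """Channel LLR log P(y | x=+1) / P(y | x=-1) for the cascade.

    With L = 2y / sigma^2 (the plain BAWGN LLR), averaging each
    Gaussian over the flip probability gives the mixture below.
    """
    L = 2.0 * y / sigma**2
    return math.log(((1.0 - p) * math.exp(L) + p)
                    / (p * math.exp(L) + (1.0 - p)))
```

Note that as p → 0 the expression reduces to the familiar BAWGN LLR 2y/σ², while p > 0 saturates the LLR magnitude, which is what degrades the decodable region.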

20 citations


Posted Content
TL;DR: It is shown that coding can significantly reduce the number of block erasures required for data movement, and several optimal or nearly optimal data-movement algorithms based upon ideas from coding theory and combinatorics are presented.
Abstract: Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell is implemented as either a NAND or a NOR floating-gate device. NAND flash is currently the most widely used type of flash memory. In a NAND flash memory, every block of cells consists of numerous pages; rewriting even a single page requires the whole block to be erased and reprogrammed. Block erasures determine both the longevity and the efficiency of a flash memory. Therefore, when data in a NAND flash memory are reorganized, minimizing the total number of block erasures required to achieve the desired data movement is an important goal. This leads to the flash data movement problem studied in this paper. We show that coding can significantly reduce the number of block erasures required for data movement, and present several optimal or nearly optimal data-movement algorithms based upon ideas from coding theory and combinatorics. In particular, we show that sorting-based (non-coding) schemes require Ω(n log n) erasures to move data among n blocks, whereas coding-based schemes require only O(n) erasures. Furthermore, coding-based schemes use only one auxiliary block, which is the best possible, and achieve a good balance between the number of erasures in each of the n+1 blocks.
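
A two-block toy instance shows why coding helps: to exchange pages across two full two-page blocks using one blank auxiliary block, storing XOR pages in the auxiliary block lets every page of an erased block be rebuilt from blocks still intact. The page values and helper names below are illustrative assumptions, not the paper's algorithm.

```python
# Toy data-movement instance: blocks A = [a1, a2] and B = [b1, b2] must
# become A' = [a1, b1] and B' = [a2, b2] using one blank auxiliary block S.
# Pages are modeled as ints; real pages would be byte arrays.

def move_with_coding(a1, a2, b1, b2):
    erasures = 0
    S = [a1 ^ b2, a2 ^ b1]          # program coded pages (no erasure needed)

    # Erase A; a1 is rebuilt as S[0] ^ b2 because B is still intact.
    erasures += 1
    A = [S[0] ^ b2, b1]             # A' = [a1, b1]

    # Erase B; a2 = S[1] ^ b1 and b2 = S[0] ^ a1, both read from intact
    # blocks A and S rather than from the erased B.
    erasures += 1
    B = [S[1] ^ A[1], S[0] ^ A[0]]  # B' = [a2, b2]

    return A, B, erasures

A, B, n_erase = move_with_coding(1, 2, 3, 4)
assert A == [1, 3] and B == [2, 4] and n_erase == 2
# Without coding, S can buffer only two of the four pages at a time, so the
# same exchange forces at least one additional erasure.
```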

19 citations


Journal ArticleDOI
TL;DR: This paper analyzes the distance properties of two-dimensional (2-D) intersymbol interference (ISI) channels, in particular the 2-D partial response class-1 (PR1) channel, which is an extension of the one-dimensional PR1 channel, and proposes an efficient error event search algorithm that operates on the error-state diagram and is applicable to any 2-D channel.
Abstract: In this paper, we analyze the distance properties of two-dimensional (2-D) intersymbol interference (ISI) channels, in particular the 2-D partial response class-1 (PR1) channel, which is an extension of the one-dimensional (1-D) PR1 channel. The minimum squared Euclidean distance of this channel is proved to be 4, and a complete characterization of the error events with squared Euclidean distance 4 is provided. As with 1-D channels, we can construct error-state diagrams for 2-D channels to help characterize error events. We propose an efficient error event search algorithm operating on the error-state diagram that is applicable to any 2-D channel.
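
A small numeric check makes the distance claim concrete. The 1-D PR1 polynomial 1 + D extends to (1 + D1)(1 + D2) in two dimensions, i.e. a 2x2 all-ones impulse response; convolving a candidate input error array with this mask and summing the squares gives the output squared Euclidean distance. The binary {0, 1} input alphabet and the specific error patterns are assumptions for illustration.

```python
# Squared Euclidean distance of input error events on the 2-D PR1 channel.
import numpy as np
from scipy.signal import convolve2d

H = np.ones((2, 2))   # 2-D PR1 impulse response (1 + D1)(1 + D2)

def squared_distance(error: np.ndarray) -> float:
    """Output squared Euclidean distance of an input error event."""
    out = convolve2d(error, H)
    return float(np.sum(out ** 2))

# A single symbol error and a cancelling adjacent pair both achieve the
# minimum squared distance of 4 proved in the paper.
print(squared_distance(np.array([[1]])))        # 4.0
print(squared_distance(np.array([[1, -1]])))    # 4.0
```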

12 citations


Proceedings ArticleDOI
28 Jun 2009
TL;DR: In this paper, the authors show that coding can significantly reduce the number of block erasures required for data movement, and present several optimal or nearly optimal algorithms that minimize block erasures in NAND flash memories.
Abstract: NAND flash memories are currently the most widely used flash memories. In a NAND flash memory, although a cell block consists of many pages, rewriting even one page requires the whole block to be erased and reprogrammed. Block erasures determine the longevity and efficiency of flash memories. When data is frequently reorganized, which can be characterized as a data movement process, minimizing block erasures becomes an important challenge. In this paper, we show that coding can significantly reduce block erasures for data movement, and present several optimal or nearly optimal algorithms. While sorting-based non-coding schemes require O(n log n) erasures to move data among n blocks, coding-based schemes use only O(n) erasures and also optimize the utilization of storage space.

11 citations


Posted Content
TL;DR: This paper improves upon the so-called “indexed” flash codes, due to Jiang and Bruck, by eliminating the need for index cells in the Jiang-Bruck construction and increases the number of writes by introducing a new multi-stage (recursive) indexing scheme.
Abstract: Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes - known as floating codes or flash codes - have been designed in order to maximize the number of times that information stored in a flash memory can be written (and re-written) prior to incurring a block erasure. An (n,k,t)_q flash code C is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes (where a write is a transition 0 -> 1 or 1 -> 0 in any one of the k bits) can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of C, defined as \delta(C) = n(q-1) - t, is a measure of how close the code comes to perfectly utilizing all these transitions. For k > 6 and large n, the best previously known construction of flash codes achieves a write deficiency of O(qk^2). On the other hand, the best known lower bound on write deficiency is \Omega(qk). In this paper, we present a new construction of flash codes that approaches this lower bound to within a factor logarithmic in k. To this end, we first improve upon the so-called "indexed" flash codes, due to Jiang and Bruck, by eliminating the need for index cells in the Jiang-Bruck construction. Next, we further increase the number of writes by introducing a new multi-stage (recursive) indexing scheme. We then show that the write deficiency of the resulting flash codes is O(qk\log k) if q \geq \log_2k, and at most O(k\log^2 k) otherwise.

3 citations