
Showing papers on "Error detection and correction published in 2008"


Journal ArticleDOI
TL;DR: A practical secure communication protocol is developed, which uses a four-step procedure to ensure wireless information-theoretic security, and it is shown that the protocol is effective in secure key renewal, even in the presence of imperfect channel state information.
Abstract: This paper considers the transmission of confidential data over wireless channels. Based on an information-theoretic formulation of the problem, in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through a second independent quasi-static fading channel, the important role of fading is characterized in terms of average secure communication rates and outage probability. Based on the insights from this analysis, a practical secure communication protocol is developed, which uses a four-step procedure to ensure wireless information-theoretic security: (i) common randomness via opportunistic transmission, (ii) message reconciliation, (iii) common key generation via privacy amplification, and (iv) message protection with a secret key. A reconciliation procedure based on multilevel coding and optimized low-density parity-check (LDPC) codes is introduced, which achieves communication rates close to the fundamental security limits in several relevant instances. Finally, a set of metrics for assessing average secure key generation rates is established, and it is shown that the protocol is effective in secure key renewal, even in the presence of imperfect channel state information.
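
Step (iii) of the procedure is the easiest to make concrete. Below is a minimal sketch of privacy amplification, assuming both partners already hold identical reconciled bit strings after steps (i) and (ii); SHA-256 stands in here for the universal hash family that the information-theoretic analysis calls for.

```python
import hashlib

def privacy_amplification(reconciled_bits: bytes, key_len: int = 16) -> bytes:
    """Compress reconciled (partially secret) bits into a shorter key.

    SHA-256 stands in for the universal hash family used in privacy
    amplification; key_len must not exceed 32 bytes here.
    """
    return hashlib.sha256(reconciled_bits).digest()[:key_len]

# After reconciliation both partners hold identical strings, so they
# derive identical secret keys without any further exchange.
alice_key = privacy_amplification(b"reconciled-bit-string")
bob_key = privacy_amplification(b"reconciled-bit-string")
assert alice_key == bob_key
```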

1,759 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Kötter and Kschischang.
Abstract: The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Kötter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if μ erasures and δ deviations occur, then errors of rank t can always be corrected provided that 2t ≤ d − 1 + μ + δ, where d is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application, where n packets of length M over GF(q) are transmitted, the complexity of the decoding algorithm is O(dM) operations in an extension field GF(q^n).
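
The correction guarantee quoted above is a one-line check; the sketch below encodes it directly (the decoding algorithm itself is not reproduced).

```python
def correctable(t: int, d: int, mu: int = 0, delta: int = 0) -> bool:
    """Guarantee from the abstract: errors of rank t are always correctable
    by a code of minimum rank distance d when mu erasures and delta
    deviations are taken into account, provided 2t <= d - 1 + mu + delta."""
    return 2 * t <= d - 1 + mu + delta

# With d = 5, rank-2 errors are always correctable; rank-3 errors become
# correctable once one erasure and one deviation are known.
assert correctable(t=2, d=5)
assert not correctable(t=3, d=5)
assert correctable(t=3, d=5, mu=1, delta=1)
```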

668 citations


Patent
04 Sep 2008
TL;DR: In this article, a storage subsystem monitors one or more conditions related to the probability of a data error occurring, and adjusts an error correction setting, and thus the quantity of ECC data used to protect data received from a host system.
Abstract: A storage subsystem monitors one or more conditions related to the probability of a data error occurring. Based on the monitored condition or conditions, the storage subsystem adjusts an error correction setting, and thus the quantity of ECC data used to protect data received from a host system. To enable blocks of data to be properly checked when read from memory, the storage subsystem stores ECC metadata indicating the particular error correction setting used to store particular blocks of data. The storage subsystem may be in the form of a solid-state non-volatile memory card or drive that attaches to the host system.
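
A hypothetical sketch of the idea follows: an ECC level is chosen from monitored wear indicators and recorded as metadata alongside the block, so the read path knows how each block was protected. All names, thresholds, and parity sizes below are assumed for illustration.

```python
ECC_LEVELS = {0: 4, 1: 8, 2: 16}   # level -> parity bytes per sector (assumed)

def select_ecc_level(erase_cycles: int, recent_bit_errors: int) -> int:
    """Map monitored conditions to an error correction setting."""
    if erase_cycles > 10_000 or recent_bit_errors > 8:
        return 2
    if erase_cycles > 3_000 or recent_bit_errors > 2:
        return 1
    return 0

def write_block(flash: dict, lba: int, data: bytes, erase_cycles: int, errs: int):
    level = select_ecc_level(erase_cycles, errs)
    # The ECC metadata stored with the block tells the read path which
    # correction setting protected it, as the abstract describes.
    flash[lba] = {"data": data, "ecc_level": level}

flash = {}
write_block(flash, lba=7, data=b"payload", erase_cycles=12_000, errs=1)
assert flash[7]["ecc_level"] == 2
```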

328 citations


Journal ArticleDOI
TL;DR: In this paper, a method to design regular (2, dc)-LDPC codes over GF(q) with both good waterfall and error floor properties is presented, based on the algebraic properties of their binary image.
Abstract: In this paper, a method to design regular (2, dc)-LDPC codes over GF(q) with both good waterfall and error floor properties is presented, based on the algebraic properties of their binary image. First, the algebraic properties of rows of the parity check matrix H associated with a code are characterized and optimized to improve the waterfall. Then the algebraic properties of cycles and stopping sets associated with the underlying Tanner graph are studied and linked to the global binary minimum distance of the code. Finally, simulations are presented to illustrate the excellent performance of the designed codes.

305 citations


Journal ArticleDOI
TL;DR: A hardware architecture for fully parallel stochastic low-density parity-check (LDPC) decoders that provides decoding performance within 0.5 and 0.25 dB of the floating-point sum-product algorithm with 32 and 16 iterations, respectively, and similar error-floor behavior is presented.
Abstract: Stochastic decoding is a new approach to iterative decoding on graphs. This paper presents a hardware architecture for fully parallel stochastic low-density parity-check (LDPC) decoders. To obtain the characteristics of the proposed architecture, we apply this architecture to decode an irregular state-of-the-art (1056,528) LDPC code on a Xilinx Virtex-4 LX200 field-programmable gate-array (FPGA) device. The implemented decoder achieves a clock frequency of 222 MHz and a throughput of about 1.66 Gb/s at Eb/N0 = 4.25 dB (a bit error rate of 10^-8). It provides decoding performance within 0.5 and 0.25 dB of the floating-point sum-product algorithm with 32 and 16 iterations, respectively, and similar error-floor behavior. The decoder uses less than 40% of the lookup tables, flip-flops, and IO ports available on the FPGA device. The results provided in this paper validate the potential of stochastic LDPC decoding as a practical and competitive fully parallel decoding approach.
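
The core trick of stochastic decoding is representing probabilities as Bernoulli bit streams, which turns a parity-check node into a bitwise XOR. A minimal sketch of that representation (not the paper's architecture):

```python
import random

def to_stream(p_one, n):
    """Represent a probability as a Bernoulli bit stream: the value is
    carried by the fraction of 1s in the stream."""
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def check_node(a, b):
    """In the stochastic domain, a parity-check node is a bitwise XOR."""
    return [x ^ y for x, y in zip(a, b)]

random.seed(1)
n = 100_000
s = check_node(to_stream(0.9, n), to_stream(0.8, n))
# P(x xor y = 1) = 0.9*0.2 + 0.1*0.8 = 0.26; the stream estimates it.
print(sum(s) / n)  # ~0.26
```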

275 citations


Journal ArticleDOI
TL;DR: In this paper, a new coding scheme, called the STAR code, was proposed for correcting triple storage node failures (erasures), which is an extension of the double-erasure-correcting EVENODD code.
Abstract: Proper data placement schemes based on erasure correcting codes are one of the most important components for a highly available data storage system. For such schemes, low decoding complexity for correcting (or recovering) storage node failures is essential for practical systems. In this paper, we describe a new coding scheme, which we call the STAR code, for correcting triple storage node failures (erasures). The STAR code is an extension of the double-erasure-correcting EVENODD code and a modification of the generalized triple-erasure-correcting EVENODD code. The STAR code is a Maximum Distance Separable (MDS) code and thus is optimal in terms of node failure recovery capability for a given data redundancy. We provide detailed STAR code decoding algorithms for correcting various triple node failures. We show that the decoding complexity of the STAR code is much lower than those of existing comparable codes, making the STAR code of great practical value for storage systems that need higher reliability.
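
The STAR construction itself is XOR-based and is not reproduced here. As a stand-in, the sketch below illustrates the MDS property the abstract appeals to, using Reed-Solomon-style polynomial evaluation over a prime field: k data symbols plus three parities survive any three node failures.

```python
P = 257  # prime field, large enough for byte-valued symbols

def lagrange_eval(pts, x):
    """Evaluate the unique degree-(len(pts)-1) polynomial through pts at x,
    with all arithmetic in GF(P)."""
    total = 0
    for xi, yi in pts:
        num = den = 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 by Fermat
    return total

data = [42, 7, 99, 250]                      # k = 4 data symbols
pts = list(enumerate(data))                  # data nodes sit at x = 0..3
stripe = data + [lagrange_eval(pts, x) for x in range(4, 7)]  # 3 parity nodes

# Fail any three nodes (here 0, 2, and 5) and recover from the rest: any
# 4 of the 7 points determine the degree-3 polynomial, hence the data.
survivors = [(x, stripe[x]) for x in (1, 3, 4, 6)]
assert [lagrange_eval(survivors, x) for x in range(4)] == data
```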

264 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: The reliability gain of network coding for reliable multicasting in wireless networks, where network coding is most promising, is quantified and the expected number of transmissions per packet is defined as the performance metric for reliability and analytical expressions characterizing the performance are derived.
Abstract: The capacity gain of network coding has been extensively studied in wired and wireless networks. Recently, it has been shown that network coding improves network reliability by reducing the number of packet retransmissions in lossy networks. However, the extent of the reliability benefit of network coding is not known. This paper quantifies the reliability gain of network coding for reliable multicasting in wireless networks, where network coding is most promising. We define the expected number of transmissions per packet as the performance metric for reliability and derive analytical expressions characterizing the performance of network coding. We also analyze the performance of reliability mechanisms based on rateless codes and automatic repeat request (ARQ), and compare them with network coding. We first study network coding performance in an access point model, where an access point broadcasts packets to a group of K receivers over lossy wireless channels. We show that the expected number of transmissions using ARQ, compared to network coding, scales as Θ(log K) as the number of receivers becomes large. We then use the access point model as a building block to study reliable multicast in a tree topology. In addition to scaling results, we derive expressions for the expected number of transmissions for finite multicast groups as well. Our results show that network coding significantly reduces the number of retransmissions in lossy networks compared to an ARQ scheme. However, rateless coding achieves asymptotic performance results similar to that of network coding.
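
A small Monte Carlo experiment makes the comparison concrete. The sketch below broadcasts n packets to K receivers over channels with independent per-receiver loss probability p, and counts transmissions per packet for plain ARQ versus an idealized network coding scheme in which every received coded packet is innovative; all parameter values are illustrative.

```python
import random

def arq_tx(n, K, p, rng):
    """ARQ: repeat each packet until every receiver has it; the slot count
    per packet is the maximum of K geometric random variables."""
    total = 0
    for _ in range(n):
        pending = K
        while pending:
            total += 1
            pending = sum(1 for _ in range(pending) if rng.random() < p)
    return total / n

def nc_tx(n, K, p, rng):
    """Network coding: broadcast coded packets until every receiver has
    collected n of them (each received packet assumed innovative)."""
    need, total = [n] * K, 0
    while any(need):
        total += 1
        need = [c - (c > 0 and rng.random() >= p) for c in need]
    return total / n

rng = random.Random(0)
n, K, p, trials = 64, 32, 0.2, 20
print(sum(arq_tx(n, K, p, rng) for _ in range(trials)) / trials)  # grows ~ log K
print(sum(nc_tx(n, K, p, rng) for _ in range(trials)) / trials)   # ~ 1/(1 - p)
```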

241 citations


Journal ArticleDOI
TL;DR: Low-density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel, and the paper discusses convergence results and implementation considerations.
Abstract: Low-density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel. In LDLC, a codeword x is generated directly in the n-dimensional Euclidean space as a linear transformation of a corresponding integer message vector b, i.e., x = Gb, where H = G^-1 is restricted to be sparse. The fact that H is sparse is utilized to develop a linear-time iterative decoding scheme which attains, as demonstrated by simulations, good error performance within ~0.5 dB from capacity at a block length of n = 100,000 symbols. The paper also discusses convergence results and implementation considerations.
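
The encoding relation is direct to illustrate numerically. The toy example below uses a tiny 4x4 H; a real LDLC uses a large sparse H with controlled row and column weights.

```python
import numpy as np

# Toy illustration of the LDLC relation x = G b with H = G^-1 sparse.
H = np.array([[1.0, 0.0, 0.8, 0.0],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.8, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
G = np.linalg.inv(H)

b = np.array([2, -1, 0, 3])   # integer message vector
x = G @ b                     # real-valued lattice codeword in R^4

# The iterative decoder works on the sparse H; in the noiseless case,
# H x returns the integers exactly.
assert np.allclose(H @ x, b)
```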

211 citations


Patent
Tieniu Li
16 May 2008
TL;DR: In this article, a technique is presented to recover data from uncorrectable errors in non-volatile integrated circuit memory devices, such as NAND flash; for example, the read mode can be changed to a more reliable one, and data can be returned regardless of whether it was correctable by decoding the error correction code data.
Abstract: Apparatus and methods are disclosed for reading data from non-volatile integrated circuit memory devices (106), such as NAND flash. For example, the disclosed techniques can be embodied in a device driver (110) of an operating system (104). Errors are tracked during read operations. If sufficient errors are observed (204) during read operations, the block is retired when it is next requested to be erased or when a page of the block is to be written (210), (212), (214), (310). One embodiment is a technique to recover data from uncorrectable errors: for example, a read mode can be changed (410) to a more reliable read mode to attempt to recover data. One embodiment further returns data from the memory device (106) regardless of whether the data was correctable (430) by decoding of error correction code data.
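
A hypothetical driver-level sketch of the retirement policy described above: read errors are tallied per block, and retirement happens lazily, at the next erase or program request, so previously written data stays readable until then. The threshold value is assumed.

```python
ERROR_THRESHOLD = 4   # assumed value

class BlockTracker:
    """Track read errors per block and retire worn blocks lazily."""

    def __init__(self):
        self.read_errors = {}   # block -> errors observed during reads
        self.retired = set()

    def on_read(self, block, errors_seen):
        self.read_errors[block] = self.read_errors.get(block, 0) + errors_seen

    def on_erase_request(self, block):
        """Retirement is checked on erase/program, not on read, so the
        data already in the block remains readable until then."""
        if self.read_errors.get(block, 0) >= ERROR_THRESHOLD:
            self.retired.add(block)
            return False        # caller remaps writes to a spare block
        return True

t = BlockTracker()
for _ in range(4):
    t.on_read(block=9, errors_seen=1)
assert t.on_erase_request(9) is False and 9 in t.retired
```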

198 citations


Patent
12 May 2008
TL;DR: In this article, a method for operating a memory includes encoding input data with an error correction code (ECC) to produce input encoded data including first and second sections, such that the ECC is decodable based on the first section at a first redundancy, and based on both the first and the second sections at a second redundancy that is higher than the first redundancy.
Abstract: A method for operating a memory includes encoding input data with an Error Correction Code (ECC) to produce input encoded data including first and second sections, such that the ECC is decodable based on the first section at a first redundancy, and based on both the first and the second sections at a second redundancy that is higher than the first redundancy. Output encoded data is read and a condition is evaluated. The input data is reconstructed using a decoding level selected, responsively to the condition, from a first level, at which a first part of the output encoded data corresponding to the first section is processed to decode the ECC at the first redundancy, and a second level, at which the first part and a second part of the output encoded data corresponding to the second section are processed jointly to decode the ECC at the second redundancy.
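
The two-level read path can be sketched in a few lines. The decoder hooks below are hypothetical stand-ins: the fast path decodes from the first section alone at the lower redundancy, and the fallback decodes both sections jointly at the higher redundancy.

```python
def read_with_incremental_ecc(read_block, decode_section1, decode_joint,
                              high_error_condition):
    """Reconstruct data at a decoding level selected from an evaluated
    condition, as in the abstract. Hooks are hypothetical stand-ins."""
    part1, part2 = read_block()
    if not high_error_condition:
        data = decode_section1(part1)      # first (lower) redundancy level
        if data is not None:
            return data                    # fast path succeeded
    return decode_joint(part1, part2)      # second, higher redundancy level

# Stub usage: pretend the fast path fails and the joint decode recovers.
read = lambda: (b"part1", b"part2")
dec1 = lambda p1: None
decj = lambda p1, p2: b"recovered"
assert read_with_incremental_ecc(read, dec1, decj, False) == b"recovered"
```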

189 citations


Journal ArticleDOI
TL;DR: A decoding principle for network error correction codes is proposed, based on which two decoding algorithms are introduced and their performance analyzed.
Abstract: In this paper, we study basic properties of linear network error correction codes, their construction, and their error correction capability for various kinds of errors. Our discussion is restricted to the single-source multicast case. We define the minimum distance of a network error correction code; this plays the same role as it does in classical coding theory. We construct codes that can correct errors up to the full error correction capability specified by the Singleton bound for network error correction codes recently established by Cai and Yeung. We propose a decoding principle for network error correction codes, based on which we introduce two decoding algorithms and analyze their performance. We formulate the global kernel error correction problem and characterize the error correction capability of codes for this kind of error.

Journal ArticleDOI
TL;DR: In this article, it is shown that if one encodes the information as Ax, where A is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information with nearly the same accuracy as if no gross errors occurred upon transmission (or, equivalently, as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors).
Abstract: This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when, in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information as Ax, where A is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information with nearly the same accuracy as if no gross errors occurred upon transmission (or, equivalently, as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
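
One of the two decoding schemes, the linear program, fits in a few lines with SciPy. The sketch below is illustrative (sizes, matrix, and error pattern are assumptions): it recovers x from y = Ax + e, with e containing a few arbitrary gross errors, by minimizing ||y - Ax||_1.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 8, 32                         # message length n, codeword length m
A = rng.standard_normal((m, n))      # a random coding matrix works w.h.p.
x_true = rng.standard_normal(n)

y = A @ x_true
bad = rng.choice(m, size=5, replace=False)
y[bad] += 10.0 * rng.standard_normal(5)   # arbitrary gross errors

# min_x ||y - Ax||_1 as an LP with slack t: minimize sum(t), |y - Ax| <= t
I = np.eye(m)
res = linprog(
    c=np.r_[np.zeros(n), np.ones(m)],
    A_ub=np.block([[A, -I], [-A, -I]]),
    b_ub=np.r_[y, -y],
    bounds=[(None, None)] * n + [(0, None)] * m,
)
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x_true)))  # tiny: recovery is essentially exact
```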

Journal ArticleDOI
TL;DR: Simulation results show that, for terminated LDPC convolutional codes of sufficiently large memory, performance can be improved by increasing the density of the syndrome former matrix.
Abstract: Potentially large storage requirements and long initial decoding delays are two practical issues related to the decoding of low-density parity-check (LDPC) convolutional codes using a continuous pipeline decoder architecture. In this paper, we propose several reduced complexity decoding strategies to lessen the storage requirements and the initial decoding delay without significant loss in performance. We also provide bit error rate comparisons of LDPC block and LDPC convolutional codes under equal processor (hardware) complexity and equal decoding delay assumptions. A partial syndrome encoder realization for LDPC convolutional codes is also proposed and analyzed. We construct terminated LDPC convolutional codes that are suitable for block transmission over a wide range of frame lengths. Simulation results show that, for terminated LDPC convolutional codes of sufficiently large memory, performance can be improved by increasing the density of the syndrome former matrix.

Patent
17 Sep 2008
TL;DR: Lee distance based codes in a flash device can increase the number of errors that can be corrected for a given number of redundancy cells, compared with Hamming distance-based codes as discussed by the authors.
Abstract: Apparatus and methods for operating a flash device characterized by use of Lee distance based codes in a flash device so as to increase the number of errors that can be corrected for a given number of redundancy cells, compared with Hamming distance based codes.
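
For reference, the Lee metric itself is a one-line definition over Z_q; unlike Hamming distance, it charges a symbol error by its (cyclic) magnitude, which matches errors that shift a cell by a few levels.

```python
def lee_distance(u, v, q):
    """Lee distance over Z_q: each coordinate contributes
    min(|a - b|, q - |a - b|)."""
    return sum(min(abs(a - b), q - abs(a - b)) for a, b in zip(u, v))

# Over Z_8, a one-level shift costs 1 even across the wrap-around,
# while Hamming distance charges every unequal coordinate equally.
assert lee_distance([7, 3], [0, 3], q=8) == 1
assert lee_distance([4, 0], [0, 0], q=8) == 4
```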

Journal ArticleDOI
TL;DR: An information-theoretic approach for detecting Byzantine or adversarial modifications in networks employing random linear network coding is described and it is shown how the detection probability varies with the overhead, coding field size, and the amount of information unknown to the adversary about the random code.
Abstract: An information-theoretic approach for detecting Byzantine or adversarial modifications in networks employing random linear network coding is described. Each exogenous source packet is augmented with a flexible number of hash symbols that are obtained as a polynomial function of the data symbols. This approach depends only on the adversary not knowing the random coding coefficients of all other packets received by the sink nodes when designing its adversarial packets. We show how the detection probability varies with the overhead (ratio of hash to data symbols), coding field size, and the amount of information unknown to the adversary about the random code.
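
A sketch of the augmentation step follows. The specific polynomial construction used in the paper is not reproduced here; power sums over a prime field are a stand-in choice, and all parameters are assumptions.

```python
Q = 2**61 - 1   # a large prime; the field choice is illustrative

def hash_symbols(data, h):
    """Append h hash symbols computed as polynomial functions (here, power
    sums) of the packet's data symbols over GF(Q)."""
    return [sum(pow(x, j, Q) for x in data) % Q for j in range(2, h + 2)]

packet = [123, 456, 789]
augmented = packet + hash_symbols(packet, h=2)

# A sink recomputes the hashes from the data part; a mismatch flags an
# adversarial modification with high probability.
tampered = list(augmented)
tampered[0] += 1
assert hash_symbols(tampered[:3], 2) != tampered[3:]
```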

Journal ArticleDOI
TL;DR: In this paper, the problem of error correction in coherent and non-coherent network coding is considered under an adversarial model, where knowledge of the network topology and network code is assumed at the source and destination nodes, and the error correction capability of an (outer) code is succinctly described by the rank metric.
Abstract: The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different from, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors than a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding.

Proceedings ArticleDOI
20 Nov 2008
TL;DR: It is shown that the adjacency graph of permutations is a subgraph of a multi-dimensional array of a special size, a property that enables code designs based on Lee-metric codes.
Abstract: We investigate error-correcting codes for a novel storage technology for flash memories, the rank-modulation scheme. In this scheme, a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The resulting scheme eliminates the need for discrete cell levels, overcomes overshoot errors when programming cells (a serious problem that reduces the writing speed), and mitigates the problem of asymmetric errors. In this paper, we study the properties of error correction in rank modulation codes. We show that the adjacency graph of permutations is a subgraph of a multi-dimensional array of a special size, a property that enables code designs based on Lee-metric codes. We present a one-error-correcting code whose size is at least half of the optimal size. We also present additional error-correcting codes and some related bounds.

Patent
Patrick J. Lee
23 May 2008
TL;DR: In this paper, an error correction system for a polynomial code over a Galois field GF (q) comprising q elements is described, wherein the encoded codeword comprises an input data sequence, at least one check symbol, and redundancy symbols.
Abstract: An error correction system is disclosed comprising an encoder operable to generate an encoded codeword of a polynomial code over a Galois field GF(q) comprising q elements, wherein the encoded codeword comprises an input data sequence, at least one check symbol, and redundancy symbols. A decoder decodes a received codeword into the encoded codeword by correcting at least one error in the received codeword to generate a corrected codeword, evaluating at least one symbol of the corrected codeword relative to the check symbol in order to detect a shift error, and, when a shift error is detected, shifting the corrected codeword to correct it.

Journal ArticleDOI
TL;DR: The error detection and correction capability of the IBM POWER6™ processor enables high tolerance to single-event upsets and the soft-error resilience was tested with proton beam- and neutron beam-induced fault injection.
Abstract: The error detection and correction capability of the IBM POWER6™ processor enables high tolerance to single-event upsets. The soft-error resilience was tested with proton beam- and neutron beam-induced fault injection. Additionally, statistical fault injection was performed on a hardware-emulated POWER6 processor simulation model. The error resiliency is described in terms of the proportion of latch upset events that result in vanished errors, corrected errors, checkstops, and incorrect architected states.

Patent
Yan Zhang, Paul Lu, Yue Chen
05 Feb 2008
TL;DR: In this paper, the authors present a method for ensuring data integrity in a data processing system, which monitors when data for a specified device is available for error correction code generation, based on the indicated size of the data.
Abstract: Systems and methods for ensuring data integrity in a data processing system are disclosed. The method may include monitoring when data for a specified device is available for error correction code generation. A new error correction code may be generated in hardware for the data, based on the indicated size of the data. Detected errors may be corrected in software, based on the generated new error correction code. A first indication of the specified device, a second indication of the data and a third indication of a size of the data may be received during the monitoring. The method may also include indicating when the generating of the new error correction code for a specified number of accesses for at least a portion of the data is complete, and enabling or disabling the error correction code generation. The enabling and/or the disabling may be accomplished via an enable signal.

Patent
Jun Kitahara
03 Jan 2008
TL;DR: In this paper, the memory controller is configured to: count how many times data read processing has been executed in memory cells within a management area; read, when the data read processing count for a first management area exceeds a first threshold, the data and an error correction code stored in the memory cells within the first management area; decode the read error correction code; and write the data corrected by decoding the error correction code to management areas other than the first management area.
Abstract: A read error in a flash memory can destroy data that was not requested to be read, so an efficient read disturb check method is needed. In addition, data may be destroyed beyond the repair capability of the error correction code before a read error check is run. A non-volatile data storage apparatus includes a plurality of memory cells and a memory controller, in which the memory controller is configured to: count how many times data read processing has been executed in memory cells within a management area; read, when the data read processing count for a first management area exceeds a first threshold, the data and an error correction code stored in the memory cells within the first management area; decode the read error correction code; and write the data corrected by decoding the error correction code to management areas other than the first management area.

Journal ArticleDOI
TL;DR: This paper addresses the creation of two balanced descriptions based on the concept of redundant slices, while keeping full compatibility with the H.264 standard syntax and decoding behavior in case of single description reception.
Abstract: In this paper, a novel H.264 multiple description technique is proposed. The coding approach is based on the redundant slice representation option defined in the H.264 standard. In the presence of losses, the redundant representation can be used to replace missing portions of the compressed bit stream, thus yielding a certain degree of error resilience. This paper addresses the creation of two balanced descriptions based on the concept of redundant slices, while keeping full compatibility with the H.264 standard syntax and decoding behavior in the case of single description reception. When both descriptions are available, a standard H.264 decoder can still be used, after simple preprocessing of the received compressed bit streams. An analytical setup is employed in order to optimally select the amount of redundancy to be inserted in each frame, taking into account both the transmission conditions and the video decoder error propagation. Experimental results demonstrate that the proposed technique compares favorably with other H.264 multiple description approaches.

Proceedings ArticleDOI
01 Oct 2008
TL;DR: This paper summarizes research on a low-complexity LDPC decoder architecture with statistical buffer management for magnetic recording channels that require data rates of over 5 Gbps, real-time bit error rates on the order of 10^-12, and quasi-real-time bit error rates on the order of 10^-15.
Abstract: Low-density parity-check (LDPC) codes have received considerable attention as a next-generation coding technique for communication and storage channels. Developments in the hardware storage business include hard disk drives that support higher capacities and transfer rates, thanks to perpendicular recording. To meet the demanding error correction requirements at a lower silicon cost, most storage system manufacturers have started adopting LDPC-based error correction systems. This paper summarizes the research on a low-complexity LDPC decoder architecture with statistical buffer management for magnetic recording channels that require data rates of over 5 Gbps, real-time bit error rates on the order of 10^-12, and quasi-real-time bit error rates on the order of 10^-15.

Proceedings ArticleDOI
17 Nov 2008
TL;DR: This paper proposes to use Reed-Solomon (RS) codes for error correction in MLC flash memory; with the proposed Gray-code bit mapping, 0.02 dB and 0.2 dB additional gains are achieved by RS and BCH codes, respectively, without any overhead.
Abstract: Prior research efforts have focused on using BCH codes for error correction in multi-level cell (MLC) NAND flash memory. However, BCH codes often require highly parallel implementations to meet the throughput requirement. As a result, a large area is needed. In this paper, we propose to use Reed-Solomon (RS) codes for error correction in MLC flash memory. A (828, 820) RS code has almost the same rate and length in terms of bits as a BCH (8248, 8192) code. Moreover, it has at least the same error-correcting performance in flash memory applications. Nevertheless, with 70% of the area, the RS decoder can achieve a throughput that is 121% higher than the BCH decoder. A novel bit mapping scheme using Gray code is also proposed in this paper. Compared to direct bit mapping, our proposed scheme can achieve 0.02 dB and 0.2 dB additional gains by using RS and BCH codes, respectively, without any overhead.
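
The Gray-code idea is that adjacent cell levels should differ in exactly one bit, so the dominant MLC error events (shifts of one level) corrupt a single bit. A minimal sketch of binary-reflected Gray mapping for a 4-level cell:

```python
def gray(i):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return i ^ (i >> 1)

levels = [gray(i) for i in range(4)]   # [0b00, 0b01, 0b11, 0b10]
for a, b in zip(levels, levels[1:]):
    assert bin(a ^ b).count("1") == 1  # a +-1 level error flips one bit
```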

Journal ArticleDOI
TL;DR: In-situ measurements of the scattering function are used to drive a channel simulator developed in the context of underwater acoustic telemetry, and it is shown that time-varying Doppler shifts due to platform motion must be eliminated from measured scattering functions in order to provide the stochastic tap gains with the true Doppler spectrum of the channel.
Abstract: In-situ measurements of the scattering function are used to drive a channel simulator developed in the context of underwater acoustic telemetry. Two operation modes of the simulator are evaluated. A replay mode is accomplished by interpolation of measured impulse responses. A second, stochastic mode delivers multiple realizations of a given scattering function. The initial assumption of wide-sense stationary uncorrelated scattering is violated by strong phase correlations between taps. It is shown that time-varying Doppler shifts due to platform motion must be eliminated from measured scattering functions in order to provide the stochastic tap gains with the true Doppler spectrum of the channel. The simulator is validated through a comparison of acoustic data measured at sea, and emulated data, governed by the same scattering function. This comparison is based on scattering and coherence functions, multipath phase measurements, and application of a decision feedback equalizer. After the Doppler correction, the synthetic data are indistinguishable from the acoustic data in terms of delay-Doppler spread, temporal coherence, phase behavior, equalizer mean square error, and bit error ratio.

Journal ArticleDOI
TL;DR: Optimization options are described in which recovery operations may be further adapted according to the damping probability gamma, and stabilizer codes whose encoding and recovery operations can be completely described with Clifford group operations are presented.
Abstract: Error correction procedures are considered which are designed specifically for the amplitude damping channel. Amplitude damping errors are analyzed in the stabilizer formalism. This analysis allows a generalization of the [4,1] "approximate" amplitude damping code. This generalization is presented as a class of [2(M+1),M] codes; quantum circuits for encoding and recovery operations are presented. A [7,3] amplitude damping code based on the classical Hamming code is presented. All of these are stabilizer codes whose encoding and recovery operations can be completely described with Clifford group operations. Finally, optimization options are described in which recovery operations may be further adapted according to the damping probability gamma.
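
For reference, the amplitude damping channel has a standard two-operator Kraus representation parameterized by the damping probability gamma; the sketch below checks completeness and shows the decay of the excited state.

```python
import numpy as np

def damping_kraus(gamma):
    """Kraus operators of the amplitude damping channel: E0 attenuates the
    |1> amplitude, E1 decays |1> to |0> with probability gamma."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return E0, E1

gamma = 0.1
E0, E1 = damping_kraus(gamma)
assert np.allclose(E0.T @ E0 + E1.T @ E1, np.eye(2))  # completeness

rho = np.diag([0.0, 1.0])                             # the state |1><1|
rho_out = E0 @ rho @ E0.T + E1 @ rho @ E1.T
print(np.diag(rho_out))                               # [gamma, 1 - gamma]
```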

Journal ArticleDOI
TL;DR: The proposed multiple error correction scheme utilizes the Chinese Remainder Theorem (CRT) together with a novel algorithm that significantly simplifies the error correcting process for integers.
Abstract: This paper presents some results on multiple error detection and correction based on the Redundant Residue Number System (RRNS). RRNS is often used in parallel processing environments because of its ability to increase the robustness of information passing between the processors. The proposed multiple error correction scheme utilizes the Chinese Remainder Theorem (CRT) together with a novel algorithm that significantly simplifies the error correcting process for integers. An extension of the scheme further reduces the computational complexity without compromising its error correcting capability. Proofs and examples are provided for the coding technique.
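
As a toy illustration of CRT-based correction (the paper's algorithm is far more efficient than this brute-force subset test), take two information moduli and two redundant ones; a single corrupted residue is located by dropping one residue at a time and checking which reconstruction falls in the legitimate range. All modulus values are assumed.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction for pairwise coprime moduli."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

moduli = [13, 17, 19, 23]      # 2 information + 2 redundant moduli
LEGIT = 13 * 17                # legitimate integers lie in [0, 221)
x = 150
residues = [x % m for m in moduli]
residues[2] = (residues[2] + 5) % 19        # corrupt one residue

for skip in range(len(moduli)):
    rs = [r for i, r in enumerate(residues) if i != skip]
    ms = [m for i, m in enumerate(moduli) if i != skip]
    cand = crt(rs, ms)
    if cand < LEGIT:                        # only the bad-residue drop fits
        print(f"residue {skip} was corrupted; corrected value = {cand}")
```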

Journal ArticleDOI
Dong-Wook Kim, Bang Jung, Hanjin Lee, Dan Sung, Hyunsoo Yoon
TL;DR: Through link-level and system-level simulations, it is shown that the proposed MCS selection criterion yields higher average cell throughput than the conventional M CS selection schemes for slowly varying channels.
Abstract: We propose an optimal modulation and coding scheme (MCS) selection criterion for maximizing user throughput in cellular networks. The proposed criterion adopts both the Chase combining and incremental redundancy based hybrid automatic repeat request (HARQ) mechanisms, and it selects the MCS level that maximizes the expected throughput, which is estimated by considering both the number of transmissions and the successful decoding probability in HARQ operation. We also prove mathematically that the conventional MCS selection rule is not optimal. Through link-level and system-level simulations, we show that the proposed MCS selection criterion yields higher average cell throughput than conventional MCS selection schemes for slowly varying channels.
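
The selection rule can be sketched directly: for each MCS level, estimate expected throughput as rate times success probability divided by the expected number of HARQ transmissions, and pick the maximizer. The rates and cumulative success probabilities below are assumed for illustration.

```python
def expected_throughput(rate, cum_succ):
    """rate * P(success) / E[transmissions], where cum_succ[k] is the
    probability of decoding within k+1 HARQ transmissions (combining
    gains make it increase with k)."""
    e_tx, prev = 0.0, 0.0
    for k, c in enumerate(cum_succ, start=1):
        e_tx += k * (c - prev)              # first success at attempt k
        prev = c
    e_tx += len(cum_succ) * (1.0 - prev)    # all attempts spent on failure
    return rate * prev / e_tx

# A robust low-rate MCS versus an aggressive high-rate one:
candidates = {"MCS-A": (1.0, [0.70, 0.95, 0.99]),
              "MCS-B": (2.0, [0.20, 0.60, 0.85])}
best = max(candidates, key=lambda k: expected_throughput(*candidates[k]))
print(best)  # MCS-B wins here despite its low first-attempt success rate
```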

Patent
04 Jan 2008
TL;DR: Error correction is tailored for the use of an ECC for correcting asymmetric errors with low magnitude in a data device, with minimal modifications to the conventional data device architecture as discussed by the authors.
Abstract: Error correction is tailored for the use of an ECC for correcting asymmetric errors of low magnitude in a data device, with minimal modifications to the conventional data device architecture. The technique permits error correction and data recovery to be performed with reduced-size error correcting code alphabets. For particular cases, the technique can reduce the problem of constructing codes for correcting limited-magnitude asymmetric errors to the problem of constructing codes for symmetric errors over small alphabets. Also described are speed-up techniques for reaching target data levels more quickly, using more aggressive memory programming operations.
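
For the special case of upward errors of magnitude at most 1, the reduction can be made concrete with a binary code on the cells' least significant bits; the sketch below uses a Hamming(7,4) code and is an illustration of the reduction, not the patent's construction.

```python
import numpy as np

# Parity-check matrix of Hamming(7,4); column j is j in binary, so the
# syndrome of a single flipped bit spells out its 1-based position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct_plus_one(y):
    """Correct one +1 (upward, magnitude-1) error among 7 q-ary cells,
    assuming the written levels' LSBs form a Hamming codeword."""
    s = H @ (np.array(y) % 2) % 2
    pos = s[0] + 2 * s[1] + 4 * s[2]       # 0 means no error detected
    y = list(y)
    if pos:
        y[pos - 1] -= 1                    # undo the +1 cell-level error
    return y

x = [2, 4, 3, 6, 1, 5, 2]   # LSBs [0,0,1,0,1,1,0] satisfy H @ lsb = 0
y = list(x)
y[4] += 1                   # one upward drift of magnitude 1
assert correct_plus_one(y) == x
```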

Patent
20 Jun 2008
TL;DR: In this paper, a check code generator is configured to generate check codes based on the particular level of error detection indicated by the check code configuration signal, and an error locator is configurable to produce addresses of errors in a set of data.
Abstract: An invention is provided for dynamically configurable error correction. The invention includes receiving a check code configuration signal, which indicates a particular level of error detection. A check code generator is configured to generate check codes based on the particular level of error detection indicated by the check code configuration signal. In addition, an error locator configuration signal is received that indicates a particular level of error addressing, and an error locator is configured to produce addresses of errors in a set of data based on the particular level of error addressing indicated by the error locator configuration signal.