
Showing papers on "Error detection and correction published in 2000"


Journal ArticleDOI
TL;DR: The proposed RCPT-ARQ system combines the performance of turbo codes with the frugal use of incremental redundancy inherent in the rate compatible punctured convolutional codes of Hagenauer (1988) to achieve enhanced throughput performance over a nonstationary Gaussian channel.
Abstract: This paper introduces a hybrid forward-error correction/automatic repeat-request (ARQ) system that employs rate compatible punctured turbo (RCPT) codes to achieve enhanced throughput performance over a nonstationary Gaussian channel. The proposed RCPT-ARQ system combines the performance of turbo codes with the frugal use of incremental redundancy inherent in the rate compatible punctured convolutional codes of Hagenauer (1988). Moreover, this paper introduces the notion of puncturing the systematic code symbols of a turbo code to maximize throughput at signal-to-noise ratios (SNRs) of interest. The resulting system provides both an efficient family of achievable code rates at middle to high SNR and powerful low-rate error correction capability at low SNR.
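
To make the incremental-redundancy mechanism concrete, here is a minimal sketch in which the transmitter reveals progressively more of a mother codeword on each retransmission. The puncturing schedule, the stand-in codeword, and the `decode_ok` success rule are all placeholders for illustration, not the paper's RCPT construction.

```python
def make_schedule(n, rounds):
    # Rate-compatible: the positions sent through round r are a subset of
    # those sent through round r + 1, so each retransmission only adds bits.
    return [list(range(i, n, rounds)) for i in range(rounds)]

def harq_session(codeword, schedule, decode_ok):
    received = {}
    for r, batch in enumerate(schedule, start=1):
        for pos in batch:
            received[pos] = codeword[pos]     # reveal more of the mother code
        if decode_ok(received):               # placeholder decoder hook
            return r, len(received) / len(codeword)
    return len(schedule), 1.0

mother = [0, 1] * 12                          # stand-in 24-bit mother codeword
sched = make_schedule(len(mother), rounds=4)
# Toy success rule: "decoding" works once half of the codeword has arrived.
rounds_used, fraction_sent = harq_session(mother, sched, lambda rx: len(rx) >= 12)
print(rounds_used, fraction_sent)             # -> 2 0.5
```

At good SNR the session stops early at a high effective rate; at poor SNR it keeps releasing punctured parity until the full low-rate code has been sent.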

472 citations


Proceedings ArticleDOI
03 Oct 2000
TL;DR: Simulation results indicate that, for the simulated combinational logic circuits, although diverse duplex systems (with two different implementations of the same logic function) sometimes have marginally higher area overhead, they provide significant protection against multiple failures and CMFs compared to other CED techniques like parity prediction.
Abstract: Concurrent error detection (CED) techniques (based on hardware duplication, parity codes, etc.) are widely used to enhance system dependability. All CED techniques introduce some form of redundancy. Redundant systems are subject to common-mode failures (CMFs). While most studies of CED techniques focus on area overhead, few analyze the CMF vulnerability of these techniques. In this paper, we present simulation results to quantitatively compare various CED schemes based on their area overhead and the protection (data integrity) they provide against multiple failures and CMFs. Our results indicate that, for the simulated combinational logic circuits, although diverse duplex systems (with two different implementations of the same logic function) sometimes have marginally higher area overhead, they provide significant protection against multiple failures and CMFs compared to other CED techniques like parity prediction.
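
A behavioral sketch of the two CED schemes being compared, on a toy combinational function; real CED operates at the gate level, and the function and injected fault below are made up.

```python
def f(x):                          # "implementation A" of the logic function
    return (x ^ (x >> 1)) & 0xFF

def f_diverse(x):                  # a deliberately different implementation of
    return (x ^ (x // 2)) & 0xFF   # the same function, for a diverse duplex system

def parity(v):
    return bin(v).count("1") & 1

def duplex_check(out_a, out_b):
    # Duplication with comparison: any disagreement between the copies
    # flags an error; diversity makes identical common-mode corruption unlikely.
    return out_a == out_b

def parity_predict_check(out, predicted_parity):
    # Parity prediction: a separate circuit predicts the parity of f(x).
    # A single-bit output error is caught, but an even number of flipped
    # bits (e.g. from a CMF hitting both copies of shared logic) slips through.
    return parity(out) == predicted_parity

# Fault-free operation: both schemes stay silent.
assert all(duplex_check(f(x), f_diverse(x)) for x in range(256))
assert all(parity_predict_check(f(x), parity(f(x))) for x in range(256))

# A stuck-at-1 fault on output bit 0 of one copy is caught by the comparison
# whenever the faulty bit actually differs from the correct value.
faulty = lambda x: f(x) | 0x01
mismatches = sum(not duplex_check(f_diverse(x), faulty(x)) for x in range(256))
print("duplex flags", mismatches, "of 256 inputs")
```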

288 citations


Journal ArticleDOI
TL;DR: Results show that the robot accuracy is improved by an order of magnitude after calibration, and a comprehensive error model is derived for combining geometric errors, position–dependent compliance errors and time–variant thermal errors.
Abstract: In order to achieve the stringent accuracy requirement of some robotic applications such as robotic measurement systems, it is critical to compensate for nongeometric errors such as compliance errors and thermal errors in addition to geometric errors. This paper investigates the effect of geometric errors, link compliance and temperature variation on robot positioning accuracy. A comprehensive error model is derived for combining geometric errors, position-dependent compliance errors and time-variant thermal errors. A general methodology is developed to identify these errors simultaneously. A laser tracker is applied to calibrate these errors by an inverse calibration method. Robot geometric errors and compliance errors are calibrated at room temperature while robot parameter thermal errors are calibrated at different temperatures when the robot warms up and cools down. Empirical thermal error models are established using orthogonal regression methods to correlate robot parameter thermal errors with the corresponding temperature field. These models can be built into the controller and used to compensate for quasi-static thermal errors due to internal and external heat sources. Experimental results show that the robot accuracy is improved by an order of magnitude after calibration.
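
As a hedged illustration of the empirical modeling step, the sketch below fits a linear thermal error model to synthetic sensor data with ordinary least squares; the paper itself uses orthogonal regression against a measured temperature field, and every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(20.0, 45.0, size=(200, 3))     # three temperature sensors
true_coeffs = np.array([0.8, -0.3, 0.1])       # invented drift sensitivities (um/degC)
drift = T @ true_coeffs + rng.normal(0.0, 0.5, 200)   # measured parameter drift

A = np.hstack([T, np.ones((200, 1))])          # model: drift ~ T @ c + offset
coeffs, *_ = np.linalg.lstsq(A, drift, rcond=None)
residual = drift - A @ coeffs                  # what online compensation leaves behind
print(f"residual RMS after compensation: {np.sqrt(np.mean(residual**2)):.3f} um")
```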

202 citations


Proceedings ArticleDOI
01 Jan 2000
TL;DR: The obtained results show that detection of such temporal faults can be achieved at moderate hardware and performance cost.
Abstract: IC technologies are approaching the ultimate limits of silicon in terms of channel width, power supply and speed. By approaching these limits, circuits are becoming increasingly sensitive to noise, which will result in unacceptable rates of soft errors. Furthermore, defect behavior is becoming increasingly complex, resulting in an increasing number of timing faults that can escape detection by fabrication testing. Thus, fault-tolerant techniques will become necessary even for commodity applications. This work considers the implementation and improvements of a new soft-error and timing-error detecting technique based on time redundancy. Arithmetic circuits were used as a test vehicle to validate the approach. Simulations and performance evaluations of the proposed detection technique were made using timing and logic simulators. The obtained results show that detection of such temporal faults can be achieved at moderate hardware and performance cost.
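
A software analogy of the time-redundancy idea (the hardware technique samples an output at two instants): compute the same operation twice, separated in time, and flag any disagreement. The fault injection is simulated, and a transient is assumed to corrupt at most one of the two passes.

```python
def alu_add(a, b, fault_bit=None):
    result = (a + b) & 0xFFFFFFFF
    if fault_bit is not None:
        result ^= 1 << fault_bit              # injected transient (soft) error
    return result

def time_redundant_add(a, b, fault_in_second_pass=None):
    r1 = alu_add(a, b)                        # result sampled at time t
    r2 = alu_add(a, b, fault_in_second_pass)  # re-computed at time t + delta
    return r1, r1 == r2                       # disagreement flags a temporal fault

value, ok = time_redundant_add(7, 9)
assert value == 16 and ok                     # fault-free passes agree
_, ok = time_redundant_add(7, 9, fault_in_second_pass=5)
assert not ok                                 # transient hit one pass: detected
```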

195 citations


Journal ArticleDOI
TL;DR: Truncated type-II hybrid ARQ schemes have significantly higher average coding rates than FEC at high and medium signal-to-noise ratio, even with noisy feedback, and can be viewed as adaptive FEC that adapts to the instantaneous channel conditions.
Abstract: This paper considers truncated type-II hybrid automatic repeat-request (ARQ) schemes with noisy feedback over block fading channels. With these ARQ techniques, the number of retransmissions is limited, and, similar to forward error correction (FEC), error-free delivery of data packets cannot be guaranteed. Bounds on the average number of transmissions, the average coding rate as well as the reliability of the schemes are derived using random coding techniques, and the performance is compared with FEC. The random coding bounds reveal the achievable performance with block codes and maximum-likelihood soft-decision decoding. Union upper bounds and simulation results show that over block fading channels, these bounds can be closely approached with simple terminated convolutional codes and soft-decision Viterbi decoding. Truncated type-II hybrid ARQ and the corresponding FEC schemes have the same probability of packet erasure; however, the truncated ARQ schemes offer a trade-off between the average coding rate and the probability of undetected error. Truncated ARQ schemes have significantly higher average coding rates than FEC at high and medium signal-to-noise ratio even with noisy feedback. Truncated ARQ can be viewed as adaptive FEC that adapts to the instantaneous channel conditions.
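
A toy Monte Carlo model of the truncation trade-off: at most a fixed number of transmissions per packet, after which the packet is erased, so (as with FEC) delivery is not guaranteed. The per-attempt success probabilities and bit counts are invented; in the paper they follow from random-coding bounds over block fading channels.

```python
import random

def run(trials, p_success, k_bits, tx_bits):
    # tx_bits[j] = bits sent on attempt j: a first transmission followed by
    # smaller incremental-redundancy blocks; len(tx_bits) limits retransmissions.
    sent, delivered = 0, 0
    for _ in range(trials):
        for attempt, n in enumerate(tx_bits):
            sent += n
            if random.random() < p_success[attempt]:
                delivered += k_bits
                break                         # ACK received: stop sending
        # if all attempts fail, the packet is erased, just like a FEC failure
    return delivered / sent                   # average coding rate

random.seed(0)
# Later attempts succeed more often because redundancy accumulates.
print(run(100_000, p_success=[0.6, 0.8, 0.95],
          k_bits=1000, tx_bits=[1200, 400, 400]))
```

On good channels most packets stop after the first 1200-bit transmission, so the average rate stays well above the 0.5 that a fixed full-redundancy FEC code would give here.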

195 citations


Journal ArticleDOI
TL;DR: An overview of previous work in multiuser detection (MD) with an emphasis on adaptive methods is provided, including (suboptimal) linear receivers and the data-aided MMSE receiver.
Abstract: A number of CDMA receivers have been proposed that cover the whole spectrum of performance/complexity from the simple matched filter to the optimal Viterbi (1995) processor. Adaptive solutions, in particular, have the potential of providing the anticipated multiuser detection (MD) performance gains with a complexity that would be manageable for third generation systems. Our goal, in this article, is to provide an overview of previous work in MD with an emphasis on adaptive methods. We start with (suboptimal) linear receivers and discuss the data-aided MMSE receiver. Blind (nondata-aided) implementations are also reviewed together with techniques that can mitigate possible multipath effects and channel dispersion. In anticipation of those developments, appropriate discrete-time (chip rate) CDMA models are reviewed, which incorporate asynchronism and channel dispersion. For systems with large spreading factors, the convergence and tracking properties of conventional adaptive filters may be inadequate due to the large number of coefficients which must be estimated. In this context, reduced rank adaptive filtering is discussed. In this approach, the number of parameters is reduced by restricting the receiver tap vector to belong to a carefully chosen subspace. In this way the number of coefficients to be estimated is significantly reduced with minimal performance loss.
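
A minimal sketch of a data-aided adaptive linear receiver: an LMS filter trained toward the MMSE taps for a synchronous two-user toy model. The spreading codes, interferer power, noise level, and step size are all made up for illustration; the article's asynchronous, dispersive models are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                         # spreading factor
s1 = rng.choice([-1.0, 1.0], N) / np.sqrt(N)  # desired user's code
s2 = rng.choice([-1.0, 1.0], N) / np.sqrt(N)  # strong interferer's code

w = np.zeros(N)                               # receiver tap vector
mu = 0.05                                     # LMS step size
for _ in range(5000):                         # data-aided (training) phase
    b1, b2 = rng.choice([-1, 1], 2)
    r = b1 * s1 + 2.0 * b2 * s2 + 0.1 * rng.normal(size=N)
    w += mu * (b1 - w @ r) * r                # LMS step toward the MMSE taps

errors = 0                                    # test phase: interferer suppressed
for _ in range(2000):
    b1, b2 = rng.choice([-1, 1], 2)
    r = b1 * s1 + 2.0 * b2 * s2 + 0.1 * rng.normal(size=N)
    errors += int(np.sign(w @ r) != b1)
print("bit errors out of 2000:", errors)
```

With a large spreading factor the same loop has many more taps to learn, which is exactly the convergence problem that motivates the reduced-rank methods the article discusses.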

182 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that software-implemented EDAC is a low-cost solution that provides protection for code segments and can appreciably enhance the system availability in a low-radiation space environment.
Abstract: In many computer systems, the contents of memory are protected by an error detection and correction (EDAC) code. Bit-flips caused by single event upsets (SEUs) are a well-known problem in memory chips, and EDAC codes have been an effective solution to this problem. These codes are usually implemented in hardware using extra memory bits and encoding/decoding circuitry. In systems where EDAC hardware is not available, the reliability of the system can be improved by providing protection through software. Codes and techniques that can be used for software implementation of EDAC are discussed and compared. The implementation requirements and issues are discussed, and some solutions are presented. The paper discusses in detail how system-level and chip-level structures relate to multiple error correction. A simple solution is presented to make the EDAC scheme independent of these structures. The technique in this paper was implemented and used effectively in an actual space experiment. We have observed that SEUs corrupt the operating system or programs of a computer system that does not have any EDAC for memory, forcing the system to be reset frequently. Protecting the entire memory (code and data) might not be practical in software. However, this paper demonstrates that software-implemented EDAC is a low-cost solution that provides protection for code segments and can appreciably enhance the system availability in a low-radiation space environment.
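
As one concrete instance of software-implemented EDAC, here is a minimal Hamming(12,8) single-error-correcting code in pure software; the paper compares several codes and addresses system- and chip-level memory structure, none of which is captured by this sketch.

```python
def encode(byte):
    # Data bits go to the non-power-of-two positions of a 12-bit word.
    pos = [3, 5, 6, 7, 9, 10, 11, 12]
    word = {p: (byte >> i) & 1 for i, p in enumerate(pos)}
    for p in (1, 2, 4, 8):         # parity bit p covers positions q with q & p
        word[p] = 0
        word[p] = sum(word[q] for q in word if q & p) & 1
    return word

def decode(word):
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[q] for q in word if q & p) & 1:
            syndrome |= p
    if syndrome:                   # the syndrome is the flipped bit's position,
        word[syndrome] ^= 1        # so correct ("scrub") it in place
    pos = [3, 5, 6, 7, 9, 10, 11, 12]
    return sum(word[p] << i for i, p in enumerate(pos)), syndrome

cw = encode(0xA7)
cw[6] ^= 1                         # simulate an SEU bit-flip in memory
data, syndrome = decode(cw)
assert data == 0xA7 and syndrome == 6
```

Run periodically over protected segments, such a decode pass is the software "scrubbing" that keeps single upsets from accumulating into uncorrectable multiple errors.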

171 citations


Patent
Patrick J. Lee
28 Sep 2000
TL;DR: A trellis sequence detector detects an estimated data sequence from signal sample values, and a post processor detects and corrects error events of the detector by remodulating the estimated data sequence into a sequence of expected sample values.
Abstract: A communication channel is disclosed comprising a trellis sequence detector for detecting an estimated data sequence from signal sample values. A post processor detects and corrects an error event of the trellis sequence detector by remodulating the estimated data sequence into a sequence of expected sample values, computing sample errors as the difference between the expected samples and the signal samples, and correlating the sample errors with error event samples. The communication channel further comprises a Reed-Solomon error correction code (ECC) decoder for decoding a Reed-Solomon codeword in the estimated data sequence in response to the detected error event.
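
A conceptual sketch of that post-processing loop: remodulate the detected bits through a channel target, form sample errors against the received samples, and correlate them with the signature of a candidate error event. The partial-response target, noise level, and single-bit error event are invented stand-ins for the patent's trellis target and event list.

```python
import numpy as np

def remodulate(bits, target=(1.0, 1.0)):      # toy partial-response channel
    x = 2.0 * np.asarray(bits) - 1.0
    return np.convolve(x, target)[: len(bits)]

rng = np.random.default_rng(3)
true_bits = rng.integers(0, 2, 64)
detected = true_bits.copy()
detected[30] ^= 1                             # the detector made a bit error

received = remodulate(true_bits) + 0.05 * rng.normal(size=64)
expected = remodulate(detected)               # what the detected data predicts
sample_err = received - expected              # error burst around index 30

# Signature of a single-bit error event through the same target response.
signature = remodulate([1] + [0] * 63)[:2] - remodulate([0] * 64)[:2]
scores = np.correlate(sample_err, signature, mode="valid")
print("most likely error location:", int(np.argmax(np.abs(scores))))  # ~30
```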

162 citations


Journal ArticleDOI
TL;DR: An error-concealment scheme based on block-matching principles and spatio-temporal video redundancy is presented; its performance proves satisfactory for packet error rates (PER) ranging from 1% to 10% and for video sequences with different content and motion, and surpasses that of the other EC methods under study.
Abstract: The MPEG-2 compression algorithm is very sensitive to channel disturbances due to the use of variable-length coding. A single bit error during transmission leads to noticeable degradation of the decoded sequence quality, since part of a slice, or an entire slice, is lost until the next resynchronization point is reached. Error concealment (EC) methods, implemented at the decoder side, present one way of dealing with this problem. An error-concealment scheme that is based on block-matching principles and spatio-temporal video redundancy is presented in this paper. Spatial information (for the first frame of the sequence or the next scene) or temporal information (for the other frames) is used to reconstruct the corrupted regions. The concealment strategy is embedded in the MPEG-2 decoder model in such a way that error concealment is applied after entire-frame decoding. Its performance proves to be satisfactory for packet error rates (PER) ranging from 1% to 10% and for video sequences with different content and motion, and surpasses that of other EC methods under study.
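
A minimal temporal-concealment sketch in the spirit of the paper: a lost block is replaced by the previous-frame block whose one-pixel border best matches the intact pixels around the hole. Frame size, block size, and search range are toy values, and the real scheme also has a spatial mode.

```python
import numpy as np

def conceal(prev, cur, y, x, B=8, search=4):
    best, best_cost = None, np.inf
    border = np.ones((B + 2, B + 2), bool)
    border[1:-1, 1:-1] = False                 # 1-pixel ring around the block
    have = cur[y - 1 : y + B + 1, x - 1 : x + B + 1]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 1 or px < 1 or py + B + 1 > prev.shape[0] or px + B + 1 > prev.shape[1]:
                continue
            cand = prev[py - 1 : py + B + 1, px - 1 : px + B + 1]
            cost = np.sum((cand[border] - have[border]) ** 2)
            if cost < best_cost:               # border match drives the choice
                best_cost, best = cost, prev[py : py + B, px : px + B]
    cur[y : y + B, x : x + B] = best           # paste the best-matching block

rng = np.random.default_rng(0)
prev = rng.random((32, 32))
cur = np.roll(prev, (1, 2), axis=(0, 1)).copy()  # scene shifted by (1, 2)
cur[12:20, 12:20] = 0.0                          # a lost macroblock
conceal(prev, cur, 12, 12)
print(np.abs(cur[12:20, 12:20] - np.roll(prev, (1, 2), (0, 1))[12:20, 12:20]).max())
```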

161 citations


Proceedings ArticleDOI
25 Jun 2000
TL;DR: Using a low density parity check code (LDPCC) in place of the usual Goppa code in McEliece's cryptosystem allows for larger block lengths and the possibility of a combined error correction/encryption protocol.
Abstract: We examine the implications of using a low density parity check code (LDPCC) in place of the usual Goppa code in McEliece's cryptosystem. Using a LDPCC allows for larger block lengths and the possibility of a combined error correction/encryption protocol.
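
A structural sketch of the McEliece flow over GF(2): publish a scrambled generator matrix G' = SGP and encrypt as c = mG' + e. A 3x repetition code stands in for the Goppa/LDPC code purely to keep the example self-contained and trivially decodable; it is, of course, not remotely secure.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 4, 12
G = np.zeros((k, n), dtype=int)
for i in range(k):
    G[i, 3 * i : 3 * i + 3] = 1                   # repetition-3 generator

S = rng.integers(0, 2, (k, k))                    # secret scrambler, retried
while int(round(np.linalg.det(S))) % 2 == 0:      # until invertible mod 2
    S = rng.integers(0, 2, (k, k))
P = np.eye(n, dtype=int)[rng.permutation(n)]      # secret permutation matrix
G_pub = S @ G @ P % 2                             # the public key

m = np.array([1, 0, 1, 1])                        # plaintext
e = np.zeros(n, dtype=int)
e[rng.integers(n)] = 1                            # weight-1 random error
c = (m @ G_pub + e) % 2                           # ciphertext

# Decrypt: undo P, decode the repetition code by majority vote, undo S.
y = c @ P.T % 2                                   # permutation: P^-1 == P^T
mS = np.array([int(y[3 * i : 3 * i + 3].sum() >= 2) for i in range(k)])
adj = np.round(np.linalg.inv(S) * np.linalg.det(S)).astype(int)  # adjugate
assert np.array_equal(mS @ adj % 2, m)            # adj == S^-1 mod 2 (det odd)
```

The paper's point is that the decoding step scales to much longer blocks when an LDPC decoder replaces the repetition/Goppa decoding used here.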

142 citations


Journal ArticleDOI
TL;DR: This paper proposes a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems.
Abstract: When using entropy coding over a noisy channel, it is customary to protect the highly vulnerable bitstream with an error correcting code. In this paper, we propose a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems.

Proceedings ArticleDOI
28 Mar 2000
TL;DR: This work introduces a multiple-description product code which aims at optimally generating multiple, equally-important wavelet image descriptions from an image encoded by the popular SPIHT image coder.
Abstract: This work introduces a multiple-description product code which aims at optimally generating multiple, equally-important wavelet image descriptions from an image encoded by the popular SPIHT image coder. Because the SPIHT image coder is highly sensitive to errors, forward error correction is used to protect the image against bit errors occurring in the channel. The error-correction code is a concatenated channel code including a row (outer) code based on RCPC codes with CRC error detection and a source-channel column (inner) code consisting of the scalable SPIHT image coder and an optimized array of unequal protection Reed-Solomon erasure-correction codes. By matching the unequal protection codes to the embedded source bitstream using our simple, fast optimizer, we maximize expected image quality and provide for graceful degradation of the received image during fades. To achieve unequal protection, each packet is split into many Reed-Solomon symbols. The i-th symbol in each packet forms an (n,k) Reed-Solomon code or "column". A fast, nearly-optimal optimizer, based on Lagrange multipliers and optimal to within convex hull and discretization approximations, chooses k for each Reed-Solomon "column" to minimize the expected mean-square error at the receiver. We validated our use of this structure by evaluating its performance in the context of transmitting images over a wireless fading channel. The performance of this scheme was evaluated by simulating the transmission of the Lena image over a Clarke flat-fading channel with an average SNR of 10 dB and a normalized Doppler frequency of 10^-5 Hz.
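
A shape-only sketch of the product-code packetization: symbol i of every packet forms "column" i, and earlier (more important) source symbols get stronger erasure protection. The Reed-Solomon columns are replaced here by two toy erasure codes, repetition (n,1) and single parity (n,n-1), so the example stays self-contained.

```python
def xor_all(syms):
    out = 0
    for s in syms:
        out ^= s
    return out

def encode_columns(source, n):
    # Column 0: the most important symbol, repeated in all n packets (n,1).
    # Column 1: n-1 less important symbols plus one XOR parity (n, n-1).
    col0 = [source[0]] * n
    data = source[1:n]
    col1 = data + [xor_all(data)]
    return [[col0[i], col1[i]] for i in range(n)]   # packet i = row i

def decode_columns(packets):
    got = [p for p in packets if p is not None]
    col0 = got[0][0]                        # any surviving packet recovers it
    col1 = [p[1] if p is not None else None for p in packets]
    if col1.count(None) == 1:               # single parity fixes one erasure
        miss = col1.index(None)
        col1[miss] = xor_all([s for s in col1 if s is not None])
    return [col0] + col1[:-1]

pkts = encode_columns([7, 1, 2, 3], n=4)
pkts[2] = None                              # one packet lost in a fade
assert decode_columns(pkts) == [7, 1, 2, 3]
```

The real scheme chooses k per Reed-Solomon column with the Lagrangian optimizer, so protection decays smoothly along the embedded bitstream instead of in two coarse steps.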

Journal ArticleDOI
TL;DR: Simulations with and without the GEC technique indicate that the technique results in a large improvement in the signal-to-noise-and-distortion (SINAD) and spurious-free-dynamic-range (SFDR) of the converter.
Abstract: A gain error correction (GEC) technique is presented that continuously measures and digitally compensates for analogue gain errors present in each stage of a pipelined analogue-to-digital converter (ADC). Simulations with and without the GEC technique indicate that the technique results in a large improvement in the signal-to-noise-and-distortion (SINAD) and spurious-free-dynamic-range (SFDR) of the converter.
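
A behavioral sketch of digital gain-error compensation for a single stage: estimate the actual interstage gain and divide it out of the digital output. The continuous background measurement of the paper is replaced here by a one-shot least-squares calibration on invented data.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.013                                    # unknown analogue gain error
actual_gain = 2.0 * (1.0 + eps)                # ideal interstage gain is 2

cal_in = rng.uniform(-1.0, 1.0, 10_000)        # known calibration signal
cal_out = actual_gain * cal_in + 1e-4 * rng.normal(size=cal_in.size)

gain_est = np.dot(cal_out, cal_in) / np.dot(cal_in, cal_in)   # LS gain fit
corrected = cal_out / gain_est                 # divide the gain error out digitally
print(f"estimated gain: {gain_est:.5f} (true {actual_gain})")
print(f"residual error std: {np.std(corrected - cal_in):.2e}")
```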

Proceedings ArticleDOI
O. Ait Sab, V. Lemaire
07 Mar 2000
TL;DR: This solution gives 4 dB additional gain margin compared with the existing RS(255,239) with redundancy equal to 28% and is based on soft decoding and soft decision.
Abstract: We present the performance of a forward error correction (FEC) scheme for submarine transmission systems using block turbo codes. The decoding is based on soft decoding and soft decision. This solution gives 4 dB additional gain margin compared with the existing RS(255,239) with redundancy equal to 28%.

Journal ArticleDOI
TL;DR: The performance of fault-tolerant quantum computation with concatenated codes using local gates in small numbers of spatial dimensions is discussed in this article, where it is shown that a threshold result still exists in three, two, or one spatial dimensions when next-to-nearest-neighbour gates are available, and explicit constructions are presented.
Abstract: The performance of fault-tolerant quantum computation with concatenated codes using local gates in small numbers of spatial dimensions is discussed. It is shown that a threshold result still exists in three, two, or one spatial dimensions when next-to-nearest-neighbour gates are available, and explicit constructions are presented. In two or three dimensions, it is also shown how nearest-neighbour gates can give a threshold result. In all cases, it is simply demonstrated that a threshold exists, and no attempt to optimize the error correction circuit or to determine the exact value of the threshold is made. The additional overhead due to the fault-tolerance in both space and time is polylogarithmic in the error rate per logical gate.
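
The polylogarithmic-overhead claim rests on the standard concatenation scaling, stated here in generic form (the paper deliberately avoids computing exact constants): if each level of concatenation maps a per-gate error rate p to roughly p_th(p/p_th)^2, then after k levels

```latex
p_k \;\lesssim\; p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{k}},
\qquad p < p_{\mathrm{th}},
```

so the logical error rate falls doubly exponentially in k while the space-time overhead grows only exponentially in k, that is, polylogarithmically in the target error rate per logical gate.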

Patent
26 Jun 2000
TL;DR: A design-based tool for determining an error correction code (ECC) failure probability of an iterative decoding algorithm provides a technique for testing the effectiveness of the algorithm before the integrated circuit implementing it is built.
Abstract: A design-based tool for determining an error correction code (ECC) failure probability of an iterative decoding algorithm provides a technique for testing the effectiveness of the algorithm before the integrated circuit implementing the algorithm is built.

Proceedings ArticleDOI
28 Mar 2000
TL;DR: In this paper, the authors consider the problem of joint source/channel coding of real-time sources, such as audio and video, for the purpose of multicasting over the Internet.
Abstract: We consider the problem of joint source/channel coding of real-time sources, such as audio and video, for the purpose of multicasting over the Internet. The sender injects into the network multiple source layers and multiple channel (parity) layers, some of which are delayed relative to the source. Each receiver subscribes to the number of source layers and the number of channel layers that optimizes the source-channel rate allocation for that receiver's available bandwidth and packet loss probability. We augment this layered FEC system with layered ARQ. Although feedback is normally problematic in broadcast situations, ARQ is simulated by having the receivers subscribe and unsubscribe to the delayed channel coding layers to receive missing information. This pseudo-ARQ scheme avoids an implosion of repeat requests at the sender, and is scalable to an unlimited number of receivers. We show gains of up to 18 dB on channels with 20% loss over systems without error control, and additional gains of up to 13 dB when FEC is augmented by pseudo-ARQ in a hybrid system. The hybrid system is controlled by an optimal policy for a Markov decision process.
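
A minimal sketch of the pseudo-ARQ subscription logic: a receiver that lost packets joins delayed parity layers (which still carry FEC for the window it missed) instead of sending a repeat request, then leaves them once repaired. Layer naming and the one-parity-repairs-one-loss rule are simplifications of the paper's layered scheme.

```python
class Receiver:
    def __init__(self):
        self.subscribed = {"source"}            # always take the source layer

    def on_window_end(self, losses):
        # Join one delayed parity layer per lost packet; because the layers
        # are delayed relative to the source, they still carry repair data
        # for the window just missed. No feedback reaches the sender.
        for i in range(losses):
            self.subscribed.add(f"parity-{i}-delayed")

    def on_repair_done(self):
        self.subscribed = {"source"}            # unsubscribe again

rx = Receiver()
rx.on_window_end(losses=2)
print(sorted(rx.subscribed))                    # source + two delayed parity layers
rx.on_repair_done()
```

Because joins and leaves are local multicast-group operations, the sender never sees per-receiver requests, which is what makes the scheme scale to unlimited receivers.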

Patent
02 Nov 2000
TL;DR: In this article, a generic structure of Hybrid ARQ using Turbo Codes is provided which requires the function of channel coding, redundancy selection, buffering and maximum-ratio diversity combining, channel decoding, error detection, and sending back an acknowledgement to the transmitter.
Abstract: A generic structure of Hybrid ARQ using Turbo Codes is provided which requires the function of channel coding, redundancy selection, buffering and maximum-ratio diversity combining, channel decoding, error detection, and sending back an acknowledgement to the transmitter (Fig.3). The functions of channel coding and redundancy selection (Fig.1) are performed at the transmitter while the remaining functions are performed at the receiver. The initial code rate can be explicitly communicated to the receiver or blindly detected.
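
A sketch of the receiver-side buffering and diversity combining: for repeated BPSK transmissions over AWGN, maximal-ratio combining is equivalent to summing the per-bit LLRs of each received copy (Chase-style combining). The turbo coding and redundancy selection of the patent are omitted, and the noise level is invented.

```python
import numpy as np

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 2000)
x = 1.0 - 2.0 * bits                      # BPSK: 0 -> +1, 1 -> -1
sigma2 = 1.5                              # made-up noise variance per copy

llr_buffer = np.zeros(bits.size)          # the receiver's soft combining buffer
for attempt in range(1, 4):               # initial transmission + two repeats
    y = x + np.sqrt(sigma2) * rng.normal(size=bits.size)
    llr_buffer += 2.0 * y / sigma2        # MRC == summing per-copy LLRs
    ber = np.mean((llr_buffer < 0) != bits)
    print(f"after transmission {attempt}: BER = {ber:.4f}")
```

Each retransmission roughly adds its SNR to the buffer, which is why the BER printed above drops with every combined copy even before any channel decoding.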

Proceedings ArticleDOI
19 Apr 2000
TL;DR: This work introduces a multiple-description product code which aims at optimally generating multiple, equally-important wavelet image descriptions and offers significant improvements in both peak and expected image quality when compared to current state-of-the-art techniques.
Abstract: This work introduces a multiple-description product code which aims at optimally generating multiple, equally-important wavelet image descriptions. The codes used are a concatenated channel code including a row (outer) code based on RCPC codes with CRC error detection and a source-channel column (inner) code consisting of the scalable SPIHT image coder and an optimized array of unequal protection Reed-Solomon erasure-correction codes. By systematically matching the unequal protection codes to the embedded source bitstream using a simple, fast optimizer that can run in real time, we allow image quality to degrade gracefully as fade worsens and maximize expected image quality at the receiver. This approach to image transmission over fading channels offers significant improvements in both peak and expected image quality when compared to current state-of-the-art techniques. Our packetization scheme is also naturally suited for hybrid packet-network/wireless channels, such as those used for wireless Internet access.

Journal ArticleDOI
TL;DR: It is shown that, for comparable performance, the new method can be implemented with far fewer quantization bits, which can lead to considerably lower decoding cost.
Abstract: This letter is concerned with implementation issues of the sum-product algorithm (SPA) for decoding low density parity check codes. It is shown that a direct implementation of the original form of the SPA is sensitive to quantization effects. We propose a parity likelihood ratio technique to overcome the problem. It is shown that, for comparable performance, the new method can be implemented with far fewer quantization bits, which can lead to considerably lower decoding cost.
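
For reference, this is the check-node computation at the core of the SPA, in the standard LLR ("tanh rule") form whose fixed-point sensitivity the letter analyzes, together with a crude uniform quantizer; the proposed parity likelihood ratio reformulation itself is not reproduced here.

```python
import math

def check_node_update(incoming_llrs):
    """LLR of one bit on a parity check given the LLRs of the other bits:
    tanh(L_out / 2) equals the product of tanh(L_i / 2) over the inputs."""
    prod = 1.0
    for llr in incoming_llrs:
        prod *= math.tanh(llr / 2.0)
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)  # guard atanh domain
    return 2.0 * math.atanh(prod)

def quantize(llr, step=0.5, limit=8.0):
    """Uniform fixed-point quantizer standing in for a hardware word format."""
    return max(-limit, min(limit, round(llr / step) * step))

llrs = [3.2, -1.1, 0.4, 2.7]
exact = check_node_update(llrs)
coarse = check_node_update([quantize(l) for l in llrs])
print(f"exact {exact:+.3f}  vs  with quantized inputs {coarse:+.3f}")
```

Because the update multiplies values near ±1 and then applies atanh, small quantization errors are strongly amplified, which is the sensitivity the letter's reformulation avoids.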

Patent
08 Nov 2000
TL;DR: In this paper, a method and apparatus for encoding and decoding linear block codes while using limited working storage is described, where the error syndromes are calculated incrementally as each encoded symbol is received.
Abstract: A method and apparatus for encoding and decoding linear block codes while using limited working storage is disclosed. For decoding, the error syndromes are calculated incrementally as each encoded symbol is received. That is, as each symbol r_i of the code word is received, it is multiplied by the entries in the i-th column of a decoding matrix, resulting in intermediate syndrome components. The intermediate syndrome components are added to the appropriate entries in a syndrome vector. Once all symbols r_i are received, the syndrome vector contains the error syndromes s_i for the received code word.
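
The incremental computation the patent describes, shown for a binary code: as each received symbol r_i arrives, multiply it by column i of the parity-check matrix and accumulate into the syndrome vector, so the whole codeword never needs to be buffered. The standard Hamming(7,4) check matrix serves as the example code.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def incremental_syndrome(symbol_stream):
    s = np.zeros(H.shape[0], dtype=int)
    for i, r_i in enumerate(symbol_stream):
        s = (s + r_i * H[:, i]) % 2        # fold in column i as r_i arrives
    return s

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # a valid Hamming(7,4) codeword
assert not incremental_syndrome(codeword).any()

corrupted = codeword.copy()
corrupted[4] ^= 1                            # flip one received symbol
print("syndrome:", incremental_syndrome(corrupted))  # column 4 of H: error at 4
```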

Proceedings ArticleDOI
25 Jun 2000
TL;DR: The results show that using the mechanisms allows a fairly high detection probability for errors in the areas monitored by the mechanisms; when only errors causing failure are taken into account, the detection probability is over 99%.
Abstract: In order to be able to tolerate the effects of faults, we must first detect the symptoms of faults, i.e. the errors. This paper evaluates the error detection properties of an error detection scheme based on the concept of executable assertions aiming to detect data errors in internal signals. The mechanisms are evaluated using error injection experiments in an embedded control system. The results show that using the mechanisms allows one to obtain a fairly high detection probability for errors in the areas monitored by the mechanisms. The overall detection probability for errors injected to the monitored signals was 74%, and if only errors causing failure are taken into account we have a detection probability of over 99%. When subjecting the target system to random error injections in the memory areas of the application, i.e., not only the monitored signals, the detection probability for errors that cause failure was 81%.
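
A minimal sketch of executable assertions on an internal signal: each monitored signal carries a validity range and a maximum physically plausible rate of change, and a violation raises the error flag the recovery mechanisms act on. Signal names and bounds are invented.

```python
class SignalMonitor:
    def __init__(self, lo, hi, max_step):
        self.lo, self.hi, self.max_step = lo, hi, max_step
        self.prev = None

    def check(self, value):
        # Executable assertion 1: the value lies in its physical range.
        if not (self.lo <= value <= self.hi):
            return False
        # Executable assertion 2: it changed no faster than physics allows.
        if self.prev is not None and abs(value - self.prev) > self.max_step:
            return False
        self.prev = value
        return True

wheel_speed = SignalMonitor(lo=0.0, hi=300.0, max_step=15.0)
assert wheel_speed.check(100.0)
assert wheel_speed.check(110.0)
assert not wheel_speed.check(250.0)   # a jump of 140: a data error, not physics
```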

Journal ArticleDOI
TL;DR: A technique for packing ATM cells with compressed data is described, whereby the location of missing macroblocks in the encoded video stream can be found; this permits the proper decoding of correctly received macroblocks and thus prevents the loss of ATM cells from affecting the decoding process.
Abstract: When transmitting compressed video over a data network, one has to deal with how channel errors affect the decoding process. This is particularly a problem with data loss or erasures. In this paper we describe techniques to address this problem in the context of asynchronous transfer mode (ATM) networks. Our techniques can be extended to other types of data networks such as wireless networks. In ATM networks channel errors or congestion cause data to be dropped, which results in the loss of entire macroblocks when MPEG video is transmitted. In order to reconstruct the missing data, the location of these macroblocks must be known. We describe a technique for packing ATM cells with compressed data, whereby the location of missing macroblocks in the encoded video stream can be found. This technique also permits the proper decoding of correctly received macroblocks, and thus prevents the loss of ATM cells from affecting the decoding process. The packing strategy can also be used for wireless or other types of data networks. We also describe spatial and temporal techniques for the recovery of lost macroblocks. In particular, we develop several optimal estimation techniques for the reconstruction of missing macroblocks that contain both spatial and temporal information using a Markov random field model. We further describe a sub-optimal estimation technique that can be implemented in real time.
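
A sketch of the packing idea: each cell payload begins with the absolute index of the first macroblock it carries, so after a cell loss the decoder knows exactly which macroblocks are missing and resumes at the next received cell. The one-byte header and toy macroblock sizes are simplifications of the paper's scheme.

```python
def pack(macroblocks, cell_payload=48):        # ATM payloads are 48 bytes
    header = 1                                 # 1-byte "first macroblock" field
    cells, current, first_mb, used = [], [], 0, header
    for i, mb in enumerate(macroblocks):
        if current and used + len(mb) > cell_payload:
            cells.append((first_mb, current))  # close the full cell
            current, used = [], header
        if not current:
            first_mb = i                       # record where this cell starts
        current.append(mb)
        used += len(mb)
    if current:
        cells.append((first_mb, current))
    return cells

def unpack(cells, total_mbs):
    out = [None] * total_mbs                   # None marks lost macroblocks
    for first_mb, mbs in cells:
        for j, mb in enumerate(mbs):
            out[first_mb + j] = mb             # indices survive any cell loss
    return out

mbs = [bytes([i]) * 10 for i in range(20)]     # 20 toy 10-byte macroblocks
cells = pack(mbs)
del cells[1]                                   # one cell dropped in the network
decoded = unpack(cells, len(mbs))
print("lost macroblocks:", [i for i, m in enumerate(decoded) if m is None])
```

The lost macroblocks are then handed to the spatial/temporal recovery stage; everything correctly received still decodes in the right place.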

Patent
17 Mar 2000
TL;DR: In this paper, a decoding system of an error correction code is presented, where a mixed correction of a one-extended Reed-Solomon code in one pass is made possible.
Abstract: In a decoding system of an error correction code, a mixed correction of, for example, a one-extended Reed-Solomon code in one pass is made possible. The number e of erasures and an erasure position polynomial E(x) are obtained from an erasure flag F including an extended erasure flag F^-. A syndrome polynomial S(x) is obtained from a received word R including an extended received symbol R^-, a modified syndrome polynomial S_m(x) is obtained from the polynomials E(x) and S(x), and an error evaluator polynomial ω(x) and an error locator polynomial σ(x) are obtained from S_m(x). An error position i is detected from σ(x). An error value e_i at position i is calculated from ω(x), σ(x) and the error position i; an extended error value e^- is calculated from this error value e_i and the 0th element S_0 of the syndrome S; and presumed information I_p is obtained from the received word R, the error position i, the error value e_i, and the extended error value e^-. A correctability judgement is made using ω(x), σ(x), e, the number p of parity symbols, and the like, and if correction is possible, the presumed information I_p is output.

Journal ArticleDOI
TL;DR: The first DSP implementation, to the authors' knowledge, of the MPEG-4 simple profile video standard is discussed and some general and application-specific DSP instructions on the TMS320C54x DSP are highlighted that enable the core operations in the video coder efficiently, often in a single cycle, with minimal control overhead.
Abstract: We discuss the design and implementation of wireless video communication systems on DSPs. We cover both video coding and the channel coding aspects of the problem. The emphasis of the article is on highlighting the issues involved both from an algorithmic standpoint as well as from a DSP standpoint. We discuss the first DSP implementation, to the authors' knowledge, of the MPEG-4 simple profile video standard. We give an overview of DSPs and highlight the salient features that make them especially well-suited for wireless applications. We describe MPEG-4 simple profile video coding noting that in order to facilitate interoperability, it is important that wireless devices use standardized compression algorithms. We discuss our implementation of the MPEG-4 simple profile video codec on Texas Instruments' TMS320C54x DSP and discuss the issues involved. We highlight some general and application-specific DSP instructions on the TMS320C54x DSP that enable us to implement the core operations in the video coder efficiently, often in a single cycle, with minimal control overhead. We describe the performance of MPEG-4 video compression for a variety of content and formats. We give a description of channel coding with the H.223 standard. Then, we describe several channel coding experiments using both unequal and equal protection on MPEG-4 video sent through a simulated GSM channel. Unequal error protection, which ensures fewer errors in the important portions of the MPEG-4 video bitstream, provides improved quality when compared to equal error protection under harsh error conditions.

Journal ArticleDOI
TL;DR: An adaptive block based intra refresh algorithm for increasing error robustness in an interframe coding system is described, demonstrating a significant improvement in terms of error recovery time over nonadaptive intra update strategies.
Abstract: An adaptive block based intra refresh algorithm for increasing error robustness in an interframe coding system is described. The goal of this algorithm is to allow the intra update rates for different image regions to vary according to various channel conditions and image characteristics. The update scheme is based on an "error-sensitivity metric," accumulated at the encoder, representing the vulnerability of each coded block to channel errors. As each new frame is encoded, the accumulated metric for each block is examined, and those blocks deemed to have an unacceptably high metric are sent using intra coding as opposed to inter coding. This approach requires no feedback channel and is fully compatible with H.263. It involves a negligible increase in encoder complexity and no change in the decoder complexity. Simulations performed using an H.263 bitstream corrupted by channel errors demonstrate a significant improvement in terms of error recovery time over nonadaptive intra update strategies.
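
A sketch of the bookkeeping behind the adaptive refresh: the encoder accumulates an error-sensitivity metric per block while it is inter coded and forces intra coding, resetting the metric, once a threshold is crossed. The update rule (plain accumulation of a made-up per-block activity measure) and the threshold are stand-ins for the paper's metric.

```python
def encode_frame(metrics, activity, threshold=10.0):
    """Choose a coding mode per block for one frame; update metrics in place."""
    modes = []
    for b, act in enumerate(activity):
        if metrics[b] >= threshold:
            modes.append("intra")      # refresh: stops error propagation here
            metrics[b] = 0.0
        else:
            modes.append("inter")      # prediction keeps channel errors alive,
            metrics[b] += act          # so vulnerability keeps accumulating
    return modes

n_blocks = 6
metrics = [0.0] * n_blocks
# Blocks with higher activity are more vulnerable and get refreshed sooner.
activity = [1.0, 1.0, 4.0, 4.0, 8.0, 0.5]
for frame in range(6):
    print(frame, encode_frame(metrics, activity))
```

Because the decision uses only encoder-side state, the scheme needs no feedback channel and leaves the H.263 decoder untouched, as the abstract notes.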

Patent
17 May 2000
TL;DR: In this paper, an error arising after a full stripe write is detected by a difference in sequence numbers for all of the components of user data in the stripe, and the errors in both cases are corrected by using the parity metadata for the entire collection of data and the correct information from the other components of the user data and metadata, and applying this information to an error correcting algorithm.
Abstract: Sequence number metadata which identifies an input/output (I/O) operation, such as a full stripe write on a redundant array of independent disks (RAID) mass storage system, and revision number metadata which identifies an I/O operation such as a read modify write operation on user data recorded in components of the stripe, are used in an error detection and correction technique, along with parity metadata, to detect and correct silent errors arising from inadvertent data path and drive data corruption. An error arising after a full stripe write is detected by a difference in sequence numbers for all of the components of user data in the stripe. An error arising after a read modify write is detected by a revision number that precedes the correct revision number. The errors in both cases are corrected by using the parity metadata for the entire collection of user data and the correct information from the other components of the user data and metadata, and applying this information to an error correcting algorithm. The technique may be executed in conjunction with a read I/O operation without incurring a substantial computational overhead penalty.
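
A sketch of the detection idea on a RAID-5-like stripe, under simplifying assumptions: every block carries the sequence number of the full-stripe write, a block whose number disagrees with the majority is a silent error (for example, a dropped write), and it is rebuilt from the XOR parity of the remaining blocks. Revision-number handling for read-modify-writes is omitted.

```python
def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def write_stripe(data_blocks, seq):
    stripe = [{"seq": seq, "data": d} for d in data_blocks]
    stripe.append({"seq": seq, "data": xor_blocks(data_blocks)})  # parity block
    return stripe

def scrub(stripe):
    seqs = [blk["seq"] for blk in stripe]
    good_seq = max(set(seqs), key=seqs.count)        # majority sequence number
    for i, blk in enumerate(stripe):
        if blk["seq"] != good_seq:                   # stale block: silent error
            others = [b["data"] for j, b in enumerate(stripe) if j != i]
            blk["data"] = xor_blocks(others)         # rebuild from XOR parity
            blk["seq"] = good_seq
    return stripe

stripe = write_stripe([b"AAAA", b"BBBB", b"CCCC"], seq=7)
stripe[1] = {"seq": 6, "data": b"OLD!"}              # a write the drive dropped
scrub(stripe)
assert stripe[1]["data"] == b"BBBB"
```

The point of the metadata is that the corruption is otherwise silent: the stale block reads back without any drive error, and only the sequence mismatch reveals it.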

Journal ArticleDOI
TL;DR: It is shown that a centralized protocol based on type 2 hybrid ARQ comes close to the performance of a protocol with local retransmissions, and that using DER with type 2 hybrid ARQ gives the best performance in terms of bandwidth and latency.
Abstract: We examine the impact of the loss recovery mechanism on the performance of a reliable multicast protocol. Approaches for loss recovery in reliable multicast can be divided into two major classes: centralized (source-based) recovery and distributed recovery. For both classes we consider the state of the art: for centralized recovery, an integrated transport layer scheme using parity multicast for error recovery (hybrid ARQ type 2) as well as timer-based feedback suppression, and for distributed recovery, a scheme with local data multicast retransmission and feedback processing in a local neighborhood. We also evaluate the benefits of combining the two approaches into distributed error recovery (DER) with local retransmissions using a type 2 hybrid ARQ scheme. The schemes are evaluated for up to 10^6 receivers under different loss scenarios with respect to network bandwidth usage and completion time of a reliable transfer. We show that using DER with type 2 hybrid ARQ gives the best performance in terms of bandwidth and latency. For networks where local retransmission is not possible, we show that a centralized protocol based on type 2 hybrid ARQ comes close to the performance of a protocol with local retransmissions.

Patent
Kalliojaervi Kari
08 Feb 2000
TL;DR: A method is provided for reliably receiving digital information from a transmitting device, in which the information is arranged in discrete subunits (201, 202, 203, 204, 301, 302, 303, 304) so that a predetermined number of subunits correspond to a superunit (200, 300).
Abstract: A method is provided for reliably receiving digital information from a transmitting device. The information to be received is arranged in discrete subunits (201, 202, 203, 204, 301, 302, 303, 304) so that a predetermined number of subunits correspond to a superunit (200, 300). It is encoded with a certain error detection code (402), corresponding to a certain error detection decoding method, and additionally with a certain error correction code (403), corresponding to a certain error correction decoding method. According to the invention a superunit is error correction decoded (405), and during the error correction decoding (405), the decoding reliability of each subunit of the superunit to be decoded is separately estimated. The error correction decoded superunit is error detection decoded (406), and during the error detection decoding it is detected, whether or not there were errors in the superunit to be decoded. Partial corrective action (407, 408, 409, 450, 451) is arranged for on the decoded superunit on the basis of the estimated reliabilities of the subunits.

Patent
17 May 2000
TL;DR: In this article, a data structure contains sequence number and revision number metadata which are used in an error detection and correction technique, along with parity metadata, to detect and correct silent errors arising from inadvertent data path and drive data corruption.
Abstract: A data structure contains sequence number metadata which identifies an input/output (I/O) operation such as a full stripe write on a redundant array of independent disks (RAID) mass storage system, and also contains revision number metadata which identifies a subsequent I/O operation such as a read modify write on only a fractional component of the entire user data. The sequence number and revision number metadata are used in an error detection and correction technique, along with parity metadata, to detect and correct silent errors arising from inadvertent data path and drive data corruption. An error to a portion of the stripe is detected by a difference in sequence numbers for all of the components of data. An error arising after an I/O operation is detected by a revision number which is different from the correct revision number. The errors in both cases are corrected by using the parity metadata for the entire collection of user data and the correct information from the other components of the user data and metadata, and applying this information to an error correcting algorithm. The technique may be executed in conjunction with a read I/O operation without incurring a substantial processing overhead penalty.