
Showing papers on "Noisy-channel coding theorem published in 2006"


Journal ArticleDOI
03 Jan 2006
TL;DR: The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.
Abstract: DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low-density parity-check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst-case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, allows optimization of the transmission parameters for each individual user, dependent on path conditions. Backward-compatible modes are also available, allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.
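The idea of adaptive coding and modulation can be illustrated with a small sketch: pick the most efficient mode whose spectral efficiency stays below the Shannon bound for the current SNR, minus an implementation margin. The MODCOD table and margin below are illustrative assumptions, not the actual DVB-S2 mode set.

```python
import math

def shannon_efficiency(snr_db: float) -> float:
    """Ideal AWGN spectral efficiency in bit/s/Hz for a given SNR."""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

# Hypothetical ACM table: (spectral efficiency in bit/s/Hz, label).
# Values are illustrative, not the exact DVB-S2 MODCOD set.
MODCODS = [(0.5, "QPSK 1/4"), (1.0, "QPSK 1/2"), (2.0, "8PSK 2/3"),
           (3.0, "16APSK 3/4"), (4.0, "32APSK 4/5")]

def pick_modcod(snr_db: float, margin_db: float = 1.0):
    """Pick the highest-efficiency mode whose Shannon requirement,
    after subtracting an implementation margin, is met by the SNR."""
    usable = shannon_efficiency(snr_db - margin_db)
    feasible = [m for m in MODCODS if m[0] <= usable]
    return max(feasible) if feasible else MODCODS[0]
```

A real ACM loop would use measured demodulation thresholds per MODCOD rather than the ideal capacity formula, but the selection logic is the same.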

383 citations


Journal ArticleDOI
TL;DR: An analysis under the iterative decoding of coset low-density parity-check codes over GF(q), designed for use over arbitrary discrete-memoryless channels, shows that under a Gaussian approximation, the entire q-1-dimensional distribution of the vector messages is described by a single scalar parameter.
Abstract: We present an analysis under the iterative decoding of coset low-density parity-check (LDPC) codes over GF(q), designed for use over arbitrary discrete-memoryless channels (particularly nonbinary and asymmetric channels). We use a random- coset analysis to produce an effect that is similar to output symmetry with binary channels. We show that the random selection of the nonzero elements of the GF(q) parity-check matrix induces a permutation-invariance property on the densities of the decoder messages, which simplifies their analysis and approximation. We generalize several properties, including symmetry and stability from the analysis of binary LDPC codes. We show that under a Gaussian approximation, the entire q-1-dimensional distribution of the vector messages is described by a single scalar parameter (like the distributions of binary LDPC messages). We apply this property to develop extrinsic information transfer (EXIT) charts for our codes. We use appropriately designed signal constellations to obtain substantial shaping gains. Simulation results indicate that our codes outperform multilevel codes at short block lengths. We also present simulation results for the additive white Gaussian noise (AWGN) channel, including results within 0.56 dB of the unrestricted Shannon limit (i.e., not restricted to any signal constellation) at a spectral efficiency of 6 bits/s/Hz.

281 citations


Journal ArticleDOI
Takashi Mizuochi1
TL;DR: Recent progress in forward error correction (FEC) for optical communications is reviewed and the error count function has proved useful for the adaptive equalization of both chromatic dispersion and PMD.
Abstract: Recent progress in forward error correction (FEC) for optical communications is reviewed. The various types of FEC are classified as belonging to one of three generations. A third-generation FEC, based on a block turbo code, has been fully integrated in very large scale integration, and thanks to the use of 3-bit soft decision, a net coding gain of 10.1 dB was demonstrated experimentally. That has brought a number of positive impacts to existing systems. The Shannon limit is discussed for hard and soft decision decoding. The interplay between FEC and error bursts is discussed. Fast polarization scrambling with FEC has been effective in mitigating polarization mode dispersion (PMD). The error count function has proved useful for the adaptive equalization of both chromatic dispersion and PMD.
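The notion of net coding gain (NCG) quoted above can be made concrete. One common definition compares the Eb/N0 that uncoded BPSK would need at a reference output BER against the coded system's requirement, then credits the code rate. A minimal sketch; the 5 dB coded requirement in the test is a made-up figure, not from the paper.

```python
from math import log10
from statistics import NormalDist

def qinv(p: float) -> float:
    """Inverse Gaussian tail function Q^-1(p)."""
    return -NormalDist().inv_cdf(p)

def uncoded_bpsk_ebno_db(ber: float) -> float:
    """Eb/N0 (dB) at which uncoded BPSK reaches the target BER,
    from BER = Q(sqrt(2 Eb/N0))."""
    return 10 * log10(qinv(ber) ** 2 / 2)

def net_coding_gain_db(ber: float, coded_ebno_db: float, rate: float) -> float:
    """NCG = (uncoded requirement - coded requirement) + 10 log10(R)."""
    return uncoded_bpsk_ebno_db(ber) - coded_ebno_db + 10 * log10(rate)
```

Optical-FEC standards pin the reference BER (e.g. 1e-15) and the exact definition down more carefully; this sketch only shows the structure of the calculation.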

197 citations


Journal ArticleDOI
TL;DR: A more realistic model for the channel is developed here that takes into account the effects of crosstalk, jitter, reflection, inter-symbol interference, and AWGN; interestingly, the proposed signaling schemes are significantly less sensitive to such interference.
Abstract: Increasing demand for high-speed interchip interconnects requires faster links that consume less power. The Shannon limit for the capacity of these links is at least an order of magnitude higher than the data rate of the current state-of-the-art designs. Channel coding can be used to approach the theoretical Shannon limit. Although there are numerous capacity-approaching codes in the literature, the complexity of these codes prohibits their use in high-speed interchip applications. This work studies several suitable coding schemes for chip-to-chip communication and backplane application. These coding schemes achieve 3-dB coding gain in the case of an additive white Gaussian noise (AWGN) model for the channel. In addition, a more realistic model for the channel is developed here that takes into account the effect of crosstalk, jitter, reflection, inter-symbol interference (ISI), and AWGN. Interestingly, the proposed signaling schemes are significantly less sensitive to such interference. Simulation results show coding gains of 5-8 dB for these methods with three typical channel models. In addition, low-complexity decoding architectures for implementation of these schemes are presented. Finally, circuit simulation results confirm that the high-speed implementations of these methods are feasible.

75 citations


Journal ArticleDOI
TL;DR: This work investigates the construction of joint source-channel turbo codes for the reliable communication of binary Markov sources over additive white Gaussian noise and Rayleigh fading channels and shows that the JSC coding system considerably outperforms tandem coding schemes for bit error rates smaller than 10^-4, while enjoying a lower system complexity.
Abstract: We investigate the construction of joint source-channel (JSC) turbo codes for the reliable communication of binary Markov sources over additive white Gaussian noise and Rayleigh fading channels. To exploit the source Markovian redundancy, the first constituent turbo decoder is designed according to a modified version of Berrou's original decoding algorithm that employs the Gaussian assumption for the extrinsic information. Due to interleaving, the second constituent decoder is unable to adopt the same decoding method; so its extrinsic information is appropriately adjusted via a weighted correction term. The turbo encoder is also optimized according to the Markovian source statistics and by allowing different or asymmetric constituent encoders. Simulation results demonstrate substantial gains over the original (unoptimized) turbo codes, hence significantly reducing the performance gap to the Shannon limit. Finally, we show that our JSC coding system considerably outperforms tandem coding schemes for bit error rates smaller than 10^-4, while enjoying a lower system complexity.

61 citations


Journal ArticleDOI
TL;DR: A novel parity-check matrix design for low-density parity-check (LDPC) codes is described, which eliminates the routing problem associated with LDPC codes, and the codes have outstanding error-rate performance close to the Shannon limit.
Abstract: A novel parity-check matrix design for low-density parity-check (LDPC) codes is described. By eliminating the routing problem associated with LDPC codes, the design results in a small implementation area, and the codes have outstanding error-rate performance close to the Shannon limit for a wide range of code rates, from 1/4 to 9/10, and for various modulation schemes such as binary phase-shift keying (PSK), quaternary PSK, 8-PSK, 16-amplitude PSK (APSK), and 32-APSK. As a result, LDPC codes designed with this method have been standardized for next-generation digital video broadcasting.

45 citations


Posted Content
TL;DR: In this paper, the authors trace the evolution of channel coding from Hamming codes to capacity-approaching codes, focusing on the contributions that have led to the most significant improvements in performance vs. complexity for practical applications.
Abstract: Starting from Shannon's celebrated 1948 channel coding theorem, we trace the evolution of channel coding from Hamming codes to capacity-approaching codes. We focus on the contributions that have led to the most significant improvements in performance vs. complexity for practical applications, particularly on the additive white Gaussian noise (AWGN) channel. We discuss algebraic block codes, and why they did not prove to be the way to get to the Shannon limit. We trace the antecedents of today's capacity-approaching codes: convolutional codes, concatenated codes, and other probabilistic coding schemes. Finally, we sketch some of the practical applications of these codes.

25 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: It is shown that any S-W coding problem is dual to a channel coding problem for a semi-symmetric channel for any stationary, ergodic source-side information pair with finite alphabet.
Abstract: In this paper we investigate the relationship between Slepian-Wolf (S-W) coding and channel coding. It is shown that any S-W coding problem is dual to a channel coding problem for a semi-symmetric channel. This result holds for any stationary, ergodic source-side information pair with finite alphabet.

19 citations


Journal ArticleDOI
TL;DR: In this paper, a reliability-based hybrid ARQ (RB-HARQ) scheme with low-density parity-check (LDPC) codes is proposed; LDPC codes can achieve near-Shannon-limit performance and can be decoded more easily than turbo codes.
Abstract: The reliability-based hybrid ARQ (RB-HARQ) scheme, which can be used with error-correcting codes using soft-input soft-output (SISO) decoders such as convolutional codes and turbo codes, has been proposed. In the RB-HARQ scheme, the error rate performance is improved by selecting the retransmission bits based on the log-likelihood ratio (LLR) of each bit in the receiver. However, the receiver has to send the bit positions of the retransmission bits to the transmitter. Therefore, the RB-HARQ scheme requires a large number of feedback bits. On the other hand, low-density parity-check (LDPC) codes are attracting a lot of interest recently, because LDPC codes can achieve near-Shannon-limit performance and can be decoded more easily than turbo codes. In this paper, we evaluate the RB-HARQ scheme using LDPC codes. Moreover, we propose an RB-HARQ scheme that requires fewer feedback bits by utilizing the code structure of LDPC codes. We refer to this scheme as the RB-HARQ (row base) scheme. We show that the RB-HARQ and RB-HARQ (row base) schemes using LDPC codes have better error rate performance than the scheme without ARQ. We also show that the RB-HARQ (row base) scheme has a good trade-off between error rate performance and the number of feedback bits compared to the RB-HARQ scheme.
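The bit-selection rule at the heart of RB-HARQ is simple: request retransmission of the bits whose posterior LLR magnitudes are smallest, i.e. the least reliable decisions. A minimal sketch (the LLR values are made up):

```python
def select_retransmission_bits(llrs, budget):
    """Return the positions of the `budget` least-reliable bits,
    i.e. those with the smallest |LLR|."""
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))
    return sorted(order[:budget])

llrs = [4.2, -0.3, 7.9, 0.1, -5.5, 2.0]
print(select_retransmission_bits(llrs, 2))  # positions 1 and 3
```

The feedback cost the paper worries about is exactly this list of positions; the (row base) variant trades individual positions for whole parity-check rows to shrink it.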

13 citations


Journal ArticleDOI
TL;DR: Low-density parity-check codes achieve coding performance which approaches the Shannon limit and use of generator polynomial reconstruction, partial product multiplication and functional sharing in the parity register results in a highly efficient design.
Abstract: Low-density parity-check codes achieve coding performance which approaches the Shannon limit. An (8158,7136) encoder was implemented in a five-metal, 0.25-μm CMOS process. Use of generator polynomial reconstruction, partial product multiplication and functional sharing in the parity register results in a highly efficient design. Only 1492 flip-flops along with a programmable 21-bit look-ahead scheme are used to achieve an 860-Mb/s data throughput for this rate 7/8 LDPC code. A comparable two-stage encoder requires 8176 flip-flops.

11 citations


Proceedings ArticleDOI
07 May 2006
TL;DR: The theoretical and practicable capacity efficiencies for known-channel MIMO are compared for two idealized channels, with the circulant channel having a higher capacity efficiency than that of the random i.i.d. channel, for practical SNR values.
Abstract: Practical communications signal processing, such as QAM modulations (instead of Gaussian signals), finite block lengths (instead of infinitely long codes), and simpler algorithms (instead of expensive-to-implement ones), results in a lower practicable capacity efficiency than the Shannon limit. In this paper, the theoretical and practicable capacity efficiencies for known-channel MIMO are compared for two idealized channels. The motivation is to identify worthwhile trade-offs between capacity reduction and complexity reduction. The channels are the usual complex Gaussian random i.i.d., and also the complex Gaussian circulant. The comparison reveals new and interesting capacity behaviour, with the circulant channel having a higher capacity efficiency than that of the random i.i.d. channel, for practical SNR values. A circulant channel would also suggest implementation advantages owing to its fixed eigenvectors. Because of the implementation complexity of water filling, the simpler but sub-optimum solution of equal power allocation is investigated and shown to be worthwhile.
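The water-filling versus equal-power trade-off mentioned above can be sketched as follows; the channel gains here are arbitrary illustrative values, not either of the paper's channel models.

```python
import math

def waterfill(gains, total_power, iters=60):
    """Water-filling power allocation: p_i = max(0, mu - 1/g_i),
    with the water level mu found by bisection so sum(p_i) = P."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

def capacity(gains, powers):
    """Sum capacity in bit/s/Hz over parallel Gaussian subchannels."""
    return sum(math.log2(1 + p * g) for p, g in zip(powers, gains))

gains = [2.0, 1.0, 0.1]   # eigenmode gains of a hypothetical channel
P = 3.0
wf = waterfill(gains, P)
eq = [P / len(gains)] * len(gains)
assert capacity(gains, wf) >= capacity(gains, eq)
```

At high SNR the two allocations converge, which is one reason equal power is often a worthwhile simplification.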

Proceedings ArticleDOI
09 Jul 2006
TL;DR: A quantum version of Feinstein's theorem is developed and used to give a completely self-contained proof of the direct channel coding theorem, for transmission of classical information through a quantum channel with Markovian correlated noise.
Abstract: In this paper, a quantum version of Feinstein's Theorem is developed. This is then used to give a completely self-contained proof of the direct channel coding theorem, for transmission of classical information through a quantum channel with Markovian correlated noise. Our proof does not rely on the Holevo-Schumacher-Westmoreland (HSW) Theorem. In addition, for the case of memoryless channels, our method yields an alternative proof of the HSW Theorem.

01 Jan 2006
TL;DR: This paper reviews low-density parity-check codes, repeat-accumulate codes, and turbo codes, emphasising recent advances and describing the empirical power-laws obeyed by decoding times of sparse graph codes.
Abstract: In 1948, Claude Shannon posed and solved one of the fundamental problems of information theory. The question was whether it is possible to communicate reliably over noisy channels, and, if so, at what rate. He defined a theoretical limit, now known as the Shannon limit, up to which communication is possible, and beyond which communication is not possible. Since 1948, coding theorists have attempted to design error-correcting codes capable of getting close to the Shannon limit. In the last decade remarkable progress has been made using codes that are defined in terms of sparse random graphs, and which are decoded by a simple probability-based message-passing algorithm. This paper reviews low-density parity-check codes (Gallager codes), repeat-accumulate codes, and turbo codes, emphasising recent advances. Some previously unpublished results are then presented, describing (a) experiments on Gallager codes with small blocklengths; (b) a stopping rule for decoding of repeat-accumulate codes, which saves computer time and allows block decoding errors to be detected and flagged; and (c) the empirical power-laws obeyed by decoding times of sparse graph codes.

Proceedings ArticleDOI
01 Nov 2006
TL;DR: The performance curves show that as the code rate decreases, both the coding gains and the distances from the Shannon limit increase under the two algorithms, which suggests that non-binary LDPC codes are well suited to high-rate applications.
Abstract: Two different decoding algorithms for non-binary LDPC codes, the fast Fourier transform belief propagation (FFT-BP) decoding algorithm and the log-domain belief propagation (Log-BP) decoding algorithm, are introduced in this paper. The performance of the two decoding algorithms for LDPC codes over GF(q) is investigated at high code rates (1/2~1) over the AWGN channel. The performance curves show that as the code rate decreases, both the coding gains and the distances from the Shannon limit increase under the two algorithms, which suggests that non-binary LDPC codes are well suited to high-rate applications. Meanwhile, it is shown that the Log-BP decoding algorithm can yield a considerable saving in computational complexity at the cost of a small performance loss, which makes the Log-BP decoding algorithm attractive for hardware implementation.

Journal ArticleDOI
TL;DR: The method of extrinsic information transfer charts is extended, that is limited to the case of a concatenation of two component codes, to the cases of multiple turbo codes.
Abstract: In the low signal-to-noise ratio regime, the performance of concatenated coding schemes is limited by the convergence properties of the iterative decoder. Idealizing the model of iterative decoding by an independence assumption, which represents the case in which the codeword length is infinitely large, leads to analyzable structures from which this performance limit can be predicted. Mutual information transfer characteristics of the constituent coding schemes comprising convolutional encoders and soft-in/soft-out decoders have been shown to be sufficient to characterize the components within this model. Analyzing serial and parallel concatenations is possible just by these characteristics. In this paper, we extend the method of extrinsic information transfer charts, that is limited to the case of a concatenation of two component codes, to the case of multiple turbo codes. Multiple turbo codes are parallel concatenations of three or more constituent codes, which, in general, may not be identical and may not have identical code rates. For the construction of low-rate codes, this concept seems to be very favorable, as power efficiencies close to the Shannon limit can be achieved with reasonable complexity.

Proceedings ArticleDOI
01 Jun 2006
TL;DR: Three high-rate LDPC codes are explored for image transmission over a Rayleigh fading channel, showing that they can obtain high information speed and bandwidth efficiency while achieving high reliability and excellent performance in image communication over wireless channels.
Abstract: It has been proved that low-density parity-check (LDPC) codes are good codes with good distance properties. They have near-Shannon-limit performance when decoded using an iterative probabilistic algorithm. Moreover, high-rate LDPC codes have high data rates and high spectral efficiency. So, in this paper, we explore three high-rate LDPC codes for image transmission over a Rayleigh fading channel. The simulation results show that high-rate LDPC codes can obtain high information speed and bandwidth efficiency, and can also achieve high reliability and excellent performance in image communication over wireless channels. We also found that the performance is more sensitive to code rate than to code length.

Journal ArticleDOI
TL;DR: The result indicates that when code length N = 3,072 and code rate R = 1/3, the difference with the Shannon limit is about 2 dB, therefore, the performance of LDPC is also very effective on all kinds of channels, including the Rician kind.
Abstract: This paper analyzes and simulates the performance of irregular low-density parity-check (LDPC) codes on Rician fading channels. The authors also modified the belief propagation decoding algorithm, proved the symmetry and showed the stability conditions of the channels, and calculated the Shannon limits on Rician channels in this paper. By using Visual C++ programming to simulate the performance, the result indicates that when code length N = 3,072 and code rate R = 1/3, the difference with the Shannon limit is about 2 dB. Therefore, the performance of LDPC is also very effective on all kinds of channels, including the Rician kind.
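For reference, the unconstrained-input Shannon limit on the real AWGN channel at code rate R follows from R = (1/2) log2(1 + 2R·Eb/N0); the binary-input limit that papers like this one compare against is slightly higher, and fading shifts it further. A quick computation:

```python
from math import log10

def shannon_limit_ebno_db(rate: float) -> float:
    """Minimum Eb/N0 in dB for reliable communication at code rate
    `rate` (information bits per real channel use) with unconstrained
    Gaussian input, from R = 0.5 * log2(1 + 2*R*Eb/N0)."""
    ebno = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * log10(ebno)
```

At R = 1/3 this gives roughly -0.55 dB, and as R approaches 0 it tends to the well-known ultimate limit of about -1.59 dB (ln 2).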

Proceedings ArticleDOI
01 Dec 2006
TL;DR: An RB-HARQ scheme that requires fewer feedback bits by utilizing the code structure of low-density parity-check (LDPC) codes is proposed, which has better error rate performance and higher throughput performance than the no-ARQ scheme.
Abstract: The reliability-based hybrid ARQ (RB-HARQ) scheme, which can be used with error-correcting codes using soft-input soft-output (SISO) decoders such as convolutional codes and turbo codes, has been proposed. In the RB-HARQ scheme, the error rate performance is improved by selecting retransmission bits based on the log-likelihood ratio (LLR) of each bit in the receiver. However, the receiver has to send the bit positions of the retransmission bits to the transmitter. Therefore, the RB-HARQ scheme requires a large number of feedback bits. On the other hand, low-density parity-check (LDPC) codes are recently attracting a lot of interest, because LDPC codes can achieve near-Shannon-limit performance. In this paper, we evaluate the RB-HARQ scheme using LDPC codes. Moreover, we propose an RB-HARQ scheme that requires fewer feedback bits by utilizing the code structure of LDPC codes. We refer to this scheme as the RB-HARQ (row base) scheme. We show that the RB-HARQ and RB-HARQ (row base) schemes using LDPC codes have better error rate performance and higher throughput performance than the no-ARQ scheme. We also show that the RB-HARQ (row base) scheme has a good trade-off between error rate performance and the number of feedback bits compared to the RB-HARQ scheme.

Proceedings ArticleDOI
24 Apr 2006
TL;DR: A new technique to increase the performances of turbo codes by increasing the constraint length therefore the free distance of the code is presented and it is shown that with the increase of constraint length, one could get good results concerning the reduction of the bit error rate (BER).
Abstract: The increase of performances of turbo codes with short frame is the subject of a lot of research. Several techniques have been elaborated such as the choice of good interleaver or the use of a coder with high coding rate. In this paper, we present a new technique to increase the performances of turbo codes by increasing the constraint length therefore the free distance of the code. We could show that with the increase of constraint length, we could get good results concerning the reduction of the bit error rate (BER). We examine the performances in bit error rate (BER) of turbo codes for coding rates 1/2 and 1/3 with the use of variables constraint lengths. We used a Gaussian channel and a BPSK modulation for signal transmission. For decoding we use soft output Viterbi algorithm (SOVA)

Proceedings ArticleDOI
03 Apr 2006
TL;DR: This paper shows how joint demapping for multilevel modulation and source decoding using the BCJR algorithm on a Markov source reduce the source symbol error rate (SSER), compared to the subsystems working independently.
Abstract: In this paper we show how joint demapping for multilevel modulation and source decoding using the BCJR algorithm on a Markov source reduce the source symbol error rate (SSER), compared to the subsystems working independently. Simulations demonstrate that at low Eb/N0 values, our proposal performs better than systems that consider source compression and channel coding with short code lengths if there is redundancy in the source information to be used. Iterative and joint iterative systems using LDPC, turbo codes or the turbo principle with a convolutional code show excellent bit error rate performance and approach the Shannon limit as the code length is increased. However, when the size of the code is relatively short, their performance is poor at low Eb/N0 values, becoming worse than the uncoded signal at some points.

Journal ArticleDOI
TL;DR: An efficient, iterative soft-in-soft-out decoding scheme is employed for parallel and serially concatenated single parity check (SPC) product codes; the scheme has very low complexity, requiring only two addition-equivalent operations per information bit.
Abstract: An efficient, iterative soft-in-soft-out decoding scheme is employed for parallel and serially concatenated single parity check (SPC) product codes; the scheme has very low complexity, requiring only two addition-equivalent operations per information bit. For a rate-0.8637 parallel concatenated SPC product code, a performance of BER = 10^-5 at Eb/N0 = 3.66 dB can be achieved using this decoding scheme, which is within 1 dB of the Shannon limit.
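The very low per-bit cost reflects how cheap a single parity-check update is under the min-sum rule: each extrinsic output is the product of the other inputs' signs times the minimum of the other inputs' magnitudes. A sketch of that check-node update, using the min-sum approximation of the exact tanh rule (the example LLRs are made up):

```python
import math

def spc_extrinsic(llrs):
    """Extrinsic LLRs for one single-parity-check constraint (min-sum):
    for each bit i, sign = product of the other signs, magnitude =
    minimum of the other magnitudes."""
    signs = [1 if l >= 0 else -1 for l in llrs]
    total_sign = math.prod(signs)
    mags = [abs(l) for l in llrs]
    m1 = min(mags)                                   # smallest magnitude
    i1 = mags.index(m1)
    m2 = min(m for j, m in enumerate(mags) if j != i1)  # second smallest
    out = []
    for i, s in enumerate(signs):
        m = m2 if i == i1 else m1
        out.append(total_sign * s * m)
    return out

print(spc_extrinsic([2.0, -1.0, 0.5, 3.0]))
```

Only the two smallest magnitudes and a running sign are needed, which is where the low operation count per bit comes from.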

Journal Article
TL;DR: Simulation results show that BIRACM outperforms traditional trellis-coded modulation (TCM) and BICM, and that bit-interleaved coded modulation can increase the spectral efficiency of IRA schemes.
Abstract: Irregular repeat-accumulate (IRA) codes have natural linear-time encoding and decoding algorithms, and their performance is very close to the Shannon limit. This paper presents IRA schemes using bit-interleaved coded modulation (BIRACM), which can increase the spectral efficiency, and simulates BIRACM on AWGN and Rayleigh channels. Simulation results show that the performance of BIRACM outperforms traditional trellis-coded modulation (TCM) and BICM.

01 Jan 2006
TL;DR: Simulation results show that the system proposed in this paper has excellent multiple-access capacity similar to the convolutional-coded IDMA system, and can acquire greater coding gain relative to the latter system for short- and middle-frame transmission over three channels at the given bit error rate (BER) levels.
Abstract: In this paper, regular (3,6) LDPC codes are applied to an IDMA system (called the LDPC-coded IDMA system), and a performance comparison between IDMA systems based on LDPC codes and based on convolutional codes is carried out over different transmission environments without power allocation. Simulation results show that the system proposed in this paper has excellent multiple-access capacity similar to the latter system, and can acquire greater coding gain relative to the latter system for short- and middle-frame transmission over three channels at bit error rate (BER) levels of 10^-3 and 10^-6 when the number of users is 16. Thus, higher-quality data services are ensured by the former system. Meanwhile, this indicates that voice service may be supported better by the former system over short-frame transmission once the delay of the receiver hardware is further reduced. Keywords: IDMA, LDPC codes, iterative receiver, wireless channels. I. INTRODUCTION: ... algorithm [10], which is essentially a low-cost iterative soft-cancellation technique [11]. Low-density parity-check (LDPC) codes are excellent error-correcting codes, and their performance is almost as close to the Shannon limit as that of turbo codes. In paper [12], convolutional codes are used in an IDMA system to verify the multiple-access performance of the system over the AWGN channel and a quasi-static Rayleigh fading multipath channel. In this paper, regular (3,6) LDPC codes [13] are applied to the IDMA system. A performance comparison between LDPC-coded and convolutional-coded IDMA systems is carried out to verify the performance advantage of the proposed system under AWGN and two other wireless transmission environments when power allocation is not considered. This paper is organized as follows. The system model for LDPC-coded IDMA is introduced in Section II. In Section III, computer simulation results are given and the performance of the proposed system is also analyzed. Section IV contains the conclusions.

Proceedings ArticleDOI
01 Nov 2006
TL;DR: Experimental results for the 10 standard AIRS ultraspectral test granules show that 3DWT-RVLC produces significantly fewer erroneous pixels than JPEG 2000 Part 2 due to errors remaining after DVB-S2 LDPC channel decoding of burst errors.
Abstract: Satellite data transmission may suffer from random and burst errors. Our previous study [1] shows that 3D wavelet transform with reversible variable-length coding (3DWT-RVLC) [1] has significantly better error resilience than JPEG 2000 Part 2 [2,3] against 1-bit random errors remaining after channel decoding. In this paper, we investigate the burst error correction performance of the Digital Video Broadcasting - Second Generation (DVB-S2) standard for the 3DWT-RVLC-compressed ultraspectral sounder data. DVB-S2 employs low-density parity-check (LDPC) codes, which have been shown to perform close to the Shannon limit. We also study the error contamination after 3DWT-RVLC source decoding. Experimental results for the 10 standard AIRS ultraspectral test granules show that 3DWT-RVLC produces significantly fewer erroneous pixels than JPEG 2000 Part 2 due to errors remaining after DVB-S2 LDPC channel decoding of burst errors.

Journal Article
Chen Wei1
TL;DR: This paper combines OFDM with LDPC, simulates and analyses the performance of the LDPC-COFDM system with different modulations, and concludes that the LDPC-COFDM method is effective for high-speed data transmission.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a multi-carrier modulation technique, which has proved to be a main method for high-speed data transmission. It is widely used in WAN and HDTV. LDPC (low-density parity-check) is an effective error-correcting code. Its performance is very close to the Shannon limit and it has lower decoding complexity. This paper combines OFDM with LDPC. It simulates and analyses the performance of the LDPC-COFDM system with different modulations. It also compares LDPC-COFDM with Turbo-COFDM in dissimilar channels. It concludes that the LDPC-COFDM method is effective for high-speed data transmission.

Journal Article
TL;DR: A better algorithm is proposed that reduces the amount of computation by curve fitting when updating the check-node messages; it is useful for hardware realization and can reduce decoding delay.
Abstract: Low-density parity-check (LDPC) codes are a class of linear block codes that can approach the Shannon limit, and their decoding complexity is low. The BP decoding algorithm has good performance among the decoding algorithms, and it can determine during the iterative decoding process whether the codes have been decoded, but it is complicated and requires a large amount of computation. In this paper, a better algorithm is proposed that reduces the amount of computation by curve fitting when updating the check-node computation; it is useful for hardware realization and can reduce decoding delay. Computer simulation results show that the proposed algorithm has performance similar to the traditional BP algorithm while being less complicated.

Journal ArticleDOI
TL;DR: A time-delay feedback control method to stabilize the fixed point is proposed, and simulation results show that this method achieves a 0.1-0.3 dB improvement for turbo codes under low-SNR conditions.
Abstract: Turbo codes can approach the Shannon limit very closely with the help of their special iterative decoding algorithm. This paper establishes a nonlinear dynamic system to analyze the relationship between the turbo decoding output and the number of iterations. Here, the number of iterations is taken as the time axis, the decoding output as the state variable, and the SNR and the number of information bits N as system parameters. It is shown that as the SNR increases, the decoding algorithm undergoes three stages, namely the indecisive fixed point, the singular region and the unequivocal fixed point. Bifurcations occur during the transformation from the indecisive fixed point to the singular region. It is first proposed that fold, period-doubling and Neimark-Sacker bifurcations may all occur, depending on the value of N. In the singular region, phase trajectories may appear as period-two, period-three, quasiperiodic and chaotic. This paper first observed and confirmed the existence of period-three and chaos. The singular region deteriorates the performance of turbo codes under low SNR. This paper proposes a time-delay feedback control method to stabilize the fixed point. Simulation results show that this method achieves a 0.1-0.3 dB improvement for turbo codes under low-SNR conditions.
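The stabilization idea, adding a time-delayed feedback term that vanishes on the fixed point, can be illustrated on a toy map. This is a Pyragas-style sketch on the logistic map, not the authors' actual control law for the turbo decoder; the gain k = -0.6 is chosen by hand for this example.

```python
def logistic(x, r=3.2):
    return r * x * (1 - x)

def iterate(n, x0=0.3, k=0.0):
    """Iterate the logistic map with a time-delayed feedback term
    k*(x[n-1] - x[n]) added to each update; the term is zero on any
    fixed point, so the control does not move the target state."""
    x_prev, x = x0, x0
    for _ in range(n):
        x_prev, x = x, logistic(x) + k * (x_prev - x)
    return x

fixed_point = 1 - 1 / 3.2   # unstable fixed point of the map, 0.6875

# Without control the trajectory settles on a period-2 oscillation;
# with k = -0.6 the delayed feedback stabilizes the fixed point.
assert abs(iterate(500, k=-0.6) - fixed_point) < 1e-6
assert abs(iterate(500, k=0.0) - fixed_point) > 0.05
```

The same qualitative mechanism, feedback proportional to the change between successive iterations, is what damps the oscillatory behaviour of the decoder's trajectory in the singular region.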

01 Jan 2006
TL;DR: The method of extrinsic information transfer charts is extended to the case of multiple turbo codes, which seems to be very favorable, as power efficiencies close to the Shannon limit can be achieved with reasonable complexity.
Abstract: In the low signal-to-noise ratio regime, the performance of concatenated coding schemes is limited by the convergence properties of the iterative decoder. Idealizing the model of iterative decoding by an independence assumption, which represents the case in which the codeword length is infinitely large, leads to analyzable structures from which this performance limit can be predicted. Mutual information transfer characteristics of the constituent coding schemes comprising convolutional encoders and soft-in/soft-out decoders have been shown to be sufficient to characterize the components within this model. Analyzing serial and parallel concatenations is possible just by these characteristics. In this paper, we extend the method of extrinsic information transfer charts, which is limited to the case of a concatenation of two component codes, to the case of multiple turbo codes. Multiple turbo codes are parallel concatenations of three or more constituent codes, which, in general, may not be identical and may not have identical code rates. For the construction of low-rate codes, this concept seems to be very favorable, as power efficiencies close to the Shannon limit can be achieved with reasonable complexity.
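The central tool behind such mutual information transfer characteristics is the J(σ) function, which maps the standard deviation of a consistent Gaussian LLR to the mutual information it carries. A hedged numerical sketch is given below, together with the standard variable-node transfer characteristic for an AWGN channel; the quadrature settings, the degree dv = 3, and the channel parameter are illustrative choices, not values from the paper.

```python
import math

def J(sigma, steps=4000):
    # I(X; L) for X uniform on {+1,-1} and L Gaussian with mean sigma^2/2
    # and variance sigma^2 (the consistent-Gaussian LLR model), computed
    # by trapezoidal quadrature of 1 - E[log2(1 + e^{-L})]
    if sigma < 1e-9:
        return 0.0
    mu = sigma * sigma / 2.0
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (hi - lo) / steps
    acc = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        pdf = math.exp(-(x - mu) ** 2 / (2.0 * sigma * sigma)) \
              / math.sqrt(2.0 * math.pi * sigma * sigma)
        acc += w * pdf * math.log2(1.0 + math.exp(-x)) * h
    return 1.0 - acc

def J_inv(i_target, lo=0.0, hi=40.0):
    # Invert the (monotonically increasing) J by bisection
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if J(mid) < i_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def vnd_exit(i_a, dv=3, sigma_ch=1.0):
    # Transfer characteristic of a degree-dv variable node: combine
    # dv-1 a priori messages with the channel LLR and re-apply J
    s = math.sqrt((dv - 1) * J_inv(i_a) ** 2 + sigma_ch ** 2)
    return J(s)

i_e0 = vnd_exit(0.0)  # with no a priori information, reduces to J(sigma_ch)
```

Plotting vnd_exit against the inverted characteristic of a second (or, for multiple turbo codes, further) component decoder on the same axes yields the EXIT chart whose open tunnel predicts convergence.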

Proceedings ArticleDOI
30 Oct 2006
TL;DR: If symbol-by-symbol reliability is available, then an erasure correction code, such as information dispersal algorithm (IDA), may perform as well as TPC in terms of block error rate for large blocks, while the TPC will always have a lower bit error rate.
Abstract: This paper applies erasure correction codes to powerline communications and compares them to Turbo Product Codes (TPC). The TPC offers near-Shannon-limit performance on the Gaussian channel. However, on a channel with burst noise, the TPC must detect the burst-noise boundaries in order to assign lower reliability to burst errors. If symbol-by-symbol reliability is available, an erasure correction code, such as the Information Dispersal Algorithm (IDA), may perform as well as the TPC in terms of block error rate for large blocks, although the TPC will always have a lower bit error rate. An approximation for the performance of IDA is also given.
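The paper does not spell out its IDA construction. As a hedged sketch of the defining "any m of n pieces reconstruct the data" property, the snippet below uses polynomial evaluation over a prime field, which is essentially a Reed-Solomon-style erasure code; the field size, function names, and example data are illustrative assumptions.

```python
P = 257  # prime larger than 255, so every byte value is a field element

def lagrange_eval(points, x):
    # Evaluate, at x, the unique polynomial of degree < len(points)
    # passing through `points`, with all arithmetic mod P
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for k, (xk, _) in enumerate(points):
            if k != j:
                num = num * ((x - xk) % P) % P
                den = den * ((xj - xk) % P) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def disperse(data, n):
    # Treat data[k] as the polynomial's value at x = k+1, then emit
    # n shares (x, value); any len(data) of them suffice to reconstruct
    pts = [(k + 1, v) for k, v in enumerate(data)]
    return [(i, lagrange_eval(pts, i)) for i in range(1, n + 1)]

def reconstruct(shares, m):
    # Interpolate through any m surviving shares and read the
    # data back off the evaluation points x = 1 .. m
    return [lagrange_eval(shares[:m], k + 1) for k in range(m)]

data = [10, 200, 33, 7]
shares = disperse(data, 7)                       # survives up to 3 erasures
survivors = [shares[6], shares[2], shares[4], shares[1]]
recovered = reconstruct(survivors, 4)            # equals data
```

Because the first m evaluation points coincide with the data itself, the code is systematic: when no erasures occur, the data can be read directly from the first m shares.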

Proceedings ArticleDOI
TL;DR: This study investigates the burst error correction performance of LDPC codes via the new Digital Video Broadcasting - Second Generation (DVB-S2) standard for ultraspectral sounder data compressed by 3DWT-RVLC and studies the error contamination after 3D WT- RVLC source decoding.
Abstract: A previous study shows that 3D Wavelet Transform with Reversible Variable-Length Coding (3DWT-RVLC) has much better error resilience than JPEG2000 Part 2 against 1-bit random errors remaining after channel decoding. Errors in satellite channels may have burst characteristics. Low-density parity-check (LDPC) codes are known for excellent error-correction capability, with performance near the Shannon limit. In this study, we investigate the burst error correction performance of the LDPC codes of the new Digital Video Broadcasting - Second Generation (DVB-S2) standard for ultraspectral sounder data compressed by 3DWT-RVLC. We also study the error contamination after 3DWT-RVLC source decoding. Statistics show that 3DWT-RVLC produces significantly fewer erroneous pixels than JPEG2000 Part 2 for ultraspectral sounder data compression.
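Burst-error behavior of the kind described for satellite channels is commonly modeled with a two-state Gilbert-Elliott channel. The sketch below simulates one; all transition and error probabilities are illustrative assumptions, not parameters from this study or from DVB-S2.

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.1,
                    err_good=0.001, err_bad=0.3, seed=1):
    # Two-state burst-noise channel: a 'good' state with rare bit errors
    # and a 'bad' state with frequent ones; the Markov state transitions
    # make the errors arrive in bursts rather than independently.
    rng = random.Random(seed)
    errors, bad = [], False
    for _ in range(n_bits):
        if bad:
            if rng.random() < p_bg:   # bad -> good
                bad = False
        else:
            if rng.random() < p_gb:   # good -> bad
                bad = True
        p = err_bad if bad else err_good
        errors.append(1 if rng.random() < p else 0)
    return errors

errs = gilbert_elliott(100_000)
rate = sum(errs) / len(errs)   # overall bit error rate, roughly 3%

# longest run of consecutive bit errors, a crude burstiness indicator
longest, run = 0, 0
for e in errs:
    run = run + 1 if e else 0
    longest = max(longest, run)
```

Feeding such correlated error patterns, rather than i.i.d. errors, into an LDPC decoder is what distinguishes a burst-error evaluation like the one described above from a standard AWGN test.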