
Showing papers on "Noisy-channel coding theorem published in 2013"


Journal ArticleDOI
TL;DR: It is shown that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the nonasymptotic regime.
Abstract: This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the nonasymptotic regime. A joint source-channel code maps a block of k source symbols onto a length-n channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability ε that the distortion exceeds a given threshold d. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy nC - kR(d) ≈ √(nV + kV(d)) Q^{-1}(ε), where C and V are the channel capacity and channel dispersion, respectively; R(d) and V(d) are the source rate-distortion and rate-dispersion functions; and Q is the standard Gaussian complementary cumulative distribution function. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper, we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the nonasymptotic regime.
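The Gaussian approximation quoted in the abstract is easy to evaluate numerically. A minimal sketch with illustrative parameter values (not taken from the paper; `jscc_blocklength_excess` is a hypothetical helper name):

```python
from math import sqrt
from statistics import NormalDist

def jscc_blocklength_excess(n, k, C, V, R_d, V_d, eps):
    """Right-hand side of the approximation nC - kR(d) ≈ sqrt(nV + kV(d)) * Q^{-1}(eps):
    the excess of channel 'supply' nC over source 'demand' kR(d) needed at
    blocklength n to keep the excess-distortion probability at eps."""
    q_inv = NormalDist().inv_cdf(1 - eps)  # Q^{-1}(eps) = Phi^{-1}(1 - eps)
    return sqrt(n * V + k * V_d) * q_inv

# Illustrative values only: a channel with C = 0.5 bit, V = 0.25, and a
# source with R(d) = 0.4 bit, V(d) = 0.2, at excess-distortion prob. 1e-3
gap = jscc_blocklength_excess(n=1000, k=900, C=0.5, V=0.25, R_d=0.4, V_d=0.2, eps=1e-3)
```

With these numbers the required backoff nC - kR(d) is on the order of 64 bits, which is the kind of finite-blocklength penalty the paper quantifies.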

141 citations


Posted Content
TL;DR: In this paper, a block Markov superposition transmission (BMST) is proposed to construct big convolutional codes from short codes, which can be implemented as an iterative sliding-window decoding algorithm with a tunable delay.
Abstract: A construction of big convolutional codes from short codes, called block Markov superposition transmission (BMST), is proposed. BMST is very similar to superposition block Markov encoding (SBME), which has been widely used to prove multiuser coding theorems. The encoding process of BMST can be as fast as that of the involved short code, while the decoding process can be implemented as an iterative sliding-window decoding algorithm with a tunable delay. More importantly, the performance of BMST can be simply lower-bounded in terms of the transmission memory, given that the performance of the short code is available. Numerical results show that: 1) the lower bounds can be matched with a moderate decoding delay in the low bit-error-rate (BER) region, implying that the iterative sliding-window decoding algorithm is near optimal; 2) BMST with repetition codes and single parity-check codes can approach the Shannon limit within 0.5 dB at a BER of 10^{-5} for a wide range of code rates; and 3) BMST can also be applied to nonlinear codes.

78 citations


Proceedings ArticleDOI
05 Jun 2013
TL;DR: This work proposes non-uniform QAM constellations with modulation orders up to 1024k-QAM, approaching the Shannon limit to within 0.036 b/s/Hz at 29 dB SNR, corresponding to a shortcoming of 0.108 dB.
Abstract: Spectrally efficient transmission on the physical layer is a crucial attribute of state-of-the-art transmission systems. The latest broadcast and terrestrial point-to-point systems such as DVB-T2, DVB-C2 or LTE use LDPC or Turbo codes jointly with QAM constellations to achieve capacity approaching the Shannon limit. However, since the gap between the BICM capacity of conventional uniform QAM constellations and the Shannon limit increases for larger SNR, constellation shaping is required to closely approach the Shannon limit when leaving the low-SNR region. We propose non-uniform QAM constellations with modulation orders up to 1024k-QAM, approaching the Shannon limit to within 0.036 b/s/Hz at 29 dB SNR, corresponding to a shortcoming of 0.108 dB. To maintain low-complexity one-dimensional demapping of each component, the symmetry of the constellations is kept. In addition, we propose quadrant-symmetric non-uniform QAM constellations optimized in both dimensions to improve the performance of the constellations at low SNR.
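The two gap figures in the abstract are consistent with the AWGN capacity formula. A quick check (standard formula, not code from the paper):

```python
from math import log2

def awgn_capacity(snr_db):
    """Shannon capacity of the complex AWGN channel, in b/s/Hz."""
    return log2(1 + 10 ** (snr_db / 10))

c = awgn_capacity(29.0)  # capacity at 29 dB SNR, roughly 9.6 b/s/Hz

# At high SNR, capacity grows by log2(10)/10 ≈ 0.332 b/s/Hz per dB of SNR,
# so a gap of 0.036 b/s/Hz corresponds to roughly 0.11 dB of SNR shortcoming,
# matching the 0.108 dB quoted in the abstract.
gap_db = 0.036 / (log2(10) / 10)
```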

77 citations


Journal ArticleDOI
TL;DR: A newly designed quarter-rate quasi-cyclic low-density parity-check (LDPC) code for the cloud transmission system that is only 0.6 dB away from the Shannon limit.
Abstract: In this brief, we propose a newly designed quarter-rate quasi-cyclic low-density parity-check (LDPC) code for the cloud transmission system. The cloud transmission system requires near-Shannon-limit low-rate coding for the next-generation terrestrial digital television system. The proposed LDPC code is optimized for R = 1/4. At a code length of 64K, it is only 0.6 dB away from the Shannon limit.
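For context, the Shannon limit that the 0.6 dB gap is measured against is straightforward to compute for a rate-1/4 code. A sketch, assuming the usual unconstrained-input real AWGN channel benchmark:

```python
from math import log10

def shannon_limit_ebno_db(rate):
    """Minimum Eb/N0 in dB for reliable transmission at code rate `rate`
    over the real AWGN channel with unconstrained input, from
    C = 0.5*log2(1 + 2*R*Eb/N0) >= R, i.e. Eb/N0 >= (2^(2R) - 1) / (2R)."""
    return 10 * log10((2 ** (2 * rate) - 1) / (2 * rate))

limit = shannon_limit_ebno_db(0.25)  # roughly -0.8 dB for rate 1/4
```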

60 citations


Journal ArticleDOI
TL;DR: In this paper, the authors determined the range of feasible values of standard error exponents for binary-input memoryless symmetric channels of fixed capacity C and showed that extremes are attained by the binary symmetric and the binary erasure channels.
Abstract: This paper determines the range of feasible values of standard error exponents for binary-input memoryless symmetric channels of fixed capacity C, and shows that the extremes are attained by the binary symmetric and binary erasure channels. The proof technique also provides analogous extremes for other quantities related to Gallager's E0 function, such as the cutoff rate, the Bhattacharyya parameter, and the channel dispersion.
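The extremal comparison can be illustrated numerically for one of the quantities named in the abstract, the Bhattacharyya parameter: fix a capacity and compare the BSC and BEC of that capacity. A sketch using standard formulas (the helper names are illustrative):

```python
from math import log2, sqrt

def h2(p):
    """Binary entropy function in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_crossover(capacity, tol=1e-12):
    """Crossover probability p solving 1 - h2(p) = capacity, by bisection
    (1 - h2 is decreasing on (0, 1/2))."""
    lo, hi = 1e-15, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - h2(mid) > capacity:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

C = 0.5
p = bsc_crossover(C)
z_bsc = 2 * sqrt(p * (1 - p))  # Bhattacharyya parameter of the BSC
z_bec = 1 - C                  # BEC of capacity C erases w.p. 1-C, and Z = 1-C
```

At capacity 0.5 the BSC's Bhattacharyya parameter (about 0.63) exceeds the BEC's (0.5), consistent with these two channels sitting at opposite extremes.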

23 citations


Proceedings ArticleDOI
28 Mar 2013
TL;DR: A 1.15Gb/s fully parallel decoder of a (960, 480) regular-(2, 4) NB-LDPC code over GF(64) in 65nm CMOS is presented, which permits 87% logic utilization that is significantly higher than a fully parallel binary LDPC decoder.
Abstract: The primary design goal of a communication or storage system is to allow the most reliable transmission or storage of more information at the lowest signal-to-noise ratio (SNR). State-of-the-art channel codes including turbo and binary LDPC have been extensively used in recent applications [1-2] to close the gap towards the lowest possible SNR, known as the Shannon limit. The recently developed nonbinary LDPC (NB-LDPC) code, defined over a Galois field (GF), holds great promise for approaching the Shannon limit [3]. It offers better coding gain and a lower error floor than binary LDPC. However, the complex nonbinary decoding has prevented any practical chip implementation to date. A handful of FPGA designs and chip synthesis results have demonstrated throughputs of up to only 50Mb/s [4-6]. In this paper, we present a 1.15Gb/s fully parallel decoder of a (960, 480) regular-(2, 4) NB-LDPC code over GF(64) in 65nm CMOS. The natural bundling of global interconnects and an optimized placement permit 87% logic utilization, significantly higher than that of a fully parallel binary LDPC decoder [7]. To achieve high energy efficiency, each processing node detects its own convergence and applies dynamic clock gating, and the decoder terminates when all nodes are clock gated. The dynamic clock gating and termination reduce the energy consumption by 62%, for an energy efficiency of 3.37nJ/b, or 277pJ/b/iteration, at a 1V supply.

17 citations


Journal ArticleDOI
TL;DR: In this paper, the generalized mutual information (GMI) of bit-interleaved coded modulation (BICM) systems, sometimes called the BICM capacity, was investigated at low signal-to-noise ratio (SNR).
Abstract: The generalized mutual information (GMI) of bit-interleaved coded modulation (BICM) systems, sometimes called the BICM capacity, is investigated at low signal-to-noise ratio (SNR). The combinations of input alphabet, input distribution, and binary labeling that achieve the Shannon limit of -1.59 dB are completely characterized. The main conclusion is that a BICM system with probabilistic shaping achieves the Shannon limit at low SNR if and only if it can be represented as a zero-mean linear projection of a hypercube. Hence, probabilistic shaping offers no extra degrees of freedom to optimize the low-SNR BICM-GMI, in addition to what is provided by geometrical shaping. The analytical conclusions are confirmed by numerical results, which also show that for a fixed input alphabet, probabilistic shaping can improve the BICM-GMI in the low and medium SNR range.
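The -1.59 dB figure quoted here is the ultimate low-SNR Shannon limit, Eb/N0 = ln 2. It falls out of the minimum-Eb/N0 formula as the spectral efficiency goes to zero:

```python
from math import log10

def min_ebno(rate):
    """Minimum Eb/N0 (linear) at spectral efficiency `rate` b/s/Hz on the
    AWGN channel: Eb/N0 = (2^rate - 1) / rate."""
    return (2 ** rate - 1) / rate

# As rate -> 0 this approaches ln 2 ≈ 0.693, i.e. about -1.59 dB.
ebno_min_db = 10 * log10(min_ebno(1e-9))
```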

15 citations


Proceedings ArticleDOI
01 Nov 2013
TL;DR: The encoding and decoding of IEEE 802.11n LDPC codes on GPPs are presented; layered decoding significantly reduces the number of iterations and is well suited to SIMD instructions, and results show that the throughput can meet the protocol timing requirement while maintaining decoding performance.
Abstract: Recently, general-purpose processor (GPP) software-defined radio (SDR) platforms have drawn great attention for their programmability and flexibility, and some high-speed wireless protocol stacks (e.g., IEEE 802.11a/b/g) have been implemented on them using commodity general-purpose PCs. Low-density parity-check (LDPC) codes are optionally used in the IEEE 802.11n high-throughput (HT) system as a high-performance error-correcting code, instead of convolutional codes, for their near-Shannon-limit performance. In order to complete the implementation of IEEE 802.11n on SDR platforms, this paper presents the encoding and decoding of IEEE 802.11n LDPC codes on GPPs. We extensively use the features of contemporary processor architectures to accelerate data processing, including large low-latency caches to store lookup tables and SIMD processing on GPPs. Layered decoding is used in this paper; it significantly reduces the number of iterations and is well suited to SIMD instructions. The implementation results show that the throughput can meet the protocol timing requirement while maintaining decoding performance.

12 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on coherently detected, non differentially encoded, polarisation division multiplexed 100G communication systems and assess the net coding gain achievable by various constellations that can be used in lieu of the conventional quadrature phase-shift keying.
Abstract: In this letter, we focus on coherently detected, non-differentially encoded, polarisation-division-multiplexed 100G communication systems; by incorporating phase noise in the channel model, we assess the net coding gain achievable by various constellations that can be used in lieu of conventional quadrature phase-shift keying (QPSK). Whereas the optical community has put much effort into the optimization of forward error correction codes for QPSK, trying to reduce the gap to the Shannon limit, we show that a potential gain of more than 2 dB is still available if different constellations are adopted.

8 citations


Dissertation
15 Mar 2013
TL;DR: This work proposes a high-throughput, low-complexity radix-16 SISO decoder intended to break the bottleneck caused by the recursive operations at the heart of the turbo decoding algorithm, and proposes two techniques that make the architecture usable in practical applications.
Abstract: Turbo codes are a well-known channel coding technique, widely used because of their outstanding error decoding performance close to the Shannon limit. These codes were proposed using a clever pragmatic approach in which a set of previously introduced concepts, together with iterative processing of data, are successfully combined to obtain close-to-optimal decoding performance. However, precisely because of this iterative processing, high latency values appear and the achievable decoder throughput is limited. At the beginning of our research activities, the fastest turbo decoder architecture in the literature achieved a peak throughput of around 700 Mbit/s; several other works proposed architectures achieving throughputs around 100 Mbit/s. Research opportunities were therefore available to establish architectural solutions that enable decoding at a few Gbit/s, so that industrial requirements are fulfilled and future high-performance digital communication systems can be conceived. The first part of this work is devoted to the study of turbo codes at the algorithmic level. Several SISO decoder algorithms are explored, and different parallel turbo decoder techniques are analyzed. The convergence of parallel turbo decoders is especially considered; to this end, EXtrinsic Information Transfer (EXIT) charts are used. Conclusions derived from these diagrams have served to propose a novel SISO decoder schedule for shuffled turbo decoder architectures. The architectural issues in implementing highly parallel turbo decoders are considered in the second part of this thesis. We propose a high-throughput, low-complexity radix-16 SISO decoder, intended to break the bottleneck caused by the recursive operations at the heart of the turbo decoding algorithm.
The design of this architecture was made possible by the elimination of parallel paths in a radix-16 trellis transition. The proposed SISO decoder implements a high-speed radix-8 Add-Compare-Select (ACS) unit, which exhibits lower hardware complexity and a shorter critical path than a radix-16 ACS unit. Our radix-16 SISO decoder degrades the turbo decoder's error-correcting performance; we have therefore proposed two techniques so that the architecture can be used in practical applications. Architectural solutions to build highly parallel turbo decoder architectures integrating our SISO decoder are then presented. Finally, a methodology to efficiently explore the design space of parallel turbo decoder architectures is described. The main objective of this approach is to reduce time to market by designing turbo decoder architectures for a given throughput.

5 citations


Journal ArticleDOI
TL;DR: The proposed novel regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.
Abstract: A novel construction method of the check matrix for the regular low density parity check (LDPC) code is proposed. The novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with the code rate of 93.7% and the redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at a bit error rate (BER) of 10^{-8}, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed novel regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.

Journal ArticleDOI
TL;DR: This paper presents a hardware implementation technique for a Two-Stage Hybrid decoder, yielding better complexity and delay, and compares soft, hard and hybrid decoding techniques in terms of memory usage and delay time for real implementations in applications such as DVB-S2.
Abstract: LDPC codes are gaining considerable attention in the channel coding field these days. However, one of the main problems facing the use of these codes in communication systems is the high-complexity decoding scheme, which results in high decoding delay. Such delay is not acceptable in time-sensitive applications such as video transmission. This paper presents a hardware implementation technique for a Two-Stage Hybrid decoder, yielding better complexity and delay. It also compares soft, hard and hybrid decoding techniques in terms of memory usage and delay time, for real implementation in applications such as DVB-S2.

Keywords: Channel Coding, LDPC, VHDL, Hybrid decoding.

1. INTRODUCTION. The first low-density parity-check (LDPC) code was discovered by Gallager [1], [2] in the 1960s; with iterative decoding, these codes proved capable of extraordinary performance, very close to the hard-to-reach Shannon limit. LDPC codes are used in many applications such as DVB-S2 [3], the standard for Digital Video Broadcasting - Satellite - Second Generation, because of their excellent bit error rate (BER) performance. These codes are expected to be included in many future standards and to replace many existing channel coding techniques. There are many types of LDPC decoding algorithms with good performance and acceptable delay. They can be classified into soft-decision decoding algorithms, such as the sum-product algorithm (SPA), and hard-decision decoding algorithms, such as the bit-flipping (BF) algorithm, which was introduced by Gallager. The bit-flipping algorithm has many modified versions [4], [5], [6] with better BER performance than the original. The good BER performance of SPA comes at the expense of high complexity, which increases the delay of the resulting design.
Such high delay is a drawback for delay-sensitive applications, especially video and audio transmission. On the other hand, hard-decision algorithms such as bit flipping and its modified versions have limited BER performance, but lower complexity and delay than SPA. The Two-Stage Hybrid decoding algorithm proposed in [7] was therefore adopted; it provides a trade-off between hard- and soft-decision algorithms, with better performance than BF and lower complexity and delay than SPA. This paper presents an FPGA design of a real hardware implementation of the Two-Stage Hybrid decoder, with comparisons to SPA and BF in terms of memory usage and delay time. The rest of the paper is organized as follows. Section 2 presents background on the SPA and BF algorithms, which provide the basis for Two-Stage Hybrid decoding, and describes how the existing Two-Stage Hybrid algorithm combines SPA as the first stage and BF as the second stage. Section 3 introduces the hardware implementation technique for the three algorithms, showing the procedure of each to compare their performance. Section 4 includes typical simulation results for the three decoders, with comparisons of memory usage and delay time. Finally, conclusions and future work are presented in Section 5.
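The hard-decision side of the trade-off described above is easy to illustrate. A toy sketch of Gallager-style bit-flipping decoding (not the paper's two-stage design): each round, flip the bit(s) involved in the largest number of unsatisfied parity checks.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping decoding (toy sketch): repeatedly flip the
    bits that participate in the most unsatisfied parity checks."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            break                         # all checks satisfied: codeword found
        unsat = H.T @ syndrome            # unsatisfied-check count per bit
        x = (x + (unsat == unsat.max())) % 2
    return x

# (7,4) Hamming parity-check matrix, all-zero codeword, single bit error
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1
decoded = bit_flip_decode(H, received)
```

Here the decoder recovers the all-zero codeword in a couple of iterations; the SPA first stage in the hybrid scheme replaces these hard counts with soft message passing.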

Dissertation
01 Jan 2013
TL;DR: A modified decision feedback equalizer (DFE) algorithm that provides significant improvement over the conventional DFE is proposed, and a set of filter design criteria is analysed, based on minimizing the bit error probability under impulse noise using a digital smear filter.
Abstract: Turbo coding, a forward error correcting (FEC) coding technique, has made near-Shannon-limit performance possible when iterative decoding algorithms are used. Intersymbol interference (ISI) is a major problem in communication systems when information is transmitted through a wireless channel. Conventional approaches implement an equalizer to remove the ISI, but significant performance gain can be achieved through joint equalization and decoding. In this thesis, the suitability of turbo equalization as a means of achieving low bit error rates for high-data-rate communication systems over channels with intersymbol interference was investigated. A modified decision feedback equalizer (DFE) algorithm that provides significant improvement over the conventional DFE is proposed. It estimates the data using the a priori information from the SISO channel decoder, along with a priori detected data from the previous iteration, to minimize error propagation. Investigation was also carried out into iterative decoding with an imperfect minimum mean square error (MMSE) decision feedback equalizer, assuming soft outputs from the channel decoder that are independent identically distributed Gaussian random variables. The prefiltering method is also considered in this thesis, where an all-pass filter is employed at the receiver before equalization to create a minimum-phase overall impulse response. The band-limited channel suffers performance degradation due to impulsive noise generated by electrical appliances. This thesis analysed a set of filter design criteria based on minimizing the bit error probability under impulse noise using a digital smear filter.

Proceedings ArticleDOI
18 Jun 2013
TL;DR: Under optimal tradeoffs between source rate and channel block error probability obtained from finite block length analysis, noisy channel quantizers based on joint source-channel coding principles are shown to outperform the separate quantizer designed via Lloyd-Max in terms of end-to-end distortion.
Abstract: This paper investigates the validity of Shannon's separation theorem in the finite-blocklength regime. Under optimal tradeoffs between source rate and channel block error probability obtained from finite-blocklength analysis, noisy channel quantizers based on joint source-channel coding principles are shown to outperform the separate quantizer designed via Lloyd-Max in terms of end-to-end distortion. Numerical results for the scalar case under the binary symmetric channel and the discrete-input memoryless channel demonstrate that the separation of source and channel coding no longer holds in the finite-blocklength regime, but the advantage of joint designs may be large or small depending on the system configuration.
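The Lloyd-Max quantizer used as the separate-design baseline above alternates two steps: nearest-neighbor partitioning of the source samples, then centroid (conditional-mean) updates of the reproduction levels. A minimal training-sample sketch (helper name illustrative, not from the paper):

```python
import numpy as np

def lloyd_max(samples, levels, iters=50):
    """Lloyd-Max scalar quantizer design (sketch): alternate nearest-neighbor
    partitioning with centroid updates on a set of training samples."""
    # initialize the codebook with evenly spaced sample quantiles
    codebook = np.quantile(samples, np.linspace(0.1, 0.9, levels))
    for _ in range(iters):
        # assign each sample to its nearest reproduction level
        idx = np.abs(samples[:, None] - codebook[None, :]).argmin(axis=1)
        for j in range(levels):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()  # centroid of each decision cell
    return codebook

rng = np.random.default_rng(1)
train = rng.standard_normal(20000)
cb = lloyd_max(train, levels=4)  # near-optimal 4-level quantizer for N(0,1)
```

For a unit Gaussian source the resulting 4-level mean-squared distortion is close to the known optimum of about 0.12; the paper's point is that mapping these quantizer indices straight onto a noisy channel (separate design) can be beaten by joint designs at finite blocklength.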

Proceedings ArticleDOI
01 Jul 2013
TL;DR: This work proposes an automatic-repeat-request (ARQ)-based LDPC coded cooperative system that provides an improvement in both error rate and throughput.
Abstract: Cooperative communications have received increasing attention in the last few years due to the spread of wireless devices. While multi-antenna systems can greatly improve system performance, it is difficult to implement antenna arrays on hand-held devices due to size, cost and hardware limitations. Cooperation allows users to share their single-antenna devices to create a virtual multi-antenna system that benefits from transmit diversity. On the other hand, low-density parity-check (LDPC) codes are a class of linear block codes that can theoretically approach the Shannon limit. In this work, we propose an automatic-repeat-request (ARQ)-based LDPC coded cooperative system that provides an improvement in both error rate and throughput.

Proceedings ArticleDOI
05 Jun 2013
TL;DR: The performance degradation of each step from Shannon limit to implementation for the second generation digital video broadcasting - terrestrial (DVB-T2) system is analyzed from three aspects: the time-frequency bandwidth loss, imperfect coded modulation scheme, and other implementation issues.
Abstract: The performance degradation of each step from Shannon limit to implementation for the second generation digital video broadcasting - terrestrial (DVB-T2) system is analyzed in this paper, from three aspects: the time-frequency bandwidth loss, imperfect coded modulation scheme, and other implementation issues. The analysis results intuitively show the weak parts that still exist in the DVB-T2 system, and also provide guidelines for system improvement in the future.

Patent
04 Dec 2013
TL;DR: In this paper, the authors propose a method for constructing LDPC codes comprising the following steps: step 1, the size of the check matrix of the LDPC code is set, specifically the number of rows, the number of columns, and the size of the circulant sub-blocks; step 2, from these, the number of circulant sub-blocks in the check matrix is calculated and an optimal row-column distribution is searched for, which uniquely determines the column degree value of each sub-block; and step 3, the values of the entries in the code table are determined.
Abstract: The invention discloses a method for constructing LDPC codes. The method comprises the following steps. Step 1: the size of the check matrix of the LDPC code is set; specifically, the number of rows, the number of columns and the size of the circulant sub-blocks of the check matrix are set. Step 2: according to the number of rows, the number of columns and the size of the circulant sub-blocks, the number of circulant sub-blocks contained in the check matrix is calculated, and an optimal row-column distribution is searched for; this distribution uniquely determines the column degree value of each circulant sub-block. A code table of the LDPC code is then set such that each circulant sub-block corresponds to one row in the code table, with the number of entries in that row equal to the column degree value of the sub-block. Step 3: the values of the entries in the code table are determined. Codes constructed by this method are irregular; with the design of the column-weight distribution and a short-cycle elimination algorithm, the performance of the codes is close to the Shannon limit; a dual-diagonal, degree-two-node design simplifies the implementation of the encoder; and attention is given to regularizing the row weights, so that the error floor of the codes is decreased.

01 Jan 2013
TL;DR: Simulation of a turbo encoder and Viterbi decoder with soft and hard decoding, together with an a posteriori probability (APP) decoder, has been implemented in Matlab; results show that less complex decoder strategies produce good results at voice-quality bit error rates.
Abstract: LTE (Long Term Evolution) is the upcoming standard towards 4G, designed to increase capacity and throughput compared to UMTS and WiMAX. Turbo codes are a high-performance forward error correction scheme at a given code rate. The principle of turbo codes permits a near approach to the Shannon limit, which describes the maximum capacity of the channel. The decoder systems are compared for complexity as well as for equal numbers of iterations. The bit error rate performance of the parallel concatenated coding scheme is shown in an AWGN channel over a range of Eb/No values for two sets of code block lengths and numbers of decoding iterations. Results show that less complex decoder strategies produce good results at voice-quality bit error rates. Simulation of a turbo encoder and of Viterbi decoding with soft and hard decisions, together with an a posteriori probability (APP) decoder, has been implemented in Matlab. The Viterbi decoder using soft decisions performs better than the Viterbi decoder using hard decisions in terms of bit error rate (BER). The simulation of the PCC turbo coder and APP decoder was also performed, and the comparison shows that performance with a MIMO channel is better than without MIMO.

Journal ArticleDOI
TL;DR: The experiment over an AWGN channel demonstrated the correctness of the design and showed that spinal codes come extremely close to the Shannon limit; the validity of the data structure was also proven over a BSC.
Abstract: Spinal codes are a new kind of adaptive rateless code. By analyzing the theory, consisting of hash-function encoding, modulation, maximum-likelihood decoding, and puncturing, the data structure was redesigned, and a simulation of the whole transmission process was accomplished. The transfer rate was tested over both the additive white Gaussian noise (AWGN) channel and the binary symmetric channel (BSC). The AWGN experiment demonstrated the correctness of our design and showed that spinal codes come extremely close to the Shannon limit. What's more, the validity of the data structure was also proven over the BSC. These results show that spinal codes are efficient and practical, and will be very useful in the near future.
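The hash-function encoding mentioned in the abstract chains each message chunk into a "spine" of hash values, from which channel symbols are drawn pass by pass. A rough sketch under stated assumptions (SHA-256 standing in for the hash, function names and parameters illustrative, not from the paper):

```python
import hashlib
import random

def spine_values(message_bits, k=4):
    """Sketch of the spinal hash chain: fold each k-bit chunk of the message
    into the previous spine value with a hash function."""
    s = b"\x00" * 4
    spine = []
    for i in range(0, len(message_bits), k):
        chunk = bytes(message_bits[i:i + k])          # k message bits
        s = hashlib.sha256(s + chunk).digest()[:4]    # next spine value
        spine.append(s)
    return spine

def pass_symbols(spine, per_spine=1):
    """Draw channel symbols from an RNG seeded by each spine value; emitting
    further passes yields more symbols, which is the rateless property."""
    out = []
    for s in spine:
        rng = random.Random(s)
        out.extend(rng.randrange(256) for _ in range(per_spine))
    return out

bits = [0, 1, 1, 0, 1, 0, 0, 1]
symbols = pass_symbols(spine_values(bits), per_spine=2)
```

Because the chain is deterministic, the decoder can replay candidate message prefixes and compare the regenerated symbols against what was received.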

Dissertation
01 Jan 2013
TL;DR: This thesis considers LDPC codes as an error-correcting code, studies their performance with a BPSK system in an AWGN environment, examines different characteristics of the system, and finds the resulting variation in SNR performance.
Abstract: Coded modulation is a bandwidth-efficient scheme that integrates channel coding and modulation into one single entity to improve performance at the same spectral efficiency compared to uncoded modulation. Low-density parity-check (LDPC) codes are among the most powerful error correction codes (ECCs) and approach the Shannon limit while having relatively low decoding complexity. Therefore, the idea of combining LDPC codes with bandwidth-efficient modulation has been widely considered. In this thesis we consider LDPC codes as an error-correcting code and study their performance with a BPSK system in an AWGN environment, examining different characteristics of the system. The LDPC system consists of two parts, encoder and decoder. The LDPC encoder encodes the data and sends it over the channel. The encoding performance depends on the parity-check matrix, which has characteristics such as rate, girth, size and regularity; we study performance according to these characteristics and find the resulting variation in SNR performance. The decoder receives the data from the channel and decodes it. The LDPC decoder is characterized by the number of iterations in addition to all the parity-check matrix characteristics, and we also study performance according to these. The main objective of this thesis is to implement the LDPC system on an FPGA. The LDPC encoder implementation uses a shift-register-based design to reduce complexity. The LDPC decoder decodes the information received from the channel to recover the message; in the decoder we use the modified sum-product (MSP) algorithm, with quantized values decoded through a look-up table (LUT) approximation. Finally, we compare the SNR performance of the theoretical LDPC system with that of the FPGA-implemented LDPC system.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: Three new high-rate LDPC codes for radio applications, with rates 0.81, 0.84 and 0.89 and lengths 2512, 1843 and 2619 respectively, show good bit error rate performance with the min-sum decoding algorithm.
Abstract: A low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. LDPC codes have found wide application in various fields, such as satellite transmission and magnetic-disc recording, because of their capability to reach near-Shannon-limit performance. This paper presents three new high-rate LDPC codes for radio applications, with rates 0.81, 0.84 and 0.89 and lengths 2512, 1843 and 2619 respectively. The codes are constructed using shifted identity matrices and a lower staircase matrix. The encoding complexity of these codes is low, and decoding can be done by belief propagation. Simulation results show good bit error rate performance with the min-sum decoding algorithm.

Journal Article
TL;DR: The results indicate that decoding performance improves as the code length increases and that, among the decoding algorithms discussed in the paper, the performance of Min-Sum is better than that of hard decision, while the performance of BP is the best, matched by Log-BP.
Abstract: LDPC codes are a class of linear block codes with near-Shannon-limit performance. The paper introduces a method of constructing the check matrix and the principles of hard-decision, BP, Log-BP and Min-Sum decoding. It also analyses the performance of LDPC codes of different lengths with different decoding methods, based on Matlab simulation. The results indicate that decoding performance improves as the code length increases and that, among the decoding algorithms discussed, the performance of Min-Sum is better than that of hard decision, while the performance of BP is the best, matched by Log-BP.
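The Min-Sum algorithm compared above approximates the BP check-node update with a sign-and-minimum rule: each outgoing message takes the product of the other incoming signs and the minimum of the other incoming magnitudes. A minimal sketch of that update on a single check node:

```python
import numpy as np

def minsum_check_update(llrs):
    """Min-sum check-node update (sketch): for each edge, the outgoing
    message's sign is the product of all other incoming signs, and its
    magnitude is the minimum of all other incoming magnitudes."""
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        rest = np.delete(llrs, i)  # messages from the other edges
        out[i] = np.prod(np.sign(rest)) * np.abs(rest).min()
    return out

msgs = minsum_check_update([2.0, -0.5, 1.5])  # -> [-0.5, 1.5, -0.5]
```

BP/Log-BP replace the `min` with the exact `2*atanh(prod(tanh(.../2)))` rule, which is why Min-Sum trades a little performance for much lower complexity.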

Book ChapterDOI
01 Jan 2013
TL;DR: Low-density lattice codes (LDLC) were applied in a COFDM system, and the simulation results showed that LDLC is effective in improving the BER of the system in multipath environments.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is suitable for high-bit-rate data transmission in multipath environments. Many error-correcting codes have been applied in COFDM to reduce the high bit error rate (BER). Recently, low-density lattice codes (LDLC) have attracted much attention. LDLC was proposed by Sommer in 2007, and its performance is very close to the Shannon limit with iterative decoding. We applied LDLC in a COFDM system, and the simulation results showed that LDLC is effective in improving the BER of the system in multipath environments.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that a desirable performance can be achieved with an appropriately designed LDPC code in an SFH system, and some valuable guidelines for designing LDPC coded slow frequency-hopping systems are proposed.
Abstract: The Low-Density Parity-Check code, with its admirable performance close to the Shannon limit, has attracted considerable attention in anti-jamming systems. In this paper, we study the LDPC coded slow frequency-hopping (SFH) system and analyze its performance under Multi-tone Interference (MTI). Three modulation schemes (BPSK, QPSK and 8PSK) are adopted in the system. For 8PSK in particular, we compare the performance of two different soft-decision demodulation schemes over the additive white Gaussian noise (AWGN) channel. The influences of code rate, code length, modulation scheme and jamming mode are also analyzed and examined. Finally, some valuable guidelines for designing LDPC coded SFH systems are proposed. Simulation results demonstrate that a desirable performance can be achieved with an appropriately designed LDPC code in an SFH system.

Journal ArticleDOI
TL;DR: A low-complexity Turbo coded BICM-ID (bit-interleaved coded modulation with iterative decoding) system that not only shows similar BER performance but also reduces the decoding complexity.
Abstract: In this paper, we propose a low-complexity Turbo coded BICM-ID (bit-interleaved coded modulation with iterative decoding) system. A Turbo code is a powerful error correcting code with a BER (bit error rate) performance very close to the Shannon limit. To increase the spectral efficiency of the Turbo code, a coded modulation combining the Turbo code with high-order modulation is used. The BER performance of Turbo-BICM can be improved by Turbo-BICM-ID through an iterative demodulation and decoding algorithm. However, compared with Turbo-BICM, the decoding complexity of Turbo-BICM-ID is increased by the exchange of information between decoder and demodulator. To reduce this decoding complexity, we propose a low-complexity Turbo-BICM-ID system. Compared with conventional Turbo-BICM-ID, the proposed scheme not only shows similar BER performance but also reduces the decoding complexity.
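The demapping stage that BICM-ID iterates over can be sketched for one Gray-labelled 4-PAM coordinate (a square 16-QAM symbol is two such coordinates). The max-log approximation and the labelling below are common textbook choices, not necessarily the paper's exact scheme.

```python
# Gray labelling of a 4-PAM coordinate: adjacent amplitudes differ in one bit.
GRAY_4PAM = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): +1.0, (1, 0): +3.0}

def maxlog_llrs(y, noise_var):
    """Max-log LLRs, one per labelled bit, using the convention
    LLR_i = log P(b_i = 0 | y) / P(b_i = 1 | y), approximated by the
    difference of the two nearest squared distances scaled by 1/noise_var."""
    llrs = []
    for i in range(2):
        d0 = min((y - s) ** 2 for b, s in GRAY_4PAM.items() if b[i] == 0)
        d1 = min((y - s) ** 2 for b, s in GRAY_4PAM.items() if b[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs
```

In the iterative (ID) loop, a priori LLRs fed back from the Turbo decoder would be added to these metrics before the min, which is exactly the extra computation the proposed low-complexity scheme tries to reduce.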

01 Jan 2013
TL;DR: A novel modification of SOVA (Soft Output Viterbi Algorithm) is proposed, which achieves a BER comparable to LOG-MAP with reduced complexity and shows a 12% improvement in area utilization on a Xilinx FPGA.
Abstract: To fulfil the extensive need for high-data-rate transfer in today's wireless communication systems such as WiMAX and 4G LTE (Long Term Evolution), turbo codes give exceptional performance; they have allowed near-Shannon-limit information transfer in modern communication systems. As the performance of these codes increases, their decoding complexity also increases, and so does the power consumption. To reduce this complexity without degrading BER (Bit Error Rate) performance, a novel modification of SOVA (Soft Output Viterbi Algorithm) is proposed in this paper. The proposed model is also implemented on a Xilinx Virtex 5 XC5VLX85ff676-2 FPGA. MATLAB simulation results indicate a BER comparable to LOG-MAP with reduced complexity, and synthesis results on the Xilinx FPGA show a 12% improvement in area utilization compared to a MAX-LOG-MAP implementation. With reduced area and low BER, a cost-effective solution is thus proposed.
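SOVA builds on the classic Viterbi recursion, augmenting each survivor path with a reliability value. A hard-decision Viterbi sketch for a textbook rate-1/2, constraint-length-3 code (generators 7, 5 octal; not necessarily the paper's constituent code) shows the survivor bookkeeping that SOVA extends:

```python
G = (0b111, 0b101)  # feedforward generators, octal 7 and 5

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; the state holds the last two inputs."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi: keep, per state, the minimum-Hamming-distance
    survivor; SOVA would additionally record per-decision reliabilities."""
    n_states = 4
    metrics = [0] + [float("inf")] * (n_states - 1)   # forced start in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), 2):
        new_m = [float("inf")] * n_states
        new_p = [[] for _ in range(n_states)]
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") % 2 for g in G]
                m = metrics[s] + (exp[0] != rx[t]) + (exp[1] != rx[t + 1])
                if m < new_m[reg >> 1]:
                    new_m[reg >> 1], new_p[reg >> 1] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(range(n_states), key=lambda i: metrics[i])]
```

The proposed SOVA modification targets exactly this add-compare-select loop, since the path-metric updates dominate both complexity and power in hardware.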

Journal ArticleDOI
TL;DR: An improved turbo equalizer is proposed which generates a feedback signal through a simple calculation to improve performance in a single-carrier system with an LMS (least mean square) algorithm based equalizer and LDPC (low density parity check) codes.
Abstract: In this paper, we propose an improved turbo equalizer which generates a feedback signal through a simple calculation to improve performance in a single-carrier system with an LMS (least mean square) algorithm based equalizer and LDPC (low density parity check) codes. LDPC codes can closely approach Shannon-limit performance; however, their computational complexity grows greatly as the number of decoding iterations increases and as a long parity-check matrix is used in harsh environments. Turbo equalization based on LDPC codes is used to improve system performance, but it suffers from a very large amount of computation as the number of iterations increases. To reduce this computational load, the proposed improved turbo equalizer adjusts the adaptive equalizer after the soft decision and the LDPC code. The simulation results confirm that the performance of the improved turbo equalizer is close to that of the SISO-MMSE (soft input soft output minimum mean square error) turbo equalizer based on LDPC codes, with a smaller amount of calculation.
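The LMS update at the heart of the adaptive equalizer is a one-line gradient step, w <- w + mu * e * x_window. A training-mode sketch over a made-up two-tap channel (the paper's turbo loop would instead feed back soft decisions as the desired signal):

```python
import numpy as np

def lms_equalize(x, d, n_taps=5, mu=0.01):
    """Adapt equalizer taps w by stochastic gradient descent on the
    instantaneous squared error between the desired symbol d[n] and the
    filter output over the latest n_taps received samples."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w.dot(window)
        w += mu * e * window
        errors.append(e ** 2)
    return w, errors

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)       # BPSK training symbols
rx = np.convolve(symbols, [1.0, 0.4])[:2000]       # hypothetical 2-tap channel
w, errors = lms_equalize(rx, symbols)
```

The per-sample cost is just two length-n_taps dot products, which is why an LMS equalizer is the cheap component of the loop compared to the LDPC iterations the paper tries to economize.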

Patent
15 May 2013
TL;DR: A geometrical closed-loop line of sight (LOS) multiple-input-multiple-output (MIMO) system is considered, where SVD processing can recover most, if not all, of any performance degradation caused by a deviation from the perfectly optimal spacing between the respective antennae.
Abstract: The invention relates to geometrical closed-loop line of sight (LOS) multiple-input-multiple-output (MIMO) communication. Singular value decomposition (SVD) processing decomposes a LOS communication channel into respective channel matrices, and appropriate processing of signals within transmitter and/or receiver communication devices operates to support very high data throughput, including approaching or converging to the Shannon-limit channel capacity (e.g., bits/sec/Hz). Certain communication systems operate with multi-antenna communication devices, and sometimes the optimal spacing between the respective antennae cannot be achieved. Appropriate processing can recover most, if not all, of any performance degradation incurred by a deviation from the perfectly optimal spacing between the antennae. In addition, any deleterious effects of phase noise among the antennae may be mitigated by driving the antennae from a common or singular local oscillator, or by tracking the communication channel (e.g., channel estimation, tracking, etc.) and updating the respective SVD channel matrices based upon such phase noise.
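The SVD step at the core of the claim can be sketched with a toy 2x2 channel (an arbitrary illustration, not a measured LOS channel): precoding with V at the transmitter and combining with U^H at the receiver turns y = Hx into independent parallel subchannels with gains equal to the singular values.

```python
import numpy as np

H = np.array([[0.9, 0.4],
              [0.2, 1.1]])            # toy 2x2 MIMO channel matrix
U, s, Vh = np.linalg.svd(H)           # H = U @ diag(s) @ Vh

x = np.array([1.0, -1.0])             # symbols on the two eigen-channels
tx = Vh.conj().T @ x                  # transmitter precodes with V
y = U.conj().T @ (H @ tx)             # receiver combines with U^H
# y equals s * x: two parallel subchannels with no cross-talk
```

This is why the closed loop matters: the transmitter needs V (hence channel knowledge fed back from the receiver), and why tracking phase noise amounts to keeping U and V current.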

Journal ArticleDOI
TL;DR: The effect of UEP with LDPC codes is analyzed for high-quality message data by MSE according to the number of bits per symbol, the ratio of the message bits, and the protection level of the classified groups.
Abstract: Powerful error control and an increased number of bits per symbol should be provided for future high-quality communication systems. Each message bit may have different importance in multimedia data, so UEP (unequal error protection) may be more efficient than EEP (equal error protection) in such cases; moreover, the LDPC (low-density parity-check) code shows near-Shannon-limit error-correcting performance. Therefore, the effect of UEP with LDPC codes is analyzed for high-quality message data in this paper. The relationship among MSE (mean square error), BER (bit error rate) and the number of bits per symbol is analyzed theoretically. The message bits in a symbol are then classified into two groups according to importance to verify the relationship by simulation, and the UEP performance is obtained by simulation according to the number of message bits in each group under the constraint of a fixed total code rate and codeword length. As a result, the effect of UEP with LDPC codes is analyzed by MSE according to the number of bits per symbol, the ratio of the message bits, and the protection level of the classified groups.
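The intuition behind classifying bits by importance can be checked numerically: a bit error in the MSB of a quantized sample costs far more MSE than one in the LSB, so the MSB group earns the stronger code. The 8-bit samples below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.integers(0, 256, size=10_000)    # 8-bit quantized samples

def mse_after_flipping(bit):
    """MSE when bit position `bit` is flipped in every sample; flipping
    bit b always changes the value by exactly 2**b, so the MSE is 4**b."""
    corrupted = samples ^ (1 << bit)
    return np.mean((samples - corrupted) ** 2)

mse_lsb = mse_after_flipping(0)    # LSB error: MSE = 1
mse_msb = mse_after_flipping(7)    # MSB error: MSE = 16384
```

The 4**b growth of MSE with bit position is exactly the theoretical MSE-vs-BER relationship the paper exploits when trading protection between the two groups at fixed total rate.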

01 Jan 2013
TL;DR: This paper proposes a design of a turbo decoder using the SOVA algorithm, an advanced error-correction technique widely used in the communications industry to achieve the best possible data reception with the fewest possible errors.
Abstract: The turbo encoder is employed to increase the free distance of the turbo code, hence improving its error-correction performance. In data transmission, turbo coding helps achieve near-Shannon-limit performance. Turbo coding is an advanced error-correction technique widely used in the communications industry, and turbo encoders and decoders are key elements in today's communication systems for achieving the best possible data reception with the fewest possible errors. The basis of turbo coding is to introduce redundancy into the data to be transmitted through a channel; the redundant data helps recover the original data from the received data. This paper considers the design problem of generating the turbo code and decoding it iteratively using SOVA detectors, studies SOVA algorithms and their design considerations, and proposes a design of a turbo decoder based on the SOVA algorithm.
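The redundancy-introducing step can be sketched as one rate-1/2 recursive systematic convolutional (RSC) component encoder, the building block a turbo encoder uses twice (once on interleaved data). The generators (feedback 7, feedforward 5 octal) are a common textbook choice, not necessarily this paper's.

```python
def rsc_encode(bits):
    """Rate-1/2 RSC encoder: the systematic bit is sent unchanged and the
    recursive register produces one parity (redundant) bit per input."""
    s1 = s0 = 0
    out = []
    for b in bits:
        fb = b ^ s1 ^ s0           # feedback polynomial 1 + D + D^2
        parity = fb ^ s0           # feedforward polynomial 1 + D^2
        out.append((b, parity))    # (systematic, parity) pair
        s1, s0 = fb, s1
    return out
```

The recursive feedback is what gives the turbo code its large effective free distance: a single input 1 excites an infinite parity response, so the interleaver rarely lets both constituent encoders produce low-weight outputs at once.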