
Showing papers on "Noisy-channel coding theorem published in 2018"


Proceedings ArticleDOI
01 Sep 2018
TL;DR: In this article, the notions of a "FEC threshold" and a "nonlinear Shannon limit" are critically reviewed, highlighting their limitations and possible alternatives, with emphasis on their practical usage.
Abstract: Fundamental information-theoretic concepts are explained for nonspecialists, with emphasis on their practical usage. The notions of a "FEC threshold" and a "nonlinear Shannon limit" are critically reviewed, highlighting their limitations and possible alternatives.
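As a concrete illustration of the quantities this tutorial reviews, the AWGN Shannon limit on Eb/N0 at a given spectral efficiency follows from C = log2(1 + SNR) with SNR = η·Eb/N0. A minimal sketch of this standard textbook relation (not code from the paper):

```python
import math

def shannon_limit_ebn0_db(eta):
    """Minimum Eb/N0 (dB) for reliable transmission at spectral
    efficiency eta (bit/s/Hz) over the AWGN channel, obtained from
    C = log2(1 + SNR) with SNR = eta * Eb/N0."""
    ebn0 = (2.0 ** eta - 1.0) / eta
    return 10.0 * math.log10(ebn0)

for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta:>3} bit/s/Hz -> Eb/N0 >= {shannon_limit_ebn0_db(eta):6.2f} dB")
```

At η = 1 bit/s/Hz the limit is exactly 0 dB, a convenient sanity check.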

61 citations


Journal ArticleDOI
TL;DR: In this paper, a hierarchical distribution matching (DM) and dematching (invDM) scheme for probabilistic shaping with soft-decision forward error correction (FEC) coding is proposed.
Abstract: The implementation difficulties of combining distribution matching (DM) and dematching (invDM) for probabilistic shaping (PS) with soft-decision forward error correction (FEC) coding can be relaxed by reverse concatenation, for which the FEC coding and decoding lie inside the shaping algorithms. PS can seemingly achieve performance close to the Shannon limit, although there are practical implementation challenges that need to be carefully addressed. We propose a hierarchical DM (HiDM) scheme, having fully parallelized input/output interfaces and a pipelined architecture that can efficiently perform the DM/invDM without the complex operations of previously proposed methods such as constant composition DM (CCDM). Furthermore, HiDM can operate at a significantly larger post-FEC bit error rate (BER) for the same post-invDM BER performance, which facilitates simulations. These benefits come at the cost of a slightly larger rate loss and required signal-to-noise ratio at a given post-FEC BER.
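The rate loss that HiDM trades against CCDM's complexity can be made concrete with the standard constant-composition accounting: an ideal CCDM addresses all sequences of a fixed symbol composition, so its rate is log2 of the multinomial coefficient divided by the block length, which always falls short of the entropy of the matched distribution. A sketch with a hypothetical composition (the numbers are illustrative, not from the paper):

```python
import math

def ccdm_rate_and_loss(composition):
    """Rate (bit/symbol) of an ideal constant-composition distribution
    matcher with the given symbol counts, and its rate loss relative to
    the entropy of the matched distribution."""
    n = sum(composition)
    # log2 of the number of sequences with exactly these symbol counts
    log2_seqs = math.log2(math.factorial(n))
    for c in composition:
        log2_seqs -= math.log2(math.factorial(c))
    rate = math.floor(log2_seqs) / n          # integer number of input bits
    p = [c / n for c in composition]
    entropy = -sum(x * math.log2(x) for x in p)
    return rate, entropy - rate

rate, loss = ccdm_rate_and_loss([50, 30, 15, 5])   # n = 100 amplitudes
print(f"rate = {rate:.3f} bit/symbol, rate loss = {loss:.4f} bit/symbol")
```

The rate loss shrinks as the block length grows, which is why short-block matchers such as HiDM pay a small penalty here.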

48 citations


Journal ArticleDOI
TL;DR: This paper designs the multiuser codebook and the advanced decoding from the perspective of theoretical capacity and system feasibility, and proposes an iterative joint detection and decoding scheme with only partial inner iterations, which exhibits significant performance gain over the traditional scheme with separate detection and decoding.
Abstract: Sparse code multiple access (SCMA) is a promising non-orthogonal air-interface technology for its ability to support massive connections. In this paper, we design the multiuser codebook and the advanced decoding from the perspective of theoretical capacity and system feasibility. First, unlike the lattice-based constellation in point-to-point channels, we propose a novel codebook for maximizing the constellation-constrained capacity. We optimize a series of 1-D superimposed constellations to construct multi-dimensional codewords. An effective dimensional permutation switching algorithm is proposed to further obtain the capacity gain. Consequently, the performance of the proposed codebook is shown to approach the Shannon limit and to achieve significant gains over other existing ones. Furthermore, we provide a symbol-based extrinsic information transfer tool to analyze the convergence of SCMA iterative detection, where the complex codewords are considered in modeling the a priori probabilities instead of assuming binary inputs as in previous literature. Finally, to approach the capacity, we develop a low-density parity-check code-based SCMA receiver. Most importantly, by utilizing the EXIT charts, we propose an iterative joint detection and decoding scheme with only partial inner iterations, which exhibits significant performance gain over the traditional one with separate detection and decoding.

47 citations


Journal ArticleDOI
TL;DR: This letter provides the asymptotic analysis of Raptor codes over binary input additive white Gaussian noise channels using discretized density evolution (DDE) and proves a necessary condition for the successful decoding of such codes.
Abstract: In this letter, we first provide the asymptotic analysis of Raptor codes over binary-input additive white Gaussian noise channels using discretized density evolution (DDE). We show that Raptor codes of various realized code rates using the same output degree distribution perform within 0.4 dB of the Shannon limit. We then prove a necessary condition for the successful decoding of such codes and use it with a DDE-based optimization method to obtain optimized Raptor codes with further improved decoding thresholds.

27 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: For the binary erasure channel, the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but also do so under the best possible scaling of their block length as a function of the gap to capacity.
Abstract: We prove that, at least for the binary erasure channel, the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but, in fact, do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Specifically, for any fixed $\delta \gt 0$, we exhibit binary linear codes that ensure reliable communication at rates within $\epsilon \gt 0$ of capacity with block length $n = O(1/\epsilon^{2+\delta})$, construction complexity $\Theta(n)$, and encoding/decoding complexity $\Theta(n\log n)$.
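The polarization process behind this result is easy to reproduce for the BEC, where a channel with erasure probability e splits into two synthetic channels with erasure probabilities 2e − e² and e². A minimal sketch of the standard recursion (an illustration of polarization, not the paper's scaling proof):

```python
def polarize_bec(p, levels):
    """Erasure probabilities of the 2**levels synthetic channels obtained
    by polarizing a BEC(p): each step maps e -> (2e - e**2, e**2)."""
    probs = [p]
    for _ in range(levels):
        probs = [q for e in probs for q in (2 * e - e * e, e * e)]
    return probs

p = 0.5
chans = polarize_bec(p, 10)                # 1024 synthetic channels
good = sum(1 for e in chans if e < 1e-3)   # nearly noiseless channels
print(f"capacity = {1 - p}, fraction of good channels = {good / len(chans):.3f}")
```

The fraction of nearly noiseless channels approaches the capacity 1 − p as the block length grows; how fast it does so, as a function of the gap to capacity, is exactly the scaling question the paper settles.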

23 citations


Proceedings ArticleDOI
20 May 2018
TL;DR: With turbo iterative decoding, the proposed polar-TPC can outperform the conventional BCH-based TPC by 0.5 dB, and can approach the performance of the corresponding long polar code within 0.2 dB while decoding 256 times faster.
Abstract: Ultra-reliable forward error correction (FEC) codes approaching the Shannon limit have played an important role in increasing the spectral efficiency of wireless communications. In addition to error correction performance, both low-power and low-latency decoding are demanded for fifth generation (5G) wireless applications. In this paper, we introduce turbo product codes (TPC) consisting of multiple polar codes to enable highly parallel decoding for high-throughput and low-latency FEC. With turbo iterative decoding, the proposed polar-TPC can outperform the conventional BCH-based TPC by 0.5 dB, and can approach the performance of the corresponding long polar code within 0.2 dB while decoding 256 times faster. In addition, we apply irregular polar codes, whose polarization units are pruned, to further reduce the computational complexity by 50% and decoding latency by 80% without sacrificing performance. We analyze the impact of list size, turbo iteration count, and fading channels to demonstrate the potential of the polar-TPC for 5G wireless systems.

19 citations


Journal ArticleDOI
TL;DR: In this article, a lattice code based on construction a lattices where the underlying linear codes are non-binary irregular repeat-accumulate (IRA) codes is proposed.
Abstract: Most multi-dimensional (more than two dimensions) lattice partitions only form additive quotient groups and lack multiplication operations. This prevents us from constructing lattice codes based on multi-dimensional lattice partitions directly from non-binary linear codes over finite fields. In this paper, we design lattice codes from Construction A lattices where the underlying linear codes are non-binary irregular repeat-accumulate (IRA) codes. Most importantly, our codes are based on multi-dimensional lattice partitions with finite constellations. We propose a novel encoding structure that adds randomly generated lattice sequences to the encoder's messages, instead of multiplying lattice sequences to the encoder's messages. We prove that our approach can ensure that the decoder's messages exhibit permutation-invariance and symmetry properties. With these two properties, the densities of the messages in the iterative decoder can be modeled by Gaussian distributions described by a single parameter. With Gaussian approximation, extrinsic information transfer charts for our multi-dimensional IRA lattice codes are developed and used for analyzing the convergence behavior and optimizing the decoding thresholds. Simulation results show that our codes can approach the unrestricted Shannon limit within 0.46 dB and outperform the previously designed lattice codes with 2-D lattice partitions and existing lattice coding schemes for large codeword length.

14 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: It is demonstrated that LDPC-coded 8-PAM with exponential distribution outperforms LDPC-coded 8-PAM with uniform distribution of the same achievable information rate by 1.8 dB, an improvement comparable to that obtained from mutual information calculations.
Abstract: We study non-uniform 8-PAM signaling schemes, based on Maxwell-Boltzmann and exponential source distributions, as candidates suitable for data center communications. We evaluate the mutual information of these schemes against the corresponding uniform 8-PAM scheme for different parameters of the source distributions. Properly chosen variable parameters for different SNR regions allow us to closely approach the Shannon limit and to significantly outperform the corresponding uniform counterpart. We demonstrate that LDPC-coded 8-PAM with exponential distribution outperforms LDPC-coded 8-PAM with uniform distribution of the same achievable information rate by 1.8 dB, and this improvement is comparable to that obtained from mutual information calculations.
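The Maxwell-Boltzmann family mentioned above assigns each 8-PAM amplitude a probability proportional to exp(−ν·x²), trading source entropy (information rate) against average transmit power. A small sketch of that trade-off (the ν values are illustrative, not parameters from the paper):

```python
import math

AMPS = [-7, -5, -3, -1, 1, 3, 5, 7]   # standard 8-PAM amplitudes

def mb_pmf(nu):
    """Maxwell-Boltzmann pmf p(x) proportional to exp(-nu * x**2)."""
    w = [math.exp(-nu * a * a) for a in AMPS]
    s = sum(w)
    return [x / s for x in w]

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def avg_power(p):
    return sum(x * a * a for x, a in zip(p, AMPS))

for nu in (0.0, 0.02, 0.05):
    p = mb_pmf(nu)
    print(f"nu={nu:4.2f}: H={entropy(p):.3f} bit, E[X^2]={avg_power(p):5.2f}")
```

ν = 0 recovers uniform 8-PAM (3 bits, average power 21); increasing ν lowers both, which is the shaping gain mechanism.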

11 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A high-rate polar-staircase coding scheme with systematic polar codes as the component codes is proposed, and a soft successive cancellation list (SSCL) decoding algorithm is adopted and optimized to trade off reliability against complexity.
Abstract: Polar codes are proved to theoretically achieve the Shannon limit. In practice, however, the performance of polar codes with short code length is poor. One widely used method to improve short codes is concatenation. Recently, the staircase coding structure has provided an efficient concatenation scheme for finite-length block codes, in which a component block code is concatenated with itself to improve the coding performance. In this paper, we therefore propose a high-rate polar-staircase coding scheme with systematic polar codes as the component codes. The polar-staircase scheme strengthens the unreliable parts of the polar codes through the concatenation. Since the asymptotic performance depends mainly on the decoding algorithm, three soft decoding algorithms are analyzed for our polar-staircase coding. We first investigate conventional belief propagation (BP) decoding and soft cancellation (SCAN) decoding, both of which perform poorly in the short-length regime. We then adopt and optimize a soft successive cancellation list (SSCL) decoding algorithm for the polar-staircase codes, trading off reliability against complexity. Simulations show that the SSCL decoding outperforms the other soft decoding algorithms over AWGN channels.

10 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: This work compares polar codes and turbo codes applied to optimal Faster-than-Nyquist signals and estimates the efficiency of the joint use of these coding techniques and optimal signals in terms of approaching the Shannon limit.
Abstract: This work compares polar codes and turbo codes applied to optimal Faster-than-Nyquist signals. The efficiency of the joint use of these coding techniques and optimal signals is estimated in terms of approaching the Shannon limit. For different cases, the minimum distance between the Shannon curve and the point whose coordinates are the values of spectral and energy efficiency is calculated. The use of polar codes makes it possible to get 11% closer to the Shannon limit. Compared to "classic" BPSK signals, optimal signals with polar coding provide a gain of about 86%.

9 citations


Proceedings ArticleDOI
01 Dec 2018
TL;DR: This paper investigates the hardware implementation of a turbo-Hadamard encoder/decoder system and shows that the degradation compared with floating-point computation is within 0.15 dB.
Abstract: A turbo-Hadamard code (THC) is a type of low-rate channel code with capacity approaching the ultimate Shannon limit, i.e., −1.6 dB. In this paper, we investigate the hardware implementation of a turbo-Hadamard encoder/decoder system. In particular, we study THC systems with rates 0.0114 and 0.0143. With channel messages quantized into 6 bits, we show that the degradation compared with floating-point computation is within 0.15 dB. The entire system has been implemented on an FPGA board. When the code rate equals 0.0143, a bit error rate (BER) of 10−5 can be achieved at around Eb/N0 = −0.3 dB with a throughput of about 600 Mbps.
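The "ultimate Shannon limit" of −1.6 dB quoted here is the R → 0 limit of the real-AWGN bound Eb/N0 ≥ (2^(2R) − 1)/(2R), which tends to ln 2 ≈ −1.59 dB. A quick numerical check of this standard relation (not code from the paper):

```python
import math

def shannon_limit_db(rate):
    """Minimum Eb/N0 (dB) at code rate `rate` (bit per real channel use),
    from the real-AWGN capacity C = 0.5*log2(1 + 2*R*Eb/N0)."""
    return 10 * math.log10((2 ** (2 * rate) - 1) / (2 * rate))

for r in (0.5, 0.1, 0.0143, 1e-6):
    print(f"R={r:<8}: Eb/N0 >= {shannon_limit_db(r):+.3f} dB")
```

At the paper's rate 0.0143 the bound is already within about 0.05 dB of the ultimate −1.59 dB limit, which is why such extremely low-rate codes target it.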

Journal ArticleDOI
TL;DR: In this article, rate-compatible length-scalable raptor-like quasi-cyclic low-density parity-check (RL-QC-LDPC) codes are proposed for future DTTB systems, with which a wide range of code rates and information/parity block lengths can be obtained with low implementation complexity.
Abstract: With the worldwide deployment and rapid development of digital television terrestrial broadcasting (DTTB) systems, the demands of diversified multimedia services from fixed/mobile receptions increase significantly. To satisfy the various requirements of quality of service, forward error correction codes with multiple code rates and lengths have become a trend for DTTB standards. In this paper, rate-compatible length-scalable raptor-like quasi-cyclic low-density parity-check (RL-QC-LDPC) codes are proposed for future DTTB systems, where a wide range of code rates and information/parity block lengths can be obtained with low implementation complexity. A low-complexity construction method, called progressive matrix growth (PMG), is also proposed for this kind of RL-QC-LDPC code. Unlike conventional construction methods, PMG performs 2-D progressive extension of both information variable nodes and check nodes with limited computational complexity and memory resources. Based on the performance prediction provided by the tool of multi-edge-type density evolution, several code families are constructed with PMG as examples, and careful discussions on design parameters are given. The numerical results show that the average gaps of the SNR threshold to the Shannon limit of the binary-input additive white Gaussian noise channel are only 0.41 to 0.44 dB for the code families with information-bit puncturing.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: It is shown that optimal signals with turbo coding are 7.6 times closer to the Shannon limit than “classic” BPSK signals for the same data rate.
Abstract: In this work the joint use of turbo codes and optimal signals is considered. Due to the high spectral efficiency provided by optimal signals and the improved energy efficiency provided by turbo coding, it is possible to get closer to the Shannon limit. The degree of approach is defined as the minimal distance between the Shannon curve and the point whose coordinates are the values of spectral and energy efficiency. The results are also compared to "classic" BPSK signals. It is shown that optimal signals with turbo coding are 7.6 times closer to the Shannon limit than "classic" BPSK signals at the same data rate. Parameters of signals that approach the Shannon limit more closely are found.

Journal ArticleDOI
TL;DR: Estimates of the Shannon limit for five different configurations of airborne ultrasound digital communication channels show that in the case of systems with multichannel transmission and higher-order modulation schemes, the achieved data rates are closest to the theoretical boundaries, but further work is still needed to achieve more efficient data transmission in airborne ultrasound communication channels.
Abstract: The use of ultrasound in air as a means of digital communications has been demonstrated in recent years. Due to electromagnetic compatibility, privacy, and security issues, this kind of transmission fits the requirements of many medical applications, body-area networks, and general wireless short-range systems, where the transmission takes place in a single room at distances up to 10 m. In the literature, different models of airborne ultrasound transmission have been investigated, but not supported by a discussion of the channel's theoretical capacity. On the other hand, knowledge of this measure is essential in the design of an efficient communication link and allows identifying its potential areas of application. In this letter, we calculate estimates of the Shannon limit for five different configurations of airborne ultrasound digital communication channels. Our evaluation is based on the numerical results available in the literature. The obtained estimates show that in the case of systems with multichannel transmission and higher-order modulation schemes, the achieved data rates are closest to the theoretical boundaries, but further work is still needed to achieve more efficient data transmission in airborne ultrasound communication channels.
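The Shannon-limit estimates discussed here rest on the band-limited AWGN capacity C = B·log2(1 + SNR). A minimal sketch with hypothetical link numbers (the 8 kHz band and the SNR values are illustrative assumptions, not figures from the letter):

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B*log2(1+SNR) of a band-limited AWGN channel."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# hypothetical airborne-ultrasound link with 8 kHz of usable band
for snr_db in (10, 20, 30):
    print(f"SNR {snr_db} dB -> C = {capacity_bps(8e3, snr_db) / 1e3:.1f} kbit/s")
```

Comparing an achieved data rate against this number for a given band and measured SNR is exactly the gap the letter evaluates.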

Posted Content
26 Feb 2018
TL;DR: This demonstration shows that 3D deep learning can compensate nonlinear inter-carrier crosstalk effects even in the presence of frequency stochastic variations, which has hitherto been considered impossible.
Abstract: The nonlinear Shannon capacity limit has been identified as the fundamental barrier to the maximum rate of transmitted information in optical communications. In long-haul high-bandwidth optical networks, this limit is mainly attributed to deterministic Kerr-induced fiber nonlinearities and to the interaction of amplified spontaneous emission noise from cascaded optical amplifiers with fiber nonlinearity: the stochastic parametric noise amplification. Unlike earlier impractical approaches that compensate only deterministic nonlinearities, here we demonstrate a novel electronic-based adaptive three-dimensional (3D) deep neural network that tackles the interplay of deterministic and stochastic nonlinearity manifestation in coherent optical signals. Our demonstration shows that 3D deep learning can compensate nonlinear inter-carrier crosstalk effects even in the presence of stochastic frequency variations, which has hitherto been considered impossible. Our solution significantly outperforms conventional 2D machine learning and gold-standard nonlinear equalizers without sacrificing computational complexity, leading to record-breaking transmission performance for up to 40 Gbit/s high-spectral-efficiency optical signals.

Journal ArticleDOI
23 Apr 2018
TL;DR: The dynamic shifting of window decoding (DS-WD) is proposed to reduce the complexity of SC-LDPC codes and achieves the complexity reduction of 7% and 25% without any loss in performance compared to the N-WD algorithms.
Abstract: In channel coding theory, the performance of error correcting codes (ECCs) approaching the Shannon limit can be achieved by increasing code lengths. Unfortunately, the complexity of ECCs increases as the code length increases. Nowadays, the magnetic recording (MR) system takes advantage of powerful ECCs by using a 4 Kbyte sector. Among the advanced ECCs, spatially coupled LDPC (SC-LDPC) codes (also known as LDPC convolutional codes) [1] are shown to have lower decoding latency and complexity than those of the underlying LDPC block codes (LDPC-BC). Moreover, SC-LDPC codes with threshold decoding outperform the LDPC-BC codes [2]. Hence, SC-LDPC codes are a strong candidate for future MR systems, when the sector size is increased beyond 4 Kbytes. An SC-LDPC decoder can use sliding window decoding [3], whereby the received signals are decoded by sliding a window along the bit sequence. The window decoder is called "uniform window decoding (U-WD)" when all variable nodes (VNs) within a window are updated. To reduce the complexity of window decoding, some researchers proposed non-uniform window decoding (N-WD) [4], which does not update the VNs that show no improvement in bit error rate (BER). This approach provides about 35-50% reduction in complexity compared to U-WD. In this work, we consider the application of SC-LDPC codes in MR systems, whereby the SC-LDPC decoder cooperates with a BCJR detector to counter inter-symbol interference (ISI). We propose dynamic shifting of window decoding (DS-WD) to reduce the complexity of SC-LDPC codes. Herein, the number of shifted bits is defined according to soft BERs estimated at each decoding position.
In addition, we modify the N-WD [4] to reinforce our proposed algorithm, called "dynamic-shifting non-uniform window decoding (DS-N-WD)." The DS-WD and DS-N-WD achieve complexity reductions of 7% and 25%, respectively, without any loss in performance compared to the N-WD algorithms.

Journal ArticleDOI
Akihiro Maruta
TL;DR: In this paper, some fundamental papers on the nonlinear Shannon limit are reviewed to better understand its meaning and applicable range.
Abstract: The remaining issues in optical transmission technology are the degradation of optical signal-to-noise power ratio due to amplifier noise and the distortions due to optical nonlinear effects in a fiber. Therefore, in addition to the Shannon limit, practical channel capacity is believed to be restricted by the nonlinear Shannon limit. The nonlinear Shannon limit has been derived under the assumption that the received signal points on the constellation map, deviated by optical amplifier noise and nonlinear interference noise, are symmetrically distributed around the ideal signal point, and that the sum of the noises can be regarded as white Gaussian noise. The nonlinear Shannon limit is considered a kind of theoretical limitation. However, it is doubtful that its derivation process and applicable range are well understood. In this paper, some fundamental papers on the nonlinear Shannon limit are reviewed to better understand its meaning and applicable range. key words: optical fiber communication, nonlinear Shannon limit, Gaussian noise model, four wave mixing

Journal ArticleDOI
TL;DR: Under MATLAB emulation tests, the application of the LDPC code in the encryption algorithm is fully verified, and it improves the system's stability and security.
Abstract: LDPC (low-density parity-check) code is a parity check code whose performance is very close to the Shannon limit. It is a good channel code with strong error-correction ability. The coding can be introduced into an information-hiding algorithm because of its functional advantages. This improves the robustness of a system in the process of information hiding, and it has good application prospects. In this paper, a new algorithm is proposed which consists of two steps. First, the encrypted information is scrambled; second, LDPC coding and modulation are applied, and the encrypted information is embedded into the carrier image, completing the watermark-embedding process. The method realizes double encryption of the information, and it improves the system's stability and security. Under MATLAB emulation tests, the application of the LDPC code in the encryption algorithm is fully verified.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: A modified version of the ACS, known as Compare-Add-Select (CSA), is used in this paper; for the proposed architecture there is a 50% reduction in both area and power without compromising performance compared to the conventional architecture.
Abstract: There is a growing demand for low-power error control decoders, which are widely used in communication systems. One of the key components in a turbo error control decoder used to decode turbo codes (a Shannon-limit-approaching code) is the Max-Log-MAP decoder. To scale down the power and area used by the conventional Add-Compare-Select (ACS) module of the Max-Log-MAP decoder, a modified version of the ACS, known as Compare-Add-Select (CSA), is used in this paper. The scale-down is accomplished by reducing the number of operations used in calculating the path metrics of the Max-Log-MAP decoder. The observations assert that for the proposed architecture, there is a 50% reduction in both area and power without compromising performance when compared to the conventional architecture.
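The reordering behind the paper's Compare-Add-Select module can be sketched behaviorally: the survivor of max(m0 + g0, m1 + g1) is equally well decided by comparing the differences m0 − m1 and g1 − g0, after which only one add is needed. A sketch of this equivalence only (the area/power saving is a hardware datapath property that software cannot show; the code demonstrates that both orderings select identical survivors):

```python
import random

def acs(m0, g0, m1, g1):
    """Add-Compare-Select: add branch metrics to path metrics,
    compare the two candidates, select the survivor."""
    return max(m0 + g0, m1 + g1)

def csa(m0, g0, m1, g1):
    """Compare-Add-Select: decide the survivor by one comparison of
    differences, then perform a single add on the chosen path."""
    if m0 - m1 >= g1 - g0:      # equivalent to m0 + g0 >= m1 + g1
        return m0 + g0
    return m1 + g1

random.seed(0)
trials = [[random.randint(-512, 511) for _ in range(4)] for _ in range(1000)]
assert all(acs(*t) == csa(*t) for t in trials)
print("ACS and CSA select identical survivors on all trials")
```

Integer metrics are used so the comparison rearrangement is exact, matching fixed-point hardware arithmetic.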

Proceedings ArticleDOI
Kexin Liang, Bowen Feng, Jian Jiao, Shaohua Wu, Qinyu Zhang
01 Oct 2018
TL;DR: This paper proposes an approximate estimation method based on frozen bits (FBES) for finite-code-length polar codes over fading channels and compares the estimation performance of the proposed scheme with an existing estimation scheme.
Abstract: Polar code has emerged as a candidate channel coding scheme for 5G, as it can achieve the Shannon limit with low encoding and decoding complexity. Channel polarization is the main principle of polar coding based on channel state information estimation; the encoded bits can then be separated into information bits and frozen bits, where the frozen bits in a polar code are constant bits known at both the transmitting and receiving sides. Hence, an efficient signal-to-noise-ratio (SNR) estimation scheme based on frozen bits over fading channels is proposed in this paper. In this channel estimation scheme, we first analyze the theoretical SNR with infinite code length according to the mapping between SNR and the frozen-bit error rate. Then we propose an approximate estimation method based on frozen bits (FBES) for finite-code-length polar codes over fading channels. In addition, we compare the estimation performance of the proposed scheme with an existing estimation scheme. Simulation results validate our analytical results and show that the proposed estimation scheme has better estimation performance with relatively lower complexity.

Proceedings ArticleDOI
26 Oct 2018
TL;DR: It is demonstrated that LDPC-coded HPGS 8-PAM outperforms both probabilistic-shaped and uniform LDPC-coded 8-PAM schemes.
Abstract: We propose a hybrid probabilistic-geometric-shaped (HPGS) 8-PAM signaling scheme as a candidate suitable for data center communications. Properly chosen HPGS 8-PAM parameters for different SNR regions allow us to closely approach the Shannon limit. We demonstrate that LDPC-coded HPGS 8-PAM outperforms both probabilistic-shaped and uniform LDPC-coded 8-PAM schemes.

Journal ArticleDOI
TL;DR: A remarkable gain can be achieved in terms of capacity compared with the conventional QAM signals under the constraint of comparable power amplifier efficiency, and an approach to reduce the signal dynamic range is introduced.
Abstract: The recently proposed recursive convolutional lattice code (RCLC) can form a signal with pseudo-Gaussian constellations, and their parallel concatenation is shown to approach the Shannon limit. A practical limitation is that its input symbol is limited to $L^{2}$ -ary quadrature amplitude modulation (QAM), which has non-power-of-two constellation points when $L$ is chosen from the odd numbers. Therefore, encoding binary information by the RCLC is not straightforward. Furthermore, the information rate is limited to $\log _{2}~L$ bits per complex dimension due to their parallel concatenation. In this paper, we tackle these issues by introducing a serial concatenation of binary-input nonbinary-output convolutional code (CC) and the RCLC, where the outer CC outputs an $L$ -ary symbol that is matched to the input of the inner RCLC. We demonstrate that even with $L=3$ , the proposed approach can achieve 2 bits per complex dimension and still is able to approach the Shannon limit with lower decoding complexity compared with its parallel concatenation counterpart. As is demonstrated through theoretical analysis, the major practical drawback of the constellation generated by the RCLC is its Gaussian-like distribution, which has large peak-to-average power ratio. Therefore, we further introduce an approach to reduce the signal dynamic range for the proposed system. It is shown that a remarkable gain can be achieved in terms of capacity compared with the conventional QAM signals under the constraint of comparable power amplifier efficiency.

Journal ArticleDOI
TL;DR: The developed IDCC codes can be considered as an alternative to well-known iterative codes (LDPC codes and turbo codes) whose main advantage is the maximum proximity to the Shannon limit.
Abstract: We propose error-correcting iteratively decodable cyclic codes (IDCC) that consist of two cyclic Hamming codes with different generator polynomials. As a mathematical apparatus, we apply the theory of linear finite-state machines (LFSM) over binary Galois fields. A generalized decoding algorithm was constructed based on power permutation of bits in the code word and a new technique for combining the codes. By using hard decisions only, it is possible to achieve high speed and a simple hardware-software implementation of the encoder and decoder on linear feedback shift registers. The IDCC (n, k)-code makes it possible to correct errors of multiplicity up to (n−k). A code word may have arbitrary length, both small and large, and the code rate (k/n) is close to one. It was established in the course of this research that approaching the theoretical Shannon limit as closely as possible significantly increases code length, complicates encoders and decoders, increases decoding delay, and creates other problems. That is why we propose that the main criterion for the optimality of error-correction coding should be the code characteristics that matter for practical application (time and hardware costs, suitability for contemporary circuitry and parallel processing). From this point of view, the developed IDCC codes can be considered an alternative to the well-known iterative codes (LDPC codes and turbo codes), whose main advantage is maximum proximity to the Shannon limit. This matters because, at the present stage of development of science and technology, ensuring highly reliable data transmission in various digital communication systems remains a relevant scientific and technological problem. The proposed codes make it possible to solve this task at minimal resource cost and with high efficiency.
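Since IDCC builds on cyclic Hamming codes encoded with shift registers, the underlying operation is systematic cyclic encoding by GF(2) polynomial division. A minimal sketch of the division an LFSR encoder realizes, using the standard (7,4) Hamming generator g(x) = x³ + x + 1 (an illustrative choice, not the paper's generator polynomials):

```python
def poly_div_remainder(dividend, gen):
    """Remainder of GF(2) polynomial division (ints as bit vectors)."""
    glen = gen.bit_length()
    while dividend.bit_length() >= glen:
        dividend ^= gen << (dividend.bit_length() - glen)
    return dividend

def cyclic_encode(msg, gen, n, k):
    """Systematic encoding of a cyclic (n, k) code: codeword
    c(x) = x**(n-k) * m(x) + (x**(n-k) * m(x) mod g(x))."""
    shifted = msg << (n - k)
    return shifted | poly_div_remainder(shifted, gen)

# (7,4) Hamming code with g(x) = x^3 + x + 1  (0b1011)
cw = cyclic_encode(0b1101, 0b1011, 7, 4)
print(f"codeword = {cw:07b}")
```

Every codeword produced this way is divisible by g(x), which is the property a hardware LFSR checks and corrects against.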

Book ChapterDOI
01 Jan 2018
TL;DR: In this proposed design architecture, the SBF and MLDD algorithms employed utilize reliability estimation to improve error performance, giving them advantages over bit-flipping (BF) algorithms.
Abstract: The latest advancements in low-density parity-check (LDPC) codes have resulted in reduced decoding complexity. Hence, these codes have excelled over turbo codes, BCH codes, and linear block codes in performance at higher decoding rates, and such decodable codes are a trending topic in the coding theory of signals. The construction of LDPC codes is elaborated in this paper, which further helps the study of decoding and encoding of binary and non-binary low-density parity-check codes, respectively. In the proposed design architecture, the SBF and MLDD algorithms employed utilize reliability estimation to improve error performance, which gives them advantages over bit-flipping (BF) algorithms. The security level of the algorithm can be further improved by trading off performance against data transmission. It can also be extended to real-time applications for data decoding and correction of small-size data.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: A coded modulation technique which combines QPSK and OFDM modulation with a low rate, short block systematic turbo code to achieve exceptional energy efficiency in extremely noisy environments, where moderate data rates and short messages are required.
Abstract: In this work we propose a coded modulation technique which combines QPSK and OFDM modulation with a low-rate, short-block systematic turbo code. We aim to achieve exceptional energy efficiency in extremely noisy environments where moderate data rates and short messages are required. In the proposed model, the performance improvement comes from a unique mapping of the trellis-structured error-correcting code, while spectral efficiency is achieved by exploiting OFDM modulation. The designed OFDM distributes systematic and parity symbols symmetrically along all sub-channels and adjusts their power individually to achieve superior bit-error performance. Further, we utilize four-dimensional M-ary Quadrature Amplitude Modulation (4D-MQAM) for parity symbols, which effectively increases the overall rate of the system while maintaining the same level of energy efficiency. The performance of the designed system, compared with the theoretical sphere-packing lower bound, shows a very small gap (less than 0.5 dB), indicating a substantially close approach to the Shannon limit.
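For context on such gap-to-Shannon-limit claims: the minimum Eb/N0 achievable on the AWGN channel at spectral efficiency η bits/s/Hz is (2^η − 1)/η, which approaches −1.59 dB as η → 0. A quick check (standard theory, not taken from the paper):

```python
import math

# Capacity-achieving minimum Eb/N0 on the AWGN channel:
# Eb/N0 >= (2**eta - 1) / eta for spectral efficiency eta (bits/s/Hz).
def ebno_limit_db(eta):
    return 10 * math.log10((2**eta - 1) / eta)

print(round(ebno_limit_db(1.0), 2))   # 0.0 dB at 1 bit/s/Hz
print(round(ebno_limit_db(1e-9), 2))  # -> -1.59 dB, the ultimate limit
```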

Posted Content
TL;DR: To analyze the asymptotic behavior of a new class of state-constrained signal codes called repeat-accumulate signal codes (RASCs), Monte Carlo density evolution (MC-DE) is employed and the optimum filters can be efficiently found for the given parameters of the encoder.
Abstract: State-constrained signal codes directly encode modulation signals using signal-processing filters, the coefficients of which are constrained over rings of formal power series. Although the performance of signal codes is determined by these filters, optimal filters must be found by brute-force search in terms of symbol error rate, because the asymptotic behavior with different filters has not been investigated. Moreover, the computational complexity of the conventional BCJR decoder increases exponentially as the number of output constellation points increases. We hence propose a new class of state-constrained signal codes called repeat-accumulate signal codes (RASCs). To analyze the asymptotic behavior of these codes, we employ Monte Carlo density evolution (MC-DE). As a result, the optimum filters can be efficiently found for the given encoder parameters. We also introduce a low-complexity decoding algorithm for RASCs called the extended min-sum (EMS) decoder. The MC-DE analysis shows that the gap between the noise threshold of RASCs and the Shannon limit is within 0.8 dB. Simulation results moreover show that the EMS decoder can reduce the computational complexity to less than 25% of that of the conventional decoder without degrading the performance by more than 1 dB.

Proceedings ArticleDOI
20 Apr 2018
TL;DR: An automated algorithm is proposed to correct the loss of sub-carriers in an Orthogonal Frequency Division Multiplexing (OFDM) system using Low-Density Parity-Check (LDPC) codes; an LDPC-coded OFDM system is employed to correct two-dimensional errors and improve the OFDM BER curve.
Abstract: This paper proposes an automated algorithm to correct the loss of sub-carriers in an Orthogonal Frequency Division Multiplexing (OFDM) system using Low-Density Parity-Check (LDPC) codes. OFDM transmits data at a high bit rate but suffers from a high peak-to-average power ratio (PAPR); subcarrier loss due to deep fades in multipath causes two-dimensional errors, i.e., in both the time and frequency domains, and inter-symbol interference (ISI) arises in multipath OFDM systems. To reduce such OFDM errors, researchers have used error-correcting codes such as turbo codes, LDPC codes, Reed-Solomon codes, and Alamouti codes. An LDPC-coded OFDM system is employed in this paper to correct two-dimensional errors and improve the OFDM BER curve. LDPC codes were first introduced by Gallager in his doctoral research in 1962. The main reason for choosing LDPC among error-correcting codes is its performance, which is near the Shannon limit. Information bits are encoded using LDPC and modulated using binary phase-shift keying (BPSK). The OFDM system then transmits the subcarriers over the channel; at the receiver they are demodulated using BPSK and decoded using LDPC to recover the original information bits. Results show that LDPC-coded OFDM performs better than uncoded OFDM in terms of BER vs. Eb/No (energy-per-bit to noise-spectral-density ratio) in decibels.
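For context on the reported BER-vs-Eb/No comparison, the uncoded BPSK baseline over AWGN has a closed form, Pb = Q(sqrt(2·Eb/N0)); a small illustrative computation (standard theory, not taken from the paper):

```python
import math

# Theoretical uncoded BPSK bit-error rate over AWGN:
# Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0)).
def bpsk_ber(ebno_db):
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

for db in (0, 4, 8):
    print(db, bpsk_ber(db))  # e.g. ~7.9e-2 at 0 dB, falling steeply
```

Any coded curve in such a plot is judged by how far left of this baseline it sits at a target BER.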

Journal ArticleDOI
TL;DR: The applicability of the model with nonlinear memory to the QPSK communication line with a periodic dispersion compensation is demonstrated numerically.
Abstract: Channels with and without memory are compared using the simplest Gaussian noise distribution model. It is shown that both channels have similar dependences of the capacity on the average signal power. At the same time, the optimal input alphabet for the channel with memory changes. The applicability of the model with nonlinear memory to the QPSK communication line with a periodic dispersion compensation is demonstrated numerically.
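As background for the capacity comparison above, the memoryless AWGN benchmark is the classical log2(1 + SNR) law; a one-line illustration (standard theory, not the paper's memory model):

```python
import math

# Memoryless AWGN capacity per complex symbol: C = log2(1 + SNR).
def awgn_capacity(snr_db):
    return math.log2(1 + 10 ** (snr_db / 10))

print(round(awgn_capacity(10), 3))  # ~3.459 bits/symbol at 10 dB SNR
```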

Proceedings ArticleDOI
01 Dec 2018
TL;DR: A new joint source-channel coding (JSCC) scheme to transmit a Gaussian source over an additive white Gaussian noise (AWGN) channel, where the channel input constraint is on energy per source symbol rather than per channel symbol.
Abstract: In this paper, we propose a new joint source-channel coding (JSCC) scheme to transmit a Gaussian source over an additive white Gaussian noise (AWGN) channel, where the channel input constraint is on energy per source symbol rather than per channel symbol. The JSCC scheme is based on an embedded Lloyd-Max quantizer with adjustable quantization levels and ternary convolutional low-density generator-matrix (LDGM) codes, which are easily configurable in the sense that any rational code rate can be achieved without complicated optimization. In our scheme, a Gaussian source is first quantized into ternary symbols, which can be formatted into different symbol planes according to their contribution to the distortion. Second, the symbol planes are encoded by a ternary convolutional LDGM code, where the relatively important symbol plane is encoded at a lower code rate. Third, the coded sequence is modulated in 3-PAM and transmitted over the channel. Numerical results show that the proposed scheme performs close to the Shannon limit, and that the distortion can be lowered by increasing the number of quantization levels.
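As an illustration of the Lloyd-Max idea underlying the quantizer above (a toy sketch, not the paper's embedded design): Lloyd's algorithm alternates nearest-level assignment with centroid updates, and for a unit Gaussian with three levels it converges near the known optimum of roughly 0 and ±1.224.

```python
import numpy as np

# Lloyd-style iterative scalar quantizer design on Gaussian samples:
# alternate nearest-level assignment and centroid (conditional mean) updates.
rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)

def lloyd_levels(samples, n_levels=3, iters=50):
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        idx = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
        for j in range(n_levels):
            if (idx == j).any():
                levels[j] = samples[idx == j].mean()
    return levels

levels = lloyd_levels(samples)
print(np.round(levels, 2))  # roughly [-1.22, 0.0, 1.22]
```

A ternary (3-level) output like this is exactly what maps naturally onto the 3-PAM modulation used in the scheme.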

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A methodology is proposed for synthesizing optimal amplitude/phase structures of continuous and discrete signal modulations that ensure the maximum information-transmission rate under the peak-power limitation typical of modern communication systems.
Abstract: Modern encoding methods have made it possible to achieve information-interchange rates over communication channels that closely approach the Shannon limit. Issues connected with the technological limitations that define the information capacity of the signals used, and hence the Shannon rate, are clearly moving to the foreground of practical problems. The traditional approach to calculating the Shannon limit, based on a model that assumes a constraint on the average signal power for transmission over a communication channel with additive Gaussian noise, has lost its universality at the current stage. A methodology is proposed for the synthesis of optimal amplitude/phase structures of continuous and discrete signal modulations that ensure the maximum information-transmission rate under the peak-power limitation typical of modern communication systems. Numerical results were obtained for the Shannon limits under these conditions, and adjustments were found to the results obtained with the traditional methodology. It was shown that in some cases these adjustments may reach 3.5–4.0 dB in terms of the reduction of the effective signal-to-noise level in the channel.