Journal ArticleDOI

Decoder-in-the-Loop: Genetic Optimization-Based LDPC Code Design

23 Sep 2019 - IEEE Access (Institute of Electrical and Electronics Engineers (IEEE)) - Vol. 7, pp. 141161-141170
TL;DR: In this article, the authors propose an evolutionary algorithm for LDPC code design based on the decoder-in-the-loop (DINL) concept, which takes into consideration the channel, code length and the number of iterations while optimizing the error-rate of the actual decoder hardware architecture.
Abstract: LDPC code design tools typically rely on asymptotic code behavior and are affected by an unavoidable performance degradation due to model imperfections in the short length regime. We propose an LDPC code design scheme based on an evolutionary algorithm, the Genetic Algorithm (GenAlg), implementing a “decoder-in-the-loop” concept. It inherently takes into consideration the channel, code length and the number of iterations while optimizing the error-rate of the actual decoder hardware architecture. We construct short length LDPC codes (i.e., the parity-check matrix) with error-rate performance comparable to, or even outperforming, that of well-designed standardized short length LDPC codes over both AWGN and Rayleigh fading channels. Our proposed algorithm can be used to design LDPC codes with special graph structures (e.g., accumulator-based codes) to facilitate the encoding step, or to satisfy any other practical requirement. Moreover, GenAlg can be used to design LDPC codes with the aim of reducing decoding latency and complexity, leading to coding gains of up to 0.325 dB and 0.8 dB at a BLER of 10^-5 over the AWGN and Rayleigh fading channels, respectively, when compared to state-of-the-art short LDPC codes. Also, we analyze what can be learned from the resulting codes; in particular, the GenAlg highlights design paradigms of short length LDPC codes (e.g., codes with degree-1 variable nodes obtain very good results).
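
The abstract's "decoder-in-the-loop" fitness idea can be sketched in a few lines. The following is a minimal, illustrative toy (the channel model, mutation operator, and the hard-decision stand-in for a real BP decoder are all placeholder assumptions, not the authors' implementation): candidate parity-check matrices are scored by the measured block-error rate of the decoder itself, and the best survivors are mutated to form the next generation.

```python
# Hypothetical sketch of a decoder-in-the-loop genetic search over
# parity-check matrices. Decoder and channel models are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 16                  # toy code length and number of checks
POP, GENS, TRIALS = 20, 50, 200

def random_H():
    return (rng.random((M, N)) < 0.2).astype(np.uint8)

def mutate(H, flips=3):
    H = H.copy()
    for _ in range(flips):                 # flip a few entries of H
        i, j = rng.integers(M), rng.integers(N)
        H[i, j] ^= 1
    return H

def decoder_error_rate(H, snr_db=2.0):
    """Fitness = error rate of the *actual* decoder on the target channel.
    Here a trivial hard decision + syndrome check stands in for real BP."""
    sigma = 10 ** (-snr_db / 20)
    errors = 0
    for _ in range(TRIALS):
        x = np.zeros(N)                    # all-zero codeword, BPSK: +1
        y = (1 - 2 * x) + sigma * rng.standard_normal(N)
        x_hat = (y < 0).astype(np.uint8)   # placeholder decoder
        errors += np.any(H @ x_hat % 2)    # failed syndrome => block error
    return errors / TRIALS

pop = [random_H() for _ in range(POP)]
for g in range(GENS):
    pop.sort(key=decoder_error_rate)       # decoder in the loop
    pop = pop[:POP // 2]                   # survivor selection
    pop += [mutate(H) for H in pop]        # mutation produces offspring
print("best BLER estimate:", decoder_error_rate(pop[0]))
```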
Citations
Journal ArticleDOI
TL;DR: This overview paper summarizes the state-of-the-art of AI-based 5G and B5G techniques on the algorithm, implementation, and optimization levels, and provides a summary of emerging techniques and open research problems.
Abstract: The communication industry is rapidly advancing towards 5G and beyond 5G (B5G) wireless technologies in order to fulfill the ever-growing needs for higher data rates and improved quality-of-service (QoS). Emerging applications require wireless connectivity with tremendously increased data rates, substantially reduced latency, and growing support for a large number of devices. These requirements pose new challenges that can no longer be efficiently addressed by conventional approaches. Artificial intelligence (AI) is considered as one of the most promising solutions to improve the performance and robustness of 5G and B5G systems, fueled by the massive amount of data generated in 5G and B5G networks and the availability of powerful data processing fabrics. As a consequence, a plethora of research on AI-based communication technologies has emerged recently, promising higher data rates and improved QoS with affordable implementation overhead. In this overview paper, we summarize the state-of-the-art of AI-based 5G and B5G techniques on the algorithm, implementation, and optimization levels. We shed light on the advantages and limitations of AI-based solutions, and we provide a summary of emerging techniques and open research problems.

42 citations

Proceedings ArticleDOI
01 Sep 2019
TL;DR: A deep learning-based polar code construction algorithm that represents the information/frozen bit indices of a polar code as a binary vector which can be interpreted as trainable weights of a neural network, facilitating the learning process through gradient descent and enabling an efficient code construction.
Abstract: In this work, we introduce a deep learning-based polar code construction algorithm. The core idea is to represent the information/frozen bit indices of a polar code as a binary vector which can be interpreted as trainable weights of a neural network (NN). For this, we demonstrate how this binary vector can be relaxed to a soft-valued vector, facilitating the learning process through gradient descent and enabling an efficient code construction. We further show how different polar code design constraints (e.g., code rate) can be taken into account by means of careful binary-to-soft and soft-to-binary conversions, along with rate-adjustment after each learning iteration. Besides its conceptual simplicity, this approach benefits from having the “decoder-in-the-loop”, i.e., the nature of the decoder is inherently taken into consideration while learning (designing) the polar code. We show results for belief propagation (BP) decoding over both AWGN and Rayleigh fading channels with considerable performance gains over state-of-the-art construction schemes.
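
A hedged sketch of the relaxation trick this abstract describes (all names and the gradient source are illustrative assumptions, not the authors' exact procedure): the frozen/information indicator is kept as a soft vector during gradient updates and projected back to a valid rate-K binary vector after each iteration.

```python
# Illustrative soft <-> binary conversion with rate adjustment for a
# learned polar code construction (placeholder gradient, toy sizes).
import numpy as np

def soft_to_binary(w, K):
    """Project soft weights w to a binary vector with exactly K ones
    (information positions), enforcing the code rate K/N."""
    idx = np.argsort(w)[-K:]          # K most "information-like" indices
    b = np.zeros_like(w)
    b[idx] = 1.0
    return b

def binary_to_soft(b, eps=0.1):
    """Relax a binary vector so gradient-based updates can move it."""
    return np.clip(b, eps, 1.0 - eps)

N, K = 16, 8
w = np.random.default_rng(1).random(N)    # trainable weights
for step in range(100):
    grad = np.random.default_rng(step).standard_normal(N)  # stand-in for
    w = w - 0.01 * grad                    # a decoder-loss gradient
    w = binary_to_soft(soft_to_binary(w, K))   # rate adjustment per step
print("information set:", np.nonzero(soft_to_binary(w, K))[0])
```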

20 citations

Journal ArticleDOI
TL;DR: This paper provides a comprehensive overview of current state-of-the-art LDPC decoding algorithms and their applications, and discusses open research problems, challenges, and future prospects.
Abstract: In the domain of wireless communication systems, Error Control Coding (ECC) schemes are among the methodologies most widely relied upon for securing the integrity and authenticity of the data transmission process. In the last decade, due to the advent of modern communication standards and their wide range of services, there has been a resurgence of interest in the research community towards the conception of efficient and versatile ECC techniques. Recent developments in wireless communication technologies have highlighted the pliable nature of low-density parity-check (LDPC) codes, whose contributions cannot be overstated. As of now, decoding schemes based on LDPC codes have emerged as some of the most promising and effective coding schemes for addressing several key problems of reliable data communication. In this article, a comprehensive overview of current state-of-the-art LDPC decoding algorithms and their applications is provided. In addition, a thorough investigation and comparison of various LDPC decoding algorithms is carried out based on their performance, similarities, scalability, numerical stability, and feasibility for hardware realization. Finally, at the end of this review, views on open research problems, challenges, and future prospects are discussed.
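
Since the survey centers on BP-type decoders and their hardware-friendly variants, a one-function reminder of the normalized min-sum check-node update (a common hardware choice) may help; this is standard textbook material, not code from the article.

```python
# Normalized min-sum check-node update: the hardware-friendly
# approximation of the sum-product rule (standard textbook form).
import numpy as np

def check_node_update(msgs, alpha=0.75):
    """msgs: LLRs arriving at one check node from its variable nodes.
    Returns the outgoing LLR toward each neighbor (leave-one-out)."""
    msgs = np.asarray(msgs, dtype=float)
    sign = np.sign(msgs)
    sign[sign == 0] = 1.0
    total_sign = np.prod(sign)
    mags = np.abs(msgs)
    out = np.empty_like(mags)
    for i in range(len(msgs)):     # leave-one-out sign and minimum
        out[i] = alpha * (total_sign * sign[i]) * np.min(np.delete(mags, i))
    return out

print(check_node_update([1.2, -0.4, 2.5, -3.0]))
```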

10 citations

Proceedings ArticleDOI
Lingchen Huang, Huazi Zhang, Rong Li, Yiqun Ge, Jun Wang
01 Dec 2019
TL;DR: Simulation results show that the proposed learning-based polar constructions achieve comparable, or even better, performances than the state of the art under successive cancellation list (SCL) decoders, without exploiting any expert knowledge from polar coding theory in the learning algorithms.
Abstract: In this paper, we model nested polar code construction as a Markov decision process (MDP), and tackle it with advanced reinforcement learning (RL) techniques. First, an MDP environment with state, action, and reward is defined in the context of polar coding. Specifically, a state represents the construction of an $(N,K)$ polar code, an action specifies its reduction to an $(N,K-1)$ subcode, and reward is the decoding performance. A neural network architecture consisting of both policy and value networks is proposed to generate actions based on the observed states, aiming at maximizing the overall rewards. A loss function is defined to trade off between exploitation and exploration. To further improve learning efficiency and quality, an “integrated learning” paradigm is proposed. It first employs a genetic algorithm to generate a population of (sub-)optimal polar codes for each $(N,K)$, and then uses them as prior knowledge to refine the policy in RL. Such a paradigm is shown to accelerate the training process and to converge to better performance. Simulation results show that the proposed learning-based polar constructions achieve comparable, or even better, performances than the state of the art under successive cancellation list (SCL) decoders. Last but not least, this is achieved without exploiting any expert knowledge from polar coding theory in the learning algorithms.
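
A minimal sketch of the MDP framing this abstract describes (the reward function below is a placeholder; the paper uses simulated decoding performance under SCL): the state is the current information set of an $(N,K)$ code, and an action removes one information index to obtain the $(N,K-1)$ subcode.

```python
# Toy MDP environment for nested polar code construction: a state is an
# information set, an action drops one index. Reward is a placeholder.
import numpy as np

class NestedPolarEnv:
    def __init__(self, N, K_min):
        self.N, self.K_min = N, K_min
        self.reset()

    def reset(self):
        self.info_set = set(range(self.N))     # start from the (N, N) code
        return frozenset(self.info_set)

    def step(self, action):
        assert action in self.info_set
        self.info_set.remove(action)           # (N,K) -> (N,K-1) subcode
        reward = self._decoding_performance()
        done = len(self.info_set) <= self.K_min
        return frozenset(self.info_set), reward, done

    def _decoding_performance(self):
        # Placeholder: the paper uses the simulated SCL error rate here.
        return -float(len(self.info_set))

env = NestedPolarEnv(N=8, K_min=4)
state = env.reset()
rng = np.random.default_rng(0)
done = False
while not done:                                # random policy for the demo
    action = rng.choice(sorted(env.info_set))
    state, reward, done = env.step(action)
print("final information set:", sorted(state))
```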

8 citations

Journal ArticleDOI
TL;DR: This work considers automorphism ensemble decoding (AED) of quasi-cyclic (QC) low-density parity-check (LDPC) codes and shows that a gain in error-correcting performance can be leveraged using an ensemble of identical BP decoders, without increasing the worst-case decoding latency.
Abstract: We consider automorphism ensemble decoding (AED) of quasi-cyclic (QC) low-density parity-check (LDPC) codes. Belief propagation (BP) decoding on the conventional factor graph is equivariant to the quasi-cyclic automorphisms and therefore prevents gains by AED. However, by applying small modifications to the parity-check matrix at the receiver side, we can break the symmetry without changing the code at the transmitter. This way, we can leverage a gain in error-correcting performance using an ensemble of identical BP decoders, without increasing the worst-case decoding latency. The proposed method is demonstrated using LDPC codes from the CCSDS, 802.11n and 5G standards and produces gains of 0.2 to 0.3 dB over conventional BP decoding. Compared to the similarly performing saturated BP (SBP), the proposed algorithm reduces the average decoding latency by more than a factor of eight.
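
A hedged sketch of the ensemble idea in this abstract (the BP decoder and the matrix modification below are placeholders, not the paper's construction): several identical decoders run on differently modified parity-check matrices of the same code, and the first candidate satisfying the original checks is returned.

```python
# Sketch of automorphism-ensemble-style decoding: run identical decoders
# on symmetry-breaking variants of H, keep a codeword valid under the
# original H. bp_decode() is a placeholder, not real belief propagation.
import numpy as np

def bp_decode(H, llr, iters=20):
    # Placeholder for BP; here: hard decision only.
    return (llr < 0).astype(np.uint8)

def modify_H(H, seed):
    """Receiver-side tweak: append one (randomly chosen) duplicate row.
    The row space, and hence the code, is unchanged."""
    rng = np.random.default_rng(seed)
    return np.vstack([H, H[rng.integers(H.shape[0])]])

def aed(H, llr, ensemble_size=4):
    for s in range(ensemble_size):          # decoders can run in parallel
        x_hat = bp_decode(modify_H(H, s), llr)
        if not np.any(H @ x_hat % 2):       # check against the original H
            return x_hat
    return (llr < 0).astype(np.uint8)       # fall back to hard decision

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
llr = np.array([2.1, 1.5, -0.3, 3.0, 0.8, 1.1])   # received LLRs
print("decoded:", aed(H, llr))
```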

5 citations

References
BookDOI
01 May 1992
TL;DR: Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways.
Abstract: From the Publisher: Genetic algorithms are playing an increasingly important role in studies of complex adaptive systems, ranging from adaptive agents in economic theory to the use of machine learning techniques in the design of complex devices such as aircraft turbines and integrated circuits. Adaptation in Natural and Artificial Systems is the book that initiated this field of study, presenting the theoretical foundations and exploring applications. In its most familiar form, adaptation is a biological process, whereby organisms evolve by rearranging genetic material to survive in environments confronting them. In this now classic work, Holland presents a mathematical model that allows for the nonlinearity of such complex interactions. He demonstrates the model's universality by applying it to economics, physiological psychology, game theory, and artificial intelligence and then outlines the way in which this approach modifies the traditional views of mathematical genetics. Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways. Along the way he accounts for major effects of coadaptation and coevolution: the emergence of building blocks, or schemata, that are recombined and passed on to succeeding generations to provide innovations and improvements. John H. Holland is Professor of Psychology and Professor of Electrical Engineering and Computer Science at the University of Michigan. He is also Maxwell Professor at the Santa Fe Institute and is Director of the University of Michigan/Santa Fe Institute Advanced Research Program.

12,584 citations

Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number $j \geq 3$ of 1's and each row contains a small fixed number $k > j$ of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed $j$. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed $j$. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For $j > 3$ and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
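
Gallager's definition above maps directly onto a construction. Below is a short sketch that samples a regular parity-check matrix with $j$ ones per column and $k$ ones per row, using the strip-and-permutation idea behind Gallager's ensembles (parameter choices are illustrative).

```python
# Sample a regular (j, k) parity-check matrix in Gallager's sense:
# every column has j ones, every row has k ones (requires N*j == M*k).
import numpy as np

def regular_ldpc_H(N, j, k, seed=0):
    assert (N * j) % k == 0, "N*j must be divisible by k"
    M = N * j // k                      # number of check equations
    assert M % j == 0, "need N divisible by k for equal strips"
    rng = np.random.default_rng(seed)
    H = np.zeros((M, N), dtype=np.uint8)
    # Stack j strips of M/j rows; in each strip every column gets one 1.
    rows_per_strip = M // j
    for s in range(j):
        cols = rng.permutation(N)       # random column order per strip
        for r in range(rows_per_strip):
            for c in cols[r * k:(r + 1) * k]:
                H[s * rows_per_strip + r, c] = 1
    return H

H = regular_ldpc_H(N=12, j=3, k=4)
print("column weights:", H.sum(axis=0))  # all equal j
print("row weights:   ", H.sum(axis=1))  # all equal k
```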

11,592 citations

Journal ArticleDOI
TL;DR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Abstract: A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
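
The bipartite-graph view in this abstract underlies all modern LDPC decoding. As a small illustration (standard material, not code from the paper), the graph, digit nodes on one side and subcode/check nodes on the other, can be read directly off a parity-check matrix:

```python
# Build the bipartite (Tanner) graph of a parity-check matrix: an edge
# connects digit d and check c wherever H[c, d] = 1.
import numpy as np

def tanner_graph(H):
    checks, digits = np.nonzero(H)
    digit_to_checks = {d: [] for d in range(H.shape[1])}
    check_to_digits = {c: [] for c in range(H.shape[0])}
    for c, d in zip(checks, digits):
        digit_to_checks[d].append(c)
        check_to_digits[c].append(d)
    return digit_to_checks, check_to_digits

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
d2c, c2d = tanner_graph(H)
print("digit -> checks:", d2c)   # e.g. digit 1 participates in both checks
print("check -> digits:", c2d)
```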

3,246 citations

MonographDOI
17 Mar 2008
TL;DR: This summary of the state-of-the-art in iterative coding, with emphasis on the underlying theory, presents techniques to analyse and design practical iterative coding systems.
Abstract: Having trouble deciding which coding scheme to employ, how to design a new scheme, or how to improve an existing system? This summary of the state-of-the-art in iterative coding makes this decision more straightforward. With emphasis on the underlying theory, techniques to analyse and design practical iterative coding systems are presented. Using Gallager's original ensemble of LDPC codes, the basic concepts are extended for several general codes, including the practically important class of turbo codes. The simplicity of the binary erasure channel is exploited to develop analytical techniques and intuition, which are then applied to general channel models. A chapter on factor graphs helps to unify the important topics of information theory, coding and communication theory. Covering the most recent advances, this text is ideal for graduate students in electrical engineering and computer science, and practitioners. Additional resources, including instructor's solutions and figures, available online: www.cambridge.org/9780521852296.

2,100 citations

Journal ArticleDOI
TL;DR: Improved algorithms are developed to construct good low-density parity-check codes that approach the Shannon limit very closely, especially for rate 1/2.
Abstract: We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.
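
The "Shannon limit" referenced here is the Eb/N0 at which binary-input AWGN capacity equals the code rate (about 0.187 dB for rate 1/2). A short Monte-Carlo sketch of how that crossing point can be located numerically (standard numerics, values approximate):

```python
# Monte-Carlo estimate of binary-input AWGN capacity, used to locate
# the Shannon limit (Eb/N0 where capacity equals the code rate 1/2).
import numpy as np

def bi_awgn_capacity(ebn0_db, rate=0.5, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = 1.0 / np.sqrt(2 * rate * 10 ** (ebn0_db / 10))
    y = 1.0 + sigma * rng.standard_normal(n)   # +1 sent (channel symmetry)
    llr = 2 * y / sigma**2
    # C = 1 - E[log2(1 + e^{-LLR})] for a symmetric binary-input channel
    return 1.0 - np.mean(np.log2(1 + np.exp(-llr)))

for ebn0 in (0.0, 0.187, 0.5):   # capacity crosses 1/2 near 0.187 dB
    print(f"Eb/N0 = {ebn0:5.3f} dB -> C ~ {bi_awgn_capacity(ebn0):.4f}")
```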

1,642 citations