Open Access · Proceedings Article

Learning to decode linear codes using deep learning

TL;DR: A novel deep learning method improves the belief propagation algorithm by assigning trainable weights to the edges of the Tanner graph; because the decoder's performance remains independent of the transmitted codeword, it can be trained on a single codeword instead of an exponential number of codewords.
Abstract
A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph; these weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is that its performance is independent of the transmitted codeword. A crucial property of our new method is that the decoder preserves this independence, which allows us to train on only a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high-density parity-check codes.
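
The abstract's core idea lends itself to a short sketch. Below is a minimal NumPy illustration of one weighted belief-propagation iteration on a toy parity-check matrix, with one multiplicative weight per Tanner-graph edge standing in for the trained parameters; the toy code, the message schedule, and the placement of the weights are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Minimal sketch of one weighted belief-propagation iteration (sum-product,
# log-likelihood-ratio domain) with one multiplicative weight per Tanner-graph
# edge. Toy code and weight placement are assumptions for illustration.

H = np.array([[1, 1, 0, 1, 0, 0],   # toy (3 x 6) parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
m, n = H.shape
edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]

def weighted_bp_iteration(llr, v2c, w):
    """One check-to-variable and variable-to-check update with edge weights w."""
    c2v = np.zeros((m, n))
    for c, v in edges:
        # tanh rule over the other variables attached to check c
        t = np.prod([np.tanh(v2c[c, vp] / 2.0)
                     for cp, vp in edges if cp == c and vp != v])
        c2v[c, v] = 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))
    new_v2c = np.zeros((m, n))
    for c, v in edges:
        # weighted combination of incoming check messages (w is what training tunes)
        new_v2c[c, v] = llr[v] + sum(w[cp, v] * c2v[cp, v]
                                     for cp, vp in edges if vp == v and cp != c)
    return c2v, new_v2c

llr = np.random.randn(n) + 1.0       # example channel LLRs (all-zero codeword)
v2c = np.tile(llr, (m, 1)) * H       # initialize variable-to-check messages
w = np.ones((m, n))                  # all-ones weights = standard BP
c2v, v2c = weighted_bp_iteration(llr, v2c, w)
marginal = llr + (w * c2v).sum(axis=0)   # soft decision after one iteration
print((marginal < 0).astype(int))        # hard decisions (0/1 bits)
```

With all weights equal to one this reduces to the standard sum-product update, which is the sense in which the method generalizes belief propagation; the codeword-independence noted in the abstract is what lets training use a single (e.g. all-zero) codeword.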


Citations
Journal Article

An Introduction to Deep Learning for the Physical Layer

TL;DR: In this article, an end-to-end reconstruction task was proposed to jointly optimize transmitter and receiver components in a single process, which can be extended to networks of multiple transmitters and receivers.
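
As a rough picture of the end-to-end idea this citation summarizes, the sketch below wires a transmitter network, a noisy channel, and a receiver network into one reconstruction pipeline. The linear transmitter/receiver, the AWGN channel, and all sizes are assumptions for illustration; in the paper both ends are trained jointly by backpropagating a reconstruction loss through the channel.

```python
import numpy as np

# Sketch of the end-to-end "autoencoder" view of a link: a transmitter maps a
# message index to n channel uses, a noise layer models the channel, and a
# receiver classifies the received samples back to a message. All sizes and
# the plain AWGN channel are illustrative assumptions.

M, n = 16, 7                          # 16 possible messages, 7 real channel uses
rng = np.random.default_rng(0)
W_tx = rng.normal(size=(M, n))        # transmitter: one learned signal per message
W_rx = rng.normal(size=(n, M))        # receiver: linear scorer followed by argmax

def transmit(msg):
    x = W_tx[msg]
    return x * np.sqrt(n) / np.linalg.norm(x)   # enforce average power constraint

def channel(x, snr_db=7.0):
    sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
    return x + rng.normal(scale=sigma, size=x.shape)

def receive(y):
    return int(np.argmax(y @ W_rx))   # most likely message under the scorer

print(receive(channel(transmit(3))))  # decoded message (untrained, so arbitrary)
```
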
Journal Article

Power of Deep Learning for Channel Estimation and Signal Detection in OFDM Systems

TL;DR: The proposed deep learning-based approach to handle wireless OFDM channels in an end-to-end manner is more robust than conventional methods when fewer training pilots are used, the cyclic prefix is omitted, and nonlinear clipping noise exists.
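
A hedged sketch of what "end-to-end" means here: a single network maps received pilot and data symbols directly to bit estimates, with no explicit channel-estimation or equalization stage. The single hidden layer and all dimensions below are assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch of a deep-learning OFDM receiver interface: a fully connected network
# takes the received pilot and data symbols and outputs soft bit estimates,
# replacing explicit channel estimation and equalization. Sizes are assumed.

rng = np.random.default_rng(1)
n_sub = 64                                  # subcarriers per OFDM symbol
n_in = 2 * 2 * n_sub                        # real+imag of pilot and data symbols
n_bits = 16                                 # bits recovered per forward pass

W1 = rng.normal(scale=0.1, size=(n_in, 128))
W2 = rng.normal(scale=0.1, size=(128, n_bits))

def detect(pilot, data):
    """Map received pilot+data OFDM symbols straight to soft bit estimates."""
    x = np.concatenate([pilot.real, pilot.imag, data.real, data.imag])
    h = np.maximum(x @ W1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid -> per-bit probabilities

pilot = rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)
data = rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)
print((detect(pilot, data) > 0.5).astype(int))   # hard bit decisions
```
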
Journal Article

Deep Learning Based Communication Over the Air

TL;DR: This paper builds, trains, and runs a complete communications system composed solely of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries, and proposes a two-step learning procedure based on the idea of transfer learning that circumvents the challenges of training such a system over actual channels.
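
The two-step procedure can be caricatured as follows: the transmitter learned on a simulated channel is frozen, and only the receiver is fine-tuned on message/reception pairs recorded over the air, since gradients cannot flow through the physical channel. The logistic-regression receiver and the synthetic "measurements" below are stand-in assumptions, not the paper's code.

```python
import numpy as np

# Sketch of two-step transfer learning: step 1 (training on a simulated
# channel) is assumed done and yields a frozen transmitter; step 2 fine-tunes
# only the receiver on over-the-air data. The fake channel is an assumption.

rng = np.random.default_rng(2)
M, n = 4, 8
W_tx = rng.normal(size=(M, n))            # step 1 result: frozen transmitter
W_rx = rng.normal(scale=0.1, size=(n, M)) # receiver to be fine-tuned

# "Over-the-air" data: transmitted messages and distorted received samples
msgs = rng.integers(0, M, size=256)
recv = W_tx[msgs] * 0.8 + rng.normal(scale=0.3, size=(256, n))  # fake channel

for _ in range(200):                      # step 2: fine-tune the receiver only
    logits = recv @ W_rx
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)     # softmax over the M messages
    grad = p.copy()
    grad[np.arange(len(msgs)), msgs] -= 1.0
    W_rx -= 0.1 * recv.T @ grad / len(msgs)   # cross-entropy gradient step

print((np.argmax(recv @ W_rx, axis=1) == msgs).mean())  # training accuracy
```
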
Journal Article

Deep Learning for Intelligent Wireless Networks: A Comprehensive Survey

TL;DR: A comprehensive survey is performed of the applications of DL algorithms at different network layers, including physical-layer modulation and coding, data-link-layer access control and resource allocation, and routing-layer path search and traffic balancing.
Journal Article

Deep Learning Methods for Improved Decoding of Linear Codes

TL;DR: It is shown that deep learning methods can be used to improve a standard belief propagation decoder, and that tying the parameters of the decoder across iterations, so as to form a recurrent neural network architecture, achieves comparable results.
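
The tying idea reduces to reusing one parameter set at every decoding iteration instead of learning a fresh set per iteration, which turns the unrolled decoder into a recurrent network. The sketch below shows only the tying; the damped-tanh update is a hypothetical placeholder for the weighted belief-propagation step, not the paper's update.

```python
import numpy as np

# Feed-forward (unrolled) decoding learns a separate weight per iteration;
# the recurrent variant reuses one shared weight every iteration.

def iteration(llr, state, w):
    return np.tanh(w * (llr + state))  # stand-in for one weighted BP iteration

llr = np.random.randn(6)
ws_unrolled = [1.0, 0.9, 0.8, 0.7, 0.6]   # feed-forward: one weight per layer
w_tied = 0.9                              # recurrent: the same weight reused

state_ff = np.zeros_like(llr)
state_rnn = np.zeros_like(llr)
for t in range(5):
    state_ff = iteration(llr, state_ff, ws_unrolled[t])
    state_rnn = iteration(llr, state_rnn, w_tied)   # tied across iterations
```
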
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
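
For reference, the residual idea amounts to letting layers learn a correction F(x) that a shortcut connection adds back to the input, so very deep stacks start near the identity and are easier to optimize. The small dense block below is an assumption for brevity; the paper's blocks are convolutional.

```python
import numpy as np

# Sketch of a residual block: layers learn a residual mapping F(x) and a
# shortcut adds it back to the input. Dense toy layers stand in for the
# paper's convolutional blocks.

rng = np.random.default_rng(3)
d = 8
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def residual_block(x):
    f = np.maximum(x @ W1, 0.0) @ W2   # the residual mapping F(x)
    return np.maximum(x + f, 0.0)      # shortcut: relu(x + F(x))

x = rng.normal(size=d)
print(residual_block(x))               # with small weights, close to relu(x)
```
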
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieved state-of-the-art classification performance on ImageNet, as discussed by the authors.
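
That architecture is concrete enough to transcribe. The PyTorch sketch below follows the layer list in the TL;DR (five convolutions, interleaved max-pooling, three fully connected layers, 1000-way output); kernel and channel sizes follow the published AlexNet configuration, while the original's two-GPU split and local response normalization are omitted.

```python
import torch.nn as nn

# AlexNet-style network: five conv layers (some followed by max-pooling) and
# three fully connected layers ending in 1000 class logits. Expects a
# 3 x 227 x 227 input; softmax is applied inside the training loss.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1000),               # logits for the 1000-way softmax
)
```
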
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
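
Dropout itself is a few lines. The sketch below uses the "inverted" variant, which rescales kept activations by 1/p during training so that no test-time rescaling is needed; the original paper instead scales weights at test time.

```python
import numpy as np

# Inverted dropout: during training each unit is kept with probability p_keep
# and zeroed otherwise, which discourages co-adaptation of units; at test
# time the full network is used unchanged.

rng = np.random.default_rng(4)

def dropout(h, p_keep=0.5, train=True):
    if not train:
        return h                          # test time: use all units
    mask = rng.random(h.shape) < p_keep   # independently keep each unit
    return h * mask / p_keep              # rescale to preserve expectations

h = rng.normal(size=(2, 8))               # activations of a hidden layer
print(dropout(h))
```
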
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization as mentioned in this paper normalizes layer inputs for each training mini-batch to reduce the internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
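
The transform is simple enough to state in code: each feature is standardized with the current mini-batch's mean and variance and then rescaled by learnable parameters gamma and beta. Running statistics for inference are omitted from this sketch.

```python
import numpy as np

# Batch normalization of a (batch, features) activation matrix: standardize
# each feature using the mini-batch statistics, then apply a learnable scale
# (gamma) and shift (beta).

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learnable scale and shift

x = np.random.default_rng(5).normal(loc=3.0, scale=2.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))
```
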