Author

Yair Be'ery

Bio: Yair Be'ery is an academic researcher from Tel Aviv University. The author has contributed to research in topics: Decoding methods & Block code. The author has an h-index of 23 and has co-authored 79 publications receiving 2,323 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that deep learning methods can be used to improve a standard belief propagation decoder, and that tying the decoder parameters across iterations, so as to form a recurrent neural network architecture, achieves comparable results with significantly fewer parameters.
Abstract: The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoder across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.

487 citations
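The weighted belief-propagation idea described in the abstract above can be illustrated with a short sketch. The NumPy code below is my own minimal illustration, not the authors' implementation: it runs one iteration of min-sum decoding in which every Tanner-graph edge carries a multiplicative weight. In the paper those weights are trained with deep learning; here they are simply initialized to one. The (7,4) Hamming parity-check matrix and the received LLRs are assumed toy values.

import numpy as np

# Assumed toy example: parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def weighted_min_sum_iteration(llr, v2c, weights):
    """One decoding iteration with a weight attached to every Tanner-graph edge.

    llr     : channel log-likelihood ratios, shape (n,)
    v2c     : variable-to-check messages, same shape as H
    weights : per-edge weights, same shape as H (trainable in the paper,
              fixed to 1 in this sketch)
    """
    m, n = H.shape
    c2v = np.zeros_like(v2c, dtype=float)
    # Check-node update (min-sum): sign product and minimum magnitude
    # over the other edges connected to the same check node.
    for i in range(m):
        cols = np.flatnonzero(H[i])
        for j in cols:
            others = cols[cols != j]
            c2v[i, j] = np.prod(np.sign(v2c[i, others])) * np.min(np.abs(v2c[i, others]))
    # Variable-node update: this is where the learned edge weights enter.
    new_v2c = np.zeros_like(v2c, dtype=float)
    for j in range(n):
        rows = np.flatnonzero(H[:, j])
        for i in rows:
            others = rows[rows != i]
            new_v2c[i, j] = llr[j] + np.sum(weights[others, j] * c2v[others, j])
    # Soft output used for a hard decision after this iteration.
    marginal = llr + np.sum(weights * c2v, axis=0)
    return new_v2c, marginal

# Usage: start from the channel LLRs and run a fixed number of iterations.
llr = np.array([2.5, -0.8, 1.1, 0.7, 3.0, -0.2, 1.9])   # assumed received LLRs
v2c = H * llr                                            # initial messages
weights = H.astype(float)                                # all-ones weights on the edges
for _ in range(5):
    v2c, marginal = weighted_min_sum_iteration(llr, v2c, weights)
print((marginal < 0).astype(int))                        # hard decision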

Proceedings ArticleDOI
16 Jul 2016
TL;DR: A novel deep learning method for improving the belief propagation algorithm by assigning weights to the edges of the Tanner graph, which allows training on only a single codeword instead of an exponential number of codewords.
Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These edge weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is the independence of the performance on the transmitted codeword. A crucial property of our new method is that our decoder preserves this property. Furthermore, this property allows us to learn only a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high-density parity-check codes.

406 citations
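Because the decoder inherits the codeword-independence property of belief propagation, training data can be generated from noisy observations of the all-zero codeword alone. The snippet below is a minimal sketch of that consequence, assuming BPSK modulation over an AWGN channel; the block length, SNR, and batch size are arbitrary illustrative values, not the paper's setup.

import numpy as np

def all_zero_training_batch(n, snr_db, batch_size, rng=np.random.default_rng(0)):
    """Channel LLRs for BPSK over AWGN when the all-zero codeword is transmitted.

    Thanks to the symmetry of (weighted) belief propagation, batches like this
    suffice for training -- no need to sample the exponentially large codebook.
    """
    sigma2 = 10 ** (-snr_db / 10)                 # noise variance for unit-energy symbols
    x = np.ones((batch_size, n))                  # all-zero bits map to all +1 symbols
    y = x + rng.normal(scale=np.sqrt(sigma2), size=x.shape)
    return 2 * y / sigma2                         # LLRs fed to the decoder

batch = all_zero_training_batch(n=63, snr_db=3.0, batch_size=32)   # e.g. a length-63 code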

Posted Content
TL;DR: In this paper, a deep learning method for improving the belief propagation algorithm is proposed, where weights are assigned to the edges of the Tanner graph and these edges are then trained using deep learning techniques.
Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These edge weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is the independence of the performance on the transmitted codeword. A crucial property of our new method is that our decoder preserves this property. Furthermore, this property allows us to learn only a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high-density parity-check codes.

223 citations

Journal ArticleDOI
17 Jan 1993
TL;DR: The authors show that the same kind of direct-sum structure exists in all the primitive BCH codes, as well as in the BCH codes of composite block length, and introduce a related structure termed the "concurring-sum", establishing its existence in the primitive binary BCH codes.
Abstract: The problem of efficient maximum-likelihood soft decision decoding of binary BCH codes is considered. It is known that those primitive BCH codes whose designed distance is one less than a power of two contain subcodes of high dimension which consist of a direct sum of several identical codes. The authors show that the same kind of direct-sum structure exists in all the primitive BCH codes, as well as in the BCH codes of composite block length. They also introduce a related structure termed the "concurring-sum", and then establish its existence in the primitive binary BCH codes. Both structures are employed to upper bound the number of states in the minimal trellis of BCH codes, and to develop efficient algorithms for maximum-likelihood soft decision decoding of these codes.

142 citations
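The computational benefit of a direct-sum subcode can be sketched in a few lines. The toy example below is my own illustration, not the authors' trellis-based algorithm: it uses the (3,1) repetition code as the identical component code, and maximum-likelihood soft-decision decoding of the direct sum decomposes into independent correlation decoding of each component, so the search grows linearly rather than exponentially in the number of components.

import numpy as np

def ml_correlation_decode(llr, codewords):
    """Brute-force ML soft-decision decoding: pick the codeword whose
    +/-1 image best correlates with the received LLRs."""
    signs = 1 - 2 * np.asarray(codewords)            # bit 0 -> +1, bit 1 -> -1
    return codewords[int(np.argmax(signs @ llr))]

# Assumed toy component code: the (3,1) repetition code.
rep3 = [[0, 0, 0], [1, 1, 1]]

def decode_direct_sum(llr, component=rep3, copies=2):
    """Decode the direct sum of `copies` identical component codes by
    decoding each segment on its own -- linear rather than exponential
    in the number of components."""
    parts = [ml_correlation_decode(seg, component)
             for seg in np.split(np.asarray(llr, float), copies)]
    return np.concatenate(parts)

print(decode_direct_sum([1.2, -0.4, 0.8, -2.0, -0.1, 0.6]))   # -> [0 0 0 1 1 1]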

Journal ArticleDOI
TL;DR: A binary multiple-check generalization of the Wagner rule is presented, and two methods for its implementation, one of which resembles the suboptimal Forney-Chase algorithms, are described.
Abstract: Maximum-likelihood soft-decision decoding of linear block codes is addressed. A binary multiple-check generalization of the Wagner rule is presented, and two methods for its implementation, one of which resembles the suboptimal Forney-Chase algorithms, are described. Besides efficient soft decoding of small codes, the generalized rule enables utilization of subspaces of a wide variety, thereby yielding maximum-likelihood decoders with substantially reduced computational complexity for some larger binary codes. More sophisticated choice and exploitation of the structure of both a subspace and the coset representatives are demonstrated for the (24, 12) Golay code, yielding a computational gain factor of about 2 with respect to previous methods. A ternary single-check version of the Wagner rule is applied for efficient soft decoding of the (12, 6) ternary Golay code.

124 citations
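For context, the classical single-check Wagner rule that the paper generalizes can be written in a few lines. This is a minimal sketch from the standard textbook description (not the multiple-check generalization of the paper): take hard decisions from the LLRs and, if the single even-parity check fails, flip the least reliable position.

import numpy as np

def wagner_decode(llr):
    """ML soft-decision decoding of the (n, n-1) single-parity-check code."""
    hard = (llr < 0).astype(int)           # hard decisions from the LLRs
    if hard.sum() % 2 == 1:                # even-parity check violated
        hard[np.argmin(np.abs(llr))] ^= 1  # flip the least reliable bit
    return hard

print(wagner_decode(np.array([2.1, -0.3, 1.7, 0.9])))   # -> [0 0 0 0]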


Cited by
Journal ArticleDOI
TL;DR: In this article, a communications system is interpreted as an autoencoder and designed as an end-to-end reconstruction task that jointly optimizes transmitter and receiver components in a single process; the idea extends to networks of multiple transmitters and receivers.
Abstract: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. This paper is concluded with a discussion of open challenges and areas for future investigation.

1,879 citations
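The autoencoder view of the physical layer can be sketched compactly. The PyTorch code below is a minimal illustration written from the description in the abstract, not the authors' implementation; the message-set size M = 16, the block length n = 7, and the training SNR are assumed toy values.

import torch
import torch.nn as nn

M, n = 16, 7          # assumed toy sizes: 16 messages, 7 real channel uses

class CommAutoencoder(nn.Module):
    def __init__(self, M, n, snr_db=7.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(M, M), nn.ReLU(), nn.Linear(M, n))
        self.decoder = nn.Sequential(nn.Linear(n, M), nn.ReLU(), nn.Linear(M, M))
        self.sigma = 10 ** (-snr_db / 20)

    def forward(self, one_hot):
        x = self.encoder(one_hot)
        # Average-power normalization of the transmitted block.
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
        y = x + self.sigma * torch.randn_like(x)     # AWGN channel layer
        return self.decoder(y)                        # logits over the M messages

model = CommAutoencoder(M, n)
msgs = torch.eye(M)[torch.randint(0, M, (32,))]       # a batch of one-hot messages
loss = nn.CrossEntropyLoss()(model(msgs), msgs.argmax(dim=1))
loss.backward()                                        # end-to-end gradient through the channel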

Journal ArticleDOI
TL;DR: An algorithm for the c-approximate nearest neighbor problem in d-dimensional Euclidean space is described, achieving query time O(d·n^(1/c² + o(1))) and space O(d·n + n^(1 + 1/c² + o(1))), which almost matches the recently obtained lower bound for hashing-based algorithms.
Abstract: In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas.

1,759 citations
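One classical instance of the hashing-based approach the article surveys is locality-sensitive hashing with random hyperplanes. The sketch below is my own minimal illustration (the near-optimal scheme analyzed in the paper uses a different, more involved hash family): similar vectors tend to land in the same bucket, so only that bucket needs to be scanned at query time.

import numpy as np

rng = np.random.default_rng(0)

def build_lsh_index(data, n_bits=16):
    """Hash every data vector to a bucket keyed by its signs against random hyperplanes."""
    planes = rng.normal(size=(n_bits, data.shape[1]))
    keys = (data @ planes.T > 0)                         # one bit per hyperplane
    buckets = {}
    for idx, key in enumerate(map(lambda b: b.tobytes(), keys)):
        buckets.setdefault(key, []).append(idx)
    return planes, buckets

def query(q, data, planes, buckets):
    """Scan only the query's bucket; fall back to a full scan if it is empty."""
    key = (planes @ q > 0).tobytes()
    candidates = buckets.get(key, range(len(data)))
    return min(candidates, key=lambda i: np.linalg.norm(data[i] - q))

data = rng.normal(size=(1000, 32))
planes, buckets = build_lsh_index(data)
print(query(data[3] + 0.01 * rng.normal(size=32), data, planes, buckets))   # likely 3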

Journal ArticleDOI
TL;DR: An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented and is shown to be substantially faster than other known methods.
Abstract: In this semitutorial paper, a comprehensive survey of closest point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented. Given an arbitrary point x ∈ ℝ^m and a generator matrix for a lattice Λ, the algorithm computes the point of Λ that is closest to x. The algorithm is shown to be substantially faster than other known methods, by means of a theoretical comparison with the Kannan (1983, 1987) algorithm and an experimental comparison with the Pohst (1981) algorithm and its variants, such as the Viterbo-Boutros decoder (ibid., vol. 45, pp. 1639-1642, 1999). Modifications of the algorithm are developed to solve a number of related search problems for lattices, such as finding a shortest vector, determining the kissing number, computing the Voronoi (1908)-relevant vectors, and finding a Korkine-Zolotareff (1873) reduced basis.

1,616 citations
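A minimal sketch of closest-point search: the snippet below implements Babai's nearest-plane rule on a QR decomposition, a much simpler (approximate) relative of the exact Schnorr-Euchner enumeration described in the paper, not that algorithm itself. The 2-D basis and target point are arbitrary illustrative values.

import numpy as np

def babai_nearest_plane(B, t):
    """Approximate closest point to t in the lattice spanned by the columns of B.

    Babai's nearest-plane rule rounds one coordinate at a time, from the last
    column to the first, using the QR decomposition of the basis; the exact
    Schnorr-Euchner search refines the same recursion by enumerating candidates.
    """
    Q, R = np.linalg.qr(B)
    y = Q.T @ t
    n = B.shape[1]
    z = np.zeros(n)
    for i in reversed(range(n)):
        z[i] = np.round((y[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
    return B @ z, z.astype(int)

# Assumed toy basis (columns) and target point.
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
point, coeffs = babai_nearest_plane(B, np.array([2.4, 3.3]))
print(point, coeffs)   # nearest-plane lattice point and its integer coordinates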