Author

Shrinivas Kudekar

Bio: Shrinivas Kudekar is an academic researcher at Qualcomm. His research focuses on low-density parity-check (LDPC) codes and decoding methods. He has an h-index of 23 and has co-authored 91 publications receiving 3,076 citations. His previous affiliations include École Polytechnique Fédérale de Lausanne and École Normale Supérieure.


Papers
Journal ArticleDOI
TL;DR: The fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well is described, and it is conjectured that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly.
Abstract: Convolutional low-density parity-check (LDPC) ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon “threshold saturation.” This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error correcting radius that is increasing in the blocklength. Although we prove the “threshold saturation” only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis.
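The threshold-saturation effect described in the abstract can be reproduced numerically with density evolution on the binary erasure channel. The sketch below is a minimal illustration rather than the paper's construction: it uses the standard windowed recursion for a (3,6)-regular coupled chain, and the chain length L, window w, and erasure rates are illustrative choices.

```python
import numpy as np

def uncoupled_bec_de(eps, l=3, r=6, iters=2000):
    """Density evolution for the uncoupled (l, r)-regular LDPC ensemble
    on BEC(eps); returns the erasure probability it converges to."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (r - 1)) ** (l - 1)
    return x

def coupled_bec_de(eps, l=3, r=6, L=64, w=3, iters=20000):
    """Density evolution for a spatially coupled (l, r)-regular chain of
    length L with a uniform coupling window w. Positions outside the
    chain are known (erasure probability 0); this boundary seeds the
    decoding wave that sweeps inward."""
    N = L + 2 * (w - 1)
    x = np.zeros(N)
    x[w - 1:w - 1 + L] = eps          # chain positions start fully erased
    kern = np.ones(w) / w
    idx = np.arange(w - 1, w - 1 + L)
    for _ in range(iters):
        A = np.convolve(x, kern)      # window average into each check node
        c = 1.0 - (1.0 - A) ** (r - 1)
        B = np.convolve(c, kern)      # window average back to variables
        x_new = np.zeros(N)
        x_new[idx] = eps * B[idx + w - 1] ** (l - 1)
        if np.max(np.abs(x_new - x)) < 1e-12:   # fixed point reached
            return float(x_new.max())
        x = x_new
    return float(x.max())
```

At eps = 0.45, above the uncoupled (3,6) BP threshold (≈ 0.4294) but below the MAP threshold (≈ 0.4881), the uncoupled recursion stalls at a nonzero fixed point while the coupled chain decodes to zero from its boundaries, which is precisely the saturation phenomenon the paper analyzes.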

736 citations

Journal ArticleDOI
TL;DR: The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble.
Abstract: We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a posteriori (MAP) threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble that fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in this ensemble have this property. The quantifier universal refers to the single ensemble/code that is good for all channels, but we assume that the channel is known at the receiver. The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.

356 citations

Proceedings ArticleDOI
01 Jul 2012
TL;DR: The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble.
Abstract: We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a posteriori (MAP) threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble/code which is good for all channels if we assume that the channel is known at the receiver. The key technical result is a proof that, under belief propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.

321 citations

Journal ArticleDOI
TL;DR: This article describes the LDPC code design philosophy and how the broad requirements of 5G NR channel coding led to the introduction of novel structural features in the code design, culminating in anLDPC code that satisfies all the demands of5G NR.
Abstract: Turbo codes, prevalent in most modern cellular devices, are set to be replaced by LDPC codes as the code for forward error correction. This transition was ushered in mainly because of the high throughput demands for 5G New Radio (NR). The new channel coding solution also needs to support incremental-redundancy hybrid ARQ, and a wide range of blocklengths and coding rates, with stringent performance guarantees and minimal description complexity. In this article, we first briefly review the requirements of the new channel code for 5G NR. We then describe the LDPC code design philosophy and how the broad requirements of 5G NR channel coding led to the introduction of novel structural features in the code design, culminating in an LDPC code that satisfies all the demands of 5G NR.
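The 5G NR LDPC code is quasi-cyclic: the parity-check matrix is described compactly by a small base graph whose entries are circulant shift values, expanded ("lifted") by a factor Z. The sketch below shows only this lifting step with a toy base matrix; the actual base graphs (BG1/BG2) and shift tables are specified in 3GPP TS 38.212 and are not reproduced here.

```python
import numpy as np

def lift(base, Z):
    """Expand a quasi-cyclic base matrix into a binary parity-check matrix.
    base[i][j] = -1 denotes a Z x Z all-zero block; a value s >= 0 denotes
    the Z x Z identity matrix cyclically shifted right by s columns."""
    m, n = len(base), len(base[0])
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i][j]
            if s >= 0:
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.roll(I, s % Z, axis=1)
    return H

# Toy base matrix: shift values chosen for illustration only.
base = [[0, 3, -1, 1],
        [2, -1, 1, 0]]
H = lift(base, Z=4)    # an 8 x 16 binary matrix of circulant blocks
```

Because every block is a circulant, encoder and decoder hardware can process Z edges of the graph in parallel, which is one source of the high throughput the article cites; choosing a different lifting size Z also gives a simple handle on blocklength.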

269 citations

Proceedings ArticleDOI
13 Jun 2010
TL;DR: The fundamental mechanism which explains why “convolutional-like” or “spatially coupled” codes perform so well is described, and it is conjectured that for a large range of graphical systems a similar collapse of thresholds occurs once individual components are coupled sufficiently strongly.
Abstract: Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing as a function of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism which explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of the individual code structure has the effect of increasing the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold of the underlying ensemble. For this reason we call this phenomenon “threshold saturation”. This gives an entirely new way of approaching capacity. One significant advantage of such a construction is that one can create capacity-approaching ensembles with an error correcting radius which is increasing in the blocklength. Our proof makes use of the area theorem of the BP-EXIT curve and the connection between the MAP and BP threshold recently pointed out by Measson, Montanari, Richardson, and Urbanke. Although we prove the connection between the MAP and the BP threshold only for a very specific ensemble and only for the binary erasure channel, empirically the same statement holds for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar collapse of thresholds occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms as well as to new techniques for analysis.

206 citations


Cited by
Patent
14 Jun 2016
TL;DR: Newness and distinctiveness are claimed in the features of ornamentation as shown inside the broken-line circle in the accompanying representation.
Abstract: Newness and distinctiveness is claimed in the features of ornamentation as shown inside the broken line circle in the accompanying representation.

1,500 citations


Book
01 Jan 2001
TL;DR: This book develops the mean-field theory of phase transitions and spin glasses, including replica symmetry breaking and the gauge theory of spin glasses, and applies these methods to error-correcting codes, image restoration, associative memory, perceptron learning, and optimization problems such as K-SAT.
Abstract: 1. Mean-field theory of phase transitions; 2. Mean-field theory of spin glasses; 3. Replica symmetry breaking; 4. Gauge theory of spin glasses; 5. Error-correcting codes; 6. Image restoration; 7. Associative memory; 8. Learning in perceptron; 9. Optimization problems. Appendices: A. Eigenvalues of the Hessian; B. Parisi equation; C. Channel coding theorem; D. Distribution and free energy of K-SAT. References; Index.

595 citations

Journal ArticleDOI
TL;DR: A transmission system with adjustable data rate for single-carrier coherent optical transmission is proposed, which enables high-speed transmission close to the Shannon limit, and it is experimentally demonstrated that probabilistically shaped 64-QAM signals exceed the transmission reach of regular 16-QAM and regular 64-QAM signals.
Abstract: A transmission system with adjustable data rate for single-carrier coherent optical transmission is proposed, which enables high-speed transmission close to the Shannon limit. The proposed system is based on probabilistically shaped 64-QAM modulation formats. Adjustable shaping is combined with a fixed-QAM modulation and a fixed forward-error correction code to realize a system with adjustable net data rate that can operate over a large reach range. At the transmitter, an adjustable distribution matcher performs the shaping. At the receiver, an inverse distribution matcher is used. Probabilistic shaping is implemented into a coherent optical transmission system for 64-QAM at 32 Gbaud to realize adjustable operation modes for net data rates ranging from 200 to 300 Gb/s. It is experimentally demonstrated that probabilistically shaped 64-QAM signals exceed the transmission reach of regular 16-QAM and regular 64-QAM signals by more than 40%.
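The abstract does not detail the distribution matcher, but the rate-adjustment principle can be sketched: probabilistic shaping draws the 64-QAM amplitudes from a Maxwell-Boltzmann family, and a single shaping parameter trades entropy (net data rate) against average symbol energy. The parameter values below are illustrative assumptions, not the experiment's settings.

```python
import numpy as np

PAM8 = np.array([-7., -5., -3., -1., 1., 3., 5., 7.])  # one 64-QAM dimension

def mb_distribution(nu, amps=PAM8):
    """Maxwell-Boltzmann distribution p(a) proportional to exp(-nu * a^2)
    over the amplitudes; nu = 0 recovers the uniform (unshaped) case."""
    p = np.exp(-nu * amps ** 2)
    return p / p.sum()

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_energy(p, amps=PAM8):
    """Mean symbol energy under distribution p."""
    return float((p * amps ** 2).sum())

# Increasing nu lowers the per-dimension entropy (a 64-QAM symbol carries
# twice this) and the average energy: the shaped constellation buys SNR
# margin, hence reach, at a reduced net data rate.
rates = [2 * entropy_bits(mb_distribution(nu)) for nu in (0.0, 0.02, 0.05)]
```

Sweeping the shaping parameter is what lets a fixed QAM format and a fixed FEC code cover a continuum of net data rates, as in the 200 to 300 Gb/s operating modes described above.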

564 citations

Journal ArticleDOI
TL;DR: In this paper, a denoising-based approximate message passing (D-AMP) framework is proposed that can integrate a wide class of denoisers within its iterations. A key element of D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
Abstract: A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high-performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
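The AMP recursion and the role of the Onsager term can be illustrated with the simplest denoiser, scalar soft thresholding (D-AMP plugs stronger image denoisers into the same slot). The threshold rule kappa * sigma and the problem sizes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def amp_soft(y, A, iters=50, kappa=2.5):
    """AMP for y = A @ x0 with a soft-threshold denoiser. The Onsager
    term (nnz / M) * z keeps the effective noise in the pseudo-data r
    approximately i.i.d. Gaussian, which is what justifies plugging in
    a denoiser designed for additive white Gaussian noise."""
    M, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(M)        # effective noise level
        r = x + A.T @ z                               # pseudo-data: x0 + noise
        lam = kappa * sigma
        x_new = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)   # denoise
        onsager = (np.count_nonzero(x_new) / M) * z   # denoiser divergence
        z = y - A @ x_new + onsager                   # corrected residual
        x = x_new
    return x

# Noiseless sparse recovery demo: n = 500 unknowns, M = 250 measurements,
# 15 nonzero coefficients (sizes chosen only for the demonstration).
rng = np.random.default_rng(0)
M, n, k = 250, 500, 15
A = rng.normal(0.0, 1.0 / np.sqrt(M), (M, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
xhat = amp_soft(A @ x0, A)
```

Dropping the Onsager term turns this into plain iterative soft thresholding, which converges far more slowly; for soft thresholding the divergence in the correction is simply the number of surviving coefficients divided by M.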

535 citations