
Showing papers on "List decoding published in 2019"


Journal ArticleDOI
TL;DR: The obtained results show that the performance of the noise-aided BPL decoder is much better than that of the BP, successive cancellation (SC), and successive cancellation list (SCL) decoders, while it falls slightly behind CRC-aided SCL in terms of bit-error-rate performance.
Abstract: Noise can help a weak information-bearing signal to be detected in nonlinear systems through stochastic perturbation. In this letter, we propose a novel belief propagation list (BPL) decoder that relies on artificial noise. The proposed decoder is constructed from parallel independent BP decoders. Noise with different powers is added to the different BP decoders that run in parallel, and the output of the decoder that passes the early detection and termination criteria is taken as the recovered data. Adding a small amount of noise enables the decoder to resolve un-converged errors. The obtained results show that the performance of the noise-aided BPL decoder is much better than that of the BP, successive cancellation (SC), and successive cancellation list (SCL) decoders, while it falls slightly behind cyclic redundancy check aided SCL in terms of bit-error-rate performance.
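The parallel structure described in the abstract can be sketched generically: several decoders run on independently perturbed copies of the channel LLRs, and the first output that passes a validity check is returned. The decoder and check below are hypothetical stand-ins (a toy sign-based decoder and a trivial check), not the paper's BP implementation; only the noise-injection-and-early-termination pattern is illustrated.

```python
import random

def toy_decode(llrs):
    # Stand-in decoder: decide a single bit from the sign of the summed LLRs.
    return 1 if sum(llrs) < 0 else 0

def noise_aided_list_decode(llrs, noise_powers, decode, check, seed=0):
    """Run one decoder per noise power on a perturbed copy of the LLRs;
    return the first candidate that passes the check (early detection
    and termination), falling back to the first candidate otherwise."""
    rng = random.Random(seed)
    candidates = []
    for p in noise_powers:
        perturbed = [l + rng.gauss(0.0, p ** 0.5) for l in llrs]
        cand = decode(perturbed)
        if check(cand):
            return cand
        candidates.append(cand)
    return candidates[0]

# Strongly positive LLRs encode bit 0; the check here accepts any valid bit.
bit = noise_aided_list_decode([4.0, 3.5, 5.0], [0.0, 0.1, 1.0],
                              toy_decode, check=lambda b: b in (0, 1))
```

In the paper the parallel branches are full BP decoders and the check is an early detection-and-termination criterion; here both are reduced to their simplest possible forms.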

25 citations


Posted Content
TL;DR: An upper bound on the list decoding radius of an insdel code in terms of its rate is provided and a Zyablov-type bound for insdel errors is obtained and a family of explicit insdel codes is constructed with efficient list decoding algorithm.
Abstract: Insdel errors occur in communication systems and are caused by the loss of positional information in the message. Since the work by Guruswami and Wang, there have been further investigations on the list decoding of insertion codes, deletion codes, and insdel codes. However, unlike the classical Hamming metric or even the rank metric, many problems on list decoding of insdel codes remain unsolved. The contributions of this paper consist of two parts. Firstly, we analyze the list decodability of random insdel codes. We show that list decoding of random insdel codes surpasses the Singleton bound when there are more insertion errors than deletion errors and the alphabet size is sufficiently large. Furthermore, our results reveal the existence of an insdel code that can be list decoded against insdel errors beyond its minimum insdel distance while still having polynomial list size. This provides a more complete picture of the list decodability of insdel codes when both insertion and deletion errors occur. Secondly, we construct a family of explicit insdel codes with an efficient list decoding algorithm. As a result, we derive a Zyablov-type bound for insdel errors. Recently, after our results appeared, Guruswami et al. provided a complete solution to another open problem on list decoding of insdel codes. In contrast to the problems we considered, they determined the region containing all fractions of insertion and deletion errors that are still list decodable by some q-ary insdel codes of non-zero rate. More specifically, for a fixed number of insertion and deletion errors, our paper focuses on maximizing the rate of a code that is list decodable against that amount of insertion and deletion errors, while Guruswami et al. focus on the existence of a code with asymptotically non-zero rate that is list decodable against this amount of insertion and deletion errors.

23 citations


Posted Content
Huang Lingchen1, Huazi Zhang1, Rong Li1, Yiqun Ge1, Jun Wang1 
TL;DR: In this paper, an artificial-intelligence-driven approach to design error correction codes (ECC) is proposed, in which the code constructor is realized by AI algorithms and the code evaluator provides code performance metric measurements.
Abstract: In this paper, we investigate an artificial-intelligence (AI) driven approach to design error correction codes (ECC). Classic error correction codes were designed based on coding theory, which typically defines code properties (e.g., Hamming distance, subchannel reliability, etc.) to reflect code performance; code design then amounts to optimizing these properties. However, an AI-driven approach no longer necessarily relies on coding theory. Specifically, we propose a constructor-evaluator framework, in which the code constructor is realized by AI algorithms and the code evaluator provides code performance metric measurements. The code constructor keeps improving the code construction to maximize the code performance evaluated by the code evaluator. As examples, we construct linear block codes and polar codes with reinforcement learning (RL) and evolutionary algorithms. The results show that performance comparable to existing codes can be achieved. It is noteworthy that our method can provide superior performance where existing classic constructions fail to achieve the optimum for a specific decoder (e.g., list decoding for polar codes).
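The constructor-evaluator loop can be illustrated with the simplest possible constructor, random search over binary generator matrices, using the code's minimum Hamming distance as the evaluator's metric. This is a deliberately minimal sketch: the paper's constructors are RL and evolutionary algorithms, and its evaluators measure decoder-specific error rates, not minimum distance.

```python
import itertools
import random

def min_distance(G):
    """Evaluator: minimum Hamming weight over all nonzero codewords of the
    binary linear code generated by the rows of G."""
    best = None
    for msg in itertools.product([0, 1], repeat=len(G)):
        if not any(msg):
            continue
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        w = sum(cw)
        best = w if best is None else min(best, w)
    return best

def construct(n, k, trials, seed=1):
    """Constructor: keep the random k-by-n generator matrix that the
    evaluator scores best."""
    rng = random.Random(seed)
    best_G, best_d = None, -1
    for _ in range(trials):
        G = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]
        d = min_distance(G)
        if d > best_d:
            best_G, best_d = G, d
    return best_G, best_d

G, d = construct(n=6, k=3, trials=200)
```

Replacing the random-search constructor with an RL agent or an evolutionary algorithm, and the minimum-distance evaluator with a simulated decoder, recovers the framework's intended structure.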

22 citations


Journal ArticleDOI
TL;DR: The rank metric is generalized and the rank-metric Singleton bound is established; the definition of Gabidulin codes is extended and it is shown that their properties are preserved.
Abstract: In this paper, it is shown that some results in the theory of rank-metric codes over finite fields can be extended to finite commutative principal ideal rings. More precisely, the rank metric is generalized and the rank-metric Singleton bound is established. The definition of Gabidulin codes is extended and it is shown that their properties are preserved. The theory of Gröbner bases is used to give unique decoding, minimal list decoding, and error-erasure decoding algorithms for interleaved Gabidulin codes. These results are then applied to space-time codes and to random linear network coding, as in the case of finite fields. Specifically, two existing encoding schemes of random linear network coding are combined to improve the error correction.
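Over a finite field, the rank distance between two matrices is the rank of their difference; a minimal illustration over GF(2) (the paper's contribution is extending this machinery to finite commutative principal ideal rings, which this sketch does not attempt):

```python
def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    rows = [r[:] for r in M]
    rank = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def rank_distance(A, B):
    """Rank-metric distance: rank of the entrywise difference
    (XOR, since addition and subtraction coincide over GF(2))."""
    D = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    return gf2_rank(D)

A = [[1, 0, 1], [0, 1, 1]]
B = [[1, 0, 1], [1, 1, 0]]  # differs from A only in the second row
dist = rank_distance(A, B)
```

The rank-metric Singleton bound mentioned in the abstract plays the role that the classical Singleton bound plays for the Hamming metric, with rank distance in place of Hamming distance.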

21 citations


Proceedings ArticleDOI
01 Aug 2019
TL;DR: A modified pruning scheme named shifted-pruning, applied over a set of low-reliability bit-channels called critical bits, is proposed to avoid the elimination of the correct path in additional decoding attempts after a decoding failure occurs.
Abstract: In successive cancellation list (SCL) decoding, the list pruning operation retains the L paths with the highest likelihoods. However, the correct path might be among the paths with low likelihoods due to channel noise; in this case, the correct path is eliminated from the list. In this work, we study the event of elimination of the correct path and analyze where and how this event occurs. A modified pruning scheme named shifted-pruning, applied over a set of low-reliability bit-channels called critical bits, is proposed to avoid the elimination of the correct path in additional decoding attempts after a decoding failure occurs. Shifted-pruning is realized by selecting the paths k+1 to k+L out of the 2L ordered paths instead of the paths 1 to L. The numerical results for polar codes of length 512, code rates 0.5 and 0.8, and list sizes L = 2, 8, and 32 show that the shifted-pruning scheme is a low-complexity equivalent to the bit-flipping scheme, while it can outperform the bit-flipping method by providing a 0.25-0.5 dB gain.
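The pruning rule itself is simple to state: among the 2L candidate paths produced when the list is extended, conventional pruning keeps the best L, while shifted-pruning keeps paths k+1 through k+L of the ordered list. A sketch over bare path metrics (lower cost meaning more likely, as is usual for SCL path metrics; everything else about SCL decoding is omitted):

```python
def prune(metrics, L, shift=0):
    """Return the indices of the L retained paths among the candidates.
    shift=0 is conventional pruning (the L best paths); shift=k keeps
    paths k+1 .. k+L of the metric-ordered list, as in shifted-pruning."""
    order = sorted(range(len(metrics)), key=lambda i: metrics[i])
    return order[shift:shift + L]

metrics = [0.1, 2.0, 0.4, 1.5, 0.9, 3.0, 0.2, 2.5]   # 2L = 8 candidate paths
kept = prune(metrics, L=4)              # conventional: the 4 best paths
shifted = prune(metrics, L=4, shift=1)  # shifted-pruning with k = 1
```

In the paper the shift is applied only at the identified critical bits during a repeated decoding attempt, so that a correct path ranked just below the cutoff survives.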

20 citations


Journal ArticleDOI
21 Oct 2019-Entropy
TL;DR: The main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications.
Abstract: This paper is focused on the derivation of data-processing and majorization inequalities for f-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form the major part of this manuscript.

20 citations


Proceedings ArticleDOI
07 Jul 2019
TL;DR: The conventional construction of polar codes is modified by a greedy search algorithm in which a bit-swapping approach is employed to re-distribute the low-reliability bits in the subblocks aiming for a reduction in the probability of correct path elimination.
Abstract: Polar codes are constructed based on the reliability of bit-channels. This construction suits successive cancellation (SC) decoding, where one error in the successive estimation of the bits fails the decoding. However, in SC list (SCL) decoding, the correct path may remain in the list by tolerating multiple penalties. This characteristic of list decoding demands a different approach to code construction. In this work, we modify the conventional construction by a greedy search algorithm in which a bit-swapping approach is employed to re-distribute the low-reliability bits in the subblocks, aiming for a reduction in the probability of correct path elimination. The numerical results for polar codes of length 1 kb under CRC-aided SCL decoding show improvements of about 0.4 dB for R=0.8 and over 0.2 dB for R=0.5 at L=32.

18 citations


Proceedings ArticleDOI
07 Jul 2019
TL;DR: It is shown that with a properly selected number of random permutations, this algorithm considerably outperforms straightforward list decoding while maintaining the same asymptotic complexity, which means that near-MAP decoding can be achieved at a lower complexity cost.
Abstract: In this paper, we consider the problem of decoding Reed-Muller (RM) codes over the binary erasure channel. We propose a novel algorithm which exploits several techniques, such as list recursive (successive cancellation) decoding based on the Plotkin decomposition, permutations of the encoding factor graph, as well as the properties of erasure channels. We show that with a properly selected number of random permutations, this algorithm considerably outperforms straightforward list decoding while maintaining the same asymptotic complexity. This also means that near-MAP decoding can be achieved at a lower complexity cost.

18 citations


Proceedings ArticleDOI
01 Dec 2019
TL;DR: If the target FER is low enough, the expected list size approaches one so that the average complexity of S-LVA approaches that of standard soft Viterbi on the CC, i.e., with no list decoding.
Abstract: This paper uses convolutional codes (CCs) with distance-spectrum optimal (DSO) cyclic redundancy checks (CRCs) and the serial list Viterbi algorithm (S-LVA) to approach the random coding union (RCU) bound with low decoding complexity at the target FER. We show, for example, that a 64-state zero-terminated CC with a DSO CRC can achieve performance within 0.5 dB of the RCU bound for information blocklength k=64 at FER of 10^{-3}. We also show that a tail-biting CC with a DSO CRC can achieve even better performance, within 0.05 dB of the RCU bound at FER of 10^{-4} for a 256-state CC with k=64. This paper provides analysis of decoding complexity, which for S-LVA depends on the expected list size. We show that if the target FER is low enough, the expected list size approaches one so that the average complexity of S-LVA approaches that of standard soft Viterbi on the CC, i.e., with no list decoding. We also provide DSO CRCs for CCs with k=64 and rates of 1/2, 1/3, 1/6 and 1/12 for the 5G new radio control channel and compare their performance with polar codes.
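The serial list Viterbi behavior described above is easy to sketch: candidate paths are examined in decreasing-likelihood order and decoding stops at the first one whose CRC checks, so the realized list size is the index of that candidate; when the channel is good, the first candidate almost always passes, which is why the expected list size approaches one. The CRC polynomial and candidates below are illustrative stand-ins, not the DSO CRCs from the paper.

```python
def crc_ok(bits, poly=(1, 0, 1, 1)):
    """True iff the remainder of `bits` modulo the CRC polynomial `poly`
    (MSB first; here x^3 + x + 1, an illustrative choice) is zero."""
    r = list(bits)
    for i in range(len(r) - len(poly) + 1):
        if r[i]:
            for j, p in enumerate(poly):
                r[i + j] ^= p
    return not any(r)

def serial_list_viterbi(candidates):
    """Try candidates in decreasing-likelihood order; return the first
    CRC-passing one together with the list size actually used."""
    for size, cand in enumerate(candidates, start=1):
        if crc_ok(cand):
            return cand, size
    return None, len(candidates)

# The second candidate carries the valid CRC (message 1101, CRC 001 for
# the polynomial above), so the realized list size is 2.
word, size = serial_list_viterbi([[1, 1, 0, 1, 0, 0, 0],
                                  [1, 1, 0, 1, 0, 0, 1]])
```

In the paper's setting the candidates come from list Viterbi decoding of the convolutional code, and the CRC is distance-spectrum optimal for that code.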

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider memoryless string sources that generate strings consisting of independent and identically distributed characters drawn from a finite alphabet, and characterize their corresponding guesswork, which is central to several applications in information theory: average guesswork provides a lower bound on the expected computational cost of a sequential decoder to successfully decode the transmitted message; the complementary cumulative distribution function of guesswork gives the error probability in list decoding; the logarithm of guesswork is the number of bits needed in optimal lossless one-to-one source coding; and the guesswork is the number of trials required of an adversary to breach a password-protected system in a brute-force attack.
Abstract: Given a collection of strings, each with an associated probability of occurrence, the guesswork of each of them is their position in a list ordered from most likely to least likely, breaking ties arbitrarily. The guesswork is central to several applications in information theory: average guesswork provides a lower bound on the expected computational cost of a sequential decoder to decode successfully the transmitted message; the complementary cumulative distribution function of guesswork gives the error probability in list decoding; the logarithm of guesswork is the number of bits needed in optimal lossless one-to-one source coding; and the guesswork is the number of trials required of an adversary to breach a password protected system in a brute-force attack. In this paper, we consider memoryless string sources that generate strings consisting of independent and identically distributed characters drawn from a finite alphabet, and characterize their corresponding guesswork. Our main tool is the tilt operation on a memoryless string source. We show that the tilt operation on a memoryless string source parametrizes an exponential family of memoryless string sources, which we refer to as the tilted family of the string source. We provide an operational meaning to the tilted families by proving that two memoryless string sources result in the same guesswork on all strings of all lengths if and only if their respective categorical distributions belong to the same tilted family. Establishing some general properties of the tilt operation, we generalize the notions of weakly typical set and asymptotic equipartition property to tilted weakly typical sets of different orders. We use this new definition to characterize the large deviations for all atypical strings and characterize the volume of tilted weakly typical sets of different orders. We subsequently build on this characterization to prove large deviation bounds on guesswork and provide an accurate approximation of its probability mass function.
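The definition of guesswork in the abstract can be computed directly for short lengths by brute force: enumerate all strings of a given length, sort them from most to least likely under the i.i.d. source, and report the 1-based rank (ties broken lexicographically here, one admissible arbitrary rule). A small illustration for a binary source:

```python
import itertools

def guesswork(target, probs, length):
    """1-based position of `target` among all strings of the given length,
    ordered from most to least likely under the i.i.d. source `probs`
    (a dict mapping characters to probabilities)."""
    def p(s):
        prod = 1.0
        for c in s:
            prod *= probs[c]
        return prod
    strings = [''.join(t)
               for t in itertools.product(sorted(probs), repeat=length)]
    ranked = sorted(strings, key=lambda s: (-p(s), s))
    return ranked.index(target) + 1

# Dyadic probabilities keep the tie-breaking exact in floating point.
probs = {'0': 0.75, '1': 0.25}
g_best = guesswork('000', probs, 3)   # most likely string: guessed first
g_worst = guesswork('111', probs, 3)  # least likely string: guessed last
```

The paper studies the asymptotics of this quantity as the string length grows, where brute-force enumeration is infeasible and the tilt operation becomes the right tool.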

15 citations


Proceedings ArticleDOI
01 Nov 2019
TL;DR: The first construction of three-round non-malleable commitments from the almost minimal assumption of injective one-way functions was given in this paper, which relies on a novel technique called bidirectional Goldreich-Levin extraction.
Abstract: We give the first construction of three-round non-malleable commitments from the almost minimal assumption of injective one-way functions. Combined with the lower bound of Pass (TCC 2013), our result is almost the best possible w.r.t. standard polynomial-time hardness assumptions (at least w.r.t. black-box reductions). Our results rely on a novel technique which we call 'bidirectional Goldreich-Levin extraction'. Along the way, we also obtain the first rewind secure delayed-input witness indistinguishable (WI) proofs from only injective one-way functions. We also obtain the first construction of an epsilon-extractable commitment scheme from injective one-way functions. We believe both of these to be of independent interest. In particular, as a direct corollary of our rewind secure WI construction, we are able to obtain a construction of 3-round promise zero-knowledge from only injective one-way functions.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A reduced complexity algorithm for computing log-likelihood ratios (LLRs) needed for successive cancellation (SC) decoding of polar codes with 2t × 2t polarization kernels and Arikan matrix is proposed.
Abstract: We propose a reduced complexity algorithm for computing log-likelihood ratios (LLRs) needed for successive cancellation (SC) decoding of polar codes with 2t × 2t polarization kernels. This algorithm is applied to some polarization kernels of length 16 and 32 with high polarization rate. The complexity reduction is achieved by exploiting linear relationship of the considered kernels and Arikan matrix. Further complexity reduction is achieved by identification of common subexpressions. The proposed approach enables SC list decoding of polar codes with some large kernels with lower complexity compared to the codes based on the Arikan kernel with the same performance.

Posted Content
TL;DR: The main technical work in the results is proving the existence of code families of sufficiently large size with good list-decoding properties for any combination of δ,γ within the claimed feasibility region.
Abstract: We give a complete answer to the following basic question: "What is the maximal fraction of deletions or insertions tolerable by $q$-ary list-decodable codes with non-vanishing information rate?" This question has been open even for binary codes, including the restriction to the binary insertion-only setting, where the best-known result was that a $\gamma\leq 0.707$ fraction of insertions is tolerable by some binary code family. For any desired $\epsilon > 0$, we construct a family of binary codes of positive rate which can be efficiently list-decoded from any combination of $\gamma$ fraction of insertions and $\delta$ fraction of deletions as long as $ \gamma+2\delta\leq 1-\epsilon$. On the other hand, for any $\gamma,\delta$ with $\gamma+2\delta=1$ list-decoding is impossible. Our result thus precisely characterizes the feasibility region of binary list-decodable codes for insertions and deletions. We further generalize our result to codes over any finite alphabet of size $q$. Surprisingly, our work reveals that the feasibility region for $q>2$ is not the natural generalization of the binary bound above. We provide tight upper and lower bounds that precisely pin down the feasibility region, which turns out to have a $(q-1)$-piece-wise linear boundary whose $q$ corner-points lie on a quadratic curve. The main technical work in our results is proving the existence of code families of sufficiently large size with good list-decoding properties for any combination of $\delta,\gamma$ within the claimed feasibility region. We achieve this via an intricate analysis of codes introduced by [Bukh, Ma; SIAM J. Discrete Math; 2014]. Finally, we give a simple yet powerful concatenation scheme for list-decodable insertion-deletion codes which transforms any such (non-efficient) code family (with vanishing information rate) into an efficiently decodable code family with constant rate.
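For the binary case, the feasibility region stated in the abstract is a single linear condition, which can be encoded directly (the q-ary region has the piecewise-linear boundary described in the paper and is not reproduced here):

```python
def binary_insdel_list_decodable(gamma, delta):
    """Feasibility of positive-rate binary list decoding against a `gamma`
    fraction of insertions and a `delta` fraction of deletions, per the
    characterization above: possible iff gamma + 2*delta < 1; impossible
    on and beyond the boundary gamma + 2*delta = 1."""
    return gamma + 2 * delta < 1

feasible = binary_insdel_list_decodable(0.6, 0.1)     # 0.6 + 0.2 = 0.8 < 1
infeasible = binary_insdel_list_decodable(0.5, 0.25)  # exactly on the boundary
```

Note that deletions are twice as costly as insertions in this region, and that the pure-insertion boundary gamma = 1 strictly improves the previously best-known tolerable fraction of 0.707 quoted above.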

Journal ArticleDOI
01 Jun 2019-Entropy
TL;DR: This paper determines the capacity of the Gaussian arbitrarily-varying channel with a (possibly stochastic) encoder and a deterministic list-decoder under the average probability of error criterion and shows that if the adversary has enough power, then the decoder can be confounded by the adversarial superposition of several codewords while satisfying its power constraint with positive probability.
Abstract: In this paper, we determine the capacity of the Gaussian arbitrarily-varying channel with a (possibly stochastic) encoder and a deterministic list-decoder under the average probability of error criterion. We assume that both the legitimate and the adversarial signals are restricted by their power constraints. We also assume that there is no path between the adversary and the legitimate user but the adversary knows the legitimate user’s code. We show that for any list size L, the capacity is equivalent to the capacity of a point-to-point Gaussian channel with noise variance increased by the adversary power, if the adversary has less power than L times the transmitter power; otherwise, the capacity is zero. In the converse proof, we show that if the adversary has enough power, then the decoder can be confounded by the adversarial superposition of several codewords while satisfying its power constraint with positive probability. The achievability proof benefits from a novel variant of the Csiszár-Narayan method for the arbitrarily-varying channel.
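The capacity statement above translates directly into a formula: with transmitter power P, noise variance sigma^2, adversary power Lambda, and list size L, the capacity equals that of a Gaussian channel with noise variance sigma^2 + Lambda when Lambda < L*P, and is zero otherwise. The sketch below states it in bits per channel use (the choice of logarithm base is ours, not specified in the abstract):

```python
import math

def gaussian_avc_list_capacity(P, sigma2, Lam, L):
    """List-decoding capacity of the Gaussian arbitrarily-varying channel
    as characterized above: point-to-point Gaussian capacity with the noise
    variance increased by the adversary's power, unless the adversary has
    at least L times the transmitter's power, in which case it is zero."""
    if Lam >= L * P:
        return 0.0
    return 0.5 * math.log2(1 + P / (sigma2 + Lam))

c = gaussian_avc_list_capacity(P=3.0, sigma2=1.0, Lam=1.0, L=1)       # positive
c_zero = gaussian_avc_list_capacity(P=1.0, sigma2=1.0, Lam=2.0, L=2)  # zero
```

The zero-capacity threshold reflects the converse argument quoted above: with power at least L*P the adversary can superimpose L+1 scaled codewords and defeat any list of size L.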

Proceedings ArticleDOI
05 Nov 2019
TL;DR: The advantages of list decoding for short packet transmission over fading channels with an unknown state are illustrated and the proposed technique provides gains of 1 dB with respect to a traditional pilot-aided transmission scheme.
Abstract: In this paper, the advantages of list decoding for short-packet transmission over fading channels with an unknown state are illustrated. The principle is applied to polar codes (under successive cancellation list decoding) and to general short binary linear block codes (under ordered-statistics decoding). The proposed decoders assume no a priori knowledge of the channel coefficients or of their statistics. The scheme relies on short pilot fields that are used only to derive an initial channel estimate. The channel estimate is required to be accurate enough to enable a good list construction, i.e., the construction of a list that contains, with high probability, the transmitted codeword. The final decision on the message is obtained by applying a non-coherent decoding metric to the codewords composing the list. This allows one to use very few pilots, thus reducing the channel estimation overhead. Numerical results are provided for the Rayleigh block-fading channel and compared to finite-length performance bounds. The proposed technique provides (in the short-blocklength regime) gains of 1 dB with respect to a traditional pilot-aided transmission scheme.

Journal ArticleDOI
TL;DR: The Johnson-type bound, i.e., a lower bound on the list decoding radius in terms of the minimum distance, is derived, and a list decoding algorithm for Reed-Solomon codes beyond the Johnson-type bound in the pair metric is presented.
Abstract: We investigate the list decodability of symbol-pair codes (codes in the pair metric) in this paper. First, we show that the list decodability of every symbol-pair code does not exceed the Gilbert-Varshamov bound. On the other hand, we are able to prove that, with high probability, a random symbol-pair code can be list decoded up to the Gilbert-Varshamov bound. Our second result is to derive the Johnson-type bound, i.e., a lower bound on the list decoding radius in terms of the minimum distance. Finally, we present a list decoding algorithm for Reed-Solomon codes beyond the Johnson-type bound in the pair metric.
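The pair metric underlying these results reads each coordinate together with its successor: a vector x of length n is mapped to the sequence of overlapping pairs (x_i, x_{i+1}) (cyclically, in the common convention assumed here), and the pair distance counts the positions where the pair-read vectors of two words differ. A small sketch:

```python
def pair_read(x):
    """Symbol-pair read vector: overlapping (cyclic) pairs of consecutive
    symbols, as produced by a channel that reads symbols in pairs."""
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def pair_distance(x, y):
    """Pair distance: Hamming distance between the two pair-read vectors."""
    return sum(a != b for a, b in zip(pair_read(x), pair_read(y)))

x, y = [0, 1, 0, 1], [0, 1, 1, 1]          # y differs from x in one symbol
d_pair = pair_distance(x, y)
d_ham = sum(a != b for a, b in zip(x, y))  # ordinary Hamming distance
```

A single symbol change perturbs two overlapping pairs, which is why the pair distance is at least the Hamming distance; this inflation is what makes larger decoding radii achievable in the pair metric.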

Proceedings ArticleDOI
11 Feb 2019
TL;DR: In this article, a pilot-assisted transmission (PAT) scheme is proposed for short blocklengths, where the pilots are used only to derive an initial channel estimate for the list construction step.
Abstract: A pilot-assisted transmission (PAT) scheme is proposed for short blocklengths, where the pilots are used only to derive an initial channel estimate for the list construction step. The final decision on the message is obtained by applying a non-coherent decoding metric to the codewords composing the list. This allows one to use very few pilots, thus reducing the channel estimation overhead. The method is applied to an ordered-statistics decoder for communication over a Rayleigh block-fading channel. Gains of up to 1.2 dB compared to traditional PAT schemes are demonstrated for short codes with QPSK signaling. The approach can be generalized to other list decoders, e.g., to list decoding of polar codes.
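One standard non-coherent metric for a block-fading channel with a single unknown complex gain h scores each candidate modulated codeword x by the normalized correlation |<y, x>|^2 / ||x||^2, which is invariant to h. The abstract does not spell out the paper's metric, so the choice below is an assumption made for illustration.

```python
def noncoherent_metric(y, x):
    """|<y, x>|^2 / ||x||^2 for a received sequence y and candidate x
    (complex values); larger is better. Invariant to an unknown complex
    gain multiplying x, so no channel estimate is needed at this stage."""
    inner = sum(yi * xi.conjugate() for yi, xi in zip(y, x))
    energy = sum(abs(xi) ** 2 for xi in x)
    return abs(inner) ** 2 / energy

def pick_from_list(y, codeword_list):
    """Final decision: the list candidate maximizing the non-coherent metric."""
    return max(codeword_list, key=lambda x: noncoherent_metric(y, x))

# BPSK candidates observed through an unknown gain h = 0.8j (noise omitted).
h = 0.8j
x_true = [1, -1, -1, 1]
y = [h * s for s in x_true]
best = pick_from_list(y, [[1, 1, 1, 1], x_true, [1, -1, 1, -1]])
```

In the proposed scheme, the rough pilot-based estimate is used only to build the list; this gain-invariant metric then makes the final decision, which is what permits so few pilots.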

Posted Content
TL;DR: A generalized Singleton bound for a given list size is proved, the bound is conjectured to be tight for most RS codes over large enough finite fields, and the first explicit construction of such RS codes is given.
Abstract: List-decoding of Reed-Solomon (RS) codes beyond the so-called Johnson radius has been one of the main open questions since the work of Guruswami and Sudan. It is now known, by the work of Rudra and Wootters using techniques from high-dimensional probability, that over large enough alphabets most RS codes are indeed list-decodable beyond this radius. In this paper, we take a more combinatorial approach which allows us to determine the precise relation (up to the exact constant) between the decoding radius and the list size. We prove a generalized Singleton bound for a given list size, and conjecture that the bound is tight for most RS codes over large enough finite fields. We also show that the conjecture holds true for list sizes $2 \text{ and } 3$, and as a by-product show that most RS codes with a rate of at least $1/9$ are list-decodable beyond the Johnson radius. Lastly, we give the first explicit construction of such RS codes. The main tools used in the proof are a new type of linear dependency between codewords of a code that are contained in a small Hamming ball, and the notion of cycle space from graph theory. Neither has been used before in the context of list-decoding.
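In its asymptotic normalized form, the generalized Singleton bound caps the list-decoding radius for list size L at (L/(L+1))(1-R), recovering the unique-decoding radius (1-R)/2 at L=1; this is our reading of the result, stated here only to compare numerically against the Johnson radius 1-sqrt(R) for large alphabets.

```python
import math

def gen_singleton_radius(R, L):
    """Normalized list-decoding radius cap for rate R and list size L
    (asymptotic form of the generalized Singleton bound, as read from
    the paper): (L / (L + 1)) * (1 - R)."""
    return L / (L + 1) * (1 - R)

def johnson_radius(R):
    """Johnson radius for large alphabets: 1 - sqrt(R)."""
    return 1 - math.sqrt(R)

R = 0.25
radii = [round(gen_singleton_radius(R, L), 4) for L in (1, 2, 3)]
beyond = gen_singleton_radius(R, 3) > johnson_radius(R)  # past Johnson at L=3
```

At rate 1/4 the Johnson radius is 0.5, so any list size whose generalized Singleton radius exceeds 0.5 illustrates decoding "beyond the Johnson radius" in the sense of the abstract.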


Journal ArticleDOI
TL;DR: The presented results show that very short CRC sequences can suffice to reach the target FAR, and this CRC overhead reduction ultimately contributes to the performance improvement of polar codes.
Abstract: Polar codes have been adopted as the coding scheme for control channels in the 5G communication system, where blind detection is required. In this letter, to mitigate the false-alarm rate (FAR) in polar code blind detection, we propose a more efficient architecture which combines conventional list decoding with a binary classifier. After the list decoding, the CRC-passing result is further checked by the classifier to determine whether the received signal is the intended control information or not. The classifier works by inspecting the squared Euclidean distance ratio (SEDR) between the received signal and the hypotheses, and the decision threshold used in the classifier is learnt automatically by a neural network (NN). The presented results show that very short CRC sequences can suffice to reach the target FAR, and this CRC overhead reduction ultimately contributes to the performance improvement of polar codes.

Posted Content
TL;DR: By revisiting the error event of the polarized channel, a new concept, named the polar spectrum, is introduced from the weight distribution of polar codes, and a systematic framework in terms of the polar spectrum is established to analyze and construct polar codes.
Abstract: Polar codes are the first class of constructive channel codes achieving the symmetric capacity of binary-input discrete memoryless channels. However, the analysis and construction of polar codes involve complex iterative calculations. In this paper, by revisiting the error event of the polarized channel, a new concept, named the polar spectrum, is introduced from the weight distribution of polar codes. We thus establish a systematic framework in terms of the polar spectrum to analyze and construct polar codes. Using the polar spectrum, we derive the union bound and the union-Bhattacharyya (UB) bound on the error probability of polar codes, as well as upper/lower bounds on the symmetric capacity of the polarized channel. The analysis based on the polar spectrum can intuitively interpret the performance of polar codes under successive cancellation (SC) decoding. Furthermore, we analyze the coding structure of polar codes and design an enumeration algorithm, based on the MacWilliams identities, to efficiently calculate the polar spectrum. Finally, two construction metrics, named UB bound weight (UBW) and simplified UB bound weight (SUBW), are designed based on the UB bound and the polar spectrum. Not only are these two constructions simple and explicit for practical polar coding, but they can also generate polar codes with similar (under SC decoding) or superior (under SC list decoding) performance compared with those based on traditional methods.
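Given a weight distribution {A_d} such as the polar spectrum, the union-Bhattacharyya bound takes the standard form P_e <= sum over d of A_d * Z^d, where Z is the channel's Bhattacharyya parameter. The sketch below evaluates it for the (8,4) extended Hamming code (equivalently RM(1,3), whose weight distribution A_4 = 14, A_8 = 1 is classical) on a binary symmetric channel; the paper's UBW/SUBW construction metrics build on this bound but are not reproduced here.

```python
import math

def union_bhattacharyya_bound(spectrum, Z):
    """Union-Bhattacharyya bound on block error probability:
    P_e <= sum_d A_d * Z^d, where spectrum maps each nonzero codeword
    weight d to its multiplicity A_d and Z is the Bhattacharyya parameter."""
    return sum(A * Z ** d for d, A in spectrum.items())

# Weight distribution of the (8,4) extended Hamming / RM(1,3) code.
spectrum = {4: 14, 8: 1}

# For a BSC with crossover probability p, Z = 2 * sqrt(p * (1 - p)).
Z_bsc = 2 * math.sqrt(0.01 * 0.99)
bound = union_bhattacharyya_bound(spectrum, Z_bsc)
```

Since Z < 1 for any non-trivial channel, low-weight codewords dominate the sum, which is why a construction metric driven by the spectrum's low-weight terms can steer the code design.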

Proceedings ArticleDOI
07 Jul 2019
TL;DR: This paper revisits the error analysis of the original SC decoding along with the concept of Arıkan's helper genie and presents polar codes concatenated with an underlying high-rate convolutional code, which are shown to have superior performance over CRC-aided LSC.
Abstract: Polar coding has found its way into many realms of communications and information theory. In most implementation setups, it is accompanied by the list successive cancellation (LSC) decoding algorithm, which is shown to provide superior error performance compared to the original successive cancellation (SC) decoding method. While SC decoding is fairly well studied, the exact math behind LSC's superior performance still remains a mystery. Multiple techniques have been proposed to further improve LSC's error performance or to reduce its computational complexity; these are usually motivated by heuristic reasons and validated through numerical simulations. The most notable example is CRC-aided LSC, which drastically improves LSC's performance by concatenating the polar code with a high-rate cyclic redundancy check (CRC) code. In this paper, we present polar codes that are concatenated with an underlying high-rate convolutional code, which are shown to have superior performance over CRC-aided LSC. We also present a computationally efficient decoding algorithm for these codes which resembles the techniques used in the Viterbi algorithm, and hence is called the convolutional decoding algorithm. To do this, we revisit the error analysis of the original SC decoding along with the concept of Arıkan's helper genie. We address some shortcomings of CRC-aided LSC and discuss how to work around them by emulating a convolutional code instead of a CRC code. Contrary to CRC codes, most convolutional codes are not a proper choice for concatenation with polar codes. We introduce the bucketing algorithm to construct suitable punctured convolutional codes for this purpose. The proposed framework can accommodate any such underlying convolutional code, which allows one to search for the optimal convolutional code based on its design parameters.

Proceedings ArticleDOI
07 Jul 2019
TL;DR: A new decoding algorithm is proposed for BMST-TBCC, which integrates a serial list Viterbi algorithm (SLVA) with a soft check instead of a conventional cyclic redundancy check (CRC), and which shows performance comparable to polar codes.
Abstract: This paper is concerned with block Markov superposition transmission (BMST) of tail-biting convolutional codes (TBCCs). We propose a new decoding algorithm for BMST-TBCC, which integrates a serial list Viterbi algorithm (SLVA) with a soft check instead of a conventional cyclic redundancy check (CRC). The basic idea is that, compared with an erroneous candidate codeword, the correct candidate codeword for the first sub-frame has less influence on the output of the Viterbi algorithm for the second sub-frame. The threshold is then determined by statistical learning based on the introduced empirical divergence function. The numerical results illustrate that, under the constraint of equivalent decoding delay, BMST-TBCC has performance comparable to that of polar codes. As a result, BMST-TBCCs may find applications in streaming ultra-reliable and low-latency communication (URLLC) data services.
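The serial-list-plus-threshold idea described above can be sketched generically: query candidates one at a time, stop at the first whose soft score clears a threshold, and fall back to the best candidate seen if none passes. This is an illustrative skeleton only; `next_candidate` stands in for an SLVA query and `soft_score` for the paper's empirical divergence function, both hypothetical names:

```python
def serial_list_decode(next_candidate, soft_score, threshold, max_list):
    """Serially query ranked candidates until one passes the soft check.

    next_candidate(k): k-th best candidate (stand-in for an SLVA query).
    soft_score(c): lower is better (stand-in for the empirical divergence).
    Returns (candidate, queries_used); falls back to the best-scoring
    candidate seen if none clears the threshold.
    """
    best, best_s = None, float("inf")
    for k in range(max_list):
        c = next_candidate(k)
        s = soft_score(c)
        if s <= threshold:
            return c, k          # early termination: soft check passed
        if s < best_s:
            best, best_s = c, s  # remember best fallback
    return best, max_list

# Toy usage with a precomputed ranked list and made-up scores.
cands = ["ATTACK", "ATTACH", "RETREAT"]
scores = {"ATTACK": 3.0, "ATTACH": 0.4, "RETREAT": 2.0}
print(serial_list_decode(lambda k: cands[k], scores.get, 1.0, 3))
```

Raising the threshold terminates earlier (fewer SLVA queries) at the cost of accepting weaker candidates, which is exactly the performance-complexity knob the statistical-learning step tunes.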

Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this article, a neural network is employed to identify a suspicious device, which is most likely to be falsely alarmed during the first round of the AMP algorithm, and it is expected to learn the unknown features of the false alarm event and the implicit correlation structure in the quantized pilot matrix.
Abstract: In this paper, we propose a deep learning aided list approximate message passing (AMP) algorithm to further improve the user identification performance in massive machine-type communications. A neural network is employed to identify a "suspicious device", i.e., the device most likely to be falsely alarmed during the first round of the AMP algorithm. The neural network returns the false alarm likelihood, and it is expected to learn the unknown features of the false alarm event and the implicit correlation structure in the quantized pilot matrix. Then, borrowing the idea of list decoding from the field of error control coding, we propose to force the suspicious device to be inactive in every iteration of the AMP algorithm in the second round. The proposed scheme can effectively combat the interference caused by the suspicious device and thus improve the user identification performance. Simulations demonstrate that the proposed algorithm improves the mean squared error performance of recovering the sparse unknown signals in comparison to the conventional AMP algorithm with the minimum mean squared error denoiser.
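A toy sketch of the second-round idea, under loudly stated simplifications: plain iterative soft-thresholding stands in for AMP (the Onsager correction term and the MMSE denoiser of the actual algorithm are omitted), and the index flagged as suspicious is simply clamped to zero in every iteration:

```python
import numpy as np

def amp_with_blacklist(A, y, blacklist, iters=30, theta=0.1):
    """Simplified AMP-style recovery with enforced inactivity.

    Indices in `blacklist` (suspicious devices flagged by, e.g., a neural
    network after a first decoding round) are clamped to zero in every
    iteration, mimicking the paper's second-round list idea. This is
    iterative soft-thresholding, not full AMP: no Onsager correction.
    Assumes the spectral norm of A is at most 1 (unit step size).
    """
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = soft(x + A.T @ r, theta)  # gradient step + soft threshold
        x[blacklist] = 0.0            # enforce suspicious devices inactive
        r = y - A @ x                 # update residual
    return x
```

Clamping is the proximal step of an added "device is inactive" constraint, so the iteration still decreases the usual lasso objective while excluding the suspicious index from the support.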

Proceedings ArticleDOI
06 Jan 2019
TL;DR: In this article, double samplers, whose existence is proved using high-dimensional expanders, are used to amplify the distance of error-correcting code constructions such as the ABNNR construction [ABN+92] in a way that enables efficient list decoding.
Abstract: We develop the notion of double samplers, first introduced by Dinur and Kaufman [DK17], which are samplers with additional combinatorial properties, and whose existence we prove using high dimensional expanders. We show how double samplers give a generic way of amplifying distance in a way that enables efficient list decoding. Many error correcting code constructions achieve large distance by starting with a base code C of moderate distance and then amplifying the distance using a sampler; the ABNNR code construction [ABN+92] is one such example. We show that if the sampler is part of a larger double sampler, then the construction has an efficient list-decoding algorithm, and the list-decoding algorithm is oblivious to the base code C (i.e., it runs the unique decoder for C in a black-box way). Our list-decoding algorithm works as follows: it uses a local voting scheme from which it constructs a unique games constraint graph. The constraint graph is an expander, so we can solve unique games efficiently. These solutions are the output of the list decoder. This is a novel use of a unique games algorithm as a subroutine in a decoding procedure, as opposed to the more common situation in which unique games are used for demonstrating hardness results. Double samplers and high dimensional expanders are akin to pseudorandom objects in their utility, but they greatly exceed random objects in their combinatorial properties. We believe that these objects hold significant potential for coding theoretic constructions and view this work as demonstrating the power of double samplers in this context.

Posted Content
TL;DR: It is shown that Gallager's ensemble of Low-Density Parity Check (LDPC) codes achieves list-decoding capacity with high probability, and these are the first graph-based codes shown to have this property.
Abstract: We show that Gallager's ensemble of Low-Density Parity Check (LDPC) codes achieves list-decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result opens up a potential avenue towards truly linear-time list-decodable codes that achieve list-decoding capacity. Our result on list decoding follows from a much more general result: any $\textit{local}$ property satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code from Gallager's distribution. Local properties are properties characterized by the exclusion of small sets of codewords, and include list-decoding, list-recovery and average-radius list-decoding. In order to prove our results on LDPC codes, we establish sharp thresholds for when local properties are satisfied by a random linear code. More precisely, we show that for any local property $\mathcal{P}$, there is some $R^*$ so that random linear codes of rate slightly less than $R^*$ satisfy $\mathcal{P}$ with high probability, while random linear codes of rate slightly more than $R^*$ with high probability do not. We also give a characterization of the threshold rate $R^*$.
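For concreteness, a parity-check matrix from Gallager's (d_v, d_c)-regular ensemble referenced above can be sampled by stacking d_v bands: the first band has rows of d_c consecutive ones, and each further band is an independent uniform column permutation of it. A minimal sketch (the function name is ours):

```python
import numpy as np

def gallager_ldpc(n, dv, dc, rng=None):
    """Sample a parity-check matrix H from Gallager's (dv, dc)-regular
    LDPC ensemble. Requires dc to divide the block length n.

    Returns an (n*dv/dc) x n binary matrix with every row of weight dc
    and every column of weight dv (design rate 1 - dv/dc).
    """
    assert n % dc == 0
    if rng is None:
        rng = np.random.default_rng()
    rows = n // dc
    # First band: row i covers columns [i*dc, (i+1)*dc).
    base = np.zeros((rows, n), dtype=np.uint8)
    for i in range(rows):
        base[i, i * dc:(i + 1) * dc] = 1
    # Remaining dv-1 bands: independent random column permutations.
    bands = [base] + [base[:, rng.permutation(n)] for _ in range(dv - 1)]
    return np.vstack(bands)
```

Each band touches every column exactly once, which is what forces the column weight to be exactly dv and makes the ensemble's local structure easy to analyze.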

Journal ArticleDOI
TL;DR: Simulation results show that SRBO-CC with list decoding is competitive with polar codes and that the performance-complexity tradeoffs can be achieved by adjusting the statistical threshold.
Abstract: In this Letter, a list decoding algorithm is proposed for the semi-random block-oriented convolutional code (SRBO-CC). To decode each sub-frame, a list of candidates is serially computed until a qualified one is found or the maximum list size is reached, where the correctness of a decoding candidate is checked against a statistical threshold. Simulation results show that SRBO-CC with list decoding is competitive with polar codes and that performance-complexity tradeoffs can be achieved by adjusting the statistical threshold.

Journal ArticleDOI
TL;DR: Two separate methods, via two different approaches, are developed for list decoding subspace codes and rank-metric codes; these provide improved tradeoffs between rate and error-correction capability, and a folded version of Koetter-Kschischang codes is constructed.
Abstract: The problem of list decoding algebraic subspace codes and rank-metric codes is considered. We develop two separate methods, via two different approaches, for list decoding subspace codes and rank-metric codes. For certain code parameters, these methods provide better tradeoffs between rate and error-correction capability than those of the Koetter-Kschischang codes, in the domain of subspace codes, and of the Gabidulin codes, in the domain of rank-metric codes, as well as several other extensions thereof. In the first approach, we introduce the notion of root multiplicities for a certain sub-ring of the ring of linearized polynomials. In the list-decoding algorithm, multiple roots are enforced for the interpolation polynomial in order to achieve an improved error-correction radius for an extended family of Koetter-Kschischang subspace codes. The normalized error-correction radius for this approach is \begin{equation*}\tau _{A} = \frac {2(L+1)}{r+1} -1- \frac {L(L+1)(L+n)}{r(r+1)}R,\end{equation*} where $L$ is the maximum list size, $n$ is the subspace code dimension, $R$ is the rate of the code, and $r$ is the multiplicity parameter. In the second approach, we construct a folded version of Koetter-Kschischang codes. A linear-algebraic list-decoding algorithm is proposed for these codes that achieves the error-correction radius \begin{equation*}\tau _{B} = s(1-sR),\end{equation*} where $s$ is the folding parameter. As opposed to the first approach, the size of the output list in the second approach depends on the underlying field size and is at most $q^{m(s-1)}$, where $q^{m}$ is the size of the field that message symbols are chosen from. It is also shown that the output list size is 1, with high probability, in a probabilistic setting. We utilize the techniques of the second approach in the domain of rank-metric codes to construct folded Gabidulin codes that admit a linear-algebraic list-decoding algorithm.
Our algorithm makes it possible to recover the message provided that the normalized rank of error is at most $1-R-\epsilon$, for any $\epsilon > 0$. Notably, this achieves the Singleton bound on the error-correction radius of a rank-metric code.
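The two radius formulas are easy to evaluate numerically; the helpers below simply transcribe τ_A and τ_B from the abstract, with parameter names following the text:

```python
def tau_A(L, n, R, r):
    """Normalized error-correction radius of the multiplicity-based list
    decoder for extended Koetter-Kschischang codes (first approach):
    tau_A = 2(L+1)/(r+1) - 1 - L(L+1)(L+n)R / (r(r+1))."""
    return 2 * (L + 1) / (r + 1) - 1 - L * (L + 1) * (L + n) * R / (r * (r + 1))

def tau_B(s, R):
    """Radius of the linear-algebraic list decoder for folded
    Koetter-Kschischang codes (second approach): tau_B = s(1 - sR)."""
    return s * (1 - s * R)
```

For instance, with no folding (s = 1) the second formula reduces to the familiar 1 - R, while larger s trades a bigger radius at low rates against a faster decay in R.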

Proceedings ArticleDOI
07 Jul 2019
TL;DR: In this article, the authors studied the list decodability of different ensembles of codes over the real alphabet under the assumption of an omniscient adversary, and derived a lower bound on the list size of typical random spherical codes.
Abstract: We study the list decodability of different ensembles of codes over the real alphabet under the assumption of an omniscient adversary. It is a well-known result that when the source and the adversary have power constraints P and N respectively, the list decoding capacity is equal to $\frac{1}{2}\log \frac{P}{N}$. Random spherical codes achieve capacity with constant (as a function of the blocklength) list sizes, and the goal of the present paper is to obtain a better understanding of the smallest achievable list size as a function of the gap to capacity. We show a reduction from arbitrary codes to spherical codes, and derive a lower bound on the list size of typical random spherical codes. We also give an upper bound on the list size achievable using nested Construction-A lattices and infinite Construction-A lattices. We then define and study a class of infinite constellations that generalize Construction-A lattices and prove upper and lower bounds for the same. Other goodness properties such as packing goodness and AWGN goodness of infinite constellations are proved along the way. Finally, we consider random lattices sampled from the Haar distribution and show that if a certain number-theoretic conjecture is true, then the list size grows as a polynomial function of the gap-to-capacity.
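The quoted capacity $\frac{1}{2}\log\frac{P}{N}$ is straightforward to evaluate; the helper below assumes base-2 logarithms (bits per dimension), a convention the abstract leaves implicit:

```python
import math

def list_decoding_capacity(P, N):
    """List decoding capacity 0.5 * log2(P/N) for an omniscient adversary,
    with source power constraint P and adversary power constraint N.
    Positive only when P > N; zero rate is achievable at P = N."""
    return 0.5 * math.log2(P / N)
```

E.g., doubling the source-to-adversary power ratio adds half a bit per dimension of capacity.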
