Author

Kangjian Qin

Bio: Kangjian Qin is an academic researcher from Zhejiang University. The author has contributed to research in topics: Decoding methods & Block Error Rate. The author has an h-index of 4 and has co-authored 18 publications receiving 127 citations.

Papers
Proceedings ArticleDOI
01 Dec 2017
TL;DR: In this paper, a progressive multi-level bit-flipping decoding algorithm is proposed to correct multiple errors over multi-layer critical sets, each of which is constructed from the remaining undecoded subtree associated with the previous layer.
Abstract: In successive cancellation (SC) polar decoding, an incorrect estimate of any prior unfrozen bit may bring about severe error propagation in the subsequent decoding; thus it is desirable to find and correct an error as early as possible. In this paper, we first construct a critical set S of unfrozen bits, which with high probability (typically >99%) includes the bit where the first error happens. Then we develop a progressive multi-level bit-flipping decoding algorithm to correct multiple errors over multi-layer critical sets, each of which is constructed using the remaining undecoded subtree associated with the previous layer. The level in fact indicates the number of independent errors that could be corrected. We show that as the level increases, the block error rate (BLER) performance of the proposed progressive bit-flipping decoder competes with that of the corresponding cyclic redundancy check (CRC) aided successive cancellation list (CA-SCL) decoder, e.g., a level-4 progressive bit-flipping decoder is comparable to the CA-SCL decoder with a list size of L=32. Furthermore, the average complexity of the proposed algorithm is much lower than that of an SCL decoder (and is similar to that of SC decoding) at medium to high signal-to-noise ratio (SNR).
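
Read as pseudocode, the progressive scheme is essentially a breadth-first search over flip sets of growing size. The Python sketch below is only an illustration of that control flow under stated assumptions: sc_decode, crc_ok and next_critical_set are hypothetical helpers standing in for an SC decoder that can force selected bit positions to the opposite hard decision, the embedded CRC check, and the construction of the next-layer critical set from the subtree left undecoded after the flips applied so far. None of these names or signatures come from the paper.

def progressive_bit_flipping(llrs, sc_decode, crc_ok, next_critical_set, max_level):
    """Sketch of progressive multi-level bit-flipping around an SC decoder.

    Tries flip sets of growing size, up to `max_level` independent flips.
    All decoding primitives are passed in as (hypothetical) callables.
    """
    u_hat = sc_decode(llrs, frozenset())
    if crc_ok(u_hat):
        return u_hat                          # plain SC decoding already succeeded

    frontier = [frozenset()]                  # flip sets kept from the previous level
    for level in range(1, max_level + 1):
        next_frontier = []
        for flips in frontier:
            # Layer-`level` critical set: candidate positions of the first error
            # still remaining after the flips applied so far.
            for i in next_critical_set(flips):
                candidate = flips | {i}
                u_hat = sc_decode(llrs, candidate)
                if crc_ok(u_hat):
                    return u_hat              # up to `level` independent errors corrected
                next_frontier.append(candidate)
        frontier = next_frontier
    return sc_decode(llrs, frozenset())       # give up and return the plain SC estimate

The maximum level bounds the size of the flip sets that are explored, which is why it tracks the number of independent errors the decoder can correct; the CRC acts as the stopping rule, so the average complexity collapses to that of a single SC pass whenever plain SC succeeds.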

72 citations

Proceedings ArticleDOI
Wei Lyu, Zhaoyang Zhang, Chunxu Jiao, Kangjian Qin, Huazi Zhang
20 May 2018
TL;DR: Numerical results show that the RNN has the best decoding performance, yet at the price of the highest computational overhead, and that there exists a saturation length for each type of neural network, caused by their restricted learning abilities.
Abstract: With the demand for high data rates and low latency in fifth-generation (5G) systems, the deep neural network decoder (NND) has become a promising candidate due to its capability of one-shot decoding and parallel computing. In this paper, three types of NND, i.e., the multi-layer perceptron (MLP), the convolutional neural network (CNN) and the recurrent neural network (RNN), are proposed with the same parameter magnitude. The performance of these deep neural networks is evaluated through extensive simulation. Numerical results show that the RNN has the best decoding performance, yet at the price of the highest computational overhead. Moreover, we find that there exists a saturation length for each type of neural network, which is caused by their restricted learning abilities.
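
As a concrete (and deliberately tiny) illustration of the one-shot decoding idea, the following PyTorch sketch maps a received noisy word of length N directly to estimates of the K information bits in a single forward pass. The code length, layer widths and activation choices here are illustrative assumptions, not the configurations evaluated in the paper.

import torch
import torch.nn as nn

N, K = 16, 8                                   # code length and info bits (illustrative)

class MLPDecoder(nn.Module):
    """A minimal multi-layer perceptron neural network decoder (NND)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, K), nn.Sigmoid(),    # per-bit posterior probabilities
        )

    def forward(self, y):                      # y: received channel values, shape (batch, N)
        return self.net(y)

decoder = MLPDecoder()
y = torch.randn(32, N)                         # dummy batch of noisy received words
u_hat = (decoder(y) > 0.5).int()               # hard decisions on all K bits at once

The CNN and RNN variants compared in the paper replace the fully connected stack with convolutional or recurrent layers of a similar parameter count; training in all cases amounts to fitting on (noisy codeword, information bits) pairs, typically with a per-bit binary cross-entropy loss.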

70 citations

Journal ArticleDOI
TL;DR: Numerical results show that the proposed ET-bit-flipping decoders can provide almost the same BLER performance as the state-of-the-art cyclic redundancy check-aided SC list decoders, with an average computational complexity and decoding latency similar to those of the SC decoder in the medium-to-high SNR regime.
Abstract: In successive cancellation (SC) polar decoding, an incorrect estimate of any prior unfrozen bit may bring about severe error propagation in the subsequent decoding, and thus it is desirable to find and correct an error as early as possible. In this paper, we investigate a progressive bit-flipping decoder which corrects at most $L$ independent errors in SC decoding. In particular, we first study the distribution of the first error position in SC decoding, and a critical set is proposed which, with high probability, includes the bit where the first error occurs regardless of the channel realization. Second, a progressive bit-flipping decoding algorithm is proposed based on a search tree, which is established with a modified critical set in a progressive manner. The maximum level of the search tree is shown to coincide well with the number of independent errors that can be corrected. On this basis, the lower bound on the block error rate (BLER) performance of a progressive bit-flipping decoder that corrects at most $L$ errors is derived, and we show that the bound can be tightly achieved by the proposed algorithm for some $L$. Moreover, an early-terminated bit-flipping (ET-Bit-Flipping) decoder is proposed to reduce the computational complexity and decoding latency of the original progressive bit-flipping scheme. Finally, numerical results show that the proposed ET-bit-flipping decoders can provide almost the same BLER performance as the state-of-the-art cyclic redundancy check-aided SC list decoders, with an average computational complexity and decoding latency similar to those of the SC decoder in the medium-to-high SNR regime.

35 citations

Journal ArticleDOI
TL;DR: This paper first proposes a novel codeword-searching metric that proves to be hardware-friendly; an adaptive OSD algorithm is then developed to adaptively rule out unpromising codewords, thus significantly reducing the latency.
Abstract: Deploying polar codes in ultra-reliable low-latency communication (URLLC) is of critical importance and is currently receiving tremendous attention in both academia and industry. However, most state-of-the-art polar code decoders, such as the progressive bit-flipping (PBF) decoder and the successive cancellation list (SCL) decoder, involve strong data dependencies and suffer from large decoding delay. This contradicts the low-latency requirement of URLLC. To address this issue, this paper appeals to parallel computing and proposes an adaptive ordered statistic decoder (OSD). In particular, we first propose a novel codeword-searching metric which proves to be hardware-friendly, and an adaptive OSD algorithm is then developed to adaptively rule out unpromising codewords, thus significantly reducing the latency. Second, to further reduce the computational complexity of the proposed algorithm, we decompose the code sequence into several independent subcodes, and by handling these subcodes with concatenated adaptive OSDs, a good trade-off between decoding latency and complexity can be achieved. Finally, numerical results show that the proposed adaptive OSD outperforms conventional decoders in terms of block error rate (BLER) and decoding latency.
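
For readers unfamiliar with OSD, the sketch below shows the conventional order-l reprocessing procedure that the adaptive decoder builds on: sort positions by reliability, reduce the generator matrix onto the most reliable independent basis (MRB), and re-encode test patterns obtained by flipping up to l MRB bits. It uses numpy and the classical correlation-discrepancy metric; the paper's hardware-friendly search metric, its adaptive early stopping, and the subcode decomposition are not reproduced here.

import itertools
import numpy as np

def osd_decode(llr, G, order=1):
    """Conventional order-`order` OSD for a binary linear code.

    llr: length-N channel LLRs (positive means bit 0 more likely).
    G:   K x N binary generator matrix (assumed full rank).
    """
    K, N = G.shape
    hard = (llr < 0).astype(int)                 # bitwise hard decisions
    perm = np.argsort(-np.abs(llr))              # most reliable positions first

    # Gaussian elimination on the reliability-permuted generator matrix:
    # the pivot columns form the most reliable independent basis (MRB).
    Gp = (G[:, perm] % 2).astype(int)
    pivots, row = [], 0
    for col in range(N):
        if row == K:
            break
        nz = np.nonzero(Gp[row:, col])[0]
        if len(nz) == 0:
            continue
        Gp[[row, row + nz[0]]] = Gp[[row + nz[0], row]]
        for r in range(K):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        pivots.append(col)
        row += 1

    mrb = np.array(pivots)
    hard_p = hard[perm]
    base_info = hard_p[mrb]                       # hard decisions on the MRB

    best_cw, best_metric = None, np.inf
    for w in range(order + 1):                    # reprocessing: flip at most `order` MRB bits
        for flip in itertools.combinations(range(K), w):
            info = base_info.copy()
            info[list(flip)] ^= 1
            cand = info @ Gp % 2                  # re-encode the test pattern
            # Correlation discrepancy: total reliability on disagreeing positions.
            metric = np.sum(np.abs(llr[perm])[cand != hard_p])
            if metric < best_metric:
                best_cw, best_metric = cand, metric

    cw = np.empty(N, dtype=int)
    cw[perm] = best_cw                            # undo the reliability permutation
    return cw

The adaptive part of the paper's algorithm concerns how aggressively this candidate list is pruned: rather than exhausting all order-l patterns, unpromising codewords are ruled out on the fly using the proposed metric, which is what cuts the latency.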

6 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: Capitalizing on a novel codeword-searching metric, the proposed decoder proves to be hardware-friendly and can adaptively rule out unpromising codewords, thus reducing the latency significantly. Numerical results show that the adaptive OSD outperforms conventional decoders in terms of block error rate and decoding latency.
Abstract: Deploying polar codes in ultra-reliable low-latency communication (URLLC) has received tremendous attention in both academia and industry. However, state-of-the-art polar code decoders, e.g., the successive cancellation (SC) and successive cancellation list (SCL) decoders, involve strong data dependencies and suffer from large decoding delay. This contradicts the low-latency requirement of URLLC. To address this issue, this paper appeals to parallel computing and proposes an adaptive ordered statistic decoder (OSD). Capitalizing on a novel codeword-searching metric, the proposed decoder proves to be hardware-friendly and can adaptively rule out unpromising codewords, thus reducing the latency significantly. Numerical results show that the adaptive OSD outperforms conventional decoders in terms of block error rate (BLER) and decoding latency.

3 citations


Cited by
Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, which it categorizes by different domains.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

975 citations

Posted Content
TL;DR: In this article, the authors provide an encyclopedic review of mobile and wireless networking research based on deep learning, which they categorize by different domains and discuss how to tailor deep learning to mobile environments.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, agile management of network resources to maximize user experience, and extraction of fine-grained real-time analytics. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

300 citations

Journal ArticleDOI
01 Oct 2019
TL;DR: This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to address key problems in IoT wireless communications with an emphasis on its ad hoc networking aspect.
Abstract: The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence and optimized routing will soon become essential components of the IoT wireless communication paradigm. In this vision, IoT devices must be able to not only learn to autonomously extract spectrum knowledge on-the-fly from the network but also leverage such knowledge to dynamically change appropriate wireless parameters ( e.g. , frequency band, symbol modulation, coding rate, route selection, etc.) to reach the network’s optimal operating point. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios; (ii) the computational requirements of traditional optimization techniques may prove unbearable for IoT devices. To address the above challenges, much research has been devoted to exploring the use of machine learning to address problems in the IoT wireless communications domain. The reason behind machine learning’s popularity is that it provides a general framework to solve very complex problems where a model of the phenomenon being learned is too complex to derive or too dynamic to be summarized in mathematical terms. This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to address key problems in IoT wireless communications with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, by adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link and network layer of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we also provide a brief discussion of the application of machine learning in IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges.

194 citations

Journal ArticleDOI
09 Apr 2020
TL;DR: In this article, the authors consider the Gaussian noise channel with feedback and present the first family of codes obtained via deep learning, which significantly outperforms state-of-the-art codes designed over several decades of research.
Abstract: The design of codes for communicating reliably over a statistically well-defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly outperforms state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve the reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information-theoretic insights harmoniously with recurrent-neural-network-based encoders and decoders to create novel codes that outperform known codes by three orders of magnitude in reliability and achieve a 3 dB gain in terms of SNR. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths, (b) composability with known codes, and (c) adaptation to practical constraints. This result also has broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
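
To make the idea concrete, here is a deliberately simplified PyTorch sketch of a learned feedback code: an RNN encoder that produces each parity symbol only after seeing, via feedback, what the receiver actually observed for that bit, paired with a bidirectional RNN decoder. This is an illustration of the concept under assumed dimensions and layer choices; it is not the code construction proposed in the paper.

import torch
import torch.nn as nn

class FeedbackEncoder(nn.Module):
    """Emits one parity symbol per message bit, conditioned on the noisy
    feedback of the first (uncoded) transmission of that bit."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, bits, noise):
        feedback = bits + noise                        # what the receiver saw in phase 1
        x = torch.stack([bits, feedback], dim=-1)      # (batch, T, 2)
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)                 # parity stream, shape (batch, T)

class FeedbackDecoder(nn.Module):
    """Bidirectional GRU mapping the two received streams to bit posteriors."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, rx_phase1, rx_parity):
        y = torch.stack([rx_phase1, rx_parity], dim=-1)
        h, _ = self.rnn(y)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # per-bit probability of '1'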

66 citations

Proceedings ArticleDOI
12 May 2019
TL;DR: In this paper, a low-complexity recurrent neural network (RNN) polar decoder with codebook-based weight quantization is proposed to reduce the memory overhead by 98% and alleviate computational complexity with slight performance loss.
Abstract: Polar codes have drawn much attention and been adopted in 5G New Radio (NR) due to their capacity-achieving performance. Recently, as the emerging deep learning (DL) technique has achieved breakthroughs in many fields, neural network decoders have been proposed to obtain faster convergence and better performance than belief propagation (BP) decoding. However, neural networks are memory-intensive, which hinders the deployment of DL in communication systems. In this work, a low-complexity recurrent neural network (RNN) polar decoder with codebook-based weight quantization is proposed. Our test results show that it effectively reduces the memory overhead by 98% and alleviates the computational complexity with only a slight performance loss.
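
The memory saving comes from weight sharing: instead of storing a 32-bit float per weight, each weight is replaced by a small index into a short codebook of shared values. The numpy sketch below illustrates this idea with a generic k-means codebook; the codebook size and the clustering procedure here are assumptions for illustration, not the paper's specific quantization scheme.

import numpy as np

def build_codebook(weights, codebook_size=16, iters=20):
    """Cluster a weight tensor into `codebook_size` shared values (1-D k-means)
    and return (indices, codebook)."""
    w = weights.ravel()
    codebook = np.linspace(w.min(), w.max(), codebook_size)   # initial centroids
    for _ in range(iters):
        idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        for k in range(codebook_size):
            if np.any(idx == k):
                codebook[k] = w[idx == k].mean()               # update centroid
    return idx.reshape(weights.shape).astype(np.uint8), codebook

def dequantize(indices, codebook):
    return codebook[indices]                                   # approximate weights

# Storing b-bit indices plus a tiny codebook instead of 32-bit floats shrinks
# the weight memory by roughly a factor of 32/b (here b = 4 for 16 entries).
W = np.random.randn(64, 64).astype(np.float32)
idx, cb = build_codebook(W, codebook_size=16)
W_hat = dequantize(idx, cb)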

58 citations