
Showing papers on "Codebook" published in 2017


Journal ArticleDOI
TL;DR: The design of multi-resolution beamforming sequences that enable the system to quickly search out the dominant channel direction for single-path channels is considered; the design generates a multilevel beamforming sequence that strikes a balance between minimizing the training overhead and maximizing the beamforming gain.
Abstract: Millimeter wave (mm-wave) communication is expected to be widely deployed in fifth generation (5G) wireless networks due to the substantial bandwidth available for licensed and unlicensed use at mm-wave frequencies. To overcome the higher path loss observed at mm-wave bands, most prior work focused on the design of directional beamforming using analog and/or hybrid beamforming techniques in large-scale multiple-input multiple-output systems. Obtaining potential gains from highly directional beamforming in practical systems hinges on sufficient levels of channel estimation accuracy, where the problem of channel estimation becomes more challenging due to the substantial training overhead needed to sound all directions using a high-resolution narrow beam. In this paper, we consider the design of multi-resolution beamforming sequences to enable the system to quickly search out the dominant channel direction for single-path channels. The resulting design generates a multilevel beamforming sequence that strikes a balance between minimizing the training overhead and maximizing beamforming gain, where a subset of multilevel beamforming vectors is chosen adaptively to maximize the average data rate within a constrained time. We propose an efficient method to design a hierarchical multi-resolution codebook utilizing a Butler matrix, i.e., a generalized discrete Fourier transform matrix. Numerical results show the effectiveness of the proposed algorithm.
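The hierarchical multi-resolution idea can be sketched in code. Below is a minimal, illustrative Python sketch, not the paper's Butler-matrix construction: narrow beams are columns of an n-point DFT matrix, and a hypothetical wider codeword is formed by superposing a group of adjacent narrow beams and renormalizing. All function names are ours.

```python
import cmath
import math

def dft_codebook(n):
    # Columns of the n-point DFT matrix: one narrow beam per direction.
    return [[cmath.exp(-2j * math.pi * k * m / n) / math.sqrt(n)
             for m in range(n)] for k in range(n)]

def wide_beam(narrow_beams):
    # Hypothetical wider codeword: superpose adjacent narrow DFT beams
    # and renormalize to unit power.
    n = len(narrow_beams[0])
    s = [sum(b[m] for b in narrow_beams) for m in range(n)]
    norm = math.sqrt(sum(abs(x) ** 2 for x in s))
    return [x / norm for x in s]

def hierarchical_codebook(n, levels):
    # Level L holds 2**L codewords, each covering n / 2**L narrow beams,
    # so a binary tree search needs 2 probes per level instead of n total.
    narrow = dft_codebook(n)
    book = []
    for lvl in range(1, levels + 1):
        width = n // (2 ** lvl)
        book.append([wide_beam(narrow[i:i + width])
                     for i in range(0, n, width)])
    return book
```

A tree search descends this codebook level by level, halving the angular uncertainty each stage, which is the training-overhead saving the abstract refers to.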

221 citations


Proceedings ArticleDOI
22 Feb 2017
TL;DR: This work partitions the encoding graph into smaller sub-blocks and trains them individually, closely approaching maximum a posteriori (MAP) performance per sub-block; it quantifies the degradation due to partitioning and compares the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.
Abstract: The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a high level of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.

173 citations


Journal ArticleDOI
TL;DR: This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains and proposes efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures.
Abstract: In millimeter-wave (mmWave) systems, antenna architecture limitations make it difficult to apply conventional fully digital precoding techniques but call for low-cost analog radio frequency (RF) and digital baseband hybrid precoding methods. This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains. Two performance measures, maximizing the spectral efficiency and the energy efficiency of the system, are considered. We propose a codebook-based RF precoding design and obtain the channel state information via a beam sweep procedure. Via the codebook-based design, the original system is transformed into a virtual multiuser downlink system with the RF chain constraint. Consequently, we are able to simplify the complicated hybrid precoding optimization problems to joint codeword selection and precoder design (JWSPD) problems. Then, we propose efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures. Finally, extensive numerical results are provided to validate the effectiveness of the proposed hybrid precoders.

162 citations


Journal ArticleDOI
TL;DR: A new hierarchical codebook is proposed to achieve uniform BA performance with low overhead, along with a power allocation scheme used in different training stages to further improve the BA performance.
Abstract: Owing to abundant spectrum resources, millimeter wave (mmwave) communication promises to provide Gbps data rates, which, however, may be restricted by large path-loss. Thus, antenna arrays are commonly used along with beam alignment (BA) as an important step to achieve the array gain. Efficient BA relies on the beam training codebook design. In this paper, we propose a new hierarchical codebook to achieve uniform BA performance with low overhead. To better elaborate on the design principle, a single-path channel model is considered first to frame the proposal. The codebook design is formulated as an optimization problem, where the ripple in the main/side lobes is constrained such that each training beam is close to the ideal one with a flat magnitude response and a narrow transition band. Then, we propose an efficient algorithm to find such a beam training codebook. Furthermore, we derive closed-form expressions of the BA misalignment probability or error rate of the proposed beam training codebook. Our results reveal that using the proposed codebook, the error rate of tree-search-based BA exponentially decreases with the SNR for a given channel, and linearly decreases in the log–log coordinate axis for a fading channel. We further propose a power allocation scheme used in different training stages to further improve the BA performance. Finally, the proposed framework is extended to the more complex case of multi-path channels. Numerical results confirm the effectiveness of the proposed training codebook and power allocation scheme as well as the accuracy of the performance analysis.

134 citations


Journal ArticleDOI
TL;DR: This paper presents an effective image retrieval method by combining high-level features from a convolutional neural network (CNN) model and low-level features from dot-diffused block truncation coding (DDBTC) to improve the overall retrieval rate.
Abstract: This paper presents an effective image retrieval method by combining high-level features from a convolutional neural network (CNN) model and low-level features from dot-diffused block truncation coding (DDBTC). The low-level features, e.g., texture and color, are constructed by a vector-quantization-indexed histogram from the DDBTC bitmap, maximum, and minimum quantizers. Conversely, high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep-learning two-layer codebook features are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate and average recall rate (ARR), are employed to examine various data sets. As documented in the experimental results, the proposed schemes can achieve superior performance compared with the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
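The vector-quantization-indexed histogram behind the low-level features can be illustrated with a small sketch. The function name is ours and the toy vectors stand in for real DDBTC bitmap/quantizer blocks: each block is assigned to its nearest codeword, and the normalized counts form the feature histogram.

```python
def vq_histogram(blocks, codebook):
    # Assign each block to its nearest codeword (squared Euclidean
    # distance) and return the normalized count per codeword.
    hist = [0] * len(codebook)
    for b in blocks:
        d = [sum((x - y) ** 2 for x, y in zip(b, c)) for c in codebook]
        hist[d.index(min(d))] += 1
    total = sum(hist)
    return [h / total for h in hist]
```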

118 citations


Proceedings Article
12 Feb 2017
TL;DR: The proposed collective deep quantization (CDQ) approach is the first attempt to introduce quantization in end-to-end deep architecture for cross-modal retrieval, and shows state-of-the-art results on standard benchmarks.
Abstract: Cross-modal similarity retrieval is a problem about designing a retrieval system that supports querying across content modalities, e.g., using an image to retrieve texts. This paper presents a compact coding solution for efficient cross-modal retrieval, with a focus on the quantization approach, which has already shown superior performance over hashing solutions in single-modal similarity retrieval. We propose a collective deep quantization (CDQ) approach, which is the first attempt to introduce quantization in an end-to-end deep architecture for cross-modal retrieval. The major contribution lies in jointly learning deep representations and the quantizers for both modalities using carefully-crafted hybrid networks and well-specified loss functions. In addition, our approach simultaneously learns the common quantizer codebook for both modalities, through which the cross-modal correlation can be substantially enhanced. CDQ enables efficient and effective cross-modal retrieval using inner product distance computed based on the common codebook with fast distance table lookup. Extensive experiments show that CDQ yields state-of-the-art cross-modal retrieval results on standard benchmarks.
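The fast distance-table lookup that CDQ relies on can be sketched generically. This is a quantization-style scoring sketch under our own naming, not the paper's implementation: the query's inner products with every codeword of each sub-codebook are precomputed once, and each database item, stored only as codeword indices, is scored by table lookups instead of full inner products.

```python
def build_tables(query, codebooks):
    # One table per sub-codebook: inner products of the matching query
    # chunk with every codeword in that sub-codebook.
    m = len(codebooks)
    d = len(query) // m
    tables = []
    for i, cb in enumerate(codebooks):
        chunk = query[i * d:(i + 1) * d]
        tables.append([sum(q * c for q, c in zip(chunk, w)) for w in cb])
    return tables

def score(item_codes, tables):
    # An item is one codeword index per sub-codebook; its inner-product
    # score is the sum of the corresponding table entries.
    return sum(t[k] for t, k in zip(tables, item_codes))
```

The cost per database item is m table lookups, independent of the embedding dimension, which is what makes quantization-based retrieval fast.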

103 citations


Journal ArticleDOI
TL;DR: This paper proposes a heuristic approach to design a hierarchical codebook exploiting beam widening with the multi-RF-chain sub-array (BMW-MS) technique and proposes a metric, termed generalized detection probability (GDP), to evaluate the quality of an arbitrary codeword.
Abstract: In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. Due to the limited saturation power of the mmWave power amplifier, we consider the per-antenna power constraint (PAPC). We first propose a metric, termed generalized detection probability (GDP), to evaluate the quality of an arbitrary codeword. This metric not only enables an optimization approach for mmWave codebook design, but also can be used to compare the performance of two different codewords/codebooks. To the best of our knowledge, GDP is the first such metric, particularly for mmWave codebook design. We then propose a heuristic approach to design a hierarchical codebook exploiting beam widening with the multi-RF-chain sub-array (BMW-MS) technique. To obtain crucial parameters of BMW-MS, we provide two solutions, namely, a low-complexity search (LCS) solution to optimize the GDP metric and a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS/LCS and BMW-MS/CF achieve very close performances, and they outperform the existing alternatives under the PAPC.

98 citations


Journal ArticleDOI
TL;DR: In this paper, an approximate message passing decoder for sparse superposition codes was proposed, whose decoding complexity scales linearly with the size of the design matrix, and the decoder was rigorously analyzed and it was shown to asymptotically achieve the AWGN capacity with an appropriate power allocation.
Abstract: Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the additive white Gaussian noise (AWGN) channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. Simulation results are provided to demonstrate the performance of the decoder at finite blocklengths. We introduce a power allocation scheme to improve the empirical performance, and demonstrate how the decoding complexity can be significantly reduced by using Hadamard design matrices.
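The codeword construction, a sparse linear combination of design-matrix columns, can be sketched directly. A hypothetical minimal encoder (our naming, not the authors' code): the matrix is split into L sections of equal width, the message selects one column per section, and the codeword is the sum of the selected columns.

```python
def sparse_superposition_encode(design, section_size, message):
    # design: n x (L * section_size) matrix given as a list of rows.
    # message: one column index (0 .. section_size-1) per section.
    # The codeword is the sum of the L selected columns.
    n = len(design)
    codeword = [0.0] * n
    for sec, idx in enumerate(message):
        col = sec * section_size + idx
        for r in range(n):
            codeword[r] += design[r][col]
    return codeword
```

In the actual scheme the design-matrix entries are i.i.d. Gaussian (or rows of a Hadamard matrix for lower complexity), and the AMP decoder recovers which column of each section was sent.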

88 citations


Proceedings ArticleDOI
21 May 2017
TL;DR: Simulation results show that the proposed SCMA codebooks provide good BER performance in both additive white Gaussian noise (AWGN) and flat fading channels.
Abstract: Sparse code multiple access (SCMA) is a competitive non-orthogonal multiple access technique for the fifth generation (5G) wireless communications. The SCMA codebook design is an essential problem. This paper presents a constellation-rotation-based method for designing codebooks for the downlink SCMA system. The basic idea is to make the minimum Euclidean distance of the main constellation as large as possible, so as to achieve good BER (bit error rate) performance. By constructing proper sub-constellations and a Latin matrix, it is able to achieve a good shaping gain. Simulation results show that the proposed SCMA codebooks provide good BER performance in both additive white Gaussian noise (AWGN) and flat fading channels.
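The design criterion, enlarging the minimum Euclidean distance of the superposed (sum) constellation via rotation, can be illustrated numerically. A toy two-user QPSK sketch (our construction, not the paper's codebooks): with no rotation the sum constellation contains coincident points, i.e., zero minimum distance, while rotating the second user separates them.

```python
import cmath
import math

def min_distance(points):
    # Smallest pairwise Euclidean distance in a complex constellation.
    return min(abs(a - b) for i, a in enumerate(points)
               for b in points[i + 1:])

def rotate(points, theta):
    return [p * cmath.exp(1j * theta) for p in points]

def sum_constellation(a, b):
    # Constellation seen on a resource element carrying both users.
    return [x + y for x in a for y in b]

qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
```

For example, `min_distance(sum_constellation(qpsk, qpsk))` is zero (distinct symbol pairs collide), whereas rotating the second user by π/4 yields a strictly positive minimum distance, which is the shaping gain the abstract refers to.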

81 citations


Journal ArticleDOI
TL;DR: This paper establishes fundamental limits in beam-alignment performance under both the exhaustive search and the hierarchical search that adopts multi-resolution beamforming codebooks, accounting for time-domain training overhead.
Abstract: In millimeter wave cellular communication, fast and reliable beam alignment via beam training is crucial to harvest sufficient beamforming gain for the subsequent data transmission. In this paper, we establish fundamental limits in beam-alignment performance under both the exhaustive search and the hierarchical search that adopts multi-resolution beamforming codebooks, accounting for time-domain training overhead. Specifically, we derive lower and upper bounds on the probability of misalignment for an arbitrary level in the hierarchical search, based on a single-path channel model. Using the method of large deviations, we characterize the decay rate functions of both bounds and show that the bounds coincide as the training sequence length goes large. We go on to characterize the asymptotic misalignment probability of both the hierarchical and exhaustive search, and show that the latter asymptotically outperforms the former, subject to the same training overhead and codebook resolution. We show via numerical results that this relative performance behavior holds in the non-asymptotic regime. Moreover, the exhaustive search is shown to achieve significantly higher worst case spectrum efficiency than the hierarchical search, when the pre-beamforming signal-to-noise ratio (SNR) is relatively low. This paper hence implies that the exhaustive search is more effective for users situated further from base stations, as they tend to have low SNR.
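The exhaustive-search misalignment probability can be estimated with a simple Monte Carlo sketch for a single-path channel. The model below (one beamforming-gain peak plus complex Gaussian noise on each probed beam) is a deliberate simplification of the paper's setup, and the function name and parameters are ours.

```python
import math
import random

def misalignment_prob(n, snr_db, trials=2000, seed=0):
    # Sweep all n beams; the beam aligned with the true direction sees
    # the full array gain, the others see noise only. Estimate how
    # often noise makes a wrong beam win the sweep.
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(trials):
        k = rng.randrange(n)           # true channel direction
        best, arg = -1.0, -1
        for b in range(n):
            gain = math.sqrt(n * snr) if b == k else 0.0
            y = complex(gain + rng.gauss(0, 1), rng.gauss(0, 1))
            if abs(y) > best:
                best, arg = abs(y), b
        errors += (arg != k)
    return errors / trials
```

Consistent with the abstract, misalignment is rare at high pre-beamforming SNR and dominates at low SNR, where the exhaustive search's longer per-beam dwell time pays off.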

70 citations


Proceedings ArticleDOI
01 Mar 2017
TL;DR: In this paper, the authors proposed a low complexity iterative receiver based on the expectation propagation algorithm (EPA), which reduces the complexity order from exponential to linear and achieves nearly the same block error rate (BLER) performance as the conventional message passing algorithm (MPA) receiver with orders-of-magnitude lower complexity.
Abstract: The sparse code multiple access (SCMA) scheme is considered to be a promising non-orthogonal multiple access technology for the future fifth generation (5G) communications. Due to the sparse nature, the message passing algorithm (MPA) has been used at the receiver to achieve close to maximum likelihood (ML) detection performance with much lower complexity. However, the complexity order of MPA is still exponential in the size of the codebook and the degree of signal superposition on a given resource element. In this paper, we propose a novel low complexity iterative receiver based on the expectation propagation algorithm (EPA), which reduces the complexity order from exponential to linear. Simulation results demonstrate that the proposed EPA receiver achieves nearly the same block error rate (BLER) performance as the conventional message passing algorithm (MPA) receiver with orders-of-magnitude lower complexity.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this article, the authors studied the maximum codebook size for which the transmitter can guarantee reliability and LPD conditions are met over the MIMO AWGN channel, using relative entropy as their LPD metric.
Abstract: Fundamental limits of covert communication have been studied for different models of scalar channels. It was shown that, over n independent channel uses, O(√n) bits can be transmitted reliably over a public channel while achieving an arbitrarily low probability of detection (LPD) by other stations. This result is well known as the square-root law, and even to achieve this diminishing rate of covert communication, all existing studies utilized some form of secret shared between the transmitter and the receiver. In this paper, we establish the limits of LPD communication over the MIMO AWGN channel. In particular, using relative entropy as our LPD metric, we study the maximum codebook size for which the transmitter can guarantee that the reliability and LPD conditions are met. We first show that the optimal codebook-generating input distribution under the δ-PD constraint is the zero-mean Gaussian distribution. Then, assuming channel state information (CSI) on only the main channel at the transmitter, we derive the optimal input covariance matrix, thereby establishing scaling laws of the codebook size. We evaluate the codebook scaling rates in the limiting regimes for the number of channel uses (asymptotic block length) and the number of antennas (massive MIMO). We show that, in the asymptotic block-length regime, the square-root law still holds for the MIMO AWGN channel. Meanwhile, in the massive MIMO limit, the codebook size, while scaling linearly with √n, scales exponentially with the number of transmitting antennas. The practical implication of our result is that MIMO has the potential to provide a substantial increase in the file sizes that can be covertly communicated subject to a reasonably low delay.

Journal ArticleDOI
Peng Jianjun, Wei Chen, Bo Bai, Guo Xin, Chen Sun
TL;DR: This letter presents a joint constellation and mapping-matrix design for SCMA codebooks, which formulates the constellation optimization as a nonconvex quadratically constrained quadratic programming problem based on a set of well-constructed mapping matrices.
Abstract: Sparse code multiple access (SCMA) is being considered as a promising multiple access solution for 5G systems. A distinguishing feature of SCMA is that it combines the procedures of bit-to-constellation-symbol mapping and subsequent spreading using multidimensional codebooks differentiated by users. Such codebooks dominate the system implementation as a main source of not only performance gain but also design complexity. This letter presents a joint constellation and mapping-matrix design for SCMA codebooks, which formulates the constellation optimization as a nonconvex quadratically constrained quadratic programming problem based on a set of well-constructed mapping matrices. We solve the problem and outperform existing SCMA designs in terms of bit error rate (BER). To improve practicality, an approximate approach is further proposed that reduces the complexity significantly with a limited BER loss.

Journal ArticleDOI
TL;DR: This study investigates an improved codebook model for fine-grained medical image representation with the following three advantages: instead of SIFT, the local patch (structure) is exploited as the local descriptor, which can retain all detailed information and is more suitable for fine-grained medical image applications.
Abstract: Characterization and individual trait analysis of focal liver lesions (FLL) are challenging tasks in medical image processing and at the clinical site. The character analysis of an unconfirmed FLL case would be expected to benefit greatly from accumulated FLL cases with experts' analysis, which can be achieved by content-based medical image retrieval (CBMIR). CBMIR mainly includes discriminative feature extraction and similarity calculation procedures. The Bag-of-Visual-Words (BoVW) (codebook-based) model has been proven to be effective for different classification and retrieval tasks. This study investigates an improved codebook model for fine-grained medical image representation with the following three advantages: (1) instead of SIFT, we exploit the local patch (structure) as the local descriptor, which can retain all detailed information and is more suitable for fine-grained medical image applications; (2) in order to more accurately approximate any local descriptor in the coding procedure, the sparse coding method, instead of the K-means algorithm, is employed for codebook learning and coded vector calculation; (3) we evaluate retrieval performance on focal liver lesions (FLL) using multiphase computed tomography (CT) scans, in which the proposed codebook model is separately learned for each phase. The effectiveness of the proposed method is confirmed by our experiments on FLL retrieval.

Journal ArticleDOI
TL;DR: This paper separates fine-grained images by jointly learning the encoding parameters and codebooks through low-rank sparse coding (LRSC) with general and class-specific codebook generation, encoding the local features within a spatial region jointly by LRSC.
Abstract: This paper tries to separate fine-grained images by jointly learning the encoding parameters and codebooks through low-rank sparse coding (LRSC) with general and class-specific codebook generation. Instead of treating each local feature independently, we encode the local features within a spatial region jointly by LRSC. This ensures that spatially nearby local features with similar visual characters are encoded by correlated parameters. In this way, we can make the encoded parameters more consistent for fine-grained image representation. Besides, we also learn a general codebook and a number of class-specific codebooks in combination with the encoding scheme. Since images of fine-grained classes are visually similar, the difference is relatively small between the general codebook and each class-specific codebook. We impose sparsity constraints to model this relationship. Moreover, the incoherences with different codebooks and class-specific codebooks are jointly considered. We evaluate the proposed method on several public image data sets. The experimental results show that by learning general and class-specific codebooks with the joint encoding of local features, we are able to better model the differences among fine-grained classes than many other fine-grained image classification methods.

Journal ArticleDOI
TL;DR: It is shown that the dispersion term depends on the non-Gaussian noise only through its second and fourth moments, thus complementing the capacity result (Lapidoth, 1996), which depends only on the second moment.
Abstract: We study the second-order asymptotics of information transmission using random Gaussian codebooks and nearest neighbor decoding over a power-limited stationary memoryless additive non-Gaussian noise channel. We show that the dispersion term depends on the non-Gaussian noise only through its second and fourth moments, thus complementing the capacity result (Lapidoth, 1996), which depends only on the second moment. Furthermore, we characterize the second-order asymptotics of point-to-point codes over $K$-sender interference networks with non-Gaussian additive noise. Specifically, we assume that each user's codebook is Gaussian and that NN decoding is employed, i.e., that interference from the $K-1$ unintended users (Gaussian interfering signals) is treated as noise at each decoder. We show that while the first-order term in the asymptotic expansion of the maximum number of messages depends on the power of the interfering codewords only through their sum, this does not hold for the second-order term.

Proceedings ArticleDOI
01 Mar 2017
TL;DR: An optimized codebook generation method for a four-ring star-QAM signaling constellation is proposed, and it is demonstrated that by selecting the optimum design parameters, the bit error rate (BER) can be improved.
Abstract: Sparse code multiple access (SCMA) is a non-orthogonal codebook (CB) based multiple access scheme, proposed to cope with the heterogeneous and challenging performance requirements for mission-critical communication and massive machine-type communication (MTC) in the fifth generation (5G) wireless system. In this paper, the performance of SCMA is studied and analyzed, considering the impact of the energy diversity and minimum Euclidean distance of the mother constellation, the overloading of the system, and the layer-specific operators for codebook generation. An optimized codebook generation method for a four-ring star-QAM signaling constellation is proposed. It is demonstrated that by selecting the optimum design parameters, the bit error rate (BER) can be improved. Moreover, an overloading technique is also proposed to enable higher connectivity at lower decoding complexity.

Journal ArticleDOI
TL;DR: This paper develops decoding and encoding mechanisms by engaging the theory of possibility and fuzzy relational calculus and shows that the decoded information granule is either a granular interval or interval-valued fuzzy set.
Abstract: Information granules are generic building blocks supporting the processing realized in granular computing and facilitating communication with the environment. In this paper, we are concerned with a fundamental problem of encoding–decoding of information granules. The essence of the problem is outlined as follows: given a finite collection of granular data X1, X2, …, XN (sets, fuzzy sets, etc.), construct an optimal codebook composed of information granules A1, A2, …, Ac, where typically c ≪ N, so that any Xk represented in terms of the Ai's and then decoded (reconstructed) with the help of this codebook leads to the lowest decoding error. A fundamental result is established, which states that in the proposed encoders and decoders, when encoding–decoding error is present, the information granule coming as a result of decoding is of a higher type than the original information granules (say, if Xk is an information granule of type-1, then its decoded version becomes an information granule of type-2). It would be beneficial to note that as the encoding–decoding process is not lossless (in general, with an exception of a few special cases), the lossy nature of the method is emphasized by the emergence of information granules of higher type (in comparison with the original data being processed). For instance, when realizing encoding–decoding of numeric data (viz., information granules of type-0), losses occur and they are quantified in terms of intervals, fuzzy sets, probabilities, rough sets, etc., where, in fact, the result becomes an information granule of type-1. In light of the nature of the constructed result when Xk is an interval or a fuzzy set, an optimized performance index engages a distance between the bounds of the interval-valued membership function.
We develop decoding and encoding mechanisms by engaging the theory of possibility and fuzzy relational calculus and show that the decoded information granule is either a granular interval or interval-valued fuzzy set. The optimization mechanism is realized with the aid of the particle swarm optimization (PSO). A series of experiments are reported with intent to illustrate the details of the encoding–decoding mechanisms and show that the PSO algorithm can efficiently optimize the granular codebook.
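The type-raising effect of lossy encoding–decoding can be illustrated for numeric (type-0) data: a number encodes to its nearest codeword, but decoding can only return that codeword's Voronoi cell — an interval, i.e., a type-1 granule. A minimal sketch under our own naming, assuming a sorted scalar codebook:

```python
def encode(x, codebook):
    # Index of the nearest codeword (prototype) to the number x.
    return min(range(len(codebook)), key=lambda i: abs(x - codebook[i]))

def decode(i, codebook):
    # Lossy decoding: all we know is that x fell in codeword i's
    # Voronoi cell, so return that interval -- a type-1 granule
    # recovered from a type-0 input.
    lo = -float('inf') if i == 0 else (codebook[i - 1] + codebook[i]) / 2
    hi = float('inf') if i == len(codebook) - 1 else (codebook[i] + codebook[i + 1]) / 2
    return (lo, hi)
```

Decoding a number yields an interval; by the same mechanism, decoding an interval or fuzzy set yields a granular interval or interval-valued fuzzy set, as the paper establishes.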

Journal ArticleDOI
TL;DR: It is shown that Hadamard transform can be used in RF beamsteering/beamcombining to achieve better performance in terms of average achievable spectral efficiency and low hardware cost using 1- or 2-b resolution APSs.
Abstract: This paper proposes a hybrid structure for multi-stream large-scale multi-input multi-output (MIMO) beamforming systems, in the single-user scenario, using a Hadamard radio frequency (RF) codebook with low-bit-resolution phase shifters. We show that the Hadamard transform can be used in RF beamsteering/beamcombining to achieve better performance in terms of average achievable spectral efficiency and low hardware cost using 1- or 2-b resolution analog phase shifters (APSs). In contrast, the state-of-the-art RF codebook designs available in the literature require more than 7-b resolution to achieve the same performance as the proposed scheme, for large antenna arrays with up to 256 elements. The performance gains of the proposed RF codebook design are thoroughly investigated using MATLAB simulations for a typical mmWave MIMO system, and the simulation results are closely verified by the analytical expressions.
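A Hadamard codebook is attractive here because its entries are all ±1, so each column is realizable with 1-b phase shifters. A Sylvester-construction sketch (our naming; the paper's codebook details may differ):

```python
def hadamard(n):
    # Sylvester construction: H(2n) = [[H, H], [H, -H]].
    # n must be a power of two.
    assert n > 0 and n & (n - 1) == 0
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-x for x in row] for x in [None] for row in h])
    return h
```

The rows (and columns) are mutually orthogonal, so the codebook spans the beamforming space while keeping every phase-shifter setting in {0, π}.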

Posted Content
TL;DR: Two waveform strategies relying on limited feedback for multi-antenna multi-sine WPT over frequency-selective channels are proposed, and are shown to outperform a set of baselines, achieving higher harvested energy.
Abstract: Waveform design is a key technique to jointly exploit a beamforming gain, the channel frequency-selectivity and the rectifier nonlinearity, so as to enhance the end-to-end power transfer efficiency of Wireless Power Transfer (WPT). Those waveforms have been designed assuming perfect channel state information at the transmitter. This paper proposes two waveform strategies relying on limited feedback for multi-antenna multi-sine WPT over frequency-selective channels. In the waveform selection strategy, the Energy Transmitter (ET) transmits over multiple timeslots with every time a different waveform precoder within a codebook, and the Energy Receiver (ER) reports the index of the precoder in the codebook that leads to the largest harvested energy. In the waveform refinement strategy, the ET sequentially transmits two waveforms in each stage, and the ER reports one feedback bit indicating an increase/decrease in the harvested energy during this stage. Based on multiple one-bit feedback, the ET successively refines waveform precoders in a tree-structured codebook over multiple stages. By employing the framework of the generalized Lloyd's algorithm, novel algorithms are proposed for both strategies to optimize the codebooks in both space and frequency domains. The proposed limited feedback-based waveform strategies are shown to outperform a set of baselines, achieving higher harvested energy.
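The generalized Lloyd's algorithm used to optimize the codebooks alternates nearest-codeword assignment with centroid updates. A scalar k-means-style sketch, illustrative only — the paper applies the framework to waveform precoders jointly in space and frequency:

```python
import random

def lloyd_codebook(samples, k, iters=50, seed=0):
    # Generalized Lloyd (k-means) on scalar samples: alternate
    # nearest-codeword assignment and centroid (mean) update.
    rng = random.Random(seed)
    code = rng.sample(samples, k)
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for s in samples:
            cells[min(range(k), key=lambda i: abs(s - code[i]))].append(s)
        # Keep an empty cell's codeword unchanged.
        code = [sum(c) / len(c) if c else code[i]
                for i, c in enumerate(cells)]
    return sorted(code)
```

In the limited-feedback setting, the "samples" are training channel realizations and the "distance" is the negative harvested energy, but the assign/update alternation is the same.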

Journal ArticleDOI
TL;DR: This work proposes a low-complexity, near-optimal algorithm developed from a cross-entropy optimization framework; results reveal that the algorithm achieves near-optimal performance at a much lower complexity than the optimal ESA.
Abstract: Hybrid beamforming architecture, consisting of a low-dimensional baseband digital beamforming component and a high-dimensional analog beamforming component, has received considerable attention in the context of millimeter-wave massive multiple-input multiple-output systems. This is because it can achieve an effective compromise between hardware complexity and system performance. To avoid accurate estimation of the channel, a codebook-based technique is widely used in analog beamforming components, wherein a transmitter and receiver jointly examine an analog precoder and analog combiner pair according to predesigned codebooks, without using a priori channel information. However, identifying an optimal analog precoder and analog combiner pair using the exhaustive search algorithm (ESA) incurs exponential complexity, causing the number of radio frequency chains to proliferate and hindering the resolution of the phase shifters, which cannot be solved even for highly reasonable system parameters. To reduce the search complexity while maximizing the achievable rate, we propose a low-complexity, near-optimal algorithm developed from a cross-entropy optimization framework. Our simulation results reveal that our algorithm achieves near-optimal performance at a much lower complexity than does the optimal ESA.
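The cross-entropy optimization framework can be sketched generically: sample candidates from a parameterized distribution, keep an elite fraction, and refit the distribution to the elite. The binary-vector version below is our simplification, not the paper's beam-pair selection algorithm:

```python
import random

def cross_entropy_opt(score, n, iters=30, pop=200, elite=20, seed=1):
    # Cross-entropy method over binary vectors: sample from a product
    # Bernoulli distribution, rank by score, refit the per-bit
    # probabilities to the elite samples.
    rng = random.Random(seed)
    p = [0.5] * n
    best, best_s = None, float('-inf')
    for _ in range(iters):
        cand = [[1 if rng.random() < p[j] else 0 for j in range(n)]
                for _ in range(pop)]
        cand.sort(key=score, reverse=True)
        if score(cand[0]) > best_s:
            best, best_s = cand[0], score(cand[0])
        elite_set = cand[:elite]
        p = [sum(c[j] for c in elite_set) / elite for j in range(n)]
    return best
```

The per-iteration cost is population size times the cost of one rate evaluation, which is how the method avoids the exponential sweep of the exhaustive search.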

Journal ArticleDOI
TL;DR: Benefiting from the proposed codebook design, the quality of initial information of MPA receiver on each resource node and the convergence reliability of the first detected user in each decision process will be improved.
Abstract: For sparse code multiple access (SCMA) with traditional codebooks, the initial information of the message passing algorithm (MPA) receiver is easily susceptible to noise and multipath fading, and the convergence reliability of the first detected user in each decision process is unsatisfactory. Driven by these problems, an optimized codebook design for SCMA is presented in this paper. In the proposed SCMA codebook design, we first use turbo trellis coded modulation technology to design a basic complex multi-dimensional constellation, which can increase the minimum Euclidean distance. Then, phase rotation and coordinate interleaving are applied to the constellation to increase the diversity and the coordinate product distance between constellation points. Based on these, we propose a novel criterion to select the most appropriate permutation set: it makes the sum of distances between the dimensions of interfering codewords multiplexed on each resource node as large as possible, and maximizes the diversity over the set of these sums taken across all resource nodes. Benefiting from the proposed codebook design, the quality of the initial information of the MPA receiver on each resource node and the convergence reliability of the first detected user in each decision process are improved. Simulation results show that the bit error rate performance of SCMA with the proposed codebooks outperforms SCMA with traditional codebooks, low-density signature, and orthogonal frequency division multiple access under the same load.
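The role of phase rotation can be seen on a single complex dimension: unrotated QPSK has codeword pairs that differ in only one of the (I, Q) coordinates, so coordinate interleaving over fading cannot extract diversity from them, while a rotation makes every pair differ in both coordinates. A small check (the rotation angle here is an arbitrary example, not the paper's optimized value):

```python
import numpy as np
from itertools import combinations

def diversity_and_product(points, tol=1e-9):
    """Minimum number of differing (I, Q) coordinates over all codeword
    pairs, and the minimum product distance among full-diversity pairs."""
    min_div, min_prod = 2, np.inf
    for a, b in combinations(points, 2):
        diff = a - b
        nz = [c for c in (abs(diff.real), abs(diff.imag)) if c > tol]
        min_div = min(min_div, len(nz))
        if len(nz) == 2:
            min_prod = min(min_prod, nz[0] * nz[1])
    return min_div, min_prod

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
theta = np.arctan(0.5)            # illustrative angle, not the paper's optimized rotation
rotated = qpsk * np.exp(1j * theta)
```

Unrotated QPSK contains pairs that differ in only one coordinate (diversity order 1 under coordinate interleaving), whereas after rotation every pair differs in both coordinates, so every pair attains full diversity and has a well-defined product distance to optimize.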

Proceedings ArticleDOI
01 Sep 2017
TL;DR: In this article, the basic concept of SCMA is introduced, including codebook mapping, the multiple access procedure, and advanced receivers, and link-level simulations are applied to verify the design of SCMA.
Abstract: Non-orthogonal multiple access has been extensively discussed in New Radio (NR), the study item working on the 5G air interface in 3GPP. Sparse code multiple access (SCMA) is one of the proposed MA schemes. In this paper, the basic concept of SCMA is introduced, including codebook mapping, the multiple access procedure, and advanced receivers. Then, link-level simulations are applied to verify the design of SCMA, which show that SCMA has many excellent properties: shaping and diversity gain from sparse codebooks, resilience to inter-user interference, and robustness to codebook collision; thus it is a promising candidate MA scheme for 5G.

Posted Content
TL;DR: In this article, the decoding graph of polar codes is partitioned into smaller sub-blocks and the decoder is then connected via the remaining conventional belief propagation decoding stage(s).
Abstract: The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a high level of parallelization, while showing competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.

Patent
23 Mar 2017
TL;DR: In this article, the authors present an information transmission method, a terminal and a base station, where the terminal determines a fed-back HARQ-ACK (hybrid automatic repeat request acknowledgement) codebook and the resources used for bearing the HARQ-ACK according to at least one parameter selected from the number of pieces of DCI (downlink control information), PDCCHs, or EPDCCHs received by the terminal, or the detection result of the received PDSCHs.
Abstract: The present invention discloses an information transmission method, a terminal and a base station. The method includes the following steps: the terminal receives PDCCHs (physical downlink control channels) or EPDCCHs (enhanced PDCCHs) and PDSCHs (physical downlink shared channels); the terminal determines a fed-back HARQ-ACK (hybrid automatic repeat request acknowledgement) codebook and the resources used for bearing the HARQ-ACK according to at least one parameter selected from the number of pieces of DCI (downlink control information), PDCCHs, or EPDCCHs received by the terminal, or the number of PDSCHs received by the terminal; the terminal determines the state of the HARQ-ACK according to the detection result of the received PDSCHs, PDCCHs, or EPDCCHs, wherein the state of the HARQ-ACK refers to the state of a bit sequence in the HARQ-ACK codebook; and the terminal sends the HARQ-ACK on the resources for bearing the HARQ-ACK according to the determined HARQ-ACK codebook and state of the HARQ-ACK.

Journal ArticleDOI
TL;DR: An analytical framework for the initial access in a millimeter-wave communication system is developed and an effective strategy for transmitting the reference signals (RSs) used for BS discovery is proposed.
Abstract: In this paper, we develop an analytical framework for the initial access (also known as base station (BS) discovery) in a millimeter-wave communication system and propose an effective strategy for transmitting the reference signals (RSs) used for BS discovery. Specifically, by formulating the problem of BS discovery at user equipments (UEs) as hypothesis tests, we derive a detector based on the generalized likelihood ratio test and characterize the statistical behavior of the detector. The theoretical results obtained allow analysis of the impact of key system parameters on the performance of BS discovery, and show that RS transmission with narrow beams may not be helpful in improving the overall BS discovery performance due to the cost of spatial scanning. Using the method of large deviations, we identify the desirable beam pattern that minimizes the average miss-discovery probability of UEs within a targeted detectable region. We then propose to transmit the RS with sequential scanning, using a pre-designed codebook with narrow and/or wide beams to approximate the desirable patterns. The proposed design allows flexible choices of the codebook sizes and the associated beam widths to better approximate the desirable patterns. Numerical results demonstrate the effectiveness of the proposed method.
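The finding that narrow beams "may not be helpful" has a simple back-of-the-envelope reading. Under idealized assumptions (beamforming gain proportional to the codebook size, the training budget split evenly across beams, no other overhead), the detection SNR accumulated toward any one direction is independent of the codebook size:

```python
# Idealized accounting for a T-symbol training budget: an n-beam codebook
# gives each direction roughly n-fold beamforming gain but only T/n dwell
# symbols, so the per-direction accumulated detection SNR is unchanged.
T, snr = 128, 0.1            # arbitrary illustrative budget and per-symbol SNR
accumulated = {n: n * (T // n) * snr for n in (1, 2, 4, 8, 16)}
```

The per-direction energy stays constant while the number of scans grows, which is why the paper derives the desirable beam pattern from the miss-discovery analysis rather than simply narrowing the beams.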

Journal ArticleDOI
TL;DR: A one-shot version of IF source coding is proposed that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals.
Abstract: Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This paper applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals.
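The encoder side is simple enough to state in a few lines. The sketch below is a one-dimensional special case with a single source and decoder side information (the paper uses multi-dimensional nested lattices and recovers integer linear combinations across several encoders): quantize with the fine lattice, reduce modulo the coarse lattice, and let the decoder pick the representative of the received bin closest to its side information:

```python
import numpy as np

def nested_lattice_encode(x, delta, q):
    """Quantize with the fine lattice delta*Z, then reduce modulo the
    coarse lattice (q*delta)*Z; the modulo reduction acts as binning,
    so only log2(q) bits per sample are conveyed."""
    return np.round(x / delta).astype(int) % q

def nested_lattice_decode(bins, side_info, delta, q):
    """Pick, inside the received bin, the fine-lattice point closest to the
    side information; correct whenever the quantized signal lies within
    q*delta/2 of the side information."""
    guess = np.round(side_info / delta).astype(int)
    offset = (bins - guess) % q
    offset = np.where(offset > q // 2, offset - q, offset)
    return (guess + offset) * delta

x = np.array([0.33, -1.27, 2.04])                 # source samples
side = x + 0.05                                   # correlated side information
bins = nested_lattice_encode(x, delta=0.1, q=16)
rec = nested_lattice_decode(bins, side, delta=0.1, q=16)
```

Here the coarse spacing q*delta = 1.6 far exceeds the source/side-information mismatch, so the decoder recovers exactly the fine-lattice quantization of x from 4 bits per sample.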

BookDOI
14 Aug 2017
TL;DR: The authors describe the content and structure of the American Working Conditions Survey data (AWCS) which was fielded on the RAND American Life Panel (ALP) in 2015, and present a codebook describing the content of the AWCS data.
Abstract: This codebook describes the content and structure of the American Working Conditions Survey data (AWCS) which was fielded on the RAND American Life Panel (ALP) in 2015.

Journal ArticleDOI
TL;DR: In this paper, the authors quantitatively analyzed the performance of the channel-statistics-based codebook and showed that the required number of feedback bits to ensure a constant rate gap only scales linearly with the rank of the correlation matrix.
Abstract: The channel feedback overhead for massive multiple-input multiple-output systems with a large number of base station (BS) antennas is very high since the number of feedback bits of traditional codebooks scales linearly with the number of BS antennas. To reduce the feedback overhead, an effective codebook based on channel statistics has been designed, where the required number of feedback bits only scales linearly with the rank of the channel correlation matrix. However, this attractive conclusion was only proved under a particular channel assumption in the literature. To provide a rigorous theoretical proof under a general channel assumption, in this paper, we quantitatively analyze the performance of the channel-statistics-based codebook. Specifically, we first introduce the rate gap between the ideal case of perfect channel state information at the transmitter and the practical case of limited channel feedback, where we find that the rate gap depends on the quantization error of the codebook. Then, we derive an upper bound of the quantization error, based on which we prove that the required number of feedback bits to ensure a constant rate gap only scales linearly with the rank of the channel correlation matrix. Finally, numerical results are provided to verify this conclusion.
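The intuition behind the rank-dependent scaling can be reproduced numerically: when the channel lives in a low-rank subspace of the correlation matrix, codewords drawn inside that subspace quantize the channel direction far better than an isotropic codebook with the same number of feedback bits. A Monte Carlo sketch (the dimensions, rank, and bit budget are arbitrary choices for illustration, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(3)
m, r, B, trials = 8, 2, 6, 200               # antennas, correlation rank, bits, trials
U = np.linalg.qr(rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r)))[0]

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def quant_error(h, C):
    """Quantization error 1 - max_i |c_i^H h|^2 over the codebook rows;
    this is the quantity that drives the rate gap."""
    return 1.0 - np.max(np.abs(C.conj() @ h)) ** 2

# isotropic random codebook vs. one drawn inside the rank-r subspace of U
C_iso = unit(rng.standard_normal((2 ** B, m)) + 1j * rng.standard_normal((2 ** B, m)))
C_sub = unit((rng.standard_normal((2 ** B, r)) + 1j * rng.standard_normal((2 ** B, r))) @ U.T)

err_iso = err_sub = 0.0
for _ in range(trials):
    h = unit(U @ (rng.standard_normal(r) + 1j * rng.standard_normal(r)))  # rank-r channel
    err_iso += quant_error(h, C_iso) / trials
    err_sub += quant_error(h, C_sub) / trials
```

The subspace-aware codebook spends all of its 2^B codewords on an r-dimensional direction space, which is why the required number of bits for a fixed error scales with r rather than with m.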

Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm outperforms multi-sectional search, achieving almost twice the weighted sum-rate of the latter when each block consists of 35 symbols and the channel SNR is 20 dB.
Abstract: The analog beamforming and analog combining in millimeter-wave (mm-Wave) massive MIMO communications are investigated. Based on a hierarchical codebook, the weighted sum-rate maximization is studied by jointly considering the duration of channel training and the receiving signal-to-noise ratio (SNR). An algorithm employing exhaustive search at the starting level of the codebook and multi-sectional search at the other levels is proposed. At each iteration, the channel gain of the dominant path is estimated and the level of the codebook achieving the weighted sum-rate maximization is predicted by the receiver. The predicted level is then fed back to the transmitter. Once the predicted level is reached by the algorithm, the transmitter begins data transmission with the obtained analog beamformer and the receiver uses the obtained analog combiner for data reception. Simulation results show that the proposed algorithm outperforms multi-sectional search, achieving almost twice the weighted sum-rate of the latter when each block consists of 35 symbols and the channel SNR is 20 dB.
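Stripped of the channel-gain estimation and level prediction, the hierarchical search itself is a binary descent through the codebook tree: probe the two child sectors of the current beam and recurse into the stronger one. A noiseless sketch with idealized sector beams (unit gain inside the sector, zero outside — an assumption for illustration, not the paper's beam model):

```python
def hierarchical_search(measure, levels):
    """Binary descent through a hierarchical beam codebook: at each level,
    probe the two child sectors of the current beam and keep the stronger."""
    lo, hi = 0.0, 1.0                     # current beam's (normalized) angular sector
    for _ in range(levels):
        mid = (lo + hi) / 2
        if measure(lo, mid) >= measure(mid, hi):
            hi = mid                      # left child wins
        else:
            lo = mid                      # right child wins
    return lo, hi

true_angle = 0.37                         # unknown dominant-path direction
sector_gain = lambda lo, hi: 1.0 if lo <= true_angle < hi else 0.0
beam = hierarchical_search(sector_gain, levels=5)
```

With L levels this costs 2L measurements instead of the 2^L of an exhaustive bottom-level sweep, which is the training-time saving the weighted sum-rate formulation trades against beamforming gain.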