
Showing papers on "Codebook published in 2014"


Proceedings ArticleDOI
04 Dec 2014
TL;DR: A systematic approach is proposed to design SCMA codebooks, based mainly on the design principles of lattice constellations; simulation results show the performance gain of SCMA over LDS and OFDMA.
Abstract: Multicarrier CDMA is a multiple access scheme in which modulated QAM symbols are spread over OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with low density spreading sequences allowing us to take advantage of a near optimal message passing algorithm (MPA) receiver with practically feasible complexity. Sparse code multiple access (SCMA) is a multi-dimensional codebook-based non-orthogonal spreading technique. In SCMA, the procedure of bit to QAM symbol mapping and spreading are combined together and incoming bits are directly mapped to multi-dimensional codewords of SCMA codebook sets. Each layer has its dedicated codebook. Shaping gain of a multi-dimensional constellation is one of the main sources of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. Meanwhile, like LDS, SCMA enjoys the low complexity reception techniques due to the sparsity of SCMA codewords. In this paper a systematic approach is proposed to design SCMA codebooks mainly based on the design principles of lattice constellations. Simulation results are presented to show the performance gain of SCMA compared to LDS and OFDMA.
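A rough illustration of the direct bit-to-codeword mapping described above: the sketch below maps 2 bits per layer to one of four sparse 4-tone codewords. The codebook values and sparsity pattern are invented placeholders, not the lattice-based designs the paper proposes.

```python
import numpy as np

# Toy SCMA-style layer codebook: M = 4 codewords, each a K = 4-tone complex
# vector with only 2 nonzero tones (the layer's sparsity pattern).
K, M = 4, 4
nonzero_tones = [0, 2]                       # placeholder sparsity pattern
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
layer_codebook = np.zeros((M, K), dtype=complex)
for m in range(M):
    # Place a 2-dimensional constellation point on the layer's nonzero tones.
    layer_codebook[m, nonzero_tones] = [qpsk[m], qpsk[(m + 1) % M]]

def encode(bits):
    """Map log2(M) = 2 incoming bits directly to a multi-dimensional codeword."""
    index = 2 * bits[0] + bits[1]
    return layer_codebook[index]

print(encode([1, 0]))   # one sparse codeword, spread over tones 0 and 2
```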

611 citations


Journal ArticleDOI
TL;DR: This exploratory study demonstrates a rigorous approach to reaching saturation through the two-stage establishment of a codebook for thematic analysis: inductive development of the coding system followed by its refinement.
Abstract: Reaching a saturation point in thematic analysis is important to validity in qualitative studies, yet the process of achieving saturation is often left ambiguous. The lack of information about the process creates uncertainty in the timing of recruitment closure. This exploratory study was conducted to demonstrate a rigorous approach to reaching saturation through two-stage establishment of a codebook used for thematic analysis. The codebook development involved inductive analysis of six interviews, followed by refinement of the coding system by applying it to an additional 33 interviews. These findings are discussed in relation to plausible patterns in the code occurrence rate and suggested sample sizes for thematic analysis.

382 citations


Journal ArticleDOI
TL;DR: This paper optimizes PQ by minimizing quantization distortions with respect to the space decomposition and the quantization codebooks, and evaluates the optimized product quantizers in three applications: compact encoding for exhaustive ranking, inverted multi-indexing for non-exhaustive search, and compact image representations for image retrieval.
Abstract: Product quantization (PQ) is an effective vector quantization method. A product quantizer can generate an exponentially large codebook at very low memory/time cost. The essence of PQ is to decompose the high-dimensional vector space into the Cartesian product of subspaces and then quantize these subspaces separately. The optimal space decomposition is important for the PQ performance, but still remains an unaddressed issue. In this paper, we optimize PQ by minimizing quantization distortions w.r.t the space decomposition and the quantization codebooks. We present two novel solutions to this challenging optimization problem. The first solution iteratively solves two simpler sub-problems. The second solution is based on a Gaussian assumption and provides theoretical analysis of the optimality. We evaluate our optimized product quantizers in three applications: (i) compact encoding for exhaustive ranking [1], (ii) building inverted multi-indexing for non-exhaustive search [2], and (iii) compacting image representations for image retrieval [3]. In all applications our optimized product quantizers outperform existing solutions.
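For context, here is a minimal sketch of the plain product quantizer being optimized, assuming numpy and scikit-learn; the paper's contribution, optimizing the space decomposition itself (e.g., via a learned rotation), is deliberately omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Plain product quantizer: split D = 32 dimensions into M = 4 subspaces and
# run k-means independently in each; each vector is stored as M centroid IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))              # toy training vectors
M, Ks = 4, 256                               # 4 subspaces, 256 centroids each
D_sub = X.shape[1] // M

codebooks, codes = [], []
for m in range(M):
    sub = X[:, m * D_sub:(m + 1) * D_sub]
    km = KMeans(n_clusters=Ks, n_init=1, random_state=0).fit(sub)
    codebooks.append(km.cluster_centers_)    # (Ks, D_sub) sub-codebook
    codes.append(km.labels_)                 # one centroid ID per vector

codes = np.stack(codes, axis=1)              # (1000, 4): 4 IDs, i.e. 4 bytes/vector
print(codes.shape)
```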

314 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: In this paper, vector encoding and codebook learning algorithms are proposed to minimize the coding error within the compression scheme; these lead to lower coding approximation errors, higher accuracy of approximate nearest neighbor search on datasets of visual descriptors, and lower image classification error whenever the classifiers are learned on or applied to compressed vectors.
Abstract: We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks. We show that the proposed scheme permits efficient distance and scalar product computations between compressed and uncompressed vectors. We further suggest vector encoding and codebook learning algorithms that can minimize the coding error within the proposed scheme. In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization. Compared to product quantization and its optimized versions, the proposed compression approach leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error, whenever the classifiers are learned on or applied to compressed vectors.
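The claimed efficiency of distance computation, between a compressed vector (a sum of M codewords) and an uncompressed query, reduces to table lookups of scalar products; the sketch below verifies the identity with random placeholder codebooks rather than learned ones.

```python
import numpy as np

rng = np.random.default_rng(1)
M, Ks, D = 4, 256, 32
codebooks = rng.normal(size=(M, Ks, D))      # placeholder codebooks
code = rng.integers(0, Ks, size=M)           # one compressed vector: M indices
x_hat = codebooks[np.arange(M), code].sum(axis=0)   # its reconstruction

q = rng.normal(size=D)                       # uncompressed query

# Precompute M lookup tables of scalar products <q, codeword>.
tables = codebooks @ q                       # shape (M, Ks)

# ||q - x_hat||^2 = ||q||^2 - 2 * sum_m <q, c_m> + ||x_hat||^2, where
# ||x_hat||^2 would be stored alongside the code in practice.
dist = q @ q - 2 * tables[np.arange(M), code].sum() + x_hat @ x_hat
assert np.isclose(dist, np.sum((q - x_hat) ** 2))
```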

232 citations


Proceedings ArticleDOI
10 Jun 2014
TL;DR: It is demonstrated that the 3D correlation matrix can be well approximated by a Kronecker product of azimuth and elevation correlations, providing theoretical support for the use of a product codebook for reduced-complexity feedback from the receiver to the transmitter.
Abstract: A 2D antenna array introduces a new level of control and additional degrees of freedom in multiple-input-multiple-output (MIMO) systems, particularly for the so-called “massive MIMO” systems. To accurately assess the performance gains of these large arrays, existing azimuth-only channel models have been extended to handle 3D channels by modeling both the elevation and azimuth dimensions. In this paper, we study the channel correlation matrix of a generic ray-based 3D channel model, and our analysis and simulation results demonstrate that the 3D correlation matrix can be well approximated by a Kronecker product of azimuth and elevation correlations. This finding provides theoretical support for the use of a product codebook for reduced-complexity feedback from the receiver to the transmitter. We also present the design of a product codebook based on Grassmannian line packing.
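A minimal numerical sketch of the Kronecker structure, using simple exponential-correlation matrices as stand-ins for the ray-based model's azimuth and elevation correlations:

```python
import numpy as np

def exp_corr(n, rho):
    """Toy exponential correlation matrix (a stand-in for the ray-based model)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R_az = exp_corr(8, 0.9)      # azimuth correlation across 8 columns
R_el = exp_corr(4, 0.7)      # elevation correlation across 4 rows

# Kronecker approximation of the full correlation of the 32-element 2D array.
R_3d = np.kron(R_az, R_el)
print(R_3d.shape)            # (32, 32)
```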

154 citations


Patent
06 Jan 2014
TL;DR: In this paper, a codebook-based feedback mechanism is provided where each codeword indicates a particular profile to be used to provide a target performance measure (e.g., bit error rate or spectral efficiency) for the transmission channel.
Abstract: In a closed-loop wireless communication system, a codebook-based feedback mechanism is provided where each codeword indicates a particular profile to be used to provide a target performance measure (e.g., bit error rate or spectral efficiency) for the transmission channel. This may be accomplished by using a multi-stage quantization process to construct the codewords as a plurality of transmission parameters specifying a MIMO transmission scheme, precoding vector or matrix, power allocation in space/time/frequency, modulation and channel code.

113 citations


Journal ArticleDOI
TL;DR: A novel approach to defining document image structural similarity for classification and retrieval is proposed: each document is encoded by recursively partitioning the image and computing histograms of codewords in each partition, thereby modeling the spatial relationships between them.

93 citations


Journal ArticleDOI
TL;DR: A novel framework for human action recognition based on the Bag of Words (BoWs) action representation is proposed that unifies discriminative codebook generation and discriminant subspace learning.

87 citations


Posted Content
TL;DR: This paper proposes a novel approach, named Optimized Cartesian K-Means (OCKM), to better encode the data points for more accurate approximate nearest neighbor search, providing more flexibility and lower distortion errors than traditional methods.
Abstract: Product quantization-based approaches are effective to encode high-dimensional data points for approximate nearest neighbor search. The space is decomposed into a Cartesian product of low-dimensional subspaces, each of which generates a sub codebook. Data points are encoded as compact binary codes using these sub codebooks, and the distance between two data points can be approximated efficiently from their codes by the precomputed lookup tables. Traditionally, to encode a subvector of a data point in a subspace, only one sub codeword in the corresponding sub codebook is selected, which may impose strict restrictions on the search accuracy. In this paper, we propose a novel approach, named Optimized Cartesian $K$-Means (OCKM), to better encode the data points for more accurate approximate nearest neighbor search. In OCKM, multiple sub codewords are used to encode the subvector of a data point in a subspace. Each sub codeword stems from different sub codebooks in each subspace, which are optimally generated with regards to the minimization of the distortion errors. The high-dimensional data point is then encoded as the concatenation of the indices of multiple sub codewords from all the subspaces. This can provide more flexibility and lower distortion errors than traditional methods. Experimental results on the standard real-life datasets demonstrate the superiority over state-of-the-art approaches for approximate nearest neighbor search.

84 citations


Journal ArticleDOI
TL;DR: This work proposes a codebook-free algorithm for large scale mobile image search that achieves fast and accurate feature matching without a huge visual codebook, and demonstrates competitive retrieval accuracy and scalability against four recent retrieval methods in the literature.
Abstract: State-of-the-art image retrieval algorithms using local invariant features mostly rely on a large visual codebook to accelerate feature quantization and matching. This codebook typically contains millions of visual words, which not only demands considerable resources to train offline but also consumes a large amount of memory at the online retrieval stage. This is hardly affordable in resource-limited scenarios such as mobile image search applications. To address this issue, we propose a codebook-free algorithm for large scale mobile image search. In our method, we first employ a novel scalable cascaded hashing scheme to ensure the recall rate of local feature matching. Afterwards, we enhance the matching precision by an efficient verification with the binary signatures of these local features. Consequently, our method achieves fast and accurate feature matching free of a huge visual codebook. Moreover, the quantization and binarizing functions in the proposed scheme are independent of small collections of training images and generalize well for diverse image datasets. Evaluated on two public datasets with a million distractor images, the proposed algorithm demonstrates competitive retrieval accuracy and scalability against four recent retrieval methods in the literature.

75 citations


Journal ArticleDOI
TL;DR: This paper studies an alternative to current local descriptors and the BoWs model by extracting an ultrashort binary descriptor (USB) and a compact auxiliary spatial feature from each keypoint detected in images, and demonstrates the competitive accuracy, memory consumption, and significantly better efficiency of this approach.
Abstract: Currently, many local descriptors have been proposed to tackle a basic issue in computer vision: duplicate visual content matching. These descriptors either are represented as high-dimensional vectors, relatively expensive to extract and compare, or are binary codes limited in robustness. The Bag-of-visual-words (BoWs) model compresses local features into a compact representation that allows for fast matching and scalable indexing. However, the codebook training, high-dimensional feature extraction, and quantization significantly degrade the flexibility and efficiency of the BoWs model. In this paper, we study an alternative to current local descriptors and the BoWs model by extracting an ultrashort binary descriptor (USB) and a compact auxiliary spatial feature from each keypoint detected in images. A typical USB is a 24-bit binary descriptor, hence it directly quantizes the visual cues of image keypoints to about 16 million unique IDs. USB allows fast image matching and indexing and avoids the expensive codebook training and feature quantization of the BoWs model. The spatial feature complementarily captures the spatial configuration in the neighborhood of each keypoint, and hence is used to filter mismatched USBs in a cascade verification. In the image matching task, USB shows promising accuracy and nearly an order of magnitude faster speed than SIFT. We also test USB in retrieval tasks on UKbench, Oxford5K, and 1.2 million distractor images. Comparisons with recent retrieval methods demonstrate the competitive accuracy, memory consumption, and significantly better efficiency of our approach.
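A toy sketch of the matching USB enables, assuming descriptors are stored as 24-bit integer IDs and compared by Hamming distance; the database here is random rather than extracted from images.

```python
import numpy as np

rng = np.random.default_rng(2)
db = rng.integers(0, 1 << 24, size=1000, dtype=np.uint32)   # database of USBs
query = np.uint32(db[42] ^ 0b101)        # copy of entry 42 with 2 bits flipped

def hamming(a, b):
    """Popcount of XOR: Hamming distance between 24-bit binary descriptors."""
    x = np.bitwise_xor(a, b).astype(np.uint32)
    return np.unpackbits(x.view(np.uint8).reshape(-1, 4), axis=1).sum(axis=1)

d = hamming(db, query)
print(np.argmin(d), d.min())             # expected: entry 42 at distance 2
```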

Patent
17 Jun 2014
TL;DR: In this article, a method for generating a codebook includes applying a unitary rotation to a baseline multidimensional constellation to produce a multi-dimensional mother constellation, which is then stored as the codebook of the plurality of codebooks.
Abstract: A method for generating a codebook includes applying a unitary rotation to a baseline multidimensional constellation to produce a multidimensional mother constellation, wherein the unitary rotation is selected to optimize a distance function of the multidimensional mother constellation, and applying a set of operations to the multidimensional mother constellation to produce a set of constellation points. The method also includes storing the set of constellation points as the codebook of the plurality of codebooks.
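A hedged sketch of the rotation step: the baseline constellation and rotation angle below are arbitrary placeholders, and minimum product distance stands in for whatever distance function the patented method actually optimizes.

```python
import numpy as np
from itertools import combinations

# Baseline 2-dimensional constellation: the Cartesian product of two QPSK sets.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
base = np.array([[a, b] for a in qpsk for b in qpsk])       # 16 points in C^2

def min_product_distance(points):
    """Minimum product distance, a common design criterion for fading channels."""
    return min(np.prod(np.abs(p - q)) for p, q in combinations(points, 2))

theta = 0.3                                  # placeholder angle, not optimized
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
mother = base @ U.T                          # rotated "mother" constellation

print(min_product_distance(base))    # 0: some pairs differ in one coordinate only
print(min_product_distance(mother))  # > 0 after the unitary rotation
```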

Journal ArticleDOI
TL;DR: This paper proposes a new feature descriptor, the edge orientation difference histogram (EODH), which is a rotation-invariant and scale-invariant feature representation, and builds an effective image retrieval system based on a weighted codeword distribution using the integrated feature descriptor.
Abstract: This paper proposes a new feature descriptor, the edge orientation difference histogram (EODH), which is a rotation-invariant and scale-invariant feature representation. The main orientation of each edge pixel is obtained through a steerable filter and vector sum. Based on the main orientation, we construct the EODH descriptor for each edge pixel. Finally, we integrate the EODH and Color-SIFT descriptors and build an effective image retrieval system based on a weighted codeword distribution using the integrated feature descriptor. Experiments show that the codebook-based image retrieval method achieves the best performance on the given benchmark problems compared to the state-of-the-art methods.

Patent
10 Mar 2014
TL;DR: In this paper, the rank-1 and/or rank-2 codebooks for advanced communication systems with four transmit antennas and two-dimensional (2D) M×N transmit antenna elements are provided.
Abstract: Methods and apparatus of constructing rank-1 and/or rank-2 codebooks for advanced communication systems with 4 transmit antennas and two-dimensional (2D) M×N transmit antenna elements are provided. A double-codebook structure is considered for 4-Tx antenna configuration. Single-codebook and double-codebook structures are considered for two-dimensional (2D) M×N transmit antenna elements.

Journal ArticleDOI
TL;DR: The sparse regression code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance.
Abstract: We propose computationally efficient encoders and decoders for lossy compression using a sparse regression code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for independent identically distributed (i.i.d.) Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a tradeoff as a function of the block length $n$ is the following. With computational resource (space or time) per source sample of $O((n/\log n)^{2})$, for a fixed distortion-level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in $n$. The sparse regression code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
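A minimal sketch of the sequential column-selection encoder, with an i.i.d. Gaussian design matrix and a fixed per-section coefficient in place of the paper's rate-dependent scaling:

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, M = 64, 8, 32                 # block length, sections, columns per section
A = rng.normal(size=(n, L * M)) / np.sqrt(n)    # design matrix
c = 1.0                             # per-section coefficient (simplified)

x = rng.normal(size=n)              # source sequence to compress
residual = x.copy()
chosen = []
for sec in range(L):
    cols = A[:, sec * M:(sec + 1) * M]
    j = int(np.argmax(cols.T @ residual))       # best column in this section
    chosen.append(j)                            # log2(M) = 5 bits per section
    residual -= c * cols[:, j]

x_hat = x - residual                # codeword = sum of the chosen columns
print(chosen, np.mean(residual ** 2))           # indices sent, distortion left
```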

Journal ArticleDOI
TL;DR: To resist state-of-the-art steganalysis, following the general belief that fewer and smaller cover changes are less detectable and more secure, this work presents an improved QIM steganography that introduces random position selection to adjust the embedding rate dynamically and employs a matrix encoding strategy to enhance the embedding efficiency.
Abstract: In this study, we mainly concentrate on quantization-index-modulation (QIM) steganography in low bit-rate speech streams, and contribute to improving its security. Exploiting the characteristics of codebook division diversity in the complementary neighbor vertices algorithm, we first design a key-based codebook division strategy, which follows Kerckhoffs's principle and provides better security than the previous QIM approach. Further, to resist state-of-the-art steganalysis, following the general belief that fewer and smaller cover changes are less detectable and more secure, we present an improved QIM steganography, which introduces random position selection to adjust the embedding rate dynamically, and employs a matrix encoding strategy to enhance the embedding efficiency. The proposed approach is evaluated with ITU-T G.723.1 as the codec of the cover speech and compared with previous work. The experimental results demonstrate that the proposed approach outperforms the traditional QIM approach on both steganographic transparency and steganalysis resistance. Moreover, it is worth pointing out that our approach can effectively work in conjunction with not only the G.723.1 codec but also all other parametric speech coders, and can be successfully applied to Voice-over-Internet-Protocol systems.
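For reference, the basic QIM primitive that this work builds on can be sketched with two interleaved uniform quantizers, one per message bit; this shows plain QIM only, not the key-based codebook division or matrix encoding proposed in the paper.

```python
import numpy as np

DELTA = 4.0   # quantization step

def qim_embed(sample, bit):
    """Quantize with one of two interleaved lattices, selected by the bit."""
    offset = 0.0 if bit == 0 else DELTA / 2
    return DELTA * np.round((sample - offset) / DELTA) + offset

def qim_extract(sample):
    """Decode by checking which lattice the received sample is closer to."""
    d0 = abs(sample - qim_embed(sample, 0))
    d1 = abs(sample - qim_embed(sample, 1))
    return 0 if d0 <= d1 else 1

s = 7.3
for b in (0, 1):
    y = qim_embed(s, b)
    print(b, y, qim_extract(y))   # extracted bit matches embedded bit
```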

Journal ArticleDOI
TL;DR: The goal of this paper is to suggest a flexible approach to the design of Grassmannian codebooks based on sequential smooth optimization on the Grassmannian manifold and the use of smooth penalty functions to obtain additional desirable properties.
Abstract: Grassmannian quantization codebooks play a central role in a number of limited feedback schemes for single and multi-user multiple-input multiple-output (MIMO) communication systems. In practice, it is often desirable that these codebooks possess additional properties that facilitate their implementation, beyond the provision of good quantization performance. Although some good codebooks exist, their design tends to be a rather intricate task. The goal of this paper is to suggest a flexible approach to the design of Grassmannian codebooks based on sequential smooth optimization on the Grassmannian manifold and the use of smooth penalty functions to obtain additional desirable properties. As one example, the proposed approach is used to design rank-2 codebooks that have a nested structure and elements from a phase-shift keying (PSK) alphabet. In some numerical comparisons, codebooks designed using the proposed methods have larger minimum distances than some existing codebooks, and provide tangible performance gains when applied to a simple MIMO downlink scenario with zero-forcing beamforming, per-user unitary beamforming and rate control (PU2RC), and block diagonalization signalling. Furthermore, the proposed approach yields codebooks that attain desirable additional properties without incurring a substantial degradation in performance.
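Evaluating a codebook's minimum chordal distance on the Grassmann manifold, the figure of merit behind the comparisons above, is straightforward; the rank-2 codebook below is random rather than one of the optimized designs described.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
N, r, size = 4, 2, 16                         # 16 rank-2 codewords in C^4

def random_semi_unitary(n, k):
    """Random n x k matrix with orthonormal columns (a point on the Grassmannian)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k)))
    return q[:, :k]

codebook = [random_semi_unitary(N, r) for _ in range(size)]

def chordal(P, Q):
    """Chordal distance between the subspaces spanned by P and Q."""
    return np.sqrt(max(r - np.linalg.norm(P.conj().T @ Q, "fro") ** 2, 0.0))

d_min = min(chordal(P, Q) for P, Q in combinations(codebook, 2))
print(d_min)   # the quantity a good Grassmannian design maximizes
```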

Posted Content
TL;DR: In this paper, a trellis-extended codebook (TEC) is proposed for channel state information (CSI) quantization in FDD massive MIMO systems.
Abstract: It is of great interest to develop efficient ways to acquire accurate channel state information (CSI) for frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems for backward compatibility. It is theoretically well known that the codebook size for CSI quantization should be increased as the number of transmit antennas becomes larger, and 3GPP long term evolution (LTE) and LTE-Advanced codebooks also follow this trend. Thus, in massive MIMO, it is hard to apply the conventional approach of using pre-defined vector-quantized codebooks for CSI quantization mainly because of codeword search complexity. In this paper, we propose a trellis-extended codebook (TEC) that can be easily harmonized with current wireless standards such as LTE or LTE-Advanced by extending standardized codebooks designed for 2, 4, or 8 antennas with trellis structures. TEC exploits a Viterbi decoder and convolutional encoder in channel coding as the CSI quantizer and the CSI reconstructer, respectively. By quantizing multiple channel entries simultaneously using standardized codebooks in a state transition of trellis search, TEC can achieve fractional bits per channel entry quantization to have a practical feedback overhead. Thus, TEC can solve both the complexity and the feedback overhead issues of CSI quantization in massive MIMO systems. We also develop trellis-extended successive phase adjustment (TE-SPA) which works as a differential codebook of TEC. This is similar to the dual codebook concept of LTE-Advanced. TE-SPA can reduce CSI quantization error even with lower feedback overhead in temporally correlated channels. Numerical results verify the effectiveness of the proposed schemes in FDD massive MIMO systems.

Patent
12 Feb 2014
TL;DR: In this paper, a precoding matrix is generated for multi-antenna transmission based on precoding matrices indicator (PMI) feedback, wherein the PMI indicates a choice of matrix derived from a matrix multiplication of two matrices from a first codebook and a second codebook.
Abstract: Channel state information (CSI) feedback in a wireless communication system is disclosed. A precoding matrix is generated for multi-antenna transmission based on precoding matrix indicator (PMI) feedback, wherein the PMI indicates a choice of precoding matrix derived from a matrix multiplication of two matrices from a first codebook and a second codebook. In one embodiment, the first codebook comprises at least a first precoding matrix constructed with a first group of adjacent Discrete-Fourier-Transform (DFT) vectors. In another embodiment, the first codebook comprises at least a second precoding matrix constructed with a second group of uniformly distributed non-adjacent DFT vectors. In yet another embodiment, the first codebook comprises at least a first precoding matrix and a second precoding matrix, where said first precoding matrix is constructed with a first group of adjacent DFT vectors, and said second precoding matrix is constructed with a second group of uniformly distributed non-adjacent DFT vectors.
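A minimal sketch of the two-matrix precoder structure, assuming an 8-port cross-polarized array, groups of four adjacent oversampled-DFT beams for the first codebook, and beam selection plus co-phasing for the second; all dimensions are illustrative, not the patent's exact parameters.

```python
import numpy as np

Nt, Nb, Q = 8, 4, 32        # 8 Tx ports, groups of 4 beams, 32 oversampled beams
m = np.arange(Nt // 2)
# Oversampled DFT beams for one polarization (4 antennas, 32 candidate beams).
B = np.exp(2j * np.pi * np.outer(m, np.arange(Q)) / Q) / np.sqrt(Nt // 2)

def W1(g):
    """First (wideband) codebook: block-diagonal group of adjacent DFT beams."""
    X = B[:, g:g + Nb]
    Z = np.zeros_like(X)
    return np.block([[X, Z], [Z, X]])             # shape (8, 8)

def W2(i, phi):
    """Second (subband) codebook: beam selection plus co-phasing, rank 1."""
    e = np.zeros((Nb, 1)); e[i] = 1.0
    return np.vstack([e, phi * e]) / np.sqrt(2)   # shape (8, 1)

W = W1(4) @ W2(2, 1j)       # one rank-1 precoder from the two codebooks
print(np.linalg.norm(W))    # unit norm, as required of a precoding vector
```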

Patent
08 Apr 2014
TL;DR: In this article, a codebook sub-sampling method was proposed for reporting of reporting of RI, CQI, W1 and W2 in CSI mode 1 and CSI mode 2.
Abstract: This invention provides codebook sub-sampling for the reporting of RI, CQI, W1, and W2. If CSI mode 1 is selected, RI and W1 are jointly encoded using codebook sub-sampling in report 1. If CSI mode 2 is selected, W1 and W2 are jointly encoded using codebook sub-sampling in report 2.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed NCQM not only achieves improved watermark imperceptibility and a higher embedding capacity in high-noise regimes, but is also more robust to a wide range of attacks, e.g., valumetric scaling, Gaussian filtering, additive noise, Gamma correction, and gray-level transformations, as compared with state-of-the-art watermarking methods.
Abstract: A novel quantization watermarking method is presented in this paper, which is developed following the established feature modulation watermarking model. In this method, a feature signal is obtained by computing the normalized correlation (NC) between the host signal and a random signal. Information modulation is carried out on the generated NC by selecting a codeword from the codebook associated with the embedded information. In a simple case, the structured codebooks are designed using uniform quantizers for modulation. The watermarked signal is produced to provide the modulated NC in the sense of minimizing the embedding distortion. The performance of the NC-based quantization modulation (NCQM) is analytically investigated, in terms of the embedding distortion and the decoding error probability in the presence of valumetric scaling and additive noise attacks. Numerical simulations on artificial signals confirm the validity of our analyses and exhibit the performance advantage of NCQM over other modulation techniques. The proposed method is also simulated on real images by using the wavelet-based implementations, where the host signal is constructed by the detail coefficients of wavelet decomposition at the third level and transformed into the NC feature signal for the information modulation. Experimental results show that the proposed NCQM not only achieves the improved watermark imperceptibility and a higher embedding capacity in high-noise regimes, but also is more robust to a wide range of attacks, e.g., valumetric scaling, Gaussian filtering, additive noise, Gamma correction, and Gray-level transformations, as compared with the state-of-the-art watermarking methods.

Patent
25 Feb 2014
TL;DR: In this article, the channel state information (CSI) feedback in a wireless communication system is disclosed, where the UE transmits a CSI feedback signal via a PUCCH.
Abstract: Channel state information (CSI) feedback in a wireless communication system is disclosed. User equipment transmits a CSI feedback signal via a Physical Uplink Control CHannel (PUCCH). If the UE is configured in a first feedback mode, the CSI comprises a first report jointly coding a Rank Indicator (RI) and a first precoding matrix indicator (PMI1), and a second report coding Channel Quality Indicator (CQI) and a second precoding matrix indicator (PMI2). If the UE is configured in a second feedback mode, the CSI comprises a first report coding RI, and a second report coding CQI, PMI1 and PMI2. The jointly coded RI and PMI1 employs codebook sub-sampling, and the jointly coding PMI1, PMI2 and CQI employs codebook sub-sampling.

Posted Content
TL;DR: This paper observes that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and explores an intermediate approach that exploits a hierarchical structure in the codebooks, resulting in a method that achieves quantization error on par with or lower than AQ, while being several orders of magnitude faster.
Abstract: Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and explore an intermediate approach that exploits a hierarchical structure in the codebooks. This results in a method that achieves quantization error on par with or lower than AQ, while being several orders of magnitude faster. We perform a complexity analysis of PQ, AQ and our method, and evaluate our approach on standard benchmarks of SIFT and GIST descriptors, as well as on new datasets of features obtained from state-of-the-art convolutional neural networks.

Journal ArticleDOI
TL;DR: In this article, the authors first describe methods that compress visual word histograms, which require a codebook and the decoding of compressed signatures, and then describe methods that use residuals to achieve the same accuracy with much smaller codebooks and compressed-domain matching.
Abstract: Mobile visual search systems compare images against a database for object recognition. If query data is transmitted over a slow network or processed on a congested server, the latency increases substantially. This article shows how on-device database matching guarantees fast recognition regardless of external conditions. The database signatures must be compact because of limited memory, capable of fast comparisons, and discriminative for robust recognition. The authors first describe methods that compress visual word histograms, which require a codebook and decoding compressed signatures. They then describe methods that use residuals to achieve the same accuracy with much smaller codebooks and compressed domain matching.

Journal ArticleDOI
TL;DR: A novel method for writer identification based on the sparse representation of handwritten structural primitives, called graphemes or fraglets, is proposed, which achieves better performance even with a smaller codebook.

Proceedings ArticleDOI
10 Jun 2014
TL;DR: A new sparsifying basis that reflects the long-term characteristics of the channel and a new reconstruction algorithm for compressive sensing (CS) are proposed; it is suggested that dimensionality reduction is better suited for compression, and performance is compared with the conventional method.
Abstract: In this paper, we propose compressive sensing-based channel quantization feedback that is appropriate for practical massive multiple-input multiple-output (MIMO) systems. We assume that the base station (BS) has a compact (2-dimensional) uniform square array with a highly correlated channel. To serve multiple users, the BS uses a zero-forcing precoder. Our proposed channel feedback algorithm can reduce the feedback overhead as well as the codebook search complexity. Numerical simulations confirm our analytical results.

Patent
24 Jul 2014
TL;DR: In this paper, the authors present a multi-resolution PMI feedback system that finds a rank 1 or rank 2 PMI based on the signal channel matrix and interference covariance matrix, defines an error vector, obtains an orthonormal basis for the projection matrix, finds the (M-1)-dimensional codebook vector with the minimum Euclidean distance, and sends feedback to the base station representing the vector found in the codebook.
Abstract: Disclosed are methods and a device for multi-resolution PMI feedback. In one implementation, a user equipment finds a rank 1 or rank 2 Precoding Matrix Indicator based on the signal channel matrix and interference covariance matrix, defines an error vector, obtains an orthonormal basis for the projection matrix, finds the (M-1)-dimensional vector with the minimum Euclidean distance from a codebook (e.g., an oversampled Discrete Fourier Transform codebook), and sends feedback to the base station representing the vector that it found in the codebook.


Journal ArticleDOI
TL;DR: The optimum detection/decoding rule is derived, in the sense of the best tradeoff among the probabilities of decoding error, false alarm, and misdetection, for the average code in the ensemble.
Abstract: We consider the problem of coded communication, where in each time frame, the transmitter is either silent or transmits a codeword from a given (randomly selected) codebook. The task of the decoder is to decide whether transmission has taken place, and if so, to decode the message. We derive the optimum detection/decoding rule in the sense of the best tradeoff among the probabilities of decoding error, false alarm, and misdetection. For this detection/decoding rule, we then derive single-letter characterizations of the exact exponential rates of these probabilities for the average code in the ensemble. It is shown that previously proposed decoders are in general strictly suboptimal.

Journal ArticleDOI
TL;DR: This paper proposes a codebook-based OIA, in which the weight vectors are chosen from a predefined codebook of finite size so that information on the weight vectors can be sent to the belonging BS with limited feedforward, and derives the codebook size required to achieve the same user scaling condition as in the SVD-based OIA case for both Grassmannian and random codebooks.
Abstract: For multiple-input multiple-output interfering multiple-access channels (IMACs), opportunistic interference alignment (OIA) using singular value decomposition (SVD)-based beamforming at each user fundamentally reduces the user scaling condition required to achieve any target DoF, compared to that for the single-input multiple-output IMAC. In this paper, we tackle two practical challenges of the existing SVD-based OIA: 1) the need for full feedforward of the selected users’ beamforming weight vectors and 2) the low rate achieved with the existing zero-forcing receiver. We first propose a codebook-based OIA, in which the weight vectors are chosen from a predefined codebook with a finite size so that information on the weight vectors can be sent to the belonging BS with limited feedforward. We derive the codebook size required to achieve the same user scaling condition as the SVD-based OIA case for both Grassmannian and random codebooks. Surprisingly, it is shown that the derived codebook size is the same for the two codebook methods. Second, we introduce an enhanced receiver at the base stations (BSs) in pursuit of further improving the achievable rate. Assuming no collaboration between the BSs, the interfering links between a BS and the selected users in neighboring cells are difficult to acquire at the belonging BS. We propose the use of a simple minimum Euclidean distance receiver operating with no information about the interfering links. With the help of the OIA, we show that this new receiver asymptotically achieves the channel capacity as the number of users increases.