
Showing papers on "Codebook" published in 2016


Journal ArticleDOI
TL;DR: A low-complexity yet near-optimal greedy frequency selective hybrid precoding algorithm is proposed based on Gram-Schmidt orthogonalization and efficient hybrid analog/digital codebooks are developed for spatial multiplexing in wideband mmWave systems.
Abstract: Hybrid analog/digital precoding offers a compromise between hardware complexity and system performance in millimeter wave (mmWave) systems. This type of precoding allows mmWave systems to leverage large antenna array gains that are necessary for sufficient link margin, while permitting low-cost, low-power hardware. Most prior work has focused on hybrid precoding for narrow-band mmWave systems, with perfect or estimated channel knowledge at the transmitter. MmWave systems, however, will likely operate on wideband channels with frequency selectivity. Therefore, this paper considers wideband mmWave systems with a limited feedback channel between the transmitter and receiver. First, the optimal hybrid precoding design for a given RF codebook is derived. This provides a benchmark for any other heuristic algorithm and gives useful insights into codebook designs. Second, efficient hybrid analog/digital codebooks are developed for spatial multiplexing in wideband mmWave systems. Finally, a low-complexity yet near-optimal greedy frequency selective hybrid precoding algorithm is proposed based on Gram–Schmidt orthogonalization. Simulation results show that the developed hybrid codebooks and precoder designs achieve very good performance compared with the unconstrained solutions while requiring much lower complexity.
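The greedy algorithm above hinges on Gram–Schmidt orthogonalization. As a point of reference, here is a minimal NumPy sketch of modified Gram–Schmidt on complex column vectors; the function name and interface are illustrative, and the paper applies this operation to greedily orthogonalize selected hybrid precoding vectors rather than arbitrary matrices.

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V via modified Gram-Schmidt.

    Returns Q whose columns are orthonormal and span the same subspace.
    Illustrative helper, not the paper's full greedy precoding algorithm.
    """
    V = np.asarray(V, dtype=complex)
    Q = np.zeros_like(V)
    for k in range(V.shape[1]):
        v = V[:, k].copy()
        for j in range(k):
            # subtract the projection onto the already-built directions
            v -= (Q[:, j].conj() @ v) * Q[:, j]
        Q[:, k] = v / np.linalg.norm(v)
    return Q
```

The column-by-column (modified) variant is used here rather than classical Gram-Schmidt for numerical stability.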

529 citations


Journal ArticleDOI
TL;DR: A novel general-purpose BIQA method based on high order statistics aggregation (HOSA) is proposed, requiring only a small codebook; it has been extensively evaluated on ten image databases with both simulated and realistic image distortions and shows highly competitive performance against state-of-the-art BIQA methods.
Abstract: Blind image quality assessment (BIQA) research aims to develop a perceptual model to evaluate the quality of distorted images automatically and accurately without access to the non-distorted reference images. The state-of-the-art general purpose BIQA methods can be classified into two categories according to the types of features used. The first includes handcrafted features which rely on the statistical regularities of natural images. These, however, are not suitable for images containing text and artificial graphics. The second includes learning-based features which invariably require a large codebook or supervised codebook updating procedures to obtain satisfactory performance. These are time-consuming and not applicable in practice. In this paper, we propose a novel general purpose BIQA method based on high order statistics aggregation (HOSA), requiring only a small codebook. HOSA consists of three steps. First, local normalized image patches are extracted as local features through a regular grid, and a codebook containing 100 codewords is constructed by K-means clustering. In addition to the mean of each cluster, the diagonal covariance and coskewness (i.e., dimension-wise variance and skewness) of clusters are also calculated. Second, each local feature is softly assigned to several nearest clusters and the differences of high order statistics (mean, variance and skewness) between local features and corresponding clusters are softly aggregated to build the global quality aware image representation. Finally, support vector regression is adopted to learn the mapping between perceptual features and subjective opinion scores. The proposed method has been extensively evaluated on ten image databases with both simulated and realistic image distortions, and shows highly competitive performance compared with the state-of-the-art BIQA methods.
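The first-order part of the aggregation step described above can be sketched in a few lines: each local feature is softly assigned to its nearest clusters, and weighted residuals to the cluster means are accumulated. The Gaussian-style weights and parameter names below are assumptions for illustration; the paper additionally aggregates the second- and third-order (variance and skewness) differences.

```python
import numpy as np

def soft_assign_residuals(features, codebook, k_nn=2, beta=1.0):
    """HOSA-style first-order aggregation sketch: soft assignment of each
    local feature to its k_nn nearest codewords, accumulating weighted
    residuals to the cluster means.  Illustrative, not the paper's exact
    formulation (higher-order statistics are omitted)."""
    codebook = np.asarray(codebook, dtype=float)
    agg = np.zeros_like(codebook)
    for x in np.asarray(features, dtype=float):
        d2 = ((codebook - x) ** 2).sum(axis=1)        # squared distances
        nn = np.argsort(d2)[:k_nn]                    # k_nn nearest clusters
        w = np.exp(-beta * (d2[nn] - d2[nn].min()))   # stable soft weights
        w /= w.sum()
        for wi, c in zip(w, nn):
            agg[c] += wi * (x - codebook[c])          # weighted residual
    return agg.ravel()  # global quality-aware representation (unnormalized)
```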

371 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed two basic criteria for hierarchical codebook design, and devised an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, where closed-form expressions are provided to generate the codebook.
Abstract: In millimeter-wave communication, large antenna arrays are required to achieve high power gain by steering toward each other with narrow beams, which poses the problem of efficiently searching for the best beam direction in the angle domain at both Tx and Rx sides. As exhaustive search is time consuming, hierarchical search has been widely accepted to reduce the complexity, and its performance is highly dependent on the codebook design. In this paper, we propose two basic criteria for the hierarchical codebook design, and devise an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, where closed-form expressions are provided to generate the codebook. Performance evaluations are conducted under different system and channel models. Results show the superiority of the proposed codebook over the existing alternatives.
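The hierarchical search that such a codebook enables can be sketched independently of the sub-array/deactivation construction. Assuming an idealized binary codebook tree and a hypothetical measurement callback `gain(layer, index)` (both are illustrative, not from the paper), the search descends one layer at a time:

```python
def hierarchical_search(gain, depth):
    """Binary hierarchical beam search over an idealized codebook tree.

    `gain(layer, index)` returns the measured power of codeword `index`
    in layer `layer` (2**layer codewords per layer).  At each layer the
    two children of the current codeword are probed and the stronger one
    is kept, so only 2*depth measurements are needed instead of the
    2**depth required by exhaustive search."""
    idx = 0
    for layer in range(1, depth + 1):
        left, right = 2 * idx, 2 * idx + 1
        idx = left if gain(layer, left) >= gain(layer, right) else right
    return idx  # index of the selected narrow beam in the final layer
```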

293 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed spatiotemporal completed local quantization patterns (STCLQP) for facial micro-expression analysis. Prior methods, however, consider only appearance and motion features from the sign-based difference between two pixels and neglect other useful information.

201 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art methods on scene classification of local and global spatial features.

159 citations


Proceedings ArticleDOI
15 May 2016
TL;DR: A multi-dimensional SCMA (MD-SCMA) codebook design based on constellation rotation and an interleaving method is proposed for downlink SCMA systems; its bit error rate performance outperforms that of the existing SCMA codebooks and low density signature (LDS) in downlink Rayleigh fading channels.
Abstract: Sparse code multiple access (SCMA) is a new non-orthogonal multiple access scheme, which effectively exploits the shaping gain of a multi-dimensional codebook. In this paper, a multi-dimensional SCMA (MD-SCMA) codebook design based on constellation rotation and an interleaving method is proposed for downlink SCMA systems. In particular, the first dimension of the mother constellation is constructed from a subset of the lattice $\mathbf{Z}^2$. The other dimensions are then obtained by rotating the first dimension. Further, interleaving is applied to the even dimensions to improve performance in fading channels. In this way, different codebooks can be designed targeting either spectral efficiency or power efficiency. Simulation results show that the bit error rate (BER) performance of the MD-SCMA codebooks outperforms that of the existing SCMA codebooks and low density signature (LDS) in downlink Rayleigh fading channels.
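The constellation-rotation step can be illustrated with a small sketch: later dimensions of the mother constellation are rotated copies of the first. The uniform rotation angles below are an assumption for illustration only; the paper optimizes the angles and additionally interleaves the even dimensions for fading channels.

```python
import numpy as np

def mother_constellation(base, n_dims):
    """Build an n_dims-dimensional mother constellation by rotating a
    1-D base constellation (MD-SCMA idea, with illustrative uniform
    rotation angles rather than the paper's optimized ones)."""
    base = np.asarray(base, dtype=complex)
    rows = [base * np.exp(1j * np.pi * d / (2 * n_dims))
            for d in range(n_dims)]                 # rotate each dimension
    return np.vstack(rows)                          # shape: (n_dims, M)
```

Rotation preserves per-symbol energy in every dimension, so the shaping of the base constellation carries over to the whole codeword.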

112 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed scene classification methods under the FK coding framework can greatly reduce the computational cost, and can obtain a better scene classification accuracy than the methods based on the traditional BOVW model.
Abstract: High spatial resolution (HSR) image scene classification is aimed at bridging the semantic gap between low-level features and high-level semantic concepts, which is a challenging task due to the complex distribution of ground objects in HSR images. Scene classification based on the bag-of-visual-words (BOVW) model is one of the most successful ways to acquire the high-level semantic concepts. However, the BOVW model assigns local low-level features to their closest visual words in the “visual vocabulary” (the codebook obtained by k-means clustering), which discards too many useful details of the low-level features in HSR images. In this paper, a feature coding method under the Fisher kernel (FK) coding framework is introduced to extend the BOVW model by characterizing the low-level features with a gradient vector instead of the count statistics in the BOVW model, which results in a significant decrease in the codebook size and an acceleration of the codebook learning process. By considering the differences in the distributions of the ground objects in different regions of the images, local FK (LFK) is proposed for HSR image scene classification. The experimental results show that the proposed scene classification methods under the FK coding framework can greatly reduce the computational cost, and can obtain a better scene classification accuracy than the methods based on the traditional BOVW model.

88 citations


Journal ArticleDOI
TL;DR: This paper develops and analyzes three limited-feedback resource allocation algorithms suitable for uplink transmission in heterogeneous wireless networks (HetNets) and reveals that the Lloyd algorithm can offer performance close to the perfect-CSI case (i.e., without any limit on the number of feedback bits).
Abstract: In this paper, we develop and analyze three limited-feedback resource allocation algorithms suitable for uplink transmission in heterogeneous wireless networks (HetNets). In this setup, one macro-cell shares the spectrum with a set of underlay cognitive small-cells via orthogonal frequency-division multiple access (OFDMA), where the interference from small-cells to the macro-cell should be kept below a predefined threshold. The resource allocation algorithms aim to maximize the weighted sum of instantaneous rates of all users over all cells by jointly optimizing power and subcarrier allocation under power constraints. Since, in practice, the HetNet backhaul capacity is limited, reducing the amount of channel state information (CSI) feedback signaling passed over the backhaul links is highly desirable. To reach this goal, we apply the Lloyd algorithm to develop the limited-feedback two-phase resource allocation scheme. In the first (offline) phase, an optimal codebook for power and subcarrier allocation is designed and sent to all nodes. In the second (online) phase, based on channel realizations, the appropriate codeword of the designed codebook is chosen for the transmission parameters, and the macro-cell only sends the codeword index, represented by a limited number of bits, for subcarrier and power allocation to its own users and small-cells. Then, each small-cell informs its own users of their related codewords. The offline phase involves a mixed-integer nonconvex resource allocation problem with high computational complexity. To solve it efficiently, we apply the general iterative successive convex approximation (SCA) approach, where the nonconvex optimization problem is transformed into an approximated convex optimization problem in each iteration. The simulation results reveal that the Lloyd algorithm can offer performance close to the perfect-CSI case (i.e., without any limit on the number of feedback bits).
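The offline codebook design above rests on the classic Lloyd algorithm: alternate nearest-codeword partitioning and centroid updates over a training set. A minimal scalar version is sketched below; the deterministic quantile initialization is a choice made for this sketch, and the paper applies the same alternation to vectors of resource-allocation parameters rather than scalars.

```python
import numpy as np

def lloyd(samples, n_codewords, iters=50):
    """Textbook scalar Lloyd algorithm: alternate between assigning every
    sample to its nearest codeword and moving each codeword to the
    centroid of its region.  Initialization at sample quantiles is an
    illustrative choice, not part of the algorithm itself."""
    samples = np.asarray(samples, dtype=float)
    cb = np.quantile(samples, np.linspace(0.0, 1.0, n_codewords))
    for _ in range(iters):
        # partition: nearest-codeword assignment
        idx = np.abs(samples[:, None] - cb[None, :]).argmin(axis=1)
        # update: centroid of each region
        for k in range(n_codewords):
            if np.any(idx == k):
                cb[k] = samples[idx == k].mean()
    return cb
```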

86 citations


Journal ArticleDOI
TL;DR: A new algorithm called BA-LBG is proposed, which produces an efficient codebook in less computational time and yields very good PSNR due to its automatic zooming feature, using an adjustable pulse emission rate and loudness of bats.

84 citations


Journal ArticleDOI
TL;DR: A landmark recognition framework is proposed by employing a novel discriminative feature selection method and the improved extreme learning machine (ELM) algorithm to generate a set of preliminary codewords for landmark images.
Abstract: Along with the rapid development of mobile terminal devices, landmark recognition applications based on mobile devices have been widely researched in recent years. Due to the fast response time requirement of mobile users, an accurate and efficient landmark recognition system is thus urgent for mobile applications. In this paper, we propose a landmark recognition framework by employing a novel discriminative feature selection method and the improved extreme learning machine (ELM) algorithm. The scalable vocabulary tree (SVT) is first used to generate a set of preliminary codewords for landmark images. An efficient codebook learning algorithm derived from the word mutual information and Visual Rank technique is proposed to filter out those unimportant codewords. Then, the selected visual words, as the codebook for image encoding, are used to produce a compact Bag-of-Words (BoW) histogram. The fast ELM algorithm and the ensemble approach using the ELM classifier are utilized for landmark recognition. Experiments on the Nanyang Technological University campus's landmark database and the Fifteen Scene database are conducted to illustrate the advantages of the proposed framework.

83 citations


Journal ArticleDOI
TL;DR: This letter investigates the multiuser codebook design for SCMA systems over Rayleigh fading channels, and the criterion of the proposed design is derived from the cutoff rate analysis of the equivalent multiple-input multiple-output system.
Abstract: Sparse code multiple access (SCMA) is a promising uplink multiple access technique that can achieve superior spectral efficiency, provided that multidimensional codebooks are carefully designed. In this letter, we investigate the multiuser codebook design for SCMA systems over Rayleigh fading channels. The criterion of the proposed design is derived from the cutoff rate analysis of the equivalent multiple-input multiple-output system. Furthermore, new codebooks with signal-space diversity are suggested, and simulations show that the criterion is effective in developing codebooks with substantial performance improvement compared with the existing ones.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint, and its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency is analyzed.
Abstract: Analog beamforming with phased arrays is a promising technique for 5G wireless communication at millimeter wave frequencies. Using a discrete codebook consisting of multiple analog beams, each beam focuses on a certain range of angles of arrival or departure and corresponds to a set of fixed phase shifts across frequency due to practical hardware considerations. However, for sufficiently large bandwidth, the gain provided by the phased array is actually frequency dependent, which is an effect called beam squint, and this effect occurs even if the radiation pattern of the antenna elements is frequency independent. This paper examines the nature of beam squint for a uniform linear array (ULA) and analyzes its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency. The criterion for codebook design is to guarantee that each beam's minimum gain for a range of angles and for all frequencies in the wideband system exceeds a target threshold, for example 3 dB below the array's maximum gain. Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint. For example, 54% more beams are needed compared to a codebook design that ignores beam squint for a ULA with 32 antennas operating at a carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint with this design criterion limits the bandwidth or the number of antennas of the array if the other one is fixed.
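Beam squint can be reproduced numerically in a few lines: fix the phase shifts of a half-wavelength ULA for the carrier frequency and evaluate the array gain at a scaled frequency. The function below is an illustrative sketch, not the paper's exact model.

```python
import numpy as np

def ula_gain(n_ant, theta_beam, theta, f_ratio):
    """Normalized power gain of an n_ant-element half-wavelength ULA whose
    phase shifts are fixed for the carrier (steered to angle theta_beam,
    in radians) but evaluated at frequency f = f_ratio * f_c.  For
    f_ratio != 1 the beam points away from theta_beam: the beam-squint
    effect.  Maximum normalized gain is 1 at the carrier frequency."""
    n = np.arange(n_ant)
    # phase shifts fixed across frequency (practical hardware constraint)
    w = np.exp(1j * np.pi * n * np.sin(theta_beam)) / np.sqrt(n_ant)
    # array response at the actual frequency (element spacing lambda_c / 2)
    a = np.exp(1j * np.pi * f_ratio * n * np.sin(theta)) / np.sqrt(n_ant)
    return np.abs(w.conj() @ a) ** 2
```

Evaluating the same beam at f_ratio = 1.1 shows the gain at the steering angle dropping well below its carrier-frequency value, which is why a denser codebook is needed to keep every angle above the design threshold across the band.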

Journal ArticleDOI
TL;DR: A TS-based precoding/combining scheme to intelligently search the best precoder/combiner in each iteration of Turbo-like joint search with low complexity is proposed.
Abstract: For millimeter-wave (mmWave) massive multiple-input–multiple-output (MIMO) systems, codebook-based analog beamforming (including transmit precoding and receive combining) is usually used to compensate for the severe attenuation of mmWave signals. However, conventional beamforming schemes involve a complicated search among predefined codebooks to find the optimal pair of analog precoder and analog combiner. To solve this problem, by combining the idea of the turbo equalizer with the tabu search (TS) algorithm, we propose a Turbo-like beamforming scheme based on TS, called Turbo-TS beamforming in this paper, to achieve near-optimal performance with low complexity. Specifically, the proposed Turbo-TS beamforming scheme is composed of the following two key components: 1) based on the iterative information exchange between the base station (BS) and the user, we design a Turbo-like joint search scheme to find the near-optimal pair of analog precoder and analog combiner; and 2) inspired by the idea of the TS algorithm developed in artificial intelligence, we propose a TS-based precoding/combining scheme to intelligently search for the best precoder/combiner in each iteration of the Turbo-like joint search with low complexity. Analysis shows that the proposed Turbo-TS beamforming can considerably reduce the search complexity, and simulation results verify that it can achieve near-optimal performance.

Journal ArticleDOI
TL;DR: A new sparsifying basis is proposed that reflects the long-term characteristics of the channel and needs no change as long as the spatial correlation model does not change; a new reconstruction algorithm for CS and dimensionality reduction as a compression method are also proposed.
Abstract: Massive multiple-input multiple-output (MIMO) is a promising approach for cellular communication due to its energy efficiency and high achievable data rate. These advantages, however, can be realized only when channel state information (CSI) is available at the transmitter. Since there are many antennas, CSI is too large to feed back without compression. To compress CSI, prior work has applied compressive sensing (CS) techniques and the fact that CSI can be sparsified. The adopted sparsifying bases fail, however, to reflect the spatial correlation and channel conditions or to be feasible in practice. In this paper, we propose a new sparsifying basis that reflects the long-term characteristics of the channel, and needs no change as long as the spatial correlation model does not change. We propose a new reconstruction algorithm for CS, and also suggest dimensionality reduction as a compression method. To feed back compressed CSI in practice, we propose a new codebook for the compressed channel quantization assuming no other-cell interference. Numerical results confirm that the proposed channel feedback mechanisms show better performance in point-to-point (single-user) and point-to-multi-point (multi-user) scenarios.
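The dimensionality-reduction idea, projecting the channel onto the dominant eigenvectors of the long-term spatial correlation matrix, can be sketched as follows. This is a simplified illustration with assumed names; the paper's CS reconstruction algorithm and quantization codebook are not shown.

```python
import numpy as np

def reduce_csi(h, R, k):
    """Compress a channel vector h using the long-term spatial correlation
    matrix R: keep only the coefficients on R's top-k eigenvectors.  The
    basis depends only on channel statistics, so it needs no update as
    long as the correlation model is unchanged."""
    w, U = np.linalg.eigh(R)          # eigenvalues in ascending order
    basis = U[:, ::-1][:, :k]         # top-k eigenvectors (dominant modes)
    coeff = basis.conj().T @ h        # compressed CSI to feed back
    return coeff, basis @ coeff       # coefficients and reconstruction
```

When the channel energy is concentrated in the dominant eigen-directions, the k feedback coefficients reconstruct the channel with little loss.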

Proceedings ArticleDOI
01 Oct 2016
TL;DR: A novel SCMA codebook design scheme is proposed to maximize the sum rate, where the design of the multi-dimensional constellation sets is transformed into the optimization of a series of one-dimensional complex codewords.
Abstract: Sparse code multiple access (SCMA) is a novel non-orthogonal multiple access scheme, which exploits a multidimensional constellation based on the non-orthogonal spreading technique. The SCMA multi-user codebook design is the bottleneck of system performance, and there is no design guideline from the perspective of capacity. In this paper, a novel SCMA codebook design scheme is proposed to maximize the sum rate, where we transform the design of the multi-dimensional constellation sets into the optimization of a series of one-dimensional complex codewords. Specifically, a basic M-order pulse-amplitude modulation (PAM) constellation is optimized, and the angles of rotation between the input one-dimensional constellation and the basic M-PAM constellation can then be obtained with feasible computational complexity to improve the sum rate. Finally, the series of one-dimensional complex codewords are combined to construct multi-dimensional codebooks based on the Latin square criterion. Numerical results illustrate that the proposed codebook outperforms the existing codebook by 1.3 dB over the AWGN channel and 1.1 dB over the Rayleigh channel in terms of bit error rate (BER) performance.

Journal ArticleDOI
TL;DR: In this paper, a radio frequency (RF) lens-embedded massive MIMO system was investigated and the system performance of limited feedback was evaluated by utilizing a technique for generating a suitable codebook for the system.
Abstract: In this paper, we investigate a radio frequency (RF) lens-embedded massive multiple-input multiple-output (MIMO) system and evaluate the system performance of limited feedback by utilizing a technique for generating a suitable codebook for the system. We fabricate an RF lens that operates on a 77-GHz (millimeter-wave) band. Experimental results show a proper value of amplitude gain and an appropriate focusing property. In addition, using a simple numerical technique—beam propagation method—we estimate the power profile of the RF lens and verify its accordance with experimental results. We also design a codebook—multivariance codebook quantization—for limited feedback by considering the characteristics of the RF lens antenna for massive MIMO systems. Numerical results confirm that the proposed system shows significant performance enhancement over a conventional massive MIMO system without an RF lens.

Journal ArticleDOI
TL;DR: An iterative gradient ascent algorithm is proposed for optimizing the analog and digital BF/combining matrices, together with a vector quantization approach for designing their codebook; the method achieves an ergodic rate improvement of up to 0.4 bits per channel use compared with the Gaussian input scenario.
Abstract: Recently, there has been significant research effort toward achieving high data rates in the millimeter wave bands by employing large antenna systems. These systems are considered to have only a fraction of the RF chains compared with the total number of antennas and employ analog phase shifters to steer the transmit and receive beams in addition to the conventional beamforming (BF)/combining invoked in the baseband domain. This scheme, which is popularly known as hybrid BF, has been extensively studied in the literature. To the best of our knowledge, all the existing schemes focus on obtaining the BF/combining matrices that maximize the system capacity computed using a Gaussian input alphabet. However, this choice of matrices may be suboptimal for practical systems, since they employ a finite input alphabet, such as quadrature amplitude modulation/phase-shift keying constellations. Hence, in this paper, we consider a hybrid BF/combining system operating with a finite input alphabet and optimize the analog as well as digital BF/combining matrices by maximizing the mutual information (MI). This is achieved by an iterative gradient ascent algorithm that exploits the relationship between the minimum mean-squared error and the MI. Furthermore, an iterative algorithm is proposed for designing a codebook for the analog and digital BF/combining matrices based on a vector quantization approach. Our simulation results demonstrate that the proposed gradient ascent algorithm achieves an ergodic rate improvement of up to 0.4 bits per channel use (bpcu) compared with the Gaussian input scenario. Furthermore, the gain in the ergodic rate achieved by employing the vector quantization-based codebook is about 0.5 bpcu compared with the Gaussian input scenario.

Journal ArticleDOI
TL;DR: A reversible data hiding scheme for the encoded vector quantization (VQ) index table with improved searching order coding (ISOC) is proposed, which achieves higher hiding capacity with a lower bit rate and less expansion of the index table.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: Performance comparisons show that BMW-MS/CF outperforms the existing alternatives under the per-antenna power constraint, which is adopted due to the limited saturation power of mmWave power amplifiers.
Abstract: In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. We propose an approach to design a hierarchical codebook exploiting the BeaM Widening with Multi-RF-chain Sub-array technique (BMW-MS). To obtain the crucial parameters of BMW-MS, we provide a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS/CF outperforms the existing alternatives under the per-antenna power constraint, which is adopted due to the limited saturation power of mmWave power amplifiers.

Journal ArticleDOI
TL;DR: The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and is thus independent of any image descriptor training set, flexible enough to accommodate new image features, and scalable to index large-scale image databases.
Abstract: In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate the visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored in the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and is thus independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable to index large-scale image databases. We evaluate our approach on public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization.
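The dual-resolution quantization can be sketched per feature: the coarse codes over the dimensions are cascaded into one index key, while the fine codes are kept as a signature for verification. Bin counts, value ranges, and names below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def csq_index(feature, coarse_bins=4, fine_bins=16, lo=-1.0, hi=1.0):
    """Dual-resolution scalar quantization of one (PCA-transformed)
    feature vector: coarse codes are cascaded into a single bucket key
    for indexing, and fine codes are returned for verification."""
    x = np.clip(np.asarray(feature, dtype=float), lo, hi - 1e-12)
    coarse = ((x - lo) / (hi - lo) * coarse_bins).astype(int)
    fine = ((x - lo) / (hi - lo) * fine_bins).astype(int)
    key = 0
    for c in coarse:                 # cascade coarse codes into one bucket id
        key = key * coarse_bins + c
    return key, fine
```

Near-duplicate features fall into the same coarse bucket and are then discriminated by comparing their fine codes, which is the range-based neighbor search the paper formulates.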

Proceedings ArticleDOI
15 May 2016
TL;DR: This paper uses spherical codes to build multidimensional mother constellations for SCMA codebooks, to lower peak to average power ratio (PAPR) as well as to improve the overall spectrum efficiency.
Abstract: In this paper, we investigate the application of spherical codes in a sparse code multiple access (SCMA) system. We use spherical codes to build multidimensional mother constellations for SCMA codebooks, to lower the peak-to-average power ratio (PAPR) as well as to improve the overall spectrum efficiency. Furthermore, we introduce four approaches to construct good spherical codes. These codes are easy to construct and have large coding gains. We make extensions to some of these spherical codes so that they are suitable for use in SCMA systems. The performance of the constructed codebooks is compared to known codebooks through simulations, and the numerical results show that spherical-code-based codebooks can effectively improve the system performance.

Proceedings ArticleDOI
Ziyang Li, Wen Chen, Fan Wei, Feng Wang, Xu Xiuqiang, Yan Chen
27 Jul 2016
TL;DR: The uplink SCMA sum capacity is derived and a specific codebook design method is proposed, which can cancel the effect of the dependency between the non-zero entries of codewords, to achieve a near optimal solution to the uplink sum-rate optimization problem.
Abstract: Sparse Code Multiple Access (SCMA) is a non-orthogonal multi-dimensional spreading technique based on layered codebooks. In SCMA, incoming bits are directly mapped to multi-dimensional codewords of pre-defined codebooks. In comparison to the simple repetition of QAM symbols in Low Density Signature (LDS), the shaping gain of the multi-dimensional constellation is the main advantage of SCMA for performance improvement. Similar to LDS, SCMA can take advantage of a near-optimal Message Passing Algorithm (MPA) receiver with reasonable complexity. In this paper, the uplink SCMA sum capacity is derived and a specific codebook design method is proposed, which can cancel the effect of the dependency between the non-zero entries of codewords. Under this condition, we study the uplink sum-rate optimization problem. A joint codebook assignment and power allocation method is proposed to achieve a near-optimal solution. Simulation results show the significant performance gain of the proposed algorithm.

Proceedings ArticleDOI
22 May 2016
TL;DR: A novel swap-matching algorithm is proposed in which the users and the subcarriers are considered as two sets of players, and every two users can cooperate to swap their matches so as to improve each other's profits.
Abstract: In this paper, we study the codebook-based resource allocation problem for an uplink sparse code multiple access (SCMA) network. The base station (BS) assigns to each user a set of subcarriers corresponding to a specific codebook, and each user performs power control over multiple subcarriers. We aim to optimize the subcarrier assignment and power allocation to maximize the total sum-rate. To solve the above problem, we formulate it as a many-to-many two-sided matching problem with externalities. A novel swap-matching algorithm is then proposed in which the users and the subcarriers are considered as two sets of players, and every two users can cooperate to swap their matches so as to improve each other's profits. The algorithm converges to a pair-wise stable matching after a limited number of iterations. Simulation results show that the proposed algorithm greatly outperforms the orthogonal multiple access scheme and a random allocation scheme.
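The swap-matching idea can be sketched for the simplest case: two users exchange their entire subcarrier sets whenever both (weakly) benefit and at least the pair's total profit strictly improves, and the loop stops when no such pair remains (pairwise stability). The `utility` callback and acceptance rule below are simplifications; the paper's many-to-many version also handles externalities by re-evaluating rates after each swap.

```python
def swap_matching(utility, match, n_users):
    """Repeatedly search for a pair of users whose swap of subcarrier
    sets improves the pair's total utility without hurting either user,
    and perform it; terminates at a pairwise-stable matching.
    `utility(user, subcarriers)` is an assumed callable."""
    improved = True
    while improved:
        improved = False
        for i in range(n_users):
            for j in range(i + 1, n_users):
                ui, uj = utility(i, match[i]), utility(j, match[j])
                si, sj = utility(i, match[j]), utility(j, match[i])
                if si + sj > ui + uj and si >= ui and sj >= uj:
                    match[i], match[j] = match[j], match[i]  # swap
                    improved = True
    return match
```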

Patent
Meng-Hsi Wu, Edward Y. Chang
27 Jul 2016
TL;DR: In this article, a codebook of representative features is constructed based on a plurality of disease-irrelevant data and supervised learning is performed based on the transfer-learned disease features to train the classifier for disease detection.
Abstract: The disclosure provides a method, an electronic apparatus, and a computer readable medium of constructing a classifier for disease detection. The method includes the following steps. A codebook of representative features is constructed based on a plurality of disease-irrelevant data. Transfer-learned disease features are extracted from disease-relevant bio-signals according to the codebook without any medical domain knowledge, where both the disease-irrelevant data and the disease-relevant bio-signals are time-series data. Supervised learning is performed based on the transfer-learned disease features to train the classifier for disease detection.

Journal ArticleDOI
TL;DR: It is found that, in codebook-based hybrid beamforming systems, exploiting the full instantaneous CSIT can only achieve a marginal SNR gain and hybrid CSIT is sufficient to achieve the first-order gain provided by the massive MIMO for most of the cases.
Abstract: Hybrid beamforming, which consists of an RF precoder and a baseband precoder, has been proposed to reduce the number of RF chains at the massive multiple input multiple output (MIMO) base station (BS). This paper studies the impact of channel state information (CSI) on the capacity of massive MIMO systems with codebook-based hybrid beamforming , where the RF precoder is selected from a finite size codebook. Two types of CSI at the BS (CSIT) are commonly assumed: full instantaneous CSIT and hybrid CSIT (channel statistics plus the low-dimensional effective channel matrix). With full instantaneous CSIT, both the RF and baseband precoders are adaptive to the full instantaneous CSI at the fast timescale. With hybrid CSIT, the RF precoder is adaptive to channel statistics only, and the baseband precoder is adaptive to the instantaneous effective channel, yielding lower implementation complexity by sacrificing some capacity. We derive asymptotic sum capacity expressions under these two types of CSIT. We find that, in codebook-based hybrid beamforming systems, exploiting the full instantaneous CSIT can only achieve a marginal SNR gain and hybrid CSIT is sufficient to achieve the first-order gain provided by the massive MIMO for most of the cases. We also propose fast and slow timescale RF precoding algorithms, which asymptotically achieve the capacity under full instantaneous CSIT and hybrid CSIT, respectively.

Journal ArticleDOI
Daosen Zhai1, Min Sheng1, Xijun Wang1, Yuzhou Li1, Jiongjiong Song1, Jiandong Li1 
TL;DR: This letter analyzes the special structure of the problem and exploits it to obtain the optimal power splitting ratio and resource allocation strategy when one of the two is fixed; results indicate that the algorithm achieves a better rate-energy tradeoff than other schemes.
Abstract: In this letter, we investigate the fundamental tradeoff between rate and energy for sparse code multiple access (SCMA) networks with wireless power transfer. A weighted rate-and-energy maximization problem is formulated by jointly considering power allocation, codebook assignment, and power splitting. To solve this hard problem, an iterative algorithm based on the univariate search technique is proposed, which achieves good performance with low complexity. Specifically, we analyze the special structure of the problem and exploit it to obtain the optimal power splitting ratio and resource allocation strategy when one of the two is fixed. Simulation results indicate that our algorithm achieves a better rate-energy tradeoff than other schemes.
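A univariate (coordinate) search of the kind the letter describes can be sketched on a toy single-user power-splitting model; the objective form, channel gain `g`, and grid search below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def objective(p, rho, g=1.0, w=0.5):
    """Weighted sum of information rate and harvested energy when a
    fraction rho of the received power is used for decoding."""
    rate = np.log2(1.0 + rho * p * g)
    energy = (1.0 - rho) * p * g
    return w * rate + (1.0 - w) * energy

def univariate_search(p_max=1.0, n_iter=20, grid=101, g=1.0, w=0.5):
    """Alternately optimize transmit power p and splitting ratio rho,
    each on a grid while the other variable is held fixed."""
    p, rho = p_max / 2.0, 0.5
    for _ in range(n_iter):
        ps = np.linspace(0.0, p_max, grid)
        p = ps[np.argmax([objective(x, rho, g, w) for x in ps])]
        rhos = np.linspace(0.0, 1.0, grid)
        rho = rhos[np.argmax([objective(p, r, g, w) for r in rhos])]
    return p, rho, objective(p, rho, g, w)
```

Each sub-problem is one-dimensional, which is what keeps the per-iteration complexity low in this class of algorithms.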

Journal ArticleDOI
TL;DR: A cuckoo search (CS) metaheuristic optimization algorithm is proposed that optimizes the LBG codebook using a Lévy-flight distribution function generated by Mantegna's algorithm instead of a Gaussian distribution.
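The Lévy-flight step generator the TL;DR highlights, Mantegna's algorithm, can be sketched as follows; this is the standard formulation with illustrative parameter names, not the paper's code:

```python
import numpy as np
from math import gamma, pi, sin

def mantegna_sigma(beta):
    """Scale of the Gaussian numerator in Mantegna's Levy-step generator
    for stability index beta (typically 1 < beta <= 2)."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_steps(beta=1.5, size=1, rng=None):
    """Draw heavy-tailed step lengths u / |v|^(1/beta), where u and v
    are Gaussian; used to perturb cuckoo-search candidate solutions."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.normal(0.0, mantegna_sigma(beta), size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

In CS-based codebook design, each candidate codebook (a "nest") would be perturbed by such steps, so occasional long jumps help escape the local optima that plague plain LBG.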

Journal ArticleDOI
TL;DR: This work proposes an efficient aerial image categorization algorithm that focuses on learning a discriminative topological codebook of aerial images under a multitask learning framework and is competitive with several existing recognition models.
Abstract: Fast and accurate categorization of the millions of aerial images on Google Maps is a useful technique in pattern recognition. Existing methods cannot handle this task successfully for two reasons: 1) the aerial images' topologies are the key feature distinguishing their categories, but they cannot be effectively encoded by a conventional visual codebook; and 2) it is challenging to build a real-time image categorization system, as some geo-aware apps update over 20 aerial images per second. To solve these problems, we propose an efficient aerial image categorization algorithm. It focuses on learning a discriminative topological codebook of aerial images under a multitask learning framework. The pipeline can be summarized as follows. We first construct a region adjacency graph (RAG) that describes the topology of each aerial image. Aerial image categorization can then naturally be formulated as RAG-to-RAG matching. According to graph theory, RAG-to-RAG matching is conducted by enumeratively comparing all their respective graphlets (i.e., small subgraphs). To alleviate the high time consumption, we propose to learn a codebook containing topologies jointly discriminative to multiple categories. The learned topological codebook guides the extraction of the discriminative graphlets. Finally, these graphlets are integrated into an AdaBoost model for predicting aerial image categories. Experimental results show that our approach is competitive with several existing recognition models. Furthermore, over 24 aerial images are processed per second, demonstrating that our approach is ready for real-world applications.
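The first pipeline step, building a region adjacency graph from a segmented image, can be sketched as follows; the label-map input and 4-connectivity choice are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def region_adjacency_graph(labels):
    """Build the edge set of a RAG from a 2-D segmentation label map:
    two regions are adjacent if any of their pixels touch (4-connectivity)."""
    edges = set()
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            # check right and down neighbors to visit each pixel pair once
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y, x] != labels[ny, nx]:
                    a, b = sorted((int(labels[y, x]), int(labels[ny, nx])))
                    edges.add((a, b))
    return sorted(edges)
```

Graphlet extraction would then enumerate small connected subgraphs of this edge set, which is the step the learned topological codebook prunes.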

Journal ArticleDOI
TL;DR: The results reveal that the proposed method produces a visual codebook with superior discriminative power and thus better retrieval performance while maintaining excellent computational efficiency.

Journal ArticleDOI
TL;DR: A reconstructive method is proposed that compresses a low-rank approximation into a cluster-level rating pattern referred to as a codebook, and then constructs an improved approximation by expanding the codebook; it improves the prediction accuracy of state-of-the-art matrix factorization and social recommendation models.
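The compress-then-expand idea can be sketched on a toy rating matrix: block-average the matrix over user/item clusters into a codebook, then expand it back into a full approximation. Here the cluster assignments are assumed given, whereas in the actual method they would be learned from the low-rank approximation:

```python
import numpy as np

def build_codebook(R, user_cl, item_cl, n_uc, n_ic):
    """Compress a rating matrix into a cluster-level rating pattern:
    entry (cu, ci) is the mean rating of user-cluster cu on item-cluster ci."""
    B = np.zeros((n_uc, n_ic))
    for cu in range(n_uc):
        for ci in range(n_ic):
            block = R[np.ix_(user_cl == cu, item_cl == ci)]
            B[cu, ci] = block.mean() if block.size else 0.0
    return B

def expand_codebook(B, user_cl, item_cl):
    """Expand the codebook back to full size: each (user, item) entry is
    the codebook value of that user's and item's clusters."""
    return B[np.ix_(user_cl, item_cl)]
```

For a perfectly block-structured matrix the expansion is lossless, which is why a cluster-level codebook can serve as a compact carrier of rating patterns.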