
Showing papers on "Codebook published in 2018"


Journal ArticleDOI
TL;DR: A context-aware local binary multi-scale feature learning method for face recognition that exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation.
Abstract: In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD), which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDVs) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces at the feature level. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
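The clustering-and-histogram stage above is a bag-of-codewords pipeline. The sketch below illustrates that stage only, with a random projection standing in for the learned context-aware mapping and k-means for the codebook; all sizes (8-neighbor PDVs, 15-bit codes, 64 codewords) are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of the codebook/histogram stage: binary codes are clustered
# into a codebook and each image is represented by a histogram of codeword
# assignments. W is a random stand-in for the learned context-aware mapping.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def pixel_difference_vectors(img, r=1):
    """Extract PDVs: each pixel's 8 neighbors minus the center pixel."""
    h, w = img.shape
    pdvs = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            neigh = img[i - r:i + r + 1, j - r:j + r + 1].ravel()
            neigh = np.delete(neigh, (2 * r + 1) ** 2 // 2)  # drop center
            pdvs.append(neigh - img[i, j])
    return np.array(pdvs)

def binary_codes(pdvs, W):
    """Project PDVs and binarize (stand-in for the learned mapping)."""
    return (pdvs @ W > 0).astype(np.uint8)

# Toy usage: one 32x32 "face" image, 15-bit codes, 64-word codebook.
img = rng.random((32, 32))
W = rng.standard_normal((8, 15))
codes = binary_codes(pixel_difference_vectors(img), W)
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(codes)
hist = np.bincount(codebook.predict(codes), minlength=64)
feature = hist / hist.sum()  # final histogram representation
print(feature.shape)         # (64,)
```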

185 citations


Journal ArticleDOI
TL;DR: A deep learning-aided SCMA (D-SCMA) in which the codebook that minimizes the bit error rate (BER) is adaptively constructed, and a decoding strategy is learned using a deep neural network-based encoder and decoder.
Abstract: Sparse code multiple access (SCMA) is a promising code-based non-orthogonal multiple-access technique that can provide improved spectral efficiency and massive connectivity meeting the requirements of 5G wireless communication systems. We propose a deep learning-aided SCMA (D-SCMA) in which the codebook that minimizes the bit error rate (BER) is adaptively constructed, and a decoding strategy is learned using a deep neural network-based encoder and decoder. One benefit of D-SCMA is that the construction of an efficient codebook can be achieved in an automated manner, which is generally difficult due to the non-orthogonality and multi-dimensional traits of SCMA. We use simulations to show that our proposed scheme provides a lower BER with a smaller computation time than conventional schemes.
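As a rough illustration of the autoencoder idea (not the paper's architecture), the following PyTorch sketch jointly trains per-user encoders, whose outputs superimpose on shared resources, and a joint decoder over an AWGN channel. The layer sizes are assumptions, and SCMA's sparse resource mapping (each user occupying only a subset of resources) is omitted for brevity.

```python
# Minimal autoencoder-style sketch of learned multiple access: each user's
# encoder maps a one-hot symbol to a codeword over K shared resources,
# codewords superimpose, AWGN is added, and a joint decoder recovers all
# users' symbols. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

J, K, M = 6, 4, 4          # users, resources, codebook size per user
encoders = nn.ModuleList(
    [nn.Sequential(nn.Linear(M, 16), nn.ReLU(), nn.Linear(16, 2 * K))
     for _ in range(J)])   # 2*K = real+imag parts of a K-dim codeword
decoder = nn.Sequential(nn.Linear(2 * K, 64), nn.ReLU(), nn.Linear(64, J * M))

def forward(symbols_onehot, snr_db=10.0):
    # Superimpose all users' codewords on the shared resources.
    x = sum(enc(symbols_onehot[:, j]) for j, enc in enumerate(encoders))
    noise = torch.randn_like(x) * 10 ** (-snr_db / 20)
    return decoder(x + noise).view(-1, J, M)  # per-user symbol logits

# One training step on random symbols (cross-entropy per user).
opt = torch.optim.Adam([*encoders.parameters(), *decoder.parameters()], 1e-3)
syms = torch.randint(0, M, (128, J))
onehot = torch.nn.functional.one_hot(syms, M).float()
loss = nn.functional.cross_entropy(
    forward(onehot).reshape(-1, M), syms.reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
```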

180 citations


Journal ArticleDOI
TL;DR: This work proposes a novel codebook-based BIQA method that optimizes multistage discriminative dictionaries (MSDDs); the method has been evaluated on five databases, and experimental results confirm its superiority over existing relevant BIQA methods.
Abstract: State-of-the-art algorithms for blind image quality assessment (BIQA) typically have two categories. The first category approaches extract natural scene statistics (NSS) as features based on the statistical regularity of natural images. The second category approaches extract features by feature encoding with respect to a learned codebook. However, several problems need to be addressed in existing codebook-based BIQA methods. First, the high-dimensional codebook-based features are memory-consuming and have the risk of over-fitting. Second, there is a semantic gap between the constructed codebook by unsupervised learning and image quality. To address these problems, we propose a novel codebook-based BIQA method by optimizing multistage discriminative dictionaries (MSDDs). To be specific, MSDDs are learned by performing the label consistent K-SVD (LC-KSVD) algorithm in a stage-by-stage manner. For each stage, a new quality consistency constraint called “quality-discriminative regularization” term is introduced and incorporated into the reconstruction error term to form a unified objective function, which can be effectively solved by LC-KSVD for discriminative dictionary learning. Then, the latter stage takes the reconstruction residual data in the former stage as input based on which LC-KSVD is repeatedly performed until the final stage is reached. Once the MSDDs are learned, multistage feature encoding is performed to extract feature codes. Finally, the feature codes are concatenated across all stages and aggregated over the entire image for quality prediction via regression. The proposed method has been evaluated on five databases and experimental results well confirm its superiority over existing relevant BIQA methods.
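The stage-by-stage residual encoding can be sketched as follows, with scikit-learn's DictionaryLearning standing in for LC-KSVD (which adds the label-consistency and quality-discriminative terms described above); data and dictionary sizes are toy assumptions.

```python
# Sketch of multistage encoding: each stage encodes the previous stage's
# reconstruction residual, and codes from all stages are concatenated.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))   # toy local features (rows = samples)

codes_all, resid = [], X
for stage in range(3):
    dl = DictionaryLearning(n_components=48, alpha=1.0,
                            transform_algorithm="lasso_lars",
                            random_state=0, max_iter=20)
    codes = dl.fit_transform(resid)          # sparse codes for this stage
    resid = resid - codes @ dl.components_   # pass residual to next stage
    codes_all.append(codes)

features = np.hstack(codes_all)  # concatenated multistage feature codes
print(features.shape)            # (200, 144)
```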

174 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: In this article, a quantization method is proposed to reduce the information loss from quantization by learning a symmetric codebook for particular weight subgroups, such that the hardware simplicity of the low-precision representations is preserved.
Abstract: Inference for state-of-the-art deep neural networks is computationally expensive, making them difficult to deploy on constrained hardware environments. An efficient way to reduce this complexity is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited entry codebook. For very low precisions, such as binary or ternary networks with 1-8-bit activations, the information loss from quantization leads to significant accuracy degradation due to large gradient mismatches between the forward and backward functions. In this paper, we introduce a quantization method to reduce this loss by learning a symmetric codebook for particular weight subgroups. These subgroups are determined based on their locality in the weight matrix, such that the hardware simplicity of the low-precision representations is preserved. Empirically, we show that symmetric quantization can substantially improve accuracy for networks with extremely low-precision weights and activations. We also demonstrate that this representation imposes minimal or no additional hardware cost relative to more coarse-grained approaches. Source code is available at https://www.github.com/julianfaraone/SYQ.
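A minimal sketch of the core idea, symmetric subgroup-wise quantization, is given below. Rows of the weight matrix serve as one possible locality-based subgroup, and each subgroup's scale is set to its mean absolute weight, a common heuristic in related ternary work; the paper instead learns the scales during training.

```python
# Hedged sketch: each weight row (one possible locality-based subgroup) gets
# its own symmetric binary codebook {-alpha_g, +alpha_g}.
import numpy as np

def quantize_symmetric(W):
    """Quantize each row of W to {-alpha, +alpha} with a per-row alpha."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # one scale per subgroup
    return alpha * np.sign(W), alpha

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
Wq, alpha = quantize_symmetric(W)
print(np.abs(W - Wq).mean())  # quantization error, kept per-subgroup
```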

135 citations


Journal ArticleDOI
TL;DR: This work proposes an angle-of-departure (AoD) adaptive subspace codebook for channel feedback in FDD massive MIMO systems and compares the proposed quantized feedback technique using the AoD-adaptive sub space codebook with a comparable analog feedback method.
Abstract: Channel feedback is essential in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. Unfortunately, prior work on multiuser MIMO has shown that the feedback overhead scales linearly with the number of base station (BS) antennas, which is large in massive MIMO systems. To reduce the feedback overhead, we propose an angle-of-departure (AoD) adaptive subspace codebook for channel feedback in FDD massive MIMO systems. Our key insight is to leverage the observation that path AoDs vary more slowly than the path gains. Within the angle coherence time, by utilizing the constant AoD information, the proposed AoD-adaptive subspace codebook is able to quantize the channel vector in a more accurate way. From the performance analysis, we show that the feedback overhead of the proposed codebook only scales linearly with a small number of dominant (path) AoDs instead of the large number of BS antennas. Moreover, we compare the proposed quantized feedback technique using the AoD-adaptive subspace codebook with a comparable analog feedback method. Extensive simulations show that the proposed AoD-adaptive subspace codebook achieves good channel feedback quality, while requiring low overhead.
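A toy numpy sketch of the subspace-codebook principle follows: the channel is quantized in the P-dimensional subspace spanned by the steering vectors of the dominant AoDs, so the codebook size depends on P rather than on the number of antennas N. The codebook here is random vector quantization within the subspace, and the channel norm is assumed to be fed back separately; both are simplifying assumptions.

```python
# Sketch: within the angle coherence time the dominant path AoDs are treated
# as known, so only the low-dimensional path-gain vector is quantized.
import numpy as np

rng = np.random.default_rng(0)
N, P, B = 64, 3, 8                       # BS antennas, dominant paths, bits

def steering(theta, N):
    """ULA steering vector with half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

aods = rng.uniform(-np.pi / 3, np.pi / 3, P)
A = np.stack([steering(t, N) for t in aods], axis=1)  # N x P subspace basis
h = A @ (rng.standard_normal(P) + 1j * rng.standard_normal(P))  # channel

# Random codebook living in the P-dim subspace: 2^B codewords of dim P,
# so the feedback overhead scales with P, not with N.
C = rng.standard_normal((2 ** B, P)) + 1j * rng.standard_normal((2 ** B, P))
C /= np.linalg.norm(C, axis=1, keepdims=True)

g = np.linalg.pinv(A) @ h                # subspace coefficients of h
best = np.argmax(np.abs(C.conj() @ g))   # codeword most aligned with g
g_hat = C[best] * (C[best].conj() @ g)   # project g onto that codeword
h_hat = A @ g_hat
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))  # quantization error
```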

107 citations


Journal ArticleDOI
TL;DR: The simulation and analytical results show that the presented SCMA codebook outperforms the existing codebooks and low-density signature, and the proposed design is more efficient for the SCMA Codebook with large size and/or high dimension.
Abstract: In this paper, a novel codebook design method for sparse code multiple access (SCMA) is proposed and an analytical framework to evaluate the bit error rate (BER) performance is developed. In particular, to meet the codebook design criteria based on the pairwise error probability, a novel codebook with large minimum Euclidean distance employing the star quadrature amplitude modulation signal constellations is designed. In addition, with the aid of the phase distribution of the presented SCMA constellations, the BER of the SCMA system over the downlink Rayleigh fading channel is obtained in closed-form expression. The simulation and analytical results show that the presented SCMA codebook outperforms the existing codebooks and low-density signature, and the proposed design is more efficient for the SCMA codebook with large size and/or high dimension. Moreover, the derived theoretical BER results match the simulation results well, especially in the high signal-to-noise ratio regime.
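For reference, a star-QAM constellation of the kind employed above consists of points on concentric rings; the snippet below builds one and reports its minimum Euclidean distance. The ring radii, point counts, and phase offset are assumptions, whereas the paper optimizes these quantities against the pairwise-error-probability design criteria.

```python
# Illustrative star-QAM construction: points on concentric rings, with an
# optional phase offset between rings. Parameters are assumptions only.
import numpy as np
from itertools import combinations

def star_qam(radii=(1.0, 2.0), points_per_ring=4, offset=True):
    """Concentric-ring constellation; optional phase offset between rings."""
    pts = []
    for i, r in enumerate(radii):
        phase0 = (np.pi / points_per_ring) * (i % 2) if offset else 0.0
        angles = phase0 + 2 * np.pi * np.arange(points_per_ring) / points_per_ring
        pts.extend(r * np.exp(1j * angles))
    return np.array(pts)

const = star_qam()
dmin = min(abs(a - b) for a, b in combinations(const, 2))
print(len(const), dmin)   # 8 points; minimum Euclidean distance
```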

85 citations


Journal ArticleDOI
TL;DR: Performance comparisons show that the proposed approach achieves a superior tradeoff between estimation performance and training penalty over the state-of-the-art alternatives.
Abstract: The existing channel estimation methods for millimeter-wave communications, e.g., hierarchical search and compressed sensing, either acquire only one single multipath component (MPC) or require considerably high training overhead. To realize fast yet accurate channel estimation, we propose a multipath decomposition and recovery approach in this paper. The proposed approach has two stages. In the first stage, instead of directly searching the real MPCs, we decompose each real MPC into several virtual MPCs and acquire the virtual MPCs by using the hierarchical search based on a normal-resolution codebook. Then, in the second stage, the real MPCs are recovered from the acquired virtual MPCs in the first stage, which turns out to be a sparse reconstruction problem, where the size of the dictionary matrix is greatly reduced by exploiting the results of the virtual multipath acquisition. Moreover, to make the proposed approach applicable for both analog and hybrid beamforming/combining devices with strict constant-modulus constraint, we particularly design a codebook for the hierarchical search by using an enhanced subarray technique, and the codebook is also applicable in other hierarchical search methods. Performance comparisons show that the proposed approach achieves a superior tradeoff between estimation performance and training penalty over the state-of-the-art alternatives.

73 citations


Proceedings ArticleDOI
Bin Liu, Yue Cao, Mingsheng Long, Jianmin Wang, Jingdong Wang
15 Oct 2018
TL;DR: Deep Triplet Quantization (DTQ), a novel approach to learning deep quantization models from the similarity triplets, can generate high-quality and compact binary codes, which yields state-of-the-art image retrieval performance on three benchmark datasets, NUS-WIDE, CIFAR-10, and MS-COCO.
Abstract: Deep hashing establishes efficient and effective image retrieval by end-to-end learning of deep representations and hash codes from similarity data. We present a compact coding solution, focusing on deep learning to quantization approach that has shown superior performance over hashing solutions for similarity retrieval. We propose Deep Triplet Quantization (DTQ), a novel approach to learning deep quantization models from the similarity triplets. To enable more effective triplet training, we design a new triplet selection approach, Group Hard, that randomly selects hard triplets in each image group. To generate compact binary codes, we further apply a triplet quantization with weak orthogonality during triplet training. The quantization loss reduces the codebook redundancy and enhances the quantizability of deep representations through back-propagation. Extensive experiments demonstrate that DTQ can generate high-quality and compact binary codes, which yields state-of-the-art image retrieval performance on three benchmark datasets, NUS-WIDE, CIFAR-10, and MS-COCO.

70 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the spectral efficiencies of two typical hybrid precoding structures, i.e., the sub-connected structure and the fully connected structure, under a more realistic hardware network model, particularly, with inevitable dissipation.
Abstract: In this paper, we study the hybrid precoding structures over limited feedback channels for massive multiuser multiple-input multiple-output (MIMO) systems. We focus on the system performance of hybrid precoding under a more realistic hardware network model, particularly one with inevitable dissipation. The effect of quantized analog and digital precoding is characterized. We investigate the spectral efficiencies of two typical hybrid precoding structures, i.e., the sub-connected structure and the fully connected structure. It is revealed that increasing signal power can compensate for the performance loss incurred by quantized analog precoding. In addition, by capturing the nature of the effective channels for hybrid processing, we employ a channel correlation-based codebook and demonstrate that the codebook shows a great advantage over the conventional random vector quantization codebook. It is also discovered that, if the channel correlation-based codebook is utilized, the sub-connected structure always outperforms the fully connected structure in either massive MIMO or low signal-to-noise ratio scenarios; otherwise, the fully connected structure achieves better performance. Simulation results under both Rayleigh fading channels and millimeter wave (mm-wave) channels verify the conclusions above.

61 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed two waveform strategies relying on limited feedback for multi-antenna multi-sine WPT over frequency-selective channels, where the energy transmitter (ET) transmits over multiple timeslots, each time with a different waveform precoder from a codebook, and the energy receiver (ER) reports the index of the precoder in the codebook that leads to the largest harvested energy.
Abstract: Waveform design is a key technique to jointly exploit a beamforming gain, the channel frequency selectivity, and the rectifier nonlinearity, so as to enhance the end-to-end power transfer efficiency of wireless power transfer (WPT). Those waveforms have been designed assuming perfect channel state information at the transmitter. This paper proposes two waveform strategies relying on limited feedback for multi-antenna multi-sine WPT over frequency-selective channels. In the waveform selection strategy, the energy transmitter (ET) transmits over multiple timeslots, each time with a different waveform precoder from a codebook, and the energy receiver (ER) reports the index of the precoder in the codebook that leads to the largest harvested energy. In the waveform refinement strategy, the ET sequentially transmits two waveforms in each stage, and the ER reports one feedback bit indicating an increase/decrease in the harvested energy during this stage. Based on multiple one-bit feedback, the ET successively refines waveform precoders in a tree-structured codebook over multiple stages. By employing the framework of the generalized Lloyd's algorithm, novel algorithms are proposed for both strategies to optimize the codebooks in both space and frequency domains. The proposed limited feedback-based waveform strategies are shown to outperform a set of baselines, achieving higher harvested energy.
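The waveform selection strategy reduces to a sweep-measure-feedback loop, sketched below with a toy fourth-order rectifier model and a random codebook; the paper instead optimizes the codebook with a generalized Lloyd's algorithm, so everything here is an illustrative assumption.

```python
# Minimal sketch of waveform *selection*: the ET sweeps the codebook over
# successive timeslots, the ER measures harvested energy for each precoder,
# and feeds back the index of the best one. Toy rectifier model assumed.
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 4, 8, 16        # antennas, sine tones, codebook size

H = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # channel

def harvested_energy(precoder, H):
    """Toy nonlinear rectifier: 2nd-order term plus a 4th-order term."""
    y = np.einsum('nm,nm->n', H, precoder)       # per-tone received signal
    p2 = (np.abs(y) ** 2).sum()
    return p2 + 0.1 * p2 ** 2

# Codebook of unit-power multi-sine precoders (random here; the paper
# optimizes the codebook with a generalized Lloyd's algorithm).
codebook = rng.standard_normal((L, N, M)) + 1j * rng.standard_normal((L, N, M))
codebook /= np.linalg.norm(codebook, axis=(1, 2), keepdims=True)

energies = [harvested_energy(w, H) for w in codebook]  # ET sweeps codebook
feedback_index = int(np.argmax(energies))              # ER's log2(L)-bit report
print(feedback_index, energies[feedback_index])
```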

60 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a power domain sparse code multiple access (PSMA) for 5G networks, where the same codebook can be reused in the coverage area of each base station more than one time.
Abstract: In this paper, a new approach for multiple access in the fifth generation (5G) of cellular networks, called power domain sparse code multiple access (PSMA), is proposed. In PSMA, we adopt both the power domain and the code domain to transmit multiple users' signals over a subcarrier simultaneously. In such a model, the same sparse code multiple-access (SCMA) codebook can be used by multiple users, where, for these users, the power domain non-orthogonal multiple access (PD-NOMA) technique is used to send signals non-orthogonally. Although the signals of different SCMA codebooks can be detected orthogonally, the same codebook used by multiple users produces interference over these users. With PSMA, a codebook can be reused in the coverage area of each base station more than once, which can improve the spectral efficiency. We investigate the signal model as well as the receiver and transmitter of the PSMA method. On the receiver side, we propose a message passing algorithm-based successive interference cancellation detector to detect the signal of each user. To evaluate the performance of PSMA, we consider a heterogeneous cellular network. In this case, our design objective is to maximize the system sum rate of the network subject to some system-level and QoS constraints such as transmit power constraints. We formulate the proposed resource allocation problem as an optimization problem and solve it by successive convex approximation techniques. Moreover, we compare PSMA with SCMA and PD-NOMA from the performance and computational complexity perspectives. Finally, the effectiveness of the proposed approach is investigated using numerical results. We show that, with a reasonable increase in complexity, PSMA can improve the spectral efficiency by about 50% compared with SCMA and PD-NOMA.

Proceedings ArticleDOI
17 Jun 2018
TL;DR: This work considers the storage-retrieval rate tradeoff in private information retrieval systems using a Shannon-theoretic approach and proposes a coding scheme based on random codebook generation, joint typicality encoding, and the binning technique for the canonical two-message two-database case.
Abstract: We consider the storage-retrieval rate tradeoff in private information retrieval systems using a Shannon-theoretic approach. Our focus is on the canonical two-message two-database case, for which a coding scheme based on random codebook generation, joint typicality encoding, and the binning technique is proposed. It is first shown that when the retrieval rate is kept optimal, the proposed non-linear scheme uses less storage than the optimal linear scheme. Since the other extreme point corresponding to using the minimum storage requires both messages to be retrieved, the performance through space-sharing of the two points can also be achieved. However, using the proposed scheme, further improvement can be achieved over this simple strategy. Although the random-coding based scheme has a diminishing but nonzero probability of error, the coding error can be eliminated if variable-length codes are allowed. Novel outer bounds are finally provided and used to establish the superiority of the non-linear codes over linear codes.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: An online hashing scheme, termed Hadamard Codebook based Online Hashing (HCOH), is proposed to solve the above problems towards robust and supervised online hashing; it can be embedded with supervised labels and is not limited to a predefined number of categories.
Abstract: In recent years, binary code learning, a.k.a. hashing, has received extensive attention in large-scale multimedia retrieval. It aims to encode high-dimensional data points into binary codes, so that the original high-dimensional metric space can be efficiently approximated via Hamming space. However, most existing hashing methods adopt offline batch learning, which is not suitable for handling incremental datasets with streaming data or new instances. In contrast, the robustness of existing online hashing methods remains an open problem, and the embedding of supervised/semantic information hardly boosts the performance of online hashing, mainly due to the defect of unknown category numbers in supervised learning. In this paper, we propose an online hashing scheme, termed Hadamard Codebook based Online Hashing (HCOH), which aims to solve the above problems towards robust and supervised online hashing. In particular, we first assign an appropriate high-dimensional binary code to each class label, generated randomly from Hadamard codes. Subsequently, LSH is adopted to reduce the length of these Hadamard codes to match the number of hash bits, which can adapt the predefined binary codes online and theoretically guarantees the semantic similarity. Finally, we consider the setting of stochastic data acquisition, which facilitates our method to efficiently learn the corresponding hashing functions via stochastic gradient descent (SGD) online. Notably, the proposed HCOH can be embedded with supervised labels and is not limited to a predefined category number. Extensive experiments on three widely used benchmarks demonstrate the merits of the proposed scheme over the state-of-the-art methods.
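The Hadamard-target construction can be sketched in a few lines: each class receives a row of a Hadamard matrix as its target code, a random projection (the LSH step) shortens it to the desired number of bits, and a linear hash function is fit online by SGD. The loss, learning rate, and toy data below are assumptions, not the paper's choices.

```python
# Sketch of the HCOH idea with simplified, assumed training details.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
d, bits, n_classes = 32, 16, 10

Hmat = hadamard(64).astype(float)            # 64 >= n_classes target codes
targets64 = Hmat[rng.choice(64, n_classes, replace=False)]
R = rng.standard_normal((64, bits))          # LSH projection to 'bits' dims
targets = np.sign(targets64 @ R)             # per-class target binary codes

W = np.zeros((d, bits))                      # linear hash function
for step in range(1000):                     # streaming data, one at a time
    y = rng.integers(n_classes)
    x = rng.standard_normal(d) + 0.5 * y     # toy class-dependent sample
    pred = np.tanh(x @ W)                    # smooth surrogate of sign()
    grad = np.outer(x, (pred - targets[y]) * (1 - pred ** 2))
    W -= 0.01 * grad                         # SGD update

code = (rng.standard_normal(d) @ W > 0).astype(int)  # hash of a new point
print(code)
```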

Journal ArticleDOI
TL;DR: This paper designs the multiuser codebook and the advanced decoding from the perspective of the theoretical capacity and the system feasibility, and proposes an iterative joint detection and decoding scheme with only partial inner iterations, which exhibits significant performance gain over the traditional one with separate detection and decode.
Abstract: Sparse code multiple access (SCMA) is a promising non-orthogonal air-interface technology for its ability to support massive connections. In this paper, we design the multiuser codebook and the advanced decoding from the perspective of the theoretical capacity and the system feasibility. First, different from the lattice-based constellation in point-to-point channels, we propose a novel codebook for maximizing the constellation constrained capacity. We optimize a series of 1-D superimposed constellations to construct multi-dimensional codewords. An effective dimensional permutation switching algorithm is proposed to further obtain the capacity gain. Consequently, it shows that the performance of the proposed codebook approaches the Shannon limit and achieves significant gains over the other existing ones. Furthermore, we provide a symbol-based extrinsic information transfer (EXIT) tool to analyze the convergence of SCMA iterative detection, where the complex codewords are considered in modeling the a priori probabilities instead of assuming binary inputs as in the previous literature. Finally, to approach the capacity, we develop a low-density parity-check code-based SCMA receiver. Most importantly, by utilizing the EXIT charts, we propose an iterative joint detection and decoding scheme with only partial inner iterations, which exhibits significant performance gain over the traditional one with separate detection and decoding.

Journal ArticleDOI
TL;DR: In this paper, an advanced directional precoding structure for multi-user multi-input multi-output (MIMO) transmissions for millimeter-wave systems with a hybrid precoding architecture at the base station is proposed.
Abstract: The focus of this paper is on multi-user multi-input multi-output transmissions for millimeter-wave systems with a hybrid precoding architecture at the base station. To enable multiuser transmissions, the base station uses a cell-specific codebook of beamforming vectors over an initial beam alignment phase. Each user uses a user-specific codebook of beamforming vectors to learn the top-$P$ (where $P \geq 1$) beam pairs in terms of the observed signal-to-noise ratio (SNR) in a single-user setting. The top-$P$ beam indices along with their SNRs are fed back from each user, and the base station leverages this information to generate beam weights for simultaneous transmissions. A typical method to generate the beam weights is to use only the best beam for each user and either steer energy along this beam, or to utilize this information to reduce multi-user interference. The other beams are used as fall-back options to address blockage or mobility. Such an approach completely discards information learned about the channel condition(s), even though each user feeds back this information. With this background, this paper develops an advanced directional precoding structure for simultaneous transmissions at the cost of an additional marginal feedback overhead. This construction relies on three main innovations: first, additional feedback to allow the base station to reconstruct a rank-$P$ approximation of the channel matrix between it and each user; second, a zero-forcing structure that leverages this information to combat multi-user interference by remaining agnostic of the receiver beam knowledge in the precoder design; and third, a hybrid precoding architecture that allows both amplitude and phase control at low complexity and cost to allow the implementation of the zero-forcing structure. Numerical studies show that the proposed scheme results in a significant sum rate performance improvement over naive schemes, even with a coarse initial beam alignment codebook.

Journal ArticleDOI
TL;DR: A deep learning framework for the design of on-off keying (OOK) based binary signaling transceiver in dimmable visible light communication (VLC) systems using an autoencoder (AE) approach to learn a neural network of the encoder-decoder pair that reconstructs the output identical to an input.
Abstract: This paper develops a deep learning framework for the design of on-off keying (OOK) based binary signaling transceiver in dimmable visible light communication (VLC) systems. The dimming support for the OOK optical signal is achieved by adjusting the number of ones in a binary codeword, which boils down to a combinatorial design problem for the codebook of a constant weight code (CWC) over signal-dependent noise channels. To tackle this challenge, we employ an autoencoder (AE) approach to learn a neural network of the encoder-decoder pair that reconstructs the output identical to an input. In addition, optical channel layers and binarization techniques are introduced to reflect the physical and discrete nature of the OOK-based VLC systems. The VLC transceiver is designed and optimized via the end-to-end training procedure for the AE. Numerical results verify that the proposed transceiver performs better than baseline CWC schemes.
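For context, the combinatorial object being learned is a constant weight code (CWC): a dimming target of w/n is met by restricting codewords to Hamming weight w. The brute-force enumeration below only illustrates the codebook's size and structure; the paper replaces such hand design with the learned autoencoder.

```python
# With OOK, a dimming level of w/n is met by using only codewords of
# Hamming weight w. Brute-force enumeration, for illustration only.
from itertools import combinations

def cwc(n, w):
    """All length-n binary codewords of weight w (dimming ratio w/n)."""
    words = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words

book = cwc(8, 3)      # 3/8 = 37.5% dimming level
print(len(book))      # C(8,3) = 56 codewords -> floor(log2 56) = 5 data bits
```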

Journal ArticleDOI
TL;DR: A variety of techniques are described to improve sparse superposition codes with approximate message passing (AMP) decoding, and these include an iterative algorithm for SPARC power allocation, guidelines for choosing codebook parameters, and estimating a critical decoding parameter online instead of precomputation.
Abstract: Sparse superposition codes are a recent class of codes introduced by Barron and Joseph for efficient communication over the AWGN channel. With an appropriate power allocation, these codes have been shown to be asymptotically capacity-achieving with computationally feasible decoding. However, a direct implementation of the capacity-achieving construction does not give good finite length error performance. In this paper, we consider sparse superposition codes with approximate message passing (AMP) decoding, and describe a variety of techniques to improve their finite length performance. These include an iterative algorithm for SPARC power allocation, guidelines for choosing codebook parameters, and estimating a critical decoding parameter online instead of precomputation. We also show how partial outer codes can be used in conjunction with AMP decoding to obtain a steep waterfall in the error performance curves. We compare the error performance of AMP-decoded sparse superposition codes with coded modulation using LDPC codes from the WiMAX standard.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: In this paper, the authors proposed an adaptive codebook optimization for IEEE 802.11ad devices to optimize the transmit beam patterns for the current channel. But the authors do not expose the CSI directly, they generate a codebook with phase-shifted probing beams that enables them to obtain the CSI by combining strategically selected magnitude measurements.
Abstract: Beamforming is vital to overcome the high attenuation in wireless millimeter-wave networks. It enables nodes to steer their antennas in the direction of communication. To cope with complexity and overhead, the IEEE 802.11ad standard uses a sector codebook with distinct steering directions. In current off-the-shelf devices, we find codebooks with generic pre-defined beam patterns. While this approach is simple and robust, the antenna modules that are typically deployed in such devices are capable of generating much more precise antenna beams. In this paper, we adaptively adjust the sector codebook of IEEE 802.11ad devices to optimize the transmit beam patterns for the current channel. To achieve this, we propose a mechanism to extract full channel state information (CSI) regarding phase and magnitude from coarse signal strength readings on off-the-shelf IEEE 802.11ad devices. Since such devices do not expose the CSI directly, we generate a codebook with phase-shifted probing beams that enables us to obtain the CSI by combining strategically selected magnitude measurements. Using this CSI, transmitters dynamically compute a transmit beam pattern that maximizes the signal strength at the receiver. Thereby, we automatically exploit reflectors in the environment and improve the received signal quality. Our implementation of this mechanism on off-the-shelf devices demonstrates that adaptive codebook optimization improves throughput by about a factor of two in typical real-world scenarios.
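The magnitude-to-CSI step can be illustrated with a standard interferometric identity: pairing a reference element with each other element under four relative phase shifts makes the cross term recoverable from magnitude-only readings. The sketch below assumes an idealized noiseless per-element model and omits the actual 802.11ad probing-beam bookkeeping.

```python
# Recover complex per-element channel coefficients (up to the reference
# element's phase) from magnitude-squared readings alone.
import numpy as np

rng = np.random.default_rng(0)
N = 8
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # true channel

def rss(weights):
    """Magnitude-squared readout, the only observable on the device."""
    return np.abs(weights @ h) ** 2

h_est = np.zeros(N, complex)
h_est[0] = np.sqrt(rss(np.eye(N)[0]))       # reference, phase fixed to 0
for n in range(1, N):
    m = []
    for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2):
        w = np.zeros(N, complex)
        w[0], w[n] = 1.0, np.exp(1j * phi)  # probing beam: ref + shifted elem
        m.append(rss(w))
    z = (m[0] - m[2]) / 4 + 1j * (m[3] - m[1]) / 4   # z = conj(h0) * hn
    h_est[n] = z / np.conj(h_est[0])

# Matches h up to the (unobservable) global phase of the reference element.
print(np.allclose(h_est, h * np.exp(-1j * np.angle(h[0]))))
```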

Proceedings ArticleDOI
17 Jun 2018
TL;DR: This work considers binary codebooks that allow for unique string reconstruction and proposes a new method, termed repeat replacement, to create the codebook, to solve the problem of coded string reconstruction from multiset substring spectra.
Abstract: The problem of reconstructing strings from their substring spectra has a long history and in its most simple incarnation asks for determining under which conditions the spectrum uniquely determines the string. We study the problem of coded string reconstruction from multiset substring spectra, where the strings are restricted to lie in some codebook. In particular, we consider binary codebooks that allow for unique string reconstruction and propose a new method, termed repeat replacement, to create the codebook. Our contributions include algorithmic solutions for repeat replacement and constructive redundancy bounds for the underlying coding schemes. The study is motivated by applications in DNA-based data storage systems that use high throughput readout sequencers.
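A small helper makes the underlying object concrete: the multiset substring spectrum of a string. The example also exhibits two distinct strings with identical spectra, which is precisely the ambiguity a restricted codebook must rule out.

```python
# Multiset substring spectrum: all length-k substrings with multiplicity.
from collections import Counter

def spectrum(s, k):
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

print(spectrum("010110", 3))
# True: two distinct strings share a spectrum, so unrestricted strings are
# not uniquely reconstructable -- hence the need for a codebook.
print(spectrum("011010", 3) == spectrum("010110", 3))
```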

Posted Content
TL;DR: In this paper, a multi-armed bandit framework is used to develop online learning algorithms for beam pair selection and refinement, which can achieve on average 1dB gain over the exhaustive search (over 271x271 beam pairs) on the unrefined codebook with a training budget of only 30 beam pairs.
Abstract: Accurate beam alignment is essential for beam-based millimeter wave communications. Conventional beam sweeping solutions often have large overhead, which is unacceptable for mobile applications like vehicle-to-everything. Learning-based solutions that leverage sensor data like position to identify good beam directions are one approach to reduce the overhead. Most existing solutions, though, are supervised-learning where the training data is collected beforehand. In this paper, we use a multi-armed bandit framework to develop online learning algorithms for beam pair selection and refinement. The beam pair selection algorithm learns coarse beam directions in some predefined beam codebook, e.g., in discrete angles separated by the 3dB beamwidths. The beam refinement fine-tunes the identified directions to match the peak of the power angular spectrum at that position. The beam pair selection uses the upper confidence bound (UCB) with a newly proposed risk-aware feature, while the beam refinement uses a modified optimistic optimization algorithm. The proposed algorithms learn to recommend good beam pairs quickly. When using 16x16 arrays at both the transmitter and receiver, it can achieve on average 1dB gain over the exhaustive search (over 271x271 beam pairs) on the unrefined codebook within 100 time-steps with a training budget of only 30 beam pairs.
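The beam pair selection step is a standard stochastic multi-armed bandit; the sketch below runs plain UCB over toy beam-pair rewards. The paper's risk-aware UCB variant and the optimistic-optimization refinement stage are omitted, and the reward model is an assumption.

```python
# Plain UCB over toy beam-pair rewards (received power + noise).
import numpy as np

rng = np.random.default_rng(0)
n_beams = 16
true_gain = rng.random(n_beams)             # unknown mean power per beam pair

counts = np.zeros(n_beams)
means = np.zeros(n_beams)
for t in range(1, 201):
    if t <= n_beams:                        # try every arm once
        arm = t - 1
    else:                                   # UCB index
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = true_gain[arm] + 0.1 * rng.standard_normal()
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print(int(np.argmax(counts)), int(np.argmax(true_gain)))  # learned vs. true
```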

Proceedings ArticleDOI
01 Jun 2018
TL;DR: The proposed method significantly outperforms state-of-the-art methods on CPU and GPU for high dimensional nearest neighbor queries on billion-scale datasets in terms of query time and accuracy regardless of the batch size.
Abstract: We present a new method for Product Quantization (PQ) based approximated nearest neighbor search (ANN) in high dimensional spaces. Specifically, we first propose a quantization scheme for the codebook of coarse quantizer, product quantizer, and rotation matrix, to reduce the cost of accessing these codebooks. Our approach also combines a highly parallel k-selection method, which can be fused with the distance calculation to reduce the memory overhead. We implement the proposed method on Intel HARPv2 platform using OpenCL-FPGA. The proposed method significantly outperforms state-of-the-art methods on CPU and GPU for high dimensional nearest neighbor queries on billion-scale datasets in terms of query time and accuracy regardless of the batch size. To our best knowledge, this is the first work to demonstrate FPGA performance superior to CPU and GPU on high-dimensional, large-scale ANN datasets.
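For orientation, plain product quantization, the primitive the paper's FPGA pipeline accelerates, can be sketched as follows: split vectors into sub-spaces, run k-means per sub-space, store centroid indices, and answer queries with per-sub-space lookup tables (asymmetric distance computation). The coarse quantizer and rotation matrix from the paper are omitted here.

```python
# Plain product quantization: encode with per-sub-space k-means, search with
# per-sub-space distance lookup tables.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d, m, k = 32, 4, 16                      # dim, sub-spaces, centroids each
X = rng.standard_normal((1000, d)).astype(np.float32)

sub = d // m
quantizers = [KMeans(k, n_init=4, random_state=0).fit(X[:, i*sub:(i+1)*sub])
              for i in range(m)]
codes = np.stack([q.predict(X[:, i*sub:(i+1)*sub])
                  for i, q in enumerate(quantizers)], axis=1)  # centroid ids

def search(query, topn=5):
    # Asymmetric distance: per-sub-space table, then gather-and-sum.
    tables = [((query[i*sub:(i+1)*sub] - q.cluster_centers_) ** 2).sum(1)
              for i, q in enumerate(quantizers)]
    dist = sum(tables[i][codes[:, i]] for i in range(m))
    return np.argsort(dist)[:topn]

print(search(X[0]))   # the vector itself should rank near the top
```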

Journal ArticleDOI
TL;DR: A novel codebook construction based on the CUR-decomposition technique is proposed to reduce the dimensionality problem in the channel correlation matrix (rotation matrix); it can be applied in fifth-generation massive-antenna multi-user systems with over a hundred antenna elements.
Abstract: Millimeter wave (mm-Wave) communications are emerging to meet the increasing demand for high transmission data rates in high user density areas. Meanwhile, the mm-Wave base station (BS) needs to employ a large number of antenna elements to increase the gain as well as serve a huge number of users. However, a vast number of antenna elements causes a dimensionality problem in the channel correlation matrix (rotation matrix). Therefore, we propose a novel codebook construction design based on the CUR-decomposition technique to reduce the dimensionality problem. In this paper, the original correlation matrix is decomposed into the product of three low-dimension matrices ($\mathbf{C}$, $\mathbf{U}$, and $\mathbf{R}$). The new rotated codebook is then constructed from the new rotation matrix. Moreover, we evaluate the new decomposition matrix against the original matrix in terms of compression ratio and mismatch error. We also provide the achievable sum rate capacities for singular value decomposition, zero forcing, and matched filter techniques to compare with the proposed method. Furthermore, the system capacity enhancement related to the number of antenna elements and the required feedback bits is analyzed. Simulation results show that the proposed method achieves much better system performance since the dimensionality problem is solved. The proposed method can be applied in the fifth generation massive antenna multi-user system with over a hundred antenna elements.
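A minimal CUR sketch on a toy low-rank correlation matrix is shown below: sampled columns $\mathbf{C}$, sampled rows $\mathbf{R}$, and the pseudo-inverse of their intersection as $\mathbf{U}$. Uniform sampling is used for simplicity; the paper's construction of the rotated codebook from the decomposition is not reproduced.

```python
# CUR approximation of a low-rank correlation matrix: M ~ C @ U @ R, with
# U the pseudo-inverse of the row/column intersection.
import numpy as np

rng = np.random.default_rng(0)
N, r = 100, 12                               # matrix size, sampled rows/cols

A = rng.standard_normal((N, 4))
M = A @ A.T                                  # toy rank-4 correlation matrix

cols = rng.choice(N, r, replace=False)
rows = rng.choice(N, r, replace=False)
C = M[:, cols]                               # N x r sampled columns
R = M[rows, :]                               # r x N sampled rows
U = np.linalg.pinv(M[np.ix_(rows, cols)])    # r x r linking matrix

M_hat = C @ U @ R
print(np.linalg.norm(M - M_hat) / np.linalg.norm(M))  # small mismatch error
```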


Journal ArticleDOI
TL;DR: An efficient high-dimensional codebook design is conceived for sparse code multiple access systems that has the compelling benefit that its power efficiency is monotonically increased with its dimensionality.
Abstract: An efficient high-dimensional codebook design is conceived for sparse code multiple access systems. This generalized technique has the compelling benefit that its power efficiency is monotonically increased with its dimensionality. A striking further practical benefit is that this increased power efficiency is achieved without increasing the per-symbol detection complexity.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed EEDDS outperforms the traditional data delivery scheme in terms of energy consumption when the applied baseband encoding or carrier-modulation schemes exhibit the ECD.
Abstract: Wireless sensor networks (WSNs) have been widely deployed in intelligent transportation systems. A battery-free WSN (BF-WSN) scavenges energy from the environment. In a BF-WSN whose nodes harvest radio frequency (RF) energy, delivering data with low energy consumption is important for the BF-WSN nodes to have a longer runtime. Some existing baseband encoding or carrier-modulation schemes exhibit an energy consumption disparity (ECD) between transmitting bit 0 and bit 1. An example is FM0 (bi-phase space) baseband encoding, which is widely applied in the backscattering communications of radio frequency identification systems and the wireless identification sensing platform. With FM0, transmitting bit 0 consumes much greater energy than bit 1. To achieve energy saving in delivering data with the ECD, we formulate in this paper the optimization problem (OP) that maximizes the percentage of energy saving. The solution of the OP leads to the optimal energy-efficient codebook. Based on the derived codebook, we present the energy-efficient data delivery scheme (EEDDS), with which the sender transmits the codewords in the codebook and the receiver recovers the corresponding original data blocks using the codebook. Simulation results show that the proposed EEDDS outperforms the traditional data delivery scheme in terms of energy consumption when the applied baseband encoding or carrier-modulation schemes exhibit the ECD.

Journal ArticleDOI
TL;DR: This correspondence proposes a dual-function hybrid beamforming architecture, where the antenna array is split into sub-arrays that are separated by a sufficiently large distance so that each sub-array experiences independent fading, and shows that the architecture attains the dual functions of beamforming and diversity.
Abstract: In this correspondence, we propose a dual-function hybrid beamforming architecture, where the antenna array is split into sub-arrays that are separated by a sufficiently large distance so that each sub-array experiences independent fading. The proposed architecture attains the dual functions of beamforming and diversity. We then demonstrate that splitting the array into two sub-arrays provides the best performance in terms of the achievable rate, as a benefit of the diversity gain obtained in addition to the beamforming gain. However, the performance starts to degrade if the array is partitioned into more than two sub-arrays because of diminishing additional diversity gains, which fail to compensate for the beamforming gain erosion due to splitting the antenna arrays. Additionally, we analyze the so-called discrete Fourier transform-mutually unbiased bases (DFT-MUB) aided codebook invoked for the conceived design, which imposes an appealingly low complexity. Explicitly, we show that for the proposed dual-function sub-array-connected design, the DFT-MUB assisted codebook outperforms the state-of-the-art precoding benchmarks and performs close to the optimal precoding matrix.

Journal ArticleDOI
TL;DR: Simulation results show that the developed joint iterative training method having a fast convergence can achieve similar array gain compared with the systems equipped with the continuous PSs, and the proposed hybrid precoding utilizing low-resolution PSs can offer a sum-rate comparable to the fixed-rank fully-digital multiple-input multiple-output systems, but with limited hardware cost and energy consumption.
Abstract: Large antenna array systems are favored in next-generation wireless communications, as it can offer multiplexing and array gains that enhance the system sum-rate. However, the large antenna array systems often necessitate the use of high-cost and power-hungry radio frequency (RF) devices. To reduce the hardware complexity and avoid the explicit high-dimensional channel estimation, we propose a joint iterative training based hybrid precoding using low-resolution phase shifters (PSs). Different from the existing works based on the predefined codebook, the iterative training is applied for the hybrid architectures. The iterative training converges to the dominant steering vectors that align with the direction of the largest channel gain, thus it can harvest more array gains than the predefined codebook method. In addition, the performance loss induced by the finite phase quantization is analytically investigated for multiple RF chains. Simulation results show that the developed joint iterative training method having a fast convergence can achieve similar array gain compared with the systems equipped with the continuous PSs. Furthermore, the proposed hybrid precoding utilizing low-resolution PSs can offer a sum-rate comparable to the fixed-rank fully-digital multiple-input multiple-output systems, but with limited hardware cost and energy consumption.

Journal ArticleDOI
TL;DR: A novel framework for seizure prediction is proposed by learning synchronization patterns, and bag-of-wave (BoWav) feature extraction is proposed for modeling the synchronization pattern of the electroencephalogram (EEG) signal.
Abstract: Epileptic seizure prediction has the potential to promote epilepsy care and treatment. However, seizure prediction accuracy does not yet satisfy application requirements. In this paper, a novel framework for seizure prediction is proposed by learning synchronization patterns. For better representation, bag-of-wave (BoWav) feature extraction is proposed for modeling the synchronization pattern of the electroencephalogram (EEG) signal. An interictal codebook and a preictal codebook, representing the local segments, are constructed by a clustering algorithm. Within a period of the EEG signal on all electrodes, local segments are projected onto the learned codebooks. The proposed feature expresses the synchronization pattern of the EEG signal as a histogram feature. Moreover, an extreme learning machine (ELM) is used to classify the sequence of features. Experiments are performed on the Kaggle seizure prediction challenge dataset and the CHB-MIT dataset. The experiment on CHB-MIT achieves a sensitivity of 88.24% and a false prediction rate of 0.25 per hour.

Journal ArticleDOI
TL;DR: A novel codebook design scheme for orthogonal frequency-division multiplexing with index modulation (OFDM-IM) is proposed to improve system performance and can potentially provide a tradeoff between diversity and transmission rate.
Abstract: In this paper, we propose a novel codebook design scheme for orthogonal frequency-division multiplexing with index modulation (OFDM-IM) to improve system performance. The optimization process can be implemented efficiently by the lexicographic ordering principle. By applying the proposed codebook design, all subcarrier activation patterns with a fixed number of active subcarriers will be explored. Furthermore, as the number of active subcarriers is fixed, the computational complexity for estimation at the receiver is reduced and the zero-active subcarrier dilemma is solved without involving complex higher layer transmission protocols. It is found that the codebook design can potentially provide a tradeoff between diversity and transmission rate. We investigate the diversity mechanism and formulate three diversity-rate optimization problems for the proposed OFDM-IM system. Based on the genetic algorithm, the method of solving these formulated optimization problems is provided and verified to be effective. Then, we analyze the average block error rate and bit error rate of the OFDM-IM systems applying the codebook design. Finally, all analyses are numerically verified by the Monte Carlo simulations. In addition, a series of comparisons are provided, by which the superiority of the codebook design is confirmed.
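The lexicographic ordering of activation patterns admits a compact combinadic index mapping, sketched below: an integer is converted to the corresponding k-of-n subcarrier activation pattern and back, so all C(n, k) fixed-weight patterns are usable. Parameters are illustrative; this shows the ordering principle only, not the paper's full codebook optimization.

```python
# Combinadic mapping between integer indices and k-of-n activation patterns
# in lexicographic order.
from math import comb

def index_to_pattern(idx, n, k):
    """idx in [0, C(n,k)) -> sorted tuple of k active subcarrier positions."""
    pattern, x = [], idx
    for pos in range(n):
        if k == 0:
            break
        c = comb(n - pos - 1, k - 1)   # patterns with 'pos' as next active
        if x < c:
            pattern.append(pos)
            k -= 1
        else:
            x -= c
    return tuple(pattern)

def pattern_to_index(pattern, n):
    idx, k, prev = 0, len(pattern), -1
    for i, pos in enumerate(pattern):
        for p in range(prev + 1, pos):          # skipped inactive positions
            idx += comb(n - p - 1, k - i - 1)
        prev = pos
    return idx

n, k = 8, 3                       # 8 subcarriers, 3 active: C(8,3) = 56 patterns
p = index_to_pattern(37, n, k)
print(p, pattern_to_index(p, n))  # round-trip: 37
```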

Posted Content
TL;DR: This letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the bit-error-rate (BER) and proposes an OSC-based joint analog precoder/combiner design.
Abstract: Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error-rate (BER). Therefore, we propose an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding scheme for mmWave multi-user three-dimensional (3D)-MIMO systems to optimize the BER, where multi-user transmission with multiple data streams for each user is considered. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme can achieve better performance than its conventional counterparts.