
Showing papers on "Codebook" published in 2021


Journal ArticleDOI
TL;DR: This letter proposes a quadratic phase-shift design, derives its coefficients as a function of the codebook size, and analyzes its performance, showing that the proposed design yields higher power efficiency for small codebooks than the linear baseline designs.
Abstract: In this letter, we focus on large intelligent reflecting surfaces (IRSs) and propose a new codebook construction method to obtain a set of predesigned phase-shift configurations for the IRS unit cells. Since the overhead for channel estimation and the complexity of online optimization for IRS-assisted communications scale with the size of the phase-shift codebook, the design of small codebooks is of high importance. We show that there exists a fundamental tradeoff between power efficiency and the size of the codebook. We first analyze this tradeoff for baseline designs that employ a linear phase-shift across the IRS. Subsequently, we show that an efficient design for small codebooks mandates higher-order phase-shift variations across the IRS. Consequently, we propose a quadratic phase-shift design, derive its coefficients as a function of the codebook size, and analyze its performance. Our simulation results show that the proposed design yields a higher power efficiency for small codebooks than the linear baseline designs.
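
To make the construction concrete, here is a minimal sketch (not the letter's exact design): each codeword applies a phase that varies quadratically across the unit cells, phase(n) = a·n + b·n², and the codebook sweeps a small set of coefficient pairs. The array size and coefficient values below are illustrative placeholders, not the closed-form coefficients derived in the letter.

```python
import numpy as np

def quadratic_phase_codebook(num_cells, coeffs):
    """Build a small IRS phase-shift codebook.

    Each codeword applies phase(n) = a*n + b*n^2 across the unit cells,
    i.e. a linear term plus the higher-order (quadratic) variation the
    letter argues is needed for small codebooks.
    """
    n = np.arange(num_cells)
    codebook = []
    for a, b in coeffs:
        phase = a * n + b * n**2             # quadratic phase profile
        codebook.append(np.exp(1j * phase))  # unit-modulus reflection coefficients
    return np.array(codebook)

# Illustrative: 4 codewords for a 64-element IRS (placeholder coefficients).
cb = quadratic_phase_codebook(64, [(0.1, 0.001), (0.3, 0.002), (0.5, 0.004), (0.7, 0.008)])
print(cb.shape)  # (4, 64)
```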

33 citations


Journal ArticleDOI
TL;DR: It is revealed that DCMA gives rise to significantly improved error rate performance in Rayleigh fading channels, whilst having decoding complexity comparable to that of SCMA.
Abstract: This paper is focused on code-domain non-orthogonal multiple access (CD-NOMA), which is an emerging paradigm to support massive connectivity for future machine-type wireless networks. We take a comparative approach to study two types of overloaded CD-NOMA, i.e., sparse code multiple access (SCMA) and dense code multiple access (DCMA), which are distinctive from each other in terms of their codebooks having sparsity or not. By analysing their individual diversity orders (DO) in Rayleigh fading channels, it is found that DCMA can be designed with the aid of generalized sphere decoder (i.e., a nonlinear multiuser detector) to enjoy full DO which is equal to the maximum number of resource nodes in the system. This is in contrast to SCMA whose error rate suffers from limited DO equal to the codebook sparsity (i.e., the effective number of resource nodes occupied by each user). We conduct theoretical analysis for the codebook design criteria and propose to use generalized sphere decoder for DCMA detection. We numerically evaluate two types of multiple access schemes under “ $4\times 6$ ” (i.e., six users communicate over four subcarriers) and “ $5\times 10$ ” NOMA settings and reveal that DCMA gives rise to significantly improved error rate performance in Rayleigh fading channels, whilst having decoding complexity comparable to that of SCMA.

29 citations


Journal ArticleDOI
TL;DR: In this paper, a concatenated coding construction for U-RA on the AWGN channel is presented, in which a sparse regression code (SPARC) is used as an inner code to create an effective outer OR-channel, and an outer code is used to resolve the multiple access interference in the OR-MAC.
Abstract: Unsourced random-access (U-RA) is a type of grant-free random access with a virtually unlimited number of users, of which only a certain number $K_{a}$ are active on the same time slot. Users employ exactly the same codebook, and the task of the receiver is to decode the list of transmitted messages. We present a concatenated coding construction for U-RA on the AWGN channel, in which a sparse regression code (SPARC) is used as an inner code to create an effective outer OR-channel. Then an outer code is used to resolve the multiple-access interference in the OR-MAC. We propose a modified version of the approximate message passing (AMP) algorithm as an inner decoder and give a precise asymptotic analysis of the error probabilities of the AMP decoder and of a hypothetical optimal inner MAP decoder. This analysis shows that the concatenated construction under optimal decoding can achieve a vanishing per-user error probability in the limit of large blocklength and a large number of active users at sum-rates up to the symmetric Shannon capacity, i.e., as long as $K_{a}R < \frac{1}{2}\log_2(1 + K_{a}{\mathsf{SNR}})$. This extends previous point-to-point optimality results about SPARCs to the unsourced multiuser scenario. Furthermore, we give an optimization algorithm to find the power allocation for the inner SPARC code that minimizes the ${\mathsf {SNR}}$ required to achieve a given target per-user error probability with the AMP decoder.
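
As a toy illustration of the unsourced setting (not the paper's SPARC/AMP construction), the sketch below builds a common codebook shared by all users and superimposes the codewords of the active users; the receiver only sees which columns were activated, not who activated them. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

B, Ka, n = 12, 8, 256          # message bits, active users, channel uses (toy values)
A = rng.standard_normal((n, 2**B)) / np.sqrt(n)   # common codebook shared by every user

# Each active user picks a message (a column index); collisions are possible.
messages = rng.integers(0, 2**B, size=Ka)
x = np.bincount(messages, minlength=2**B).astype(float)  # multiplicity of each codeword

y = A @ x + 0.1 * rng.standard_normal(n)  # superimposed transmissions plus noise

# The receiver's task is to recover the *set* of transmitted messages from y,
# e.g. with AMP over the inner SPARC followed by an outer code, as in the paper.
```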

29 citations


Journal ArticleDOI
TL;DR: In this article, a novel RSSI-based unsupervised deep learning method was proposed to design the hybrid beamforming in massive MIMO systems, which not only greatly increases the spectral efficiency especially in frequency division duplex (FDD) communication by using partial CSI feedback, but also has near-optimal sum-rate and outperforms other state-of-the-art full CSI solutions.
Abstract: Hybrid beamforming is a promising technique to reduce the complexity and cost of massive multiple-input multiple-output (MIMO) systems while providing high data rate. However, the hybrid precoder design is a challenging task requiring channel state information (CSI) feedback and solving a complex optimization problem. This paper proposes a novel RSSI-based unsupervised deep learning method to design the hybrid beamforming in massive MIMO systems. Furthermore, we propose i) a method to design the synchronization signal (SS) in initial access (IA); and ii) a method to design the codebook for the analog precoder. We also evaluate the system performance through a realistic channel model in various scenarios. We show that the proposed method not only greatly increases the spectral efficiency especially in frequency-division duplex (FDD) communication by using partial CSI feedback, but also has near-optimal sum-rate and outperforms other state-of-the-art full-CSI solutions.

25 citations


Journal ArticleDOI
TL;DR: In this paper, an SCMA codebook design approach based on the uniquely decomposable constellation group (UDCG) is proposed; SCMA, which helps improve spectrum efficiency (SE) and enhance connectivity, has been proposed as a NOMA scheme for 5G systems.
Abstract: Sparse code multiple access (SCMA), which helps improve spectrum efficiency (SE) and enhance connectivity, has been proposed as a non-orthogonal multiple access (NOMA) scheme for 5G systems. In SCMA, codebook design determines system overload ratio and detection performance at a receiver. In this paper, an SCMA codebook design approach is proposed based on uniquely decomposable constellation group (UDCG). We show that there are $N+1$ ($N \geq 1$) constellations in the proposed UDCG, each of which has $M$ ($M \geq 2$) constellation points. These constellations are allocated to users sharing the same resource. Combining the constellations allocated on multiple resources of each user, we can obtain UDCG-based codebook sets. Bit error ratio (BER) performance will be discussed in terms of coding gain maximization with superimposed constellations and UDCG-based codebooks. Simulation results demonstrate that the superimposed constellation of each resource has large minimum Euclidean distance (MED) and meets uniquely decodable constraint. Thus, BER performance of the proposed codebook design approach outperforms that of the existing codebook design schemes in both uncoded and coded SCMA systems, especially for large-size codebooks.

25 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed improved algorithm not only inherits the merits of the original scheme but also offers stronger security against differential cryptanalysis.

23 citations


Journal ArticleDOI
TL;DR: It is proved that the proposed secure OFDM-WDM-PON encryption scheme is compatible with the traditional WDM system, makes full use of bandwidth resources, and enhances security with a large key space.
Abstract: A chaotic ribonucleic acid (RNA) and deoxyribonucleic acid (DNA) encryption scheme is first proposed for secure OFDM-WDM-PON in this paper. We adopt a dynamic key agreement based on the messenger RNA (mRNA) codebook to distribute the key, and the security and randomness of this key are enhanced by a pre-shared key parameter set instead of transmitting the key directly. The security key can also be dynamically updated in real time according to the needs of the users. The real (I) and imaginary (Q) parts of the QAM symbol matrix after modulation are encrypted via the correspondence between transfer RNA (tRNA) and amino acids and the selection mapping of DNA base complementary rules. We also add a cubic permutation to ensure that all data are securely encrypted. The encrypted signals of 35.29 Gb/s on different wavelength channels are successfully demonstrated over a 25-km standard single-mode fiber (SSMF) and in a back-to-back (BTB) system. It is proved that the proposed secure OFDM-WDM-PON encryption scheme is compatible with the traditional WDM system, makes full use of bandwidth resources, and enhances security with a large key space.

23 citations


Journal ArticleDOI
TL;DR: This work proposes a novel network-based intrusion detection method that learns patterns of benign flows in a temporal codebook and, based on the temporally learnt codebook, a feature representation method, called TempoCode-IoT, that transforms raw flow-based statistical features into more discriminative representations.
Abstract: In recent years, the Internet of Things has been becoming a vulnerable target of intrusion attacks. As academia and industry move towards bringing the Internet of Things (IoT) to every sector of our lives, much attention needs to be given to developing advanced Intrusion Detection Systems (IDS) to detect such attacks. In this work, we propose a novel network-based intrusion detection method which learns patterns of benign flows in a temporal codebook. Based on the temporally learnt codebook, we propose a feature representation method to transform the raw flow-based statistical features into more discriminative representations, called TempoCode-IoT. We develop an ensemble of machine learning-based classifiers optimized to discriminate the malicious flows from the benign ones, based on the proposed TempoCode-IoT. The effectiveness of the proposed method is empirically evaluated on a state-of-the-art realistic intrusion detection dataset as well as on a real botnet-infected IoT dataset, achieving high accuracies and low false positive rates across a variety of intrusion attacks. Moreover, the proposed method outperforms several state-of-the-art works on the same datasets, proving the effectiveness of TempoCode-IoT over raw flow features, both in terms of accuracy and processing speed.

22 citations


Journal ArticleDOI
TL;DR: A novel framework is proposed that learns the channel angle-of-departure (AoD) statistics at a base station (BS) and uses this information to efficiently acquire channel measurements, with the upper confidence bound (UCB) algorithm used to learn the AoD statistics and the compressive sensing (CS) matrix.
Abstract: Millimeter wave (mmWave) communication is one viable solution to support Gbps sensor data sharing in vehicular networks. The use of large antenna arrays at mmWave and high mobility in vehicular communication make it challenging to design fast beam alignment solutions. In this paper, we propose a novel framework that learns the channel angle-of-departure (AoD) statistics at a base station (BS) and uses this information to efficiently acquire channel measurements. Our framework integrates online learning for compressive sensing (CS) codebook learning and the optimized codebook is used for CS-based beam alignment. We formulate a CS matrix optimization problem based on the AoD statistics available at the BS. Furthermore, based on the CS channel measurements, we develop techniques to update and learn such channel AoD statistics at the BS. We use the upper confidence bound (UCB) algorithm to learn the AoD statistics and the CS matrix. Numerical results show that the CS matrix in the proposed framework provides faster beam alignment than standard CS matrix designs. Simulation results indicate that the proposed beam training technique can reduce overhead by 80% compared to exhaustive beam search, and 70% compared to standard CS solutions that do not exploit any AoD statistics.
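
The upper confidence bound rule mentioned above can be sketched generically as follows; beam indices play the role of arms, while the reward (standing in for a measured beamforming gain) and the exploration constant are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ucb_select(counts, mean_rewards, t, c=2.0):
    """Pick the beam/arm with the largest upper confidence bound."""
    bonus = np.sqrt(c * np.log(t + 1) / np.maximum(counts, 1))
    scores = np.where(counts == 0, np.inf, mean_rewards + bonus)  # try unused arms first
    return int(np.argmax(scores))

num_beams = 16
counts = np.zeros(num_beams)
means = np.zeros(num_beams)
rng = np.random.default_rng(1)

for t in range(200):
    arm = ucb_select(counts, means, t)
    reward = rng.normal(loc=arm / num_beams)            # placeholder for a measured beam gain
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # running-average update
```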

21 citations


Journal ArticleDOI
TL;DR: In this article, a deep reinforcement learning framework is proposed that optimizes the codebook beam patterns relying only on receive power measurements, adapting the beam patterns to the surrounding environment, user distribution, hardware impairments, and array geometry.
Abstract: Millimeter wave (mmWave) and terahertz MIMO systems rely on pre-defined beamforming codebooks for both initial access and data transmission. These pre-defined codebooks, however, are commonly not optimized for specific environments, user distributions, and/or possible hardware impairments. This leads to large codebook sizes with high beam training overhead which makes it hard for these systems to support highly mobile applications. To overcome these limitations, this paper develops a deep reinforcement learning framework that learns how to optimize the codebook beam patterns relying only on the receive power measurements. The developed model learns how to adapt the beam patterns based on the surrounding environment, user distribution, hardware impairments, and array geometry. Further, this approach does not require any knowledge about the channel, RF hardware, or user positions. To reduce the learning time, the proposed model designs a novel Wolpertinger-variant architecture that is capable of efficiently searching the large discrete action space. The proposed learning framework respects the RF hardware constraints such as the constant-modulus and quantized phase shifter constraints. Simulation results confirm the ability of the developed framework to learn near-optimal beam patterns for line-of-sight (LOS), non-LOS (NLOS), mixed LOS/NLOS scenarios and for arrays with hardware impairments without requiring any channel knowledge.
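
A minimal sketch of the hardware constraints the framework respects: every analog beamforming weight has constant modulus and a phase drawn from a quantized set. The array size and number of phase-shifter bits below are illustrative assumptions.

```python
import numpy as np

def quantize_beam(weights, num_bits=3):
    """Project arbitrary complex weights onto a constant-modulus,
    quantized-phase beamforming vector (the feasible set of the RF hardware)."""
    levels = 2 ** num_bits
    phases = np.angle(weights)
    q = np.round(phases / (2 * np.pi / levels)) * (2 * np.pi / levels)
    return np.exp(1j * q) / np.sqrt(len(weights))       # unit modulus, normalized power

rng = np.random.default_rng(0)
raw = rng.standard_normal(32) + 1j * rng.standard_normal(32)
beam = quantize_beam(raw)                               # one feasible codebook entry
print(np.allclose(np.abs(beam), 1 / np.sqrt(32)))       # True: constant modulus
```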

20 citations


Journal ArticleDOI
TL;DR: Each feature is transformed into a set of feature codes, where one code is treated as a visual word used to construct the inverted index file while the others are embedded into the index file to further verify feature matching based on the visual words.
Abstract: For scalable feature matching in large-scale web image search, the bag-of-visual-words-based (BOW) approaches generally code local features as visual words to construct an inverted index file to match features efficiently. Both the popular feature coding techniques, i.e., K-means-based vector quantization and scalar quantization, directly quantize features to generate visual words. K-means-based vector quantization requires expensive visual codebook training, whereas scalar quantization leads to the miss of many matches due to the low stability of individual components of feature vectors. To address the above issues, we demonstrate that the corresponding sub-vectors of similar features generally have similar distances to multiple reference points in feature subspace and propose a multiple distance-based feature coding scheme for scalable feature matching. Specifically, based on the distances between the sub-vectors and multiple distinct reference points, we transform each feature to a set of feature codes, where one code is treated as a visual word required to construct the inverted index file whereas the others are embedded into the index file to further verify the feature matching based on the visual words. The proposed coding scheme does not need visual codebook training and shows desirable stability and discriminability. Moreover, in the matching verification, a feature-distance estimation method is proposed to estimate the Euclidean distances between features for an accurate matching verification. Extensive experimental results demonstrate the superiority of the proposed approach in comparison to the other approaches using recent feature quantization methods for large-scale web image search.
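
A toy sketch of the coding idea, with assumed reference points, sub-vector split, and quantizer (the paper's exact scheme and verification step are not reproduced): each sub-vector is coded by its quantized distances to several reference points, one code acting as the visual word and the rest as the verification payload.

```python
import numpy as np

def distance_codes(feature, refs, num_sub=4, num_bins=16, max_dist=2.0):
    """Code a feature by quantized sub-vector distances to multiple reference points."""
    subs = np.split(feature, num_sub)                       # sub-vectors
    codes = []
    for r in refs:                                          # one code per reference point
        r_subs = np.split(r, num_sub)
        d = np.array([np.linalg.norm(s - t) for s, t in zip(subs, r_subs)])
        codes.append(np.minimum((d / max_dist * num_bins).astype(int), num_bins - 1))
    return codes   # codes[0] -> visual word, codes[1:] -> verification payload

rng = np.random.default_rng(0)
refs = [rng.standard_normal(128) for _ in range(3)]         # distinct reference points
f = rng.standard_normal(128)
word, *payload = distance_codes(f, refs)
```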

Proceedings ArticleDOI
30 Aug 2021
TL;DR: Wav2vec-C is a self-supervised representation learning method combining elements from wav2vec 2.0 and VQ-VAE; it learns to reproduce quantized representations from partially masked speech encodings using a contrastive loss.
Abstract: Wav2vec-C introduces a novel representation learning technique combining elements from wav2vec 2.0 and VQ-VAE. Our model learns to reproduce quantized representations from partially masked speech encoding using a contrastive loss in a way similar to wav2vec 2.0. However, the quantization process is regularized by an additional consistency network that learns to reconstruct the input features to the wav2vec 2.0 network from the quantized representations in a way similar to a VQ-VAE model. The proposed self-supervised model is trained on 10k hours of unlabeled data and subsequently used as the speech encoder in an RNN-T ASR model and fine-tuned with 1k hours of labeled data. This work is one of only a few studies of self-supervised learning on speech tasks with a large volume of real far-field labeled data. The Wav2vec-C encoded representations achieve, on average, twice the error reduction over the baseline and a higher codebook utilization in comparison to wav2vec 2.0.
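
The quantization step shared by wav2vec 2.0 and VQ-VAE style models can be sketched as a nearest-codeword lookup over a learned codebook; the codebook size, feature dimension, and frames below are illustrative, and the Gumbel/straight-through machinery of the actual models is omitted.

```python
import numpy as np

def vector_quantize(frames, codebook):
    """Map each encoder frame to its nearest codebook entry (Euclidean distance)."""
    # frames: (T, D), codebook: (K, D)
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.standard_normal((320, 64))     # K=320 codewords of dimension 64
frames = rng.standard_normal((100, 64))       # 100 encoder frames
quantized, idx = vector_quantize(frames, codebook)
utilization = len(np.unique(idx)) / len(codebook)   # the "codebook utilization" metric
```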

Journal ArticleDOI
TL;DR: This paper proposes a novel combination of visual codebook generation using deep features with a non-linear Chi2 SVM classifier to tackle the imbalance problem that arises when dealing with multi-class image datasets.
Abstract: Classification of imbalanced multi-class image datasets is a challenging problem in computer vision. Most of the real-world datasets are imbalanced in nature because of the uneven distribution of the samples in each class. The problem with an imbalanced dataset is that the minority class having a smaller number of instance samples is left undetected. Most of the traditional machine learning algorithms can detect the majority class efficiently but lag behind in the efficient detection of the minority class, which ultimately degrades the overall performance of the classification model. In this paper, we have proposed a novel combination of visual codebook generation using deep features with the non-linear Chi2 SVM classifier to tackle the imbalance problem that arises while dealing with multi-class image datasets. The low-level deep features are first extracted by transfer learning using the ResNet-50 pre-trained network, and clustered using k-means. The center of each cluster is a visual word in the codebook. Each image is then translated into a set of features called the Bag-of-Visual-Words (BOVW) derived from the histogram of visual words in the vocabulary. The non-linear Chi2 SVM classifier is found most optimal for classifying the ensuing features, as proved by a detailed empirical analysis. Hence with the right combination of learning tools, we are able to tackle classification of multi-class imbalanced image datasets in an effective manner. This is proved from the higher scores of accuracy, F1-score and AUC metrics in our experiments on two challenging multi-class datasets: Graz-02 and TF-Flowers, as compared to the state-of-the-art methods.
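
A condensed sketch of the pipeline under stated assumptions: random vectors stand in for the ResNet-50 deep features (in practice they would come from the pre-trained network), and the codebook and dataset sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel

rng = np.random.default_rng(0)

# Placeholder "deep features": a list of (num_patches, feat_dim) arrays per image.
images = [rng.random((50, 128)) for _ in range(60)]
labels = rng.integers(0, 3, size=60)

# 1) Visual codebook: k-means over all local deep features.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(np.vstack(images))

# 2) Bag-of-Visual-Words: histogram of visual-word assignments per image.
def bovw(feats):
    words = kmeans.predict(feats)
    hist = np.bincount(words, minlength=32).astype(float)
    return hist / hist.sum()

X = np.array([bovw(f) for f in images])

# 3) Non-linear Chi2 SVM on the (non-negative) histograms.
clf = SVC(kernel=chi2_kernel).fit(X, labels)
print(clf.score(X, labels))
```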

Journal ArticleDOI
TL;DR: In this paper, the authors propose an effective codebook adaptive to any Rician factor that guarantees performance comparable to the optimal codebook while requiring only a single common codebook to be shared between the transmitter and receiver.
Abstract: When a channel consists of a line-of-sight (LoS) path as well as non-LoS components, codebook design for channel state information (CSI) quantization is required to take account of both of them. However, the conventional codebook design requires infinitely many optimal codebooks corresponding to all possible Rician factors, which is impossible in practice. In this regard, we propose an effective codebook adaptive to any Rician factors, while guaranteeing comparable performance to the optimal codebook. Contrary to the conventional approaches, the adaptation to Rician factors suffices by sharing only a single common codebook between the transmitter and receiver. We first investigate the distribution of the angle between the channel vector and the LoS component, where the distribution depends on Rician factors that reflect the power ratio of LoS and non-LoS components. Driven by the analysis, we devise a band-structured non-homogeneous codebook and derive the upper bound of the quantization error of the proposed codebook. The design parameters of the proposed codebook are optimized to minimize the quantization error bound. Using an approximation, we also derive a tractable near-optimal solution of the parameters determining the proposed codebook. Numerical results exhibit that the proposed codebook substantially outperforms conventional methods and achieves near-optimal performance in terms of the average quantization distortion and average sum rate.
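
The basic CSI-quantization operation such a codebook serves can be sketched as follows; a random unit-norm codebook stands in for the band-structured, Rician-factor-adaptive construction of the paper, and the sizes are illustrative.

```python
import numpy as np

def quantize_direction(h, codebook):
    """Return the codeword best aligned with the channel direction
    (maximum squared inner product), plus the resulting distortion."""
    h_dir = h / np.linalg.norm(h)
    gains = np.abs(codebook.conj() @ h_dir) ** 2      # |<c_i, h>|^2 per codeword
    best = int(np.argmax(gains))
    distortion = 1.0 - gains[best]                    # chordal-distance style quantization error
    return best, distortion

rng = np.random.default_rng(0)
Nt, bits = 8, 6
C = rng.standard_normal((2**bits, Nt)) + 1j * rng.standard_normal((2**bits, Nt))
C /= np.linalg.norm(C, axis=1, keepdims=True)         # unit-norm codewords
h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
idx, err = quantize_direction(h, C)
```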

Proceedings ArticleDOI
11 Jul 2021
TL;DR: In this article, a reconfigurable intelligent surface (RIS) aided millimeter wave (mmWave) multiple-input-multiple-output (MIMO) radar system for multi-target localization is proposed.
Abstract: In this paper, we propose to utilize a reconfigurable intelligent surface (RIS) aided millimeter wave (mmWave) multiple-input-multiple-output (MIMO) radar system for multi-target localization. The goal is to detect multiple targets with sufficient accuracy using an adaptive localization algorithm utilizing the concept of hierarchical codebook design. The simulation results show that in case of a blocked line of sight (LOS) between the transmitter and the targets, the receiver can still localize all the targets with very good accuracy using the RIS.

Proceedings ArticleDOI
11 Jul 2021
TL;DR: In this paper, an extremely memory-efficient factorization machine (xLightFM) is proposed, where each category embedding is composited from latent vectors selected from codebooks.
Abstract: The factorization-based models have achieved great success in online advertisements and recommender systems due to the capability of efficiently modeling combinational features. These models encode feature interactions by the vector product between feature embedding. Despite the improvement of generalization, the memory consumption of these models grows significantly, because they usually take hundreds to thousands of large categorical features as input. Several existing works try to reduce the memory footprint by hashing, randomized embedding composition, and dimensionality search, but they suffer from either substantial performance degradation or limited memory compression. To this end, in this paper, we propose an extremely memory-efficient Factorization Machine (xLightFM), where each category embedding is composited with latent vectors selected from codebooks. Based on the characteristics of each categorical feature, we further propose to adapt the codebook size with the neural architecture search techniques for compositing the embedding of each categorical feature. This further pushes the limits of memory compression while incurring negligible degradation or even some improvements in prediction performance. We extensively evaluate the proposed algorithm with two real-world datasets. The results demonstrate that xLightFM can outperform the state-of-the-art lightweight factorization-based methods in terms of both prediction quality and memory footprint, and achieve more than 18x and 27x memory compression compared to the vanilla FM on these two datasets, respectively.
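
The memory-saving idea of compositing each category embedding from codebook entries can be sketched in a product-quantization style; the codebook sizes below are illustrative assumptions, and the NAS-based codebook-size adaptation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

num_categories, emb_dim, num_codebooks, codebook_size = 100_000, 16, 4, 256

# Small shared codebooks instead of a full (num_categories x emb_dim) embedding table.
codebooks = rng.standard_normal((num_codebooks, codebook_size, emb_dim))
# Each category stores only num_codebooks small indices.
assignments = rng.integers(0, codebook_size, size=(num_categories, num_codebooks))

def embed(category_id):
    """Composite embedding: sum of the selected latent vector from each codebook."""
    idx = assignments[category_id]
    return sum(codebooks[b, idx[b]] for b in range(num_codebooks))

e = embed(42)   # a 16-dim embedding
# Memory: 4 small indices per category plus the shared codebooks,
# instead of 16 floats per category.
```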

Journal ArticleDOI
TL;DR: It is shown that the proposed algorithm is capable of generating codebooks with superior metric values and optimized signal constellations at low search complexity.
Abstract: Sparse Code Multiple Access (SCMA) is a promising technique for next-generation mobile communication systems. In this letter, the SCMA codebook design problem is confronted through the use of Artificial Intelligence (AI) techniques. The suggested algorithm is based on Reinforcement Learning (RL). The design parameters include a set of actions, a set of states, and a reward function. It is shown that the proposed algorithm is capable of generating codebooks with superior metric values and optimized signal constellations at low search complexity. The low-complexity feature of the RL-based construction algorithm makes it particularly suitable for applications that rely on large-scale SCMA schemes.

Journal ArticleDOI
TL;DR: This work investigates unsourced and grant-free massive random access in which all the users employ the same codebook, and the base station is only interested in decoding the distinct messages sent.
Abstract: We investigate unsourced and grant-free massive random access in which all the users employ the same codebook, and the base station is only interested in decoding the distinct messages sent. To resolve the colliding user packets, a novel approach relying on user transmissions with random-like amplitudes selected from a large number of possible signatures is proposed. The scheme is combined with convolutional coding for error correction. The receiver operates by first identifying the signatures used by the transmitting nodes employing a sparsity-based detection algorithm, and then utilizing a trellis-based decoding algorithm. Despite its simplicity, the proposed solution offers excellent performance.

Journal ArticleDOI
TL;DR: In this paper, the authors considered an information-centric approach based on joint source-channel (JSC) coding via a non-orthogonal generalization of type-based multiple access (TBMA).
Abstract: A fog-radio access network (F-RAN) architecture is studied for an Internet-of-Things (IoT) system in which wireless sensors monitor a number of multi-valued events and transmit in the uplink using grant-free random access to multiple edge nodes (ENs). Each EN is connected to a central processor (CP) via a finite-capacity fronthaul link. In contrast to conventional information-agnostic protocols based on separate source-channel (SSC) coding, where each device uses a separate codebook, this paper considers an information-centric approach based on joint source-channel (JSC) coding via a non-orthogonal generalization of type-based multiple access (TBMA). By leveraging the semantics of the observed signals, all sensors measuring the same event share the same codebook (with non-orthogonal codewords), and all such sensors making the same local estimate of the event transmit the same codeword. The F-RAN architecture directly detects the events' values without first performing individual decoding for each device. Cloud and edge detection schemes based on Bayesian message passing are designed and trade-offs between cloud and edge processing are assessed.

Posted Content
TL;DR: In this article, a new iterative algorithm based on alternating maximization with exact penalty is proposed for the MED maximization problem, which achieves a set of codebooks for all users whose MED is larger than any previously reported result.
Abstract: Sparse code multiple access (SCMA), as a codebook-based non-orthogonal multiple access (NOMA) technique, has received research attention in recent years. The codebook design problem for SCMA has also been studied to some extent since codebook choices are highly related to the system's error rate performance. In this paper, we approach the SCMA codebook design problem by formulating an optimization problem to maximize the minimum Euclidean distance (MED) of superimposed codewords under power constraints. While SCMA codebooks with a larger MED are expected to obtain a better BER performance, no optimal SCMA codebook in terms of MED maximization, to the authors' best knowledge, has been reported in the SCMA literature yet. In this paper, a new iterative algorithm based on alternating maximization with exact penalty is proposed for the MED maximization problem. The proposed algorithm, when supplied with appropriate initial points and parameters, achieves a set of codebooks of all users whose MED is larger than any previously reported results. A Lagrange dual problem is derived which provides an upper bound of MED of any set of codebooks. Even though there is still a nonzero gap between the achieved MED and the upper bound given by the dual problem, simulation results demonstrate clear advantages in error rate performances of the proposed set of codebooks over all existing ones not only in AWGN channels but also in some downlink scenarios that fit in 5G/NR applications, making it a good codebook candidate thereof. The proposed set of SCMA codebooks, however, are not shown to outperform existing ones in uplink channels or in the case where non-consecutive OFDMA subcarriers are used. The correctness and accuracy of error curves in the simulation results are further confirmed by the coincidences with the theoretical upper bounds of error rates derived for any given set of codebooks.
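
The objective being maximized can be made concrete with a brute-force MED check over superimposed codewords; random codebooks and toy sizes are used here purely for illustration, whereas the paper optimizes the codebooks themselves under power constraints.

```python
import numpy as np
from itertools import product

def min_euclidean_distance(codebooks):
    """MED of superimposed codewords: codebooks is a list of (M, K) arrays,
    one per user; superimposed codewords are sums of one codeword per user."""
    sums = np.array([sum(cw) for cw in product(*codebooks)])     # all superpositions
    med = np.inf
    for i in range(len(sums)):
        d = np.linalg.norm(sums[i + 1:] - sums[i], axis=1)
        d = d[d > 1e-9]                 # ignore coinciding superpositions
        if d.size:
            med = min(med, d.min())
    return med

rng = np.random.default_rng(0)
J, M, K = 4, 4, 4                        # users, codewords per user, resources (toy sizes)
cbs = [rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)) for _ in range(J)]
print(min_euclidean_distance(cbs))
```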

Journal ArticleDOI
TL;DR: The Linde–Buzo–Gray (LBG) algorithm is developed with vector quantization (VQ) for compressing images, resulting in decent image quality that is compared with other existing approaches.
Abstract: In recent times, medical imaging has become an indispensable tool in clinical practice. Due to the large volume of medical images, compression is needed to lessen the redundancies in the image and also to represent the image in a shorter manner for effective transmission. In this paper, the Linde–Buzo–Gray (LBG) algorithm is developed with vector quantization (VQ) for compressing the images, and it results in decent image quality. To further increase the image quality, optimization techniques [particle swarm optimization (PSO) and the firefly algorithm (FA)] are used in the LBG method to optimize the codebook and generate a global codebook. In the proposed work, the LBG method is used to obtain local codebooks, which are then optimized using PSO. The optimized codebooks from PSO are again optimized using FA, resulting in good image quality. In the experimental phase, the performance of the proposed work is compared with individual optimization techniques such as PSO and FA. The experimental study shows that the proposed work achieves a 1.2–6 dB improvement in image compression relative to other existing approaches.
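
A minimal LBG (generalized Lloyd) iteration for training a VQ codebook is sketched below; the PSO/FA refinement stage described in the paper is not included, and synthetic vectors stand in for image blocks.

```python
import numpy as np

def lbg(vectors, codebook_size, iters=20, seed=0):
    """Train a VQ codebook with Linde-Buzo-Gray / generalized Lloyd iterations."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Assignment step: nearest codeword for every training vector.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Update step: each codeword becomes the centroid of its cell.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Illustrative: 4x4 image blocks flattened to 16-dim training vectors.
rng = np.random.default_rng(1)
blocks = rng.random((5000, 16))
cb = lbg(blocks, codebook_size=64)       # local codebook, to be refined by PSO/FA
```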

Journal ArticleDOI
TL;DR: In this article, the authors proposed to integrate the conformal array (CA) with the surface of each UAV, which enables the full spatial coverage and the agile beam tracking in highly dynamic UAV mmWave networks.
Abstract: Millimeter wave (mmWave) communications can potentially meet the high data-rate requirements of unmanned-aerial-vehicle (UAV) networks. However, as the prerequisite of mmWave communications, the narrow directional beam tracking is very challenging because of the 3-D mobility and attitude variation of UAVs. Aiming to address the beam tracking difficulties, we propose to integrate the conformal array (CA) with the surface of each UAV, which enables the full spatial coverage and the agile beam tracking in highly dynamic UAV mmWave networks. More specifically, the key contributions of our work are threefold: 1) a new mmWave beam tracking framework is established for the CA-enabled UAV mmWave network; 2) a specialized hierarchical codebook is constructed to drive the directional radiating element (DRE)-covered cylindrical CA, which contains both the angular beam pattern and the subarray pattern to fully utilize the potential of the CA; and 3) a codebook-based multiuser beam tracking scheme is proposed, where the Gaussian process machine learning-enabled UAV position/attitude prediction is developed to improve the beam tracking efficiency in conjunction with the tracking-error aware adaptive beamwidth control. Simulation results validate the effectiveness of the proposed codebook-based beam tracking scheme in the CA-enabled UAV mmWave network, and demonstrate the advantages of CA over the conventional planner array in terms of spectrum efficiency and outage probability in the highly dynamic scenarios.

Journal ArticleDOI
TL;DR: In this paper, two probabilistic codebook (PCB) techniques of prioritized beams are proposed to speed up the current 5G standard approach, targeting an efficient 6G design.

Journal ArticleDOI
TL;DR: T-VLAD encodes the long-term temporal structure of a video using single-stream convolutional features over short segments, and works equally well on a dynamic-background dataset, UCF101.

Posted ContentDOI
TL;DR: Based on an extended Kalman filter (EKF), a beam-tracking algorithm was developed in this article to enable reliable radio connections between vehicles and roadside units (RSUs) by estimating the rapid changes in the beam direction of a high-mobility vehicle.
Abstract: A vehicle-to-everything communication system is a strong candidate for improving the driving experience and automotive safety by linking vehicles to wireless networks. To guarantee the full benefits of vehicle connectivity, it is essential to ensure a stable network connection between the roadside unit (RSU) and fast-moving vehicles. Based on an extended Kalman filter (EKF), we develop a beam-tracking algorithm to enable reliable radio connections. For the vehicle tracking algorithm, we focus on estimating the rapid changes in the beam direction of a high-mobility vehicle while reducing the feedback overhead. Furthermore, we design a beamforming codebook that considers the road layout and RSU. By leveraging the proposed beamforming codebook, vehicles on the road can expect a service quality similar to that of conventional cellular services. Finally, a beamformer selection algorithm is developed to secure sufficient gain for a link budget. Numerical results verify that the EKF-based vehicle tracking algorithm and the proposed beamforming structure are more suitable for vehicle-to-infrastructure networks compared to existing schemes.
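
A generic EKF predict/update cycle of the kind used for beam-direction tracking is sketched below; the constant-velocity state model, noise levels, and angle-only measurement are illustrative assumptions rather than the paper's vehicle model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter iteration: predict with motion model f,
    then update with measurement z and observation model h."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                               # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy state: [beam angle, angular rate]; constant-velocity model, angle-only measurement.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
f = lambda x: F @ x
h = lambda x: H @ x
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])

x, P = np.zeros(2), np.eye(2)
z = np.array([0.05])                                # measured beam angle (rad)
x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
```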

Proceedings ArticleDOI
13 Sep 2021
TL;DR: In this article, two group-based deep neural network active user detection (AUD) schemes were proposed for the grant-free sparse code multiple access (SCMA) system in the mMTC uplink framework.
Abstract: Grant-free random access and uplink non-orthogonal multiple access (NOMA) have been introduced to reduce transmission latency and signaling overhead in massive machine-type communication (mMTC). In this paper, we propose two novel group-based deep neural network active user detection (AUD) schemes for the grant-free sparse code multiple access (SCMA) system in the mMTC uplink framework. The proposed AUD schemes learn the nonlinear mapping, i.e., the multi-dimensional codebook structure and the channel characteristics. This is accomplished through the received signal, which incorporates the sparse structure of device activity, together with the training dataset. Moreover, the offline pre-trained model is able to detect the active devices without any channel state information or prior knowledge of the device sparsity level. Simulation results show that with several active devices, the proposed schemes obtain more than twice the probability of detection compared to the conventional AUD schemes over the signal-to-noise ratio range of interest.

Journal ArticleDOI
03 Mar 2021-PeerJ
TL;DR: This paper proposes a document representation method based on a supervised codebook for Nepali documents, where the codebook contains only semantic tokens without outliers and is domain-specific, as it is built from tokens in a given corpus that have higher similarities with the class labels of that corpus.
Abstract: Document representation with outlier tokens exacerbates the classification performance due to the uncertain orientation of such tokens. Most existing document representation methods in different languages including Nepali mostly ignore the strategies to filter them out from documents before learning their representations. In this article, we propose a novel document representation method based on a supervised codebook to represent the Nepali documents, where our codebook contains only semantic tokens without outliers. Our codebook is domain-specific as it is based on tokens in a given corpus that have higher similarities with the class labels in the corpus. Our method adopts a simple yet prominent representation method for each word, called probability-based word embedding. To show the efficacy of our method, we evaluate its performance in the document classification task using Support Vector Machine and validate against widely used document representation methods such as Bag of Words, Latent Dirichlet allocation, Long Short-Term Memory, Word2Vec, Bidirectional Encoder Representations from Transformers and so on, using four Nepali text datasets (we denote them shortly as A1, A2, A3 and A4). The experimental results show that our method produces state-of-the-art classification performance (77.46% accuracy on A1, 67.53% accuracy on A2, 80.54% accuracy on A3 and 89.58% accuracy on A4) compared to the widely used existing document representation methods. It yields the best classification accuracy on three datasets (A1, A2 and A3) and a comparable accuracy on the fourth dataset (A4). Furthermore, we introduce the largest Nepali document dataset (A4), called NepaliLinguistic dataset, to the linguistic community.
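
The codebook-construction step can be sketched as follows, assuming pre-trained word vectors are available (random vectors and placeholder class names stand in here): tokens are kept only when their similarity to some class-label vector exceeds a threshold, filtering out outliers. The probability-based word embedding itself is not reproduced.

```python
import numpy as np

def build_supervised_codebook(vocab_vectors, label_vectors, threshold=0.4):
    """Keep only tokens sufficiently similar to some class label,
    filtering outlier tokens out of the codebook."""
    codebook = []
    for token, v in vocab_vectors.items():
        v_n = v / np.linalg.norm(v)
        sims = [v_n @ (l / np.linalg.norm(l)) for l in label_vectors.values()]
        if max(sims) >= threshold:
            codebook.append(token)
    return codebook

rng = np.random.default_rng(0)
vocab = {f"token{i}": rng.standard_normal(50) for i in range(1000)}   # placeholder embeddings
labels = {c: rng.standard_normal(50) for c in ["politics", "sports", "economy"]}
cb = build_supervised_codebook(vocab, labels)
```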

Posted Content
TL;DR: In this article, the problem of spatial signal design for multipath-assisted mmWave positioning under limited prior knowledge of the user's location and clock bias is considered, and an optimal robust design and a codebook-based heuristic design with optimized beam power allocation are proposed.
Abstract: We consider the problem of spatial signal design for multipath-assisted mmWave positioning under limited prior knowledge on the user's location and clock bias. We propose an optimal robust design and a codebook-based heuristic design with optimized beam power allocation by exploiting the low-dimensional precoder structure under perfect prior knowledge. Through numerical results, we characterize different position-error-bound (PEB) regimes with respect to clock bias uncertainty and show that the proposed low-complexity codebook-based designs outperform the conventional directional beam codebook and achieve near-optimal PEB performance for both analog and digital architectures.