Proceedings ArticleDOI

Sparse Autoencoder for Sparse Code Multiple Access

13 Apr 2021-pp 353-358

TL;DR: In this paper, the authors proposed an SCMA sparse autoencoder that achieves better BER performance than a conventional autoencoder without additional computational complexity.

Abstract: In the forthcoming 5G technology, Sparse Code Multiple Access (SCMA) is one of the most promising schemes for further improving spectral efficiency and providing massive connectivity. The challenge in implementing SCMA is constructing optimized codebooks that minimize the BER while keeping receiver complexity low. To address this problem, we use an efficient deep learning technique, the autoencoder, which combines the encoder and the decoder and automatically learns the codewords that yield the lowest BER. In this paper, an SCMA sparse autoencoder, a variant of the autoencoder, is proposed; it achieves better BER performance than a conventional autoencoder without additional computational complexity.
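As a rough sketch of the idea in the abstract (not the authors' actual architecture), the snippet below wires an autoencoder around an AWGN channel in Keras: the encoder learns a codeword per input symbol, an L1 activity regularizer encourages sparse codewords, and the decoder recovers the symbol. All layer sizes, the regularization strength, and the SNR are illustrative assumptions.

```python
# Minimal sketch: encoder -> AWGN channel -> decoder, trained end to end.
# Dimensions, sparsity penalty, and noise level are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

M = 4         # distinct input symbols (one-hot of size M)
K = 4         # resource elements; real and imaginary parts stacked -> 2*K values
SNR_DB = 8.0  # assumed training SNR

inputs = layers.Input(shape=(M,))
# Encoder: learns one codeword per input symbol; the L1 activity regularizer
# pushes many codeword entries toward zero (the "sparse" part).
hidden = layers.Dense(16, activation="relu")(inputs)
codeword = layers.Dense(2 * K, activity_regularizer=regularizers.l1(1e-4))(hidden)
# Normalize average power so the noise level corresponds to the chosen SNR.
codeword = layers.Lambda(
    lambda c: c / tf.sqrt(tf.reduce_mean(tf.square(c)) + 1e-9))(codeword)

# AWGN channel (active during training).
noise_std = float(np.sqrt(0.5 * 10 ** (-SNR_DB / 10)))
received = layers.GaussianNoise(noise_std)(codeword)

# Decoder: recovers the transmitted symbol from the noisy codeword.
decoded = layers.Dense(16, activation="relu")(received)
outputs = layers.Dense(M, activation="softmax")(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")

# Train on randomly drawn one-hot symbols.
symbols = np.eye(M)[np.random.randint(0, M, size=50000)]
autoencoder.fit(symbols, symbols, epochs=5, batch_size=256, verbose=0)
```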



Citations
Posted Content
TL;DR: In this article, the authors proposed new autoencoder architectures and an underlying training methodology that aim at joint optimization of resource mapping and a constellation design with bit-to-symbol mapping, hopefully approaching the bit error rate (BER) performance of the equivalent single-user MDM (SU-MDM) model.
Abstract: A general form of codebook design for code-domain non-orthogonal multiple access (CD-NOMA) can be considered equivalent to a constellation design for multi-user multi-dimensional modulation (MU-MDM). Motivated by recent works on deep learning (DL)-based design of MDM, we propose new autoencoder architectures and the underlying training methodology that aim at joint optimization of resource mapping and a constellation design with bit-to-symbol mapping, hopefully approaching the bit error rate (BER) performance of the equivalent single-user MDM (SU-MDM) model. The novelty and contribution of the paper lie in the proposed autoencoder architectures and the underlying training framework for optimum codeword design for CD-NOMA, which, without knowing other-user symbols, approaches the performance of SU-MDM with the same spectral efficiency. It includes the trainable architectures of DL-based designs for SU-MDM and MU-MDM, which can be equivalently compared to each other. They are implemented to demonstrate that the proposed design for MU-MDM can achieve the BER performance of DL-based single-user codebook design within 0.3 dB in the additive white Gaussian noise channel, thus serving as the best existing design that can be realized with its low-complexity decoder.
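The multi-user structure described above can be sketched as several per-user encoders whose codewords are superimposed on shared resource elements before a common noisy channel, with one decoder head per user. The toy Keras model below only illustrates that wiring; the sizes, noise level, and names are assumptions, not the paper's architecture.

```python
# Illustrative MU-MDM-style autoencoder: per-user encoders, non-orthogonal
# superposition on shared resources, AWGN, and per-user decoder heads.
from tensorflow.keras import layers, Model

J, M, K = 6, 4, 4   # users, symbols per user, shared resource elements (illustrative)

user_inputs, user_codewords = [], []
for j in range(J):
    inp = layers.Input(shape=(M,), name=f"user{j}_symbol")
    hidden = layers.Dense(16, activation="relu")(inp)
    # Per-user encoder producing a codeword over the 2*K real dimensions.
    user_inputs.append(inp)
    user_codewords.append(layers.Dense(2 * K, name=f"user{j}_encoder")(hidden))

# Non-orthogonal superposition of all users on the shared resources, then AWGN.
superposed = layers.Add()(user_codewords)
received = layers.GaussianNoise(0.1)(superposed)

# One decoder head per user, all reading the same received signal
# (no knowledge of other-user symbols is given to a decoder).
outputs = [
    layers.Dense(M, activation="softmax", name=f"user{j}_decoder")(
        layers.Dense(32, activation="relu")(received))
    for j in range(J)
]

mu_mdm = Model(user_inputs, outputs)
mu_mdm.compile(optimizer="adam", loss="categorical_crossentropy")
```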

References
Proceedings Article
31 Mar 2010
TL;DR: The objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future.
Abstract: Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence.

1 Deep Neural Networks

Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower level features. Much attention has recently been devoted to them (see Bengio, 2009, for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009) suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Most of the recent experimental results with deep architectures are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pre-training (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a “better” basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results.
So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).
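The initialization scheme proposed in this paper is now commonly known as Xavier (Glorot) initialization. A minimal sketch of its uniform variant follows; the layer sizes are illustrative.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Xavier/Glorot uniform initialization: draw weights from U[-limit, limit]
    with limit = sqrt(6 / (fan_in + fan_out)), which keeps activation and
    gradient variances roughly constant across layers."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(784, 256)  # e.g. the first layer of a 784-input network
```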

9,463 citations

Proceedings ArticleDOI
Hosein Nikopour1, Hadi Baligh1
25 Nov 2013
TL;DR: A new multiple access scheme, called sparse code multiple access (SCMA), is proposed which still enjoys a low-complexity reception technique but with better performance than LDS, allowing a near-optimal ML receiver with practically feasible complexity.
Abstract: Multicarrier CDMA is a multiplexing approach in which modulated QAM symbols are spread over multiple OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with a low density spreading sequence, allowing us to take advantage of a near optimal ML receiver with practically feasible complexity. In this paper, we propose a new multiple access scheme, so-called sparse code multiple access (SCMA), which still enjoys the low complexity reception technique but with better performance compared to LDS. In SCMA, the procedures of bit-to-QAM-symbol mapping and spreading are combined, and incoming bits are directly mapped to a multidimensional codeword of an SCMA codebook set. Each layer or user has its dedicated codebook. The shaping gain of a multidimensional constellation is the main source of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. In general, SCMA codebook design is an optimization problem. A systematic sub-optimal approach is proposed here for SCMA codebook design.
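To make the direct bit-to-codeword mapping concrete, here is a toy encoding sketch with 6 users sharing 4 tones, each user occupying only 2 tones. The codebook values and dimensions are illustrative placeholders, not the paper's optimized design.

```python
# Toy SCMA encoding: each user's incoming bits directly index a multidimensional
# codeword from that user's codebook, non-zero on only d of the K shared tones.
import numpy as np

J, K, M, d = 6, 4, 4, 2   # users, tones, codewords per codebook, occupied tones per user
rng = np.random.default_rng(0)

# The usual 6-user/4-tone factor graph: which tones each user occupies.
tone_sets = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

# Placeholder complex codebooks; real SCMA designs optimize these points.
codebooks = np.zeros((J, M, K), dtype=complex)
for j, tones in enumerate(tone_sets):
    codebooks[j][:, list(tones)] = (rng.normal(size=(M, d))
                                    + 1j * rng.normal(size=(M, d)))

def encode(bits_per_user):
    """bits_per_user: J tuples of 2 bits; returns the superposed signal on the K tones."""
    signal = np.zeros(K, dtype=complex)
    for j, (b0, b1) in enumerate(bits_per_user):
        index = 2 * b0 + b1              # incoming bits directly index a codeword
        signal += codebooks[j, index]    # layers are superimposed on shared tones
    return signal

x = encode([(0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)])
```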

1,006 citations

Proceedings ArticleDOI
16 Jul 2016
TL;DR: A novel deep learning method for improving the belief propagation algorithm by assigning trainable weights to the edges of the Tanner graph; because performance is independent of the transmitted codeword, training requires only a single codeword instead of an exponential number of codewords.
Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is that its performance is independent of the transmitted codeword. A crucial property of our new method is that our decoder preserves this property. Furthermore, this property allows us to train on only a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high density parity check codes.
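The weighted message passing can be sketched as a min-sum iteration in which messages entering each variable-node update are scaled by a per-edge scalar that would be learned; the fragment below uses a toy parity-check matrix and fixed weights standing in for trained ones (shapes and names are illustrative, not the paper's exact formulation).

```python
# One iteration of weighted min-sum decoding on a Tanner graph.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])          # toy parity-check matrix
m_checks, n_vars = H.shape
edges = np.argwhere(H == 1)                 # (check, variable) index pairs

def weighted_min_sum_iteration(llr, v2c, weights):
    """llr: channel LLRs (n_vars,); v2c: variable-to-check messages, one per edge;
    weights: one scalar per edge (trainable in the paper; fixed here)."""
    c2v = np.zeros(len(edges))
    # Check-node update: sign product and minimum magnitude over the other edges.
    for e, (c, v) in enumerate(edges):
        others = [k for k, (c2, v2) in enumerate(edges) if c2 == c and v2 != v]
        c2v[e] = np.prod(np.sign(v2c[others])) * np.min(np.abs(v2c[others]))
    # Variable-node update: channel LLR plus weighted messages from the other checks.
    new_v2c = np.zeros(len(edges))
    for e, (c, v) in enumerate(edges):
        others = [k for k, (c2, v2) in enumerate(edges) if v2 == v and c2 != c]
        new_v2c[e] = llr[v] + np.sum(weights[others] * c2v[others])
    return new_v2c, c2v

llr = np.random.default_rng(1).normal(size=n_vars)
v2c = llr[edges[:, 1]].astype(float)        # initialize with channel LLRs
weights = np.ones(len(edges))               # stand-in for learned per-edge weights
v2c, c2v = weighted_min_sum_iteration(llr, v2c, weights)
```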

283 citations

Book
27 Dec 2010
TL;DR: Brent and Zimmermann present algorithms that are ready to implement in your favorite language, while keeping a high-level description and avoiding too low-level or machine-dependent details.
Abstract: Modern Computer Arithmetic focuses on arbitrary-precision algorithms for efficiently performing arithmetic operations such as addition, multiplication and division, and their connections to topics such as modular arithmetic, greatest common divisors, the Fast Fourier Transform (FFT), and the computation of elementary and special functions. Brent and Zimmermann present algorithms that are ready to implement in your favorite language, while keeping a high-level description and avoiding too low-level or machine-dependent details. The book is intended for anyone interested in the design and implementation of efficient high-precision algorithms for computer arithmetic, and more generally efficient multiple-precision numerical algorithms. It may also be used in a graduate course in mathematics or computer science, for which exercises are included. These vary considerably in difficulty, from easy to small research projects, and expand on topics discussed in the text. Solutions are available from the authors.
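As a flavour of the kind of algorithm the book treats, the snippet below shows Karatsuba multiplication, a classic arbitrary-precision technique that replaces four half-size products with three. This is a generic textbook example, not code taken from the book.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's method:
    three half-size multiplications instead of four."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    high_x, low_x = x >> n, x & ((1 << n) - 1)
    high_y, low_y = y >> n, y & ((1 << n) - 1)
    z0 = karatsuba(low_x, low_y)                           # low * low
    z2 = karatsuba(high_x, high_y)                         # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2   # cross terms
    return (z2 << (2 * n)) + (z1 << n) + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```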

203 citations

Journal ArticleDOI
TL;DR: A deep learning-aided SCMA (D-SCMA) is proposed in which the codebook that minimizes the bit error rate (BER) is adaptively constructed and a decoding strategy is learned using a deep neural network-based encoder and decoder.
Abstract: Sparse code multiple access (SCMA) is a promising code-based non-orthogonal multiple-access technique that can provide improved spectral efficiency and massive connectivity meeting the requirements of 5G wireless communication systems. We propose a deep learning-aided SCMA (D-SCMA) in which the codebook that minimizes the bit error rate (BER) is adaptively constructed, and a decoding strategy is learned using a deep neural network-based encoder and decoder. One benefit of D-SCMA is that the construction of an efficient codebook can be achieved in an automated manner, which is generally difficult due to the non-orthogonality and multi-dimensional traits of SCMA. We use simulations to show that our proposed scheme provides a lower BER with a smaller computation time than conventional schemes.
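A small illustration of the BER metric used to compare such schemes: hard-decide the decoder's per-bit probabilities and count disagreements with the transmitted bits. The dummy data below stands in for a real D-SCMA decoder's output; the array shapes are illustrative.

```python
import numpy as np

def bit_error_rate(true_bits: np.ndarray, predicted_probs: np.ndarray) -> float:
    """true_bits: {0,1} array; predicted_probs: decoder sigmoid outputs in [0,1]."""
    decisions = (predicted_probs >= 0.5).astype(int)
    return float(np.mean(decisions != true_bits))

# Dummy data standing in for a trained decoder's soft outputs.
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=(10000, 12))                 # e.g. 6 users x 2 bits
probs = np.clip(bits + rng.normal(0, 0.3, bits.shape), 0, 1)
print(bit_error_rate(bits, probs))
```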

125 citations