
Showing papers on "Communication channel published in 2018"


Journal ArticleDOI
TL;DR: The proposed deep learning-based approach to handle wireless OFDM channels in an end-to-end manner is more robust than conventional methods when fewer training pilots are used, the cyclic prefix is omitted, and nonlinear clipping noise exists.
Abstract: This letter presents our initial results in deep learning for channel estimation and signal detection in orthogonal frequency-division multiplexing (OFDM) systems. In this letter, we exploit deep learning to handle wireless OFDM channels in an end-to-end manner. Different from existing OFDM receivers that first estimate channel state information (CSI) explicitly and then detect/recover the transmitted symbols using the estimated CSI, the proposed deep learning-based approach estimates CSI implicitly and recovers the transmitted symbols directly. To address channel distortion, a deep learning model is first trained offline using the data generated from simulation based on channel statistics and then used for recovering the online transmitted data directly. Our simulation results show that the deep learning-based approach can address channel distortion and detect the transmitted symbols with performance comparable to that of the minimum mean-square error (MMSE) estimator. Furthermore, the deep learning-based approach is more robust than conventional methods when fewer training pilots are used, the cyclic prefix is omitted, and nonlinear clipping noise exists. In summary, deep learning is a promising tool for channel estimation and signal detection in wireless communications with complicated channel distortion and interference.

1,357 citations
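
For context, the conventional pilot-based baseline that this letter's network replaces can be sketched in a few lines: least-squares (LS) channel estimation from a known block pilot, followed by one-tap equalization. The numpy sketch below makes that pipeline concrete; the FFT size, CP length, tap count, and SNR are illustrative assumptions, not the letter's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
K, cp, L, snr_db = 64, 16, 8, 15        # subcarriers, CP length, channel taps, SNR
noise_var = 10 ** (-snr_db / 10)

def ofdm_tx(x):
    s = np.fft.ifft(x) * np.sqrt(len(x))
    return np.concatenate([s[-cp:], s])  # prepend cyclic prefix

def channel(s, h):
    r = np.convolve(s, h)[: len(s)]      # multipath channel
    n = np.sqrt(noise_var / 2) * (rng.normal(size=r.shape) + 1j * rng.normal(size=r.shape))
    return r + n

def ofdm_rx(r):
    return np.fft.fft(r[cp:cp + K]) / np.sqrt(K)  # drop CP, back to frequency domain

qpsk = lambda b: ((2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)) / np.sqrt(2)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)

pilot = qpsk(rng.integers(0, 2, (K, 2)))             # known pilot OFDM symbol
data_bits = rng.integers(0, 2, (K, 2))
data = qpsk(data_bits)

H_ls = ofdm_rx(channel(ofdm_tx(pilot), h)) / pilot   # LS channel estimate per subcarrier
y = ofdm_rx(channel(ofdm_tx(data), h))
x_hat = y / H_ls                                     # one-tap equalization
bits_hat = np.stack([x_hat.real > 0, x_hat.imag > 0], axis=1)
print("bit errors:", int(np.sum(bits_hat != data_bits)))
```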


Journal ArticleDOI
Liang Liu, Wei Yu
TL;DR: It is shown that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern.
Abstract: This two-part paper considers an uplink massive device communication scenario in which a large number of devices are connected to a base station (BS), but user traffic is sporadic so that in any given coherence interval, only a subset of users is active. The objective is to quantify the cost of active user detection and channel estimation and to characterize the overall achievable rate of a grant-free two-phase access scheme in which device activity detection and channel estimation are performed jointly using pilot sequences in the first phase and data is transmitted in the second phase. In order to accommodate a large number of simultaneously transmitting devices, this paper studies an asymptotic regime where the BS is equipped with a massive number of antennas. The main contributions of Part I of this paper are as follows. First, we note that as a consequence of having a large pool of potentially active devices but limited coherence time, the pilot sequences cannot all be orthogonal. However, despite the nonorthogonality, this paper shows that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern. Part II of this paper further characterizes the achievable rates using the proposed scheme and quantifies the cost of using nonorthogonal pilot sequences for channel estimation in achievable rates.

594 citations


Journal ArticleDOI
TL;DR: The learned denoising-based approximate message passing (LDAMP) network is exploited and significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.
Abstract: Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave massive multiple-input and multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.

587 citations
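
LDAMP unrolls the denoising-based AMP (D-AMP) iteration with a trained CNN denoiser. The numpy sketch below runs the same iteration with a soft-threshold denoiser standing in for the learned network; the problem sizes, noise level, and threshold rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 128, 16                       # channel length, measurements, sparsity

x_true = np.zeros(n, complex)
x_true[rng.choice(n, k, replace=False)] = (rng.normal(size=k)
                                           + 1j * rng.normal(size=k)) / np.sqrt(2)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
y = A @ x_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

def denoise(v, tau):
    # Soft threshold; LDAMP replaces this hand-crafted step with a trained CNN denoiser.
    mag = np.maximum(np.abs(v), 1e-12)
    return np.where(mag > tau, (1 - tau / mag) * v, 0)

x, z = np.zeros(n, complex), y.copy()
for _ in range(20):
    sigma = np.linalg.norm(z) / np.sqrt(m)   # per-iteration noise-level estimate
    r = x + A.conj().T @ z                   # pseudo-data seen by the denoiser
    x = denoise(r, 2 * sigma)
    z = y - A @ x + z * np.count_nonzero(x) / m   # residual with Onsager correction
print("NMSE:", np.linalg.norm(x - x_true)**2 / np.linalg.norm(x_true)**2)
```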


Journal ArticleDOI
TL;DR: Simulation results corroborate that the proposed deep learning-based scheme outperforms conventional methods in both DOA estimation and channel estimation, and extensive simulations covering various cases verify its robustness.
Abstract: The recent concept of massive multiple-input multiple-output (MIMO) can significantly improve the capacity of the communication network, and it has been regarded as a promising technology for the next-generation wireless communications. However, a fundamental challenge in existing massive MIMO systems is that high computational complexity and complicated spatial structures make it difficult to exploit the channel characteristics and sparsity of these multi-antenna systems. To address this problem, in this paper, we focus on channel estimation and direction-of-arrival (DOA) estimation, and we propose a novel framework that integrates massive MIMO into deep learning. To realize end-to-end performance, a deep neural network (DNN) is employed to conduct offline and online learning procedures, which effectively learns the statistics of the wireless channel and the spatial structures in the angle domain. Concretely, the DNN is first trained with simulated data under different channel conditions during offline learning, and corresponding output data are then obtained from the current input data during online learning. To realize super-resolution channel estimation and DOA estimation, two deep learning-based algorithms are developed, in which the DOA is estimated directly in the angle domain without additional complexity. Furthermore, simulation results corroborate that the proposed deep learning-based scheme achieves better DOA estimation and channel estimation performance than conventional methods, and its robustness is verified by extensive simulations covering various cases.

577 citations


Proceedings ArticleDOI
05 Sep 2018
TL;DR: In this paper, an IRS-enhanced point-to-point multiple-input single-output (MISO) wireless system is considered, where one IRS is deployed to assist the communication from an access point (AP) to a single-antenna user, so that the user simultaneously receives the signal sent directly from the AP as well as that reflected by the IRS.
Abstract: Intelligent reflecting surface (IRS) is envisioned to have abundant applications in future wireless networks by smartly reconfiguring the signal propagation for performance enhancement. Specifically, an IRS consists of a large number of low-cost passive elements, each reflecting the incident signal with a certain phase shift to collaboratively achieve beamforming and suppress interference at one or more designated receivers. In this paper, we study an IRS-enhanced point-to-point multiple-input single-output (MISO) wireless system where one IRS is deployed to assist in the communication from a multi-antenna access point (AP) to a single-antenna user. As a result, the user simultaneously receives the signal sent directly from the AP as well as that reflected by the IRS. We aim to maximize the total received signal power at the user by jointly optimizing the (active) transmit beamforming at the AP and (passive) reflect beamforming by the phase shifters at the IRS. We first propose a centralized algorithm based on the technique of semidefinite relaxation (SDR) by assuming global channel state information (CSI) is available at the IRS. Since the centralized implementation requires excessive channel estimation and signal exchange overheads, we further propose a low-complexity distributed algorithm where the AP and IRS independently adjust the transmit beamforming and the phase shifts in an alternating manner until convergence is reached. Simulation results show that significant performance gains can be achieved by the proposed algorithms as compared to benchmark schemes. Moreover, it is verified that the IRS is able to drastically enhance the link quality and/or coverage over the conventional setup without the IRS.

557 citations
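
The passive-beamforming intuition admits a closed form when the AP has a single antenna: each IRS element's phase is chosen so its reflected path adds in phase with the direct path. The numpy sketch below demonstrates that simplified case only (the paper's SDR and alternating algorithms handle the multi-antenna AP); the channel statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                      # IRS elements

# Single-antenna AP simplification: direct link h_d, AP->IRS link g, IRS->user link h_r
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
g   = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Optimal passive beamforming aligns every reflected path with the direct path
theta = np.angle(h_d) - np.angle(h_r * g)
combined = h_d + np.sum(h_r * g * np.exp(1j * theta))

rand_phases = rng.uniform(0, 2 * np.pi, N)
print("power without IRS:   ", np.abs(h_d) ** 2)
print("power, random phases:", np.abs(h_d + np.sum(h_r * g * np.exp(1j * rand_phases))) ** 2)
print("power, aligned phases:", np.abs(combined) ** 2)
```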


Journal ArticleDOI
TL;DR: An extensive survey of the measurement methods proposed for UAV channel modeling using low-altitude platforms is provided, and various channel characterization efforts are discussed.
Abstract: Unmanned aerial vehicles (UAVs) have attracted great interest in rapid deployment for both civil and military applications. UAV communication has its own distinctive channel characteristics compared to the widely used cellular or satellite systems. Accurate channel characterization is crucial for the performance optimization and design of efficient UAV communication. However, several challenges exist in UAV channel modeling. For example, the propagation characteristics of UAV channels remain underexplored with respect to spatial and temporal variations in non-stationary channels. Additionally, airframe shadowing has not yet been investigated for small-size rotary UAVs. This paper provides an extensive survey of the measurement methods proposed for UAV channel modeling that use low-altitude platforms and discusses various channel characterization efforts. We also review UAV channel modeling approaches from a contemporary perspective and outline future research challenges in this domain.

532 citations


Journal ArticleDOI
TL;DR: A survey is presented of the mmWave propagation characteristics, channel modeling, and design guidelines, such as system and antenna design considerations for mmWave, including the link budget of the network, which are essential for mmWave communication system design.
Abstract: The millimeter wave (mmWave) frequency band spanning from 30 to 300 GHz constitutes a substantial portion of the unused frequency spectrum, which is an important resource for future wireless communication systems in order to fulfill the escalating capacity demand. Given the improvements in integrated components and enhanced power efficiency at high frequencies, wireless systems can operate in the mmWave frequency band. In this paper, we present a survey of the mmWave propagation characteristics, channel modeling, and design guidelines, such as system and antenna design considerations for mmWave, including the link budget of the network, which are essential for mmWave communication systems. We commence by introducing the main channel propagation characteristics of mmWaves followed by channel modeling and design guidelines. Then, we report on the main measurement and modeling campaigns conducted in order to understand the mmWave band’s properties and present the associated channel models. We survey the different channel models focusing on the channel models available for the 28, 38, 60, and 73 GHz frequency bands. Finally, we present the mmWave channel model and its challenges in the context of mmWave communication systems design.

512 citations
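
As a worked example of the link-budget considerations the survey covers, the sketch below computes free-space path loss, received power, and SNR at 28 GHz. Every numeric value (transmit power, array gains, noise figure, bandwidth, distance) is an assumption for illustration, not a figure from the survey.

```python
import numpy as np

# Illustrative 28 GHz link budget (all values are assumptions)
f_ghz, d_m = 28.0, 200.0
ptx_dbm = 30.0            # transmit power
g_tx = g_rx = 25.0        # TX/RX array gains (dBi)
nf_db, bw_hz = 7.0, 400e6  # receiver noise figure, bandwidth

# Friis free-space path loss: 20log10(d) + 20log10(f) + 20log10(4*pi/c)
fspl_db = 20 * np.log10(d_m) + 20 * np.log10(f_ghz * 1e9) - 147.55
prx_dbm = ptx_dbm + g_tx + g_rx - fspl_db
noise_dbm = -174 + 10 * np.log10(bw_hz) + nf_db   # thermal noise floor + NF
print(f"FSPL = {fspl_db:.1f} dB, Prx = {prx_dbm:.1f} dBm, SNR = {prx_dbm - noise_dbm:.1f} dB")
```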


Proceedings Article
17 Jul 2018
TL;DR: Bottleneck Attention Module (BAM) as discussed by the authors infers an attention map along two separate pathways, channel and spatial, constructs a hierarchical attention at bottlenecks with a small number of parameters, and is trainable in an end-to-end manner jointly with any feed-forward model.
Abstract: Recent advances in deep neural networks have been developed via architecture search for stronger representational power. In this work, we focus on the effect of attention in general deep neural networks. We propose a simple and effective attention module, named Bottleneck Attention Module (BAM), that can be integrated with any feed-forward convolutional neural networks. Our module infers an attention map along two separate pathways, channel and spatial. We place our module at each bottleneck of models where the downsampling of feature maps occurs. Our module constructs a hierarchical attention at bottlenecks with a small number of parameters, and it is trainable in an end-to-end manner jointly with any feed-forward models. We validate our BAM through extensive experiments on CIFAR-100, ImageNet-1K, VOC 2007 and MS COCO benchmarks. Our experiments show consistent improvement in classification and detection performances with various models, demonstrating the wide applicability of BAM. The code and models will be publicly available.

463 citations
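
A simplified PyTorch sketch of a BAM-style block, reproducing the two-pathway structure (channel and spatial) and the residual attention described above. The reduction ratio and dilation values are commonly used defaults assumed here; this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BAM(nn.Module):
    """Simplified Bottleneck Attention Module sketch (reduction/dilation assumed)."""
    def __init__(self, ch, r=16, dilation=4):
        super().__init__()
        self.channel = nn.Sequential(                 # channel attention branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch),
        )
        self.spatial = nn.Sequential(                 # spatial attention branch
            nn.Conv2d(ch, ch // r, 1),
            nn.Conv2d(ch // r, ch // r, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, 1, 1),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        mc = self.channel(x).view(b, c, 1, 1)         # (B, C, 1, 1)
        ms = self.spatial(x)                          # (B, 1, H, W)
        att = torch.sigmoid(mc + ms)                  # broadcast-combined attention map
        return x * (1 + att)                          # residual attention, as in BAM

x = torch.randn(2, 64, 32, 32)
print(BAM(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```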


Journal ArticleDOI
TL;DR: A novel and effective deep learning (DL)-aided NOMA system, in which several NOMA users with random deployment are served by one base station, and a long short-term memory (LSTM) network based on DL is incorporated into a typical NOMA system, enabling the proposed scheme to detect the channel characteristics automatically.
Abstract: Nonorthogonal multiple access (NOMA) has been considered as an essential multiple access technique for enhancing system capacity and spectral efficiency in future communication scenarios. However, the existing NOMA systems have a fundamental limit: high computational complexity and a sharply changing wireless channel make exploiting the characteristics of the channel and deriving the ideal allocation methods very difficult tasks. To break this fundamental limit, in this paper, we propose a novel and effective deep learning (DL)-aided NOMA system, in which several NOMA users with random deployment are served by one base station. Since DL is advantageous in that it allows training the input signals and detecting sharply changing channel conditions, we exploit it to address wireless NOMA channels in an end-to-end manner. Specifically, it is employed in the proposed NOMA system to learn a completely unknown channel environment. A long short-term memory (LSTM) network based on DL is incorporated into a typical NOMA system, enabling the proposed scheme to detect the channel characteristics automatically. In the proposed strategy, the LSTM is first trained by simulated data under different channel conditions via offline learning, and then the corresponding output data can be obtained based on the current input data used during the online learning process. In general, we build, train and test the proposed cooperative framework to realize automatic encoding, decoding and channel detection in an additive white Gaussian noise channel. Furthermore, we regard one conventional user activity and data detection scheme as an unknown nonlinear mapping operation and use LSTM to approximate it to evaluate the data detection capability of DL-based NOMA. Simulation results demonstrate that the proposed scheme is robust and efficient compared with conventional approaches. In addition, the accuracy of the LSTM-aided NOMA scheme is studied by introducing the well-known tenfold cross-validation procedure.

418 citations
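
For reference, the conventional baseline such DL receivers are compared against is power-domain superposition coding with successive interference cancellation (SIC). A minimal numpy sketch, assuming BPSK, a two-user power split, and unit channel gains for simplicity:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sym, snr_db = 10000, 20
p_far, p_near = 0.8, 0.2                 # more power to the weaker (far) user
noise_var = 10 ** (-snr_db / 10)

bpsk = lambda n: 2.0 * rng.integers(0, 2, n) - 1
x_far, x_near = bpsk(n_sym), bpsk(n_sym)
s = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near   # superposition coding

# Near user's received signal (unit channel gain) and SIC detection
y = s + np.sqrt(noise_var) * rng.normal(size=n_sym)
x_far_hat = np.sign(y)                    # decode the far user's symbol first,
y_sic = y - np.sqrt(p_far) * x_far_hat    # cancel it,
x_near_hat = np.sign(y_sic)               # then decode the near user's own symbol
print("near-user BER after SIC:", np.mean(x_near_hat != x_near))
```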


Journal ArticleDOI
TL;DR: The requirements of 5G channel modeling are summarized, an extensive review of recent channel measurements and models is provided, and future research directions for channel measurements and modeling are outlined.
Abstract: The fifth generation (5G) mobile communication systems will be in use around 2020. The aim of 5G systems is to provide anywhere and anytime connectivity for anyone and anything. Several new technologies are being researched for 5G systems, such as massive multiple-input multiple-output communications, vehicle-to-vehicle communications, high-speed train communications, and millimeter wave communications. Each of these technologies introduces new propagation properties and sets specific requirements on 5G channel modeling. Considering the fact that channel models are indispensable for system design and performance evaluation, accurate and efficient channel models covering various 5G technologies and scenarios are urgently needed. This paper first summarizes the requirements of the 5G channel modeling, and then provides an extensive review of the recent channel measurements and models. Finally, future research directions for channel measurements and modeling are provided.

407 citations


Proceedings ArticleDOI
19 Apr 2018
TL;DR: TasNet as mentioned in this paper directly models the signal in the time-domain using an encoder-decoder framework and performs the source separation on nonnegative encoder outputs, which is then synthesized by the decoder.
Abstract: Robust speech processing in multi-talker environments requires effective speech separation. Recent deep learning systems have made significant progress toward solving this problem, yet it remains challenging particularly in real-time, short latency applications. Most methods attempt to construct a mask for each source in the time-frequency representation of the mixture signal, which is not necessarily an optimal representation for speech separation. In addition, time-frequency decomposition results in inherent problems such as phase/magnitude decoupling and the long time window required to achieve sufficient frequency resolution. We propose Time-domain Audio Separation Network (TasNet) to overcome these limitations. We directly model the signal in the time-domain using an encoder-decoder framework and perform the source separation on nonnegative encoder outputs. This method removes the frequency decomposition step and reduces the separation problem to estimating source masks on encoder outputs, which are then synthesized by the decoder. Our system outperforms the current state-of-the-art causal and noncausal speech separation algorithms, reduces the computational cost of speech separation, and significantly reduces the minimum required latency of the output. This makes TasNet suitable for applications where low-power, real-time implementation is desirable such as in hearable and telecommunication devices.
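
A toy PyTorch sketch of the encoder/mask/decoder structure described above. The mask network here is a single convolution rather than TasNet's LSTM, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyTasNet(nn.Module):
    """Minimal time-domain encoder/mask/decoder sketch (sizes illustrative)."""
    def __init__(self, n_src=2, n_filters=64, kernel=16, stride=8):
        super().__init__()
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride)
        self.masker = nn.Conv1d(n_filters, n_src * n_filters, 1)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)
        self.n_src, self.n_filters = n_src, n_filters

    def forward(self, mix):                       # mix: (B, 1, T)
        w = torch.relu(self.encoder(mix))         # nonnegative mixture weights
        m = self.masker(w).view(-1, self.n_src, self.n_filters, w.shape[-1])
        m = torch.sigmoid(m)                      # one mask per source
        srcs = [self.decoder(w * m[:, i]) for i in range(self.n_src)]
        return torch.stack(srcs, dim=1)           # (B, n_src, 1, T)

est = TinyTasNet()(torch.randn(4, 1, 8000))
print(est.shape)   # torch.Size([4, 2, 1, 8000])
```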

Posted Content
TL;DR: In this article, Orthogonal Time Frequency Space (OTFS) modulation is proposed to exploit the full channel diversity over both time and frequency, which obviates the need for transmitter adaptation, and greatly simplifies system operation.
Abstract: This paper introduces a new two-dimensional modulation technique called Orthogonal Time Frequency Space (OTFS) modulation. OTFS has the novel and important feature of being designed in the delay-Doppler domain. When coupled with a suitable equalizer, OTFS modulation is able to exploit the full channel diversity over both time and frequency. Moreover, it converts the fading, time-varying wireless channel experienced by modulated signals such as OFDM into a time-independent channel with a complex channel gain that is essentially constant for all symbols. This design obviates the need for transmitter adaptation, and greatly simplifies system operation. The paper describes the basic operating principles of OTFS as well as a possible implementation as an overlay to current or anticipated standardized systems. OTFS is shown to provide significant performance improvement in systems with high Doppler, short packets, and/or large antenna arrays. In particular, simulation results indicate at least several dB of block error rate performance improvement for OTFS over OFDM in all of these settings.
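
The delay-Doppler-to-time-frequency mapping can be sketched with two FFT passes (the ISFFT) plus an OFDM-style modulator per time slot. The numpy round trip below assumes an ideal channel and one common normalization convention; it is a sketch of the transform structure, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 32, 16                 # delay bins x Doppler bins

# QPSK symbols placed on the delay-Doppler grid
bits = rng.integers(0, 2, (M, N, 2))
x_dd = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# OTFS modulation: ISFFT to time-frequency, then an OFDM-style modulator per slot
x_tf = np.fft.fft(np.fft.ifft(x_dd, axis=1), axis=0)   # ISFFT (delay->freq, Doppler->time)
s = np.fft.ifft(x_tf, axis=0)                          # Heisenberg transform (per-slot IFFT)

# Ideal-channel demodulation: invert each step (a real channel acts between these)
y_tf = np.fft.fft(s, axis=0)                           # Wigner transform (per-slot FFT)
y_dd = np.fft.fft(np.fft.ifft(y_tf, axis=0), axis=1)   # SFFT back to delay-Doppler
print("round-trip error:", np.max(np.abs(y_dd - x_dd)))
```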

Journal ArticleDOI
TL;DR: In this paper, the authors considered the massive connectivity application in which a large number of devices communicate with a base station (BS) in a sporadic fashion, and proposed an approximate message passing (AMP) algorithm design that exploits the statistics of the wireless channel and provided an analytical characterization of the probabilities of false alarm and missed detection via state evolution.
Abstract: This paper considers the massive connectivity application in which a large number of devices communicate with a base-station (BS) in a sporadic fashion. Device activity detection and channel estimation are central problems in such a scenario. Due to the large number of potential devices, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) or multiple measurement vector (MMV) problem depending on whether the BS has a single antenna or multiple antennas and efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection via state evolution. We consider two cases depending on whether or not the large-scale component of the channel fading is known at the BS and design the minimum mean squared error denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the channel statistics in AMP design; however, knowing the large-scale fading component does not appear to offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas.
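
A toy numpy sketch of AMP-based activity detection: sparse recovery from non-orthogonal signatures, followed by thresholding to flag active devices. The soft-threshold denoiser stands in for the paper's MMSE denoiser, and all sizes, noise levels, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, K = 200, 60, 10          # devices, signature length, active devices

active = rng.choice(N, K, replace=False)
x = np.zeros(N, complex)
x[active] = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)  # channels
A = (rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N))) / np.sqrt(2 * L)
y = A @ x + 0.05 * (rng.normal(size=L) + 1j * rng.normal(size=L))

# AMP with a soft-threshold denoiser (the paper's MMSE denoiser is more refined)
soft = lambda v, t: np.where(np.abs(v) > t,
                             (1 - t / np.maximum(np.abs(v), 1e-12)) * v, 0)
xh, z = np.zeros(N, complex), y.copy()
for _ in range(25):
    tau = np.linalg.norm(z) / np.sqrt(L)
    xh_new = soft(xh + A.conj().T @ z, 1.5 * tau)
    z = y - A @ xh_new + z * np.count_nonzero(xh_new) / L   # Onsager correction
    xh = xh_new

detected = set(np.flatnonzero(np.abs(xh) > 0.1))
print("missed:", len(set(active) - detected),
      "false alarms:", len(detected - set(active)))
```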

Journal ArticleDOI
TL;DR: In this article, a broadband channel estimation algorithm for mmWave multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs) is proposed.
Abstract: We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.

Journal ArticleDOI
TL;DR: This work considers detection based on deep learning, and shows it is possible to train detectors that perform well without any knowledge of the underlying channel models, and demonstrates that the bit error rate performance of the proposed SBRNN detector is better than that of a Viterbi detector with imperfect CSI.
Abstract: We consider detection based on deep learning, and show it is possible to train detectors that perform well without any knowledge of the underlying channel models. Moreover, when the channel model is known, we demonstrate that it is possible to train detectors that do not require channel state information (CSI). In particular, a technique we call a sliding bidirectional recurrent neural network (SBRNN) is proposed for detection where, after training, the detector estimates the data in real time as the signal stream arrives at the receiver. We evaluate this algorithm, as well as other neural network (NN) architectures, using the Poisson channel model, which is applicable to both optical and molecular communication systems. In addition, we also evaluate the performance of this detection method applied to data sent over a molecular communication platform, where the channel is difficult to model analytically. We show that SBRNN is computationally efficient, and can perform detection under various channel conditions without knowing the underlying channel model. We also demonstrate that the bit error rate performance of the proposed SBRNN detector is better than that of a Viterbi detector with imperfect CSI as well as that of other NN detectors that have been previously proposed. Finally, we show that the SBRNN can perform well in rapidly changing channels, where the coherence time is on the order of a single symbol duration.

Journal ArticleDOI
TL;DR: In this article, the authors provide a tutorial on a recently developed full-stack mmWave module integrated into the widely used ns-3 simulator, which includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray tracing data.
Abstract: Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation (5G) cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The physical and medium access control layers are modular and highly customizable, making it easy to integrate algorithms or compare orthogonal frequency division multiplexing numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.

Proceedings ArticleDOI
01 Feb 2018
TL;DR: An IF interface to the analog baseband is desired for low power consumption in the handset or user equipment (UE) active antenna and to enable use of arrays of transceivers for customer premises equipment (CPE) or basestation (BS) antenna arrays with a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules.
Abstract: Developing next-generation cellular technology (5G) in the mm-wave bands will require low-cost phased-array transceivers [1]. Even with the benefit of beamforming, due to space constraints in the mobile form-factor, increasing TX output power while maintaining acceptable PA PAE, LNA NF, and overall transceiver power consumption is important for maximizing the link budget's allowable path loss and minimizing handset case temperature. Further, the phased-array transceiver will need to be able to support dual-polarization communication. An IF interface to the analog baseband is desired for low power consumption in the handset or user equipment (UE) active antenna and to enable use of arrays of transceivers for customer premises equipment (CPE) or basestation (BS) antenna arrays with a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: In this article, the suitability of LIS for green communications in terms of energy efficiency (EE), expressed as the number of bits per Joule, is investigated by designing the transmit powers per user and the values for the surface elements that jointly maximize the system's EE performance.
Abstract: We consider a multi-user Multiple-Input Single-Output (MISO) communication system consisting of a multi-antenna base station communicating in the downlink simultaneously with multiple single-antenna mobile users. This communication is assumed to be assisted by a Large Intelligent Surface (LIS) that consists of many nearly passive antenna elements, whose parameters can be tuned according to desired objectives. The latest design advances on these surfaces suggest cheap elements effectively acting as low resolution (even 1-bit resolution) phase shifters, whose joint configuration affects the electromagnetic behavior of the wireless propagation channel. In this paper, we investigate the suitability of LIS for green communications in terms of Energy Efficiency (EE), which is expressed as the number of bits per Joule. In particular, for the considered multi-user MISO system, we design the transmit powers per user and the values for the surface elements that jointly maximize the system's EE performance. Our representative simulation results show that LIS-assisted communication, even with nearly passive 1-bit resolution antenna elements, provides significant EE gains compared to conventional relay-assisted communication.
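
The EE metric itself is straightforward to compute. The sketch below contrasts an LIS against an active relay under the simplifying assumption that both achieve the same sum rate but draw different static power; all powers, SNRs, and the bandwidth are invented for illustration and are not the paper's values.

```python
import numpy as np

# Energy efficiency (bits/Joule) comparison sketch; all numbers are assumptions
bw = 20e6                        # bandwidth (Hz)
snr_db = np.array([10.0, 14.0])  # per-user SNRs after LIS phase tuning (illustrative)
p_tx = np.array([0.5, 0.5])      # transmit powers per user (W)
p_static_lis = 2.0               # LIS + circuitry static power (W), assumed
p_static_relay = 6.0             # active relay static power (W), assumed

sum_rate = bw * np.sum(np.log2(1 + 10 ** (snr_db / 10)))   # bits/s
ee_lis = sum_rate / (np.sum(p_tx) + p_static_lis)
ee_relay = sum_rate / (np.sum(p_tx) + p_static_relay)       # same rate, more power
print(f"EE LIS: {ee_lis/1e6:.1f} Mbit/J, EE relay: {ee_relay/1e6:.1f} Mbit/J")
```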

Posted Content
TL;DR: This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication-and-learning metrics, which are useful for network planning and optimization.
Abstract: The popularity of mobile devices results in the availability of enormous data and computational resources at the network edge. To leverage the data and resources, a new machine learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing fast and intelligent services to mobile users. While computing speeds are advancing rapidly, the communication latency is becoming the bottleneck of fast edge learning. To address this issue, this work is focused on designing a low latency multi-access scheme for edge learning. We consider a popular framework, federated edge learning (FEEL), where edge-server and on-device learning are synchronized to train a model without violating user-data privacy. It is proposed that model updates simultaneously transmitted by devices over broadband channels should be analog aggregated "over-the-air" by exploiting the superposition property of a multi-access channel. Thereby, "interference" is harnessed to provide fast implementation of the model aggregation. This results in dramatic latency reduction compared with the traditional orthogonal access (i.e., OFDMA). In this work, the performance of FEEL is characterized targeting a single-cell random network. First, due to power alignment between devices as required for aggregation, a fundamental tradeoff is shown to exist between the update-reliability and the expected update-truncation ratio. This motivates the design of an opportunistic scheduling scheme for FEEL that selects devices within a distance threshold. This scheme is shown using real datasets to yield satisfactory learning performance in the presence of high mobility. Second, both the multi-access latency of the proposed analog aggregation and the OFDMA scheme are analyzed. Their ratio, which quantifies the latency reduction of the former, is proved to scale almost linearly with device population.
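
A minimal numpy sketch of the over-the-air aggregation idea: devices precode their updates by channel inversion (power alignment) so the multi-access channel's superposition directly yields the sum of updates. Deep-fade truncation, scheduling, and the latency analysis from the paper are omitted; the noise level and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_dev, dim = 20, 1000
updates = rng.normal(size=(n_dev, dim))       # local model updates
h = (rng.normal(size=n_dev) + 1j * rng.normal(size=n_dev)) / np.sqrt(2)

# Each device inverts its channel so the signals add coherently at the server;
# devices in deep fade would need truncation, the tradeoff analyzed in the paper.
tx = updates / h[:, None]                     # channel-inversion precoding
y = np.sum(h[:, None] * tx, axis=0)           # superposition over the MAC
y += 0.05 * rng.normal(size=dim)              # receiver noise
avg_hat = y.real / n_dev                      # server's estimate of the average update
avg_true = updates.mean(axis=0)
print("aggregation NMSE:", np.sum((avg_hat - avg_true)**2) / np.sum(avg_true**2))
```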

Journal ArticleDOI
TL;DR: This paper studies an AmBC system by leveraging the ambient orthogonal frequency division multiplexing (OFDM) modulated signals in the air, and proposes a novel joint design for BD waveform and receiver detector.
Abstract: Ambient backscatter communication (AmBC) enables radio-frequency (RF) powered backscatter devices (BDs) (e.g., sensors and tags) to modulate their information bits over ambient RF carriers in an over-the-air manner. This technology, also called "modulation in the air," has emerged as a promising solution to achieve green communication for future Internet of Things. This paper studies an AmBC system by leveraging the ambient orthogonal frequency division multiplexing (OFDM) modulated signals in the air. We first model such AmBC system from a spread-spectrum communication perspective, upon which a novel joint design for BD waveform and receiver detector is proposed. The BD symbol period is designed as an integer multiple of the OFDM symbol period, and the waveform for BD bit "0" maintains the same state within the BD symbol period, while the waveform for BD bit "1" has a state transition in the middle of each OFDM symbol period within the BD symbol period. In the receiver detector design, we construct the test statistic that cancels out the direct-link interference by exploiting the repeating structure of the ambient OFDM signals due to the use of cyclic prefix. For the system with a single-antenna receiver, the maximum-likelihood detector is proposed to recover the BD bits, for which the optimal threshold is obtained in closed-form expression. For the system with a multi-antenna receiver, we propose a new test statistic which is a linear combination of the per-antenna test statistics and derive the corresponding optimal detector. The proposed optimal detectors require only knowing the strength of the backscatter channel, thus simplifying their implementation. Moreover, practical timing synchronization algorithms are proposed for the designed AmBC system, and we also analyze the effect of various system parameters on the transmission rate and detection performance. Finally, extensive numerical results are provided to verify that the proposed transceiver design can improve the system bit-error-rate performance and the operating range significantly and achieve much higher data rate, as compared with the conventional design.

Journal ArticleDOI
TL;DR: In this paper, a unified framework of geometry-based stochastic models for the 5G wireless communication systems is proposed, which aims at capturing small-scale fading channel characteristics of key 5G communication scenarios, such as massive MIMO, high-speed train, vehicle-to-vehicle, and millimeter wave communications.
Abstract: A novel unified framework of geometry-based stochastic models for the fifth generation (5G) wireless communication systems is proposed in this paper. The proposed general 5G channel model aims at capturing small-scale fading channel characteristics of key 5G communication scenarios, such as massive multiple-input multiple-output, high-speed train, vehicle-to-vehicle, and millimeter wave communications. It is a 3-D non-stationary channel model based on the WINNER II and Saleh-Valenzuela channel models considering array-time cluster evolution. Moreover, it can easily be reduced to various simplified channel models by properly adjusting model parameters. Statistical properties of the proposed general 5G small-scale fading channel model are investigated to demonstrate its capability of capturing channel characteristics of various scenarios, with excellent fitting to some corresponding channel measurements.
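
A basic building block of such geometry-based stochastic models is a sum-of-sinusoids fading generator, where each ray's angle of arrival sets its Doppler shift. A minimal Clarke-model sketch in numpy (ray count, maximum Doppler, and sampling rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, f_d, fs, n_samp = 32, 100.0, 10e3, 2000   # rays, max Doppler (Hz), sample rate

t = np.arange(n_samp) / fs
# Each ray has a random angle of arrival -> Doppler shift f_d*cos(aoa), random phase
aoa = rng.uniform(0, 2 * np.pi, n_paths)
phase = rng.uniform(0, 2 * np.pi, n_paths)
h = np.sum(np.exp(1j * (2 * np.pi * f_d * np.cos(aoa)[:, None] * t + phase[:, None])),
           axis=0) / np.sqrt(n_paths)

print("mean power:", np.mean(np.abs(h) ** 2))      # ~1 (Rayleigh fading envelope)
```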

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the channel hardening and favorable propagation properties of a realistic stochastic access point (AP) deployment in CF massive MIMO networks and show that channel hardening only appears in special cases, for example, when the pathloss exponent is small.
Abstract: Cell-free (CF) massive multiple-input multiple-output (MIMO) is an alternative topology for future wireless networks, where a large number of single-antenna access points (APs) are distributed over the coverage area. There are no cells but all users are jointly served by the APs using network MIMO methods. Prior works have claimed that the CF massive MIMO inherits the basic properties of cellular massive MIMO, namely, channel hardening and favorable propagation. In this paper, we evaluate whether one can rely on these properties under a realistic stochastic AP deployment. Our results show that channel hardening only appears in special cases, for example, when the pathloss exponent is small. However, by using 5–10 antennas per AP, instead of one, we can substantially improve the hardening. Only spatially well-separated users will exhibit favorable propagation, but when adding more antennas and/or reducing the pathloss exponent, it becomes more likely for favorable propagation to occur. The conclusion is that we cannot rely on the channel hardening and the favorable propagation when analyzing and designing the CF massive MIMO networks, but we need to use achievable rate expressions and resource allocation schemes that work well also in the absence of these properties. Some options are reviewed in this paper.
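
The hardening question is easy to probe by Monte Carlo: the variance of the effective channel gain relative to its squared mean should shrink as antennas are added. A numpy sketch under i.i.d. Rayleigh fading, which is a simplification of the paper's stochastic AP model:

```python
import numpy as np

rng = np.random.default_rng(8)
trials = 10000
for m in [1, 5, 10, 100]:                  # antennas per AP
    h = (rng.normal(size=(trials, m)) + 1j * rng.normal(size=(trials, m))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1)     # effective channel gain
    # Hardening metric: variance of gain relative to its squared mean (-> 0 as m grows)
    print(f"M={m:3d}: Var(g)/E[g]^2 = {g.var() / g.mean()**2:.3f}")
```

Under this model the metric equals 1/M, which mirrors the paper's observation that 5-10 antennas per AP substantially improve hardening relative to a single antenna.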

Journal ArticleDOI
TL;DR: This paper investigates spatial- and frequency-wideband effects in massive MIMO systems from the array signal processing point of view and develops efficient uplink and downlink channel estimation strategies that require much less training overhead and cause no pilot contamination.
Abstract: When there are a large number of antennas in massive MIMO systems, the transmitted wideband signal will be sensitive to the physical propagation delay of electromagnetic waves across the large array aperture, which is called the spatial-wideband effect. In this scenario, the transceiver design is different from most of the existing works, which presume that the bandwidth of the transmitted signals is not that wide, ignore the spatial-wideband effect, and only address the frequency selectivity. In this paper, we investigate spatial- and frequency-wideband effects, called dual-wideband effects, in massive MIMO systems from the array signal processing point of view. Taking millimeter-wave-band communications as an example, we describe the transmission process to address the dual-wideband effects. By exploiting the channel sparsity in the angle domain and the delay domain, we develop efficient uplink and downlink channel estimation strategies that require much less training overhead and cause no pilot contamination. Thanks to the array signal processing techniques, the proposed channel estimation is suitable for both TDD and FDD massive MIMO systems. Numerical examples demonstrate that the proposed transmission design for massive MIMO systems can effectively deal with the dual-wideband effects.

Journal ArticleDOI
TL;DR: In this paper, a joint source and channel coding (JSCC) technique was proposed for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols.
Abstract: We propose a joint source and channel coding (JSCC) technique for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols. We parameterize the encoder and decoder functions by two convolutional neural networks (CNNs), which are trained jointly, and can be considered as an autoencoder with a non-trainable layer in the middle that represents the noisy communication channel. Our results show that the proposed deep JSCC scheme outperforms digital transmission concatenating JPEG or JPEG2000 compression with a capacity achieving channel code at low signal-to-noise ratio (SNR) and channel bandwidth values in the presence of additive white Gaussian noise (AWGN). More strikingly, deep JSCC does not suffer from the "cliff effect", and it provides a graceful performance degradation as the channel SNR varies with respect to the SNR value assumed during training. In the case of a slow Rayleigh fading channel, deep JSCC learns noise resilient coded representations and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
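
A toy PyTorch sketch of the autoencoder-with-channel-layer idea: a convolutional encoder, a power-normalized non-trainable AWGN injection, and a convolutional decoder. Layer sizes, the bandwidth ratio, and the SNR are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DeepJSCC(nn.Module):
    """Sketch of a JSCC autoencoder with a non-trainable AWGN 'layer' in the middle."""
    def __init__(self, c_latent=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, c_latent, 5, stride=2, padding=2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(c_latent, 32, 5, stride=2, padding=2, output_padding=1),
            nn.PReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, img, snr_db=10.0):
        z = self.enc(img)
        z = z / z.pow(2).mean().sqrt()              # average transmit power constraint
        noise_std = 10 ** (-snr_db / 20)
        z = z + noise_std * torch.randn_like(z)     # AWGN channel (not trained)
        return self.dec(z)

x = torch.rand(2, 3, 32, 32)
print(DeepJSCC()(x).shape)   # torch.Size([2, 3, 32, 32])
```

Training end to end with a pixel-level loss (e.g., MSE) through the noise layer is what lets the encoder learn noise-resilient representations, which is the source of the graceful degradation noted above.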

Patent
15 Jan 2018
TL;DR: In this paper, the authors describe a method for extracting first channel signals from first guided electromagnetic waves bound to an outer surface of a transmission medium of a guided wave communication system.
Abstract: Aspects of the subject disclosure may include, for example, a method that includes extracting first channel signals from first guided electromagnetic waves bound to an outer surface of a transmission medium of a guided wave communication system; amplifying the first channel signals to generate amplified first channel signals in accordance with a phase correction; selecting one or more of the amplified first channel signals to wirelessly transmit to at least one client device via an antenna; and guiding the amplified first channel signals to the transmission medium of the guided wave communication system to propagate as second guided electromagnetic waves, wherein the phase correction aligns a phase of the second guided electromagnetic waves to add in-phase with a residual portion of the first guided electromagnetic waves that continues propagation along the transmission medium.

Journal ArticleDOI
TL;DR: A new Q-learning-based transmission scheduling mechanism using deep learning is proposed for the CIoT to find an appropriate strategy for transmitting packets from different buffers through multiple channels so as to maximize the system throughput.
Abstract: Cognitive networks (CNs) are one of the key enablers for the Internet of Things (IoT), where CNs will play an important role in the future Internet in several application scenarios, such as healthcare, agriculture, environment monitoring, and smart metering. However, IoT currently suffers from low packet transmission efficiency because the spectrum is crowded by the rapidly increasing popularity of various wireless applications. Hence, the IoT that uses the advantages of cognitive technology, namely the cognitive radio-based IoT (CIoT), is a promising solution for IoT applications. A major challenge in CIoT is the packet transmission efficiency using CNs. Therefore, a new Q-learning-based transmission scheduling mechanism using deep learning is proposed for the CIoT to find an appropriate strategy for transmitting packets from different buffers through multiple channels so as to maximize the system throughput. A Markov decision process-based model is formulated to describe the state transformation of the system. A relay is used to transmit packets to the sink for the other nodes. To maximize the system utility in different system states, the reinforcement learning method, i.e., the Q-learning algorithm, is introduced to help the relay to find the optimal strategy. In addition, the stacked auto-encoders deep learning model is used to establish the mapping between the state and the action to accelerate the solution of the problem. Finally, the experimental results demonstrate that the new action selection method can converge after a certain number of iterations. Compared with other algorithms, the proposed method can better transmit packets with less power consumption and packet loss.
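
The core update the relay runs is standard tabular Q-learning. The sketch below applies it to a toy stand-in for the paper's Markov decision process (the real scheme maps states to actions with stacked auto-encoders rather than a table); every environment detail here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
n_states, n_actions = 16, 4            # e.g., buffer/channel states x channel choices
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

def step(s, a):
    # Toy environment: reward for picking the state's "good channel" (s mod n_actions)
    reward = 1.0 if a == s % n_actions else 0.0
    return rng.integers(n_states), reward

s = rng.integers(n_states)
for _ in range(20000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])   # Q-learning update
    s = s2

print("greedy policy:", np.argmax(Q, axis=1))   # should match s % n_actions
```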

Journal ArticleDOI
TL;DR: In this article, the authors proposed a non-coherent transmission scheme for mMTC and specifically for grant-free random access, which leverages elements from the approximate message passing (AMP) algorithm.
Abstract: A key challenge of massive MTC (mMTC) is the joint detection of device activity and decoding of data. The sparse characteristics of mMTC make compressed sensing (CS) approaches a promising solution to the device detection problem. However, utilizing CS-based approaches for device detection along with channel estimation, and using the acquired estimates for coherent data transmission is suboptimal, especially when the goal is to convey only a few bits of data. First, we focus on the coherent transmission and demonstrate that it is possible to obtain more accurate channel state information by combining conventional estimators with CS-based techniques. Moreover, we illustrate that even simple power control techniques can enhance the device detection performance in mMTC setups. Second, we devise a new non-coherent transmission scheme for mMTC and specifically for grant-free random access. We design an algorithm that jointly detects device activity along with embedded information bits. The approach leverages elements from the approximate message passing (AMP) algorithm, and exploits the structured sparsity introduced by the non-coherent transmission scheme. Our analysis reveals that the proposed approach has superior performance compared with application of the original AMP approach.

Journal ArticleDOI
TL;DR: A method for uniquely identifying a specific radio among nominally similar devices using a combination of SDR sensing capability and machine learning (ML) techniques, demonstrating up to 90-99 percent experimental accuracy at transmitter-receiver distances varying between 2-50 ft over a noisy, multi-path wireless channel.
Abstract: Advances in software defined radio (SDR) technology allow unprecedented control of the entire processing chain, allowing modification of each functional block as well as sampling the changes in the input waveform. This article describes a method for uniquely identifying a specific radio among nominally similar devices using a combination of SDR sensing capability and machine learning (ML) techniques. The key benefit of this approach is that ML operates on raw I/Q samples and distinguishes devices using only the transmitter hardware-induced signal modifications that serve as a unique signature for a particular device. No higher-level decoding, feature engineering, or protocol knowledge is needed, further mitigating challenges of ID spoofing and coexistence of multiple protocols in a shared spectrum. The contributions of the article are as follows: (i) the operational blocks in a typical wireless communications processing chain are modified in a simulation study to demonstrate the RF impairments that we exploit; (ii) using an over-the-air dataset compiled from an experimental testbed of SDRs, an optimized deep convolutional neural network architecture is proposed, and results are quantitatively compared with alternate techniques such as support vector machines and logistic regression; (iii) research challenges for increasing the robustness of the approach, as well as the parallel processing needs for efficient training, are described. Our work demonstrates up to 90-99 percent experimental accuracy at transmitter-receiver distances varying between 2-50 ft over a noisy, multi-path wireless channel.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a frequency-selective mm-wave channel and proposed compressed sensing-based strategies to estimate the channel in the frequency domain, and evaluated different algorithms and computed their complexity to expose tradeoffs in complexity overhead performance as compared with those of previous approaches.
Abstract: Channel estimation is useful in millimeter wave (mm-wave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics, such as mutual information or signal-to-interference noise ratio. At mm-wave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage of the received signal. Most of the prior work on channel estimation for hybrid architectures assumes a frequency-flat mm-wave channel model. In this paper, we consider a frequency-selective mm-wave channel and propose compressed sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose tradeoffs between complexity, overhead, and performance as compared with previous approaches.
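
Orthogonal matching pursuit is a representative compressed-sensing baseline for this kind of sparse channel estimation (the paper evaluates several such algorithms). A numpy sketch on a generic sparse-recovery instance, with illustrative sizes; the dictionary here is random rather than an angle/delay grid:

```python
import numpy as np

rng = np.random.default_rng(10)
n_grid, m, k = 128, 48, 4               # dictionary atoms, measurements, channel paths

x = np.zeros(n_grid, complex)
x[rng.choice(n_grid, k, replace=False)] = (rng.normal(size=k)
                                           + 1j * rng.normal(size=k)) / np.sqrt(2)
A = (rng.normal(size=(m, n_grid)) + 1j * rng.normal(size=(m, n_grid))) / np.sqrt(2 * m)
y = A @ x + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# Orthogonal matching pursuit: greedily pick atoms, re-fit by least squares
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))  # most correlated atom
    As = A[:, support]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)            # LS fit on the support
    r = y - As @ coef                                        # updated residual

x_hat = np.zeros(n_grid, complex)
x_hat[support] = coef
print("NMSE:", np.linalg.norm(x_hat - x)**2 / np.linalg.norm(x)**2)
```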

Journal ArticleDOI
TL;DR: The properties of the quantum communication channel, the various capacity measures and the fundamental differences between the classical and quantum channels are reviewed.
Abstract: Quantum information processing exploits the quantum nature of information. It offers fundamentally new solutions in the field of computer science and extends the possibilities to a level that cannot be imagined in classical communication systems. For quantum communication channels, many new capacity definitions were developed in comparison to classical counterparts. A quantum channel can be used to realize classical information transmission or to deliver quantum information, such as quantum entanglement. Here we review the properties of the quantum communication channel, the various capacity measures and the fundamental differences between the classical and quantum channels.