
Showing papers in "IEEE Transactions on Communications in 2016"


Journal ArticleDOI
TL;DR: Numerical results show that by optimizing the trajectory of the relay and power allocations adaptive to its induced channel variation, mobile relaying is able to achieve significant throughput gains over the conventional static relaying.
Abstract: In this paper, we consider a novel mobile relaying technique, where the relay nodes are mounted on unmanned aerial vehicles (UAVs) and hence are capable of moving at high speed. Compared with conventional static relaying, mobile relaying offers a new degree of freedom for performance enhancement via careful relay trajectory design. We study the throughput maximization problem in mobile relaying systems by optimizing the source/relay transmit power along with the relay trajectory, subject to practical mobility constraints (on the UAV’s speed and initial/final relay locations), as well as the information-causality constraint at the relay. It is shown that for the fixed relay trajectory, the throughput-optimal source/relay power allocations over time follow a “staircase” water filling structure, with non-increasing and non-decreasing water levels at the source and relay, respectively. On the other hand, with given power allocations, the throughput can be further improved by optimizing the UAV’s trajectory via successive convex optimization. An iterative algorithm is thus proposed to optimize the power allocations and relay trajectory alternately. Furthermore, for the special case with free initial and final relay locations, the jointly optimal power allocation and relay trajectory are derived. Numerical results show that by optimizing the trajectory of the relay and power allocations adaptive to its induced channel variation, mobile relaying is able to achieve significant throughput gains over the conventional static relaying.

1,079 citations
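The "staircase" water-filling structure specializes classic water-filling with time-varying water levels. For reference, the standard version over parallel channels can be sketched as follows (a minimal bisection-based implementation with hypothetical channel gains, not the paper's staircase variant):

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate total_power across parallel channels with power gains g_i
    to maximize sum_i log2(1 + g_i * p_i): p_i = max(level - 1/g_i, 0)."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()  # bracket the water level
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        if np.maximum(level - 1.0 / gains, 0.0).sum() > total_power:
            hi = level   # water level too high: allocation exceeds budget
        else:
            lo = level
    return np.maximum(lo - 1.0 / gains, 0.0)

# Hypothetical normalized channel gains over three time slots.
p = water_filling([2.0, 1.0, 0.5], total_power=3.0)
```

In the paper's staircase solution, the source and relay water levels are additionally constrained to be non-increasing and non-decreasing over time, respectively, due to the information-causality constraint.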


Journal ArticleDOI
TL;DR: This paper investigates partial computation offloading by jointly optimizing the computational speed of smart mobile device (SMD), transmit power of SMD, and offloading ratio with two system design objectives: energy consumption of ECM minimization and latency of application execution minimization.
Abstract: The incorporation of dynamic voltage scaling technology into computation offloading offers more flexibility for mobile edge computing. In this paper, we investigate partial computation offloading by jointly optimizing the computational speed of smart mobile device (SMD), transmit power of SMD, and offloading ratio with two system design objectives: energy consumption of SMD minimization (ECM) and latency of application execution minimization (LM). Considering the case that the SMD is served by a single cloud server, we formulate both the ECM problem and the LM problem as nonconvex problems. To tackle the ECM problem, we recast it as a convex one with the variable substitution technique and obtain its optimal solution. To address the nonconvex and nonsmooth LM problem, we propose a locally optimal algorithm with the univariate search technique. Furthermore, we extend the scenario to a multiple cloud servers system, where the SMD could offload its computation to a set of cloud servers. In this scenario, we obtain the optimal computation distribution among cloud servers in closed form for the ECM and LM problems. Finally, extensive simulations demonstrate that our proposed algorithms can significantly reduce the energy consumption and shorten the latency with respect to the existing offloading schemes.

819 citations


Journal ArticleDOI
TL;DR: A low-complexity yet near-optimal greedy frequency selective hybrid precoding algorithm is proposed based on Gram-Schmidt orthogonalization and efficient hybrid analog/digital codebooks are developed for spatial multiplexing in wideband mmWave systems.
Abstract: Hybrid analog/digital precoding offers a compromise between hardware complexity and system performance in millimeter wave (mmWave) systems. This type of precoding allows mmWave systems to leverage large antenna array gains that are necessary for sufficient link margin, while permitting low cost and power consumption hardware. Most prior work has focused on hybrid precoding for narrow-band mmWave systems, with perfect or estimated channel knowledge at the transmitter. MmWave systems, however, will likely operate on wideband channels with frequency selectivity. Therefore, this paper considers wideband mmWave systems with a limited feedback channel between the transmitter and receiver. First, the optimal hybrid precoding design for a given RF codebook is derived. This provides a benchmark for any other heuristic algorithm and gives useful insights into codebook designs. Second, efficient hybrid analog/digital codebooks are developed for spatial multiplexing in wideband mmWave systems. Finally, a low-complexity yet near-optimal greedy frequency selective hybrid precoding algorithm is proposed based on Gram–Schmidt orthogonalization. Simulation results show that the developed hybrid codebooks and precoder designs achieve very-good performance compared with the unconstrained solutions while requiring much less complexity.

529 citations
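Gram–Schmidt orthogonalization, the building block of the greedy precoding algorithm above, can be sketched in isolation (an illustrative implementation on toy vectors, not the paper's precoder design):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Orthonormalize vectors one by one, dropping any that are
    (numerically) linearly dependent on the ones already kept."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:
            w -= np.dot(w, b) * b        # remove the component along b
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return np.array(basis)

# The third vector is the sum of the first two, so it is dropped.
Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]])
```

In a greedy precoder, the same projection step lets each newly selected codebook column contribute only the signal dimensions not already covered by earlier selections.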


Journal ArticleDOI
TL;DR: A near maximum likelihood detector for uplink multiuser massive MIMO systems is proposed where each antenna is connected to a pair of one-bit ADCs, i.e., one for each real and imaginary component of the baseband signal.
Abstract: In massive multiple-input multiple-output (MIMO) systems, it may not be power efficient to have a pair of high-resolution analog-to-digital converters (ADCs) for each antenna element. In this paper, a near maximum likelihood (nML) detector for uplink multiuser massive MIMO systems is proposed where each antenna is connected to a pair of one-bit ADCs, i.e., one for each real and imaginary component of the baseband signal. The exhaustive search over all the possible transmitted vectors required in the original maximum likelihood (ML) detection problem is relaxed to formulate an ML estimation problem. Then, the ML estimation problem is converted into a convex optimization problem which can be efficiently solved. Using the solution, the base station can perform simple symbol-by-symbol detection for the transmitted signals from multiple users. To further improve detection performance, we also develop a two-stage nML detector that exploits the structures of both the original ML and the proposed (one-stage) nML detectors. Numerical results show that the proposed nML detectors are efficient enough to simultaneously support multiple uplink users adopting higher-order constellations, e.g., 16 quadrature amplitude modulation. Since our detectors exploit the channel state information as part of the detection, an ML channel estimation technique with one-bit ADCs that shares the same structure with our proposed nML detector is also developed. The proposed detectors and channel estimator provide a complete low power solution for the uplink of a massive MIMO system.

491 citations


Journal ArticleDOI
Junho Lee, Gye-Tae Gil, Yong Hoon Lee
TL;DR: An efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency beamformers with large antenna arrays followed by a baseband MIMO processor is proposed.
Abstract: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.

447 citations
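The OMP recovery step underlying the estimator can be sketched on a toy redundant dictionary (illustrative only; the paper's dictionary is built from array response vectors on non-uniformly quantized angle grids):

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedily pick the dictionary column most correlated with the
    residual, then re-fit the selected columns by least squares."""
    residual = y.astype(float)
    support = []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy redundant dictionary: four unit vectors plus one correlated column.
e = np.eye(4)
A = np.column_stack([e, (e[:, 0] + e[:, 1]) / np.sqrt(2)])
y = 2.0 * e[:, 0] + 0.5 * e[:, 2]        # 2-sparse in this dictionary
x_hat = omp(A, y, sparsity=2)
```

The redundant fifth column is correlated with the first two; lowering such coherence (here, by the paper's non-uniform angle grids) is what keeps the greedy selection from picking wrong atoms.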


Journal ArticleDOI
TL;DR: This paper proposes a low-complexity suboptimal algorithm, which includes energy-efficient subchannel assignment and power proportional factors determination for subchannel multiplexed users and proposes a novel power allocation across subchannels to further maximize energy efficiency.
Abstract: Non-orthogonal multiple access (NOMA) is a promising technique for the fifth generation mobile communication due to its high spectral efficiency. By applying superposition coding and successive interference cancellation techniques at the receiver, multiple users can be multiplexed on the same subchannel in NOMA systems. Previous works focus on subchannel assignment and power allocation to achieve the maximization of sum rate; however, the energy-efficient resource allocation problem has not been well studied for NOMA systems. In this paper, we aim to optimize subchannel assignment and power allocation to maximize the energy efficiency for the downlink NOMA network. Assuming perfect knowledge of the channel state information at base station, we propose a low-complexity suboptimal algorithm, which includes energy-efficient subchannel assignment and power proportional factors determination for subchannel multiplexed users. We also propose a novel power allocation across subchannels to further maximize energy efficiency. Since both optimization problems are non-convex, difference of convex programming is used to transform and approximate the original non-convex problems to convex optimization problems. Solutions to the resulting optimization problems can be obtained by solving the convex sub-problems iteratively. Simulation results show that the NOMA system equipped with the proposed algorithms yields much better sum rate and energy efficiency performance than the conventional orthogonal frequency division multiple access scheme.

411 citations


Journal ArticleDOI
TL;DR: A new transmission model is formulated, the data detection algorithm is designed, and two closed-form detection thresholds are derived to approximately achieve the minimum sum bit error rate (BER).
Abstract: Ambient backscatter technology that utilizes the ambient radio frequency signals to enable the communications of battery-free devices has attracted much attention recently. In this paper, we study the problem of signal detection for an ambient backscatter communication system that adopts the differential encoding to eliminate the necessity of channel estimation. Specifically, we formulate a new transmission model, design the data detection algorithm, and derive two closed-form detection thresholds. One threshold is used to approximately achieve the minimum sum bit error rate (BER), while the other yields balanced error probabilities for “0” bit and “1” bit. The corresponding BER expressions are derived to fully characterize the detection performance. In addition, the lower and the upper bounds of BER at high signal-to-noise ratio regions are also examined to simplify a performance analysis. Simulation results are then provided to corroborate the theoretical studies.

362 citations
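The threshold-detection pattern can be illustrated with a deliberately simplified energy model (the means, noise level, and midpoint threshold below are hypothetical; the paper derives its two thresholds in closed form from the actual transmission model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
mean0, mean1 = 1.0, 2.0   # hypothetical mean energies for bits "0" and "1"
true_bits = rng.integers(0, 2, n)
energy = np.where(true_bits == 1, mean1, mean0) + 0.1 * rng.standard_normal(n)

# Midpoint threshold as a stand-in; the paper's closed-form thresholds
# target minimum sum BER or balanced "0"/"1" error probabilities.
threshold = 0.5 * (mean0 + mean1)
bits = (energy > threshold).astype(int)
ber = np.mean(bits != true_bits)
```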


Journal ArticleDOI
TL;DR: The results demonstrate that NOMA can achieve superior performance compared to the traditional orthogonal multiple access (OMA) and the derived expressions for the outage probability and the average sum rate match well with the Monte Carlo simulations.
Abstract: In this paper, a downlink single-cell non-orthogonal multiple access (NOMA) network with uniformly deployed users is considered and an analytical framework to evaluate its performance is developed. Particularly, the performance of NOMA is studied by assuming two types of partial channel state information (CSI). For the first one, which is based on imperfect CSI, we present a simple closed-form approximation for the outage probability and the average sum rate, as well as their high signal-to-noise ratio (SNR) expressions. For the second type of CSI, which is based on second order statistics (SOS), we derive a closed-form expression for the outage probability and an approximate expression for the average sum rate for the special case of two users. For the addressed scenario with the two types of partial CSI, the results demonstrate that NOMA can achieve superior performance compared to the traditional orthogonal multiple access (OMA). Moreover, SOS-based NOMA always achieves better performance than that with imperfect CSI, while it can achieve similar performance to the NOMA with perfect CSI in the low SNR region. The provided numerical results confirm that the derived expressions for the outage probability and the average sum rate match well with the Monte Carlo simulations.

350 citations
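The validation pattern used above — checking a closed-form outage expression against Monte Carlo simulation — can be sketched for the simplest single-user Rayleigh link (the SNR and target rate below are arbitrary; this is not the paper's NOMA setup):

```python
import numpy as np

rng = np.random.default_rng(3)
snr = 10.0                        # transmit SNR (linear), arbitrary
rate = 1.0                        # target rate in bits/s/Hz, arbitrary
theta = 2.0 ** rate - 1.0         # corresponding SINR threshold

g = rng.exponential(1.0, 200_000)            # Rayleigh fading power gains
p_out_mc = np.mean(snr * g < theta)          # empirical outage frequency
p_out_analytic = 1.0 - np.exp(-theta / snr)  # known closed form
```

The NOMA analysis follows the same recipe with the SINR of each user under superposition coding and successive interference cancellation in place of the single-link SNR.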


Journal ArticleDOI
TL;DR: Numerical results show that in addition to the ESR gains, the benefits of RS also include relaxed CSIT quality requirements and enhanced achievable rate regions compared with conventional transmission with no rate-splitting.
Abstract: This paper considers the sum-rate (SR) maximization problem in downlink multi-user multiple-input single-output (MU-MISO) systems under imperfect channel state information at the transmitter (CSIT). Contrary to existing works, we consider a rather unorthodox transmission scheme. In particular, the message intended to one of the users is split into two parts: a common part which can be recovered by all users, and a private part recovered by the corresponding user. On the other hand, the rest of users receive their information through private messages. This rate-splitting (RS) approach was shown to boost the achievable degrees of freedom when CSIT errors decay with increased SNR. In this paper, the RS strategy is married with linear precoder design and optimization techniques to achieve a maximized ergodic SR (ESR) performance over the entire range of SNRs. Precoders are designed based on partial CSIT knowledge by solving a stochastic rate optimization problem using means of sample average approximation coupled with the weighted minimum mean square error approach. Numerical results show that in addition to the ESR gains, the benefits of RS also include relaxed CSIT quality requirements and enhanced achievable rate regions compared with conventional transmission with no rate-splitting.

326 citations


Journal ArticleDOI
TL;DR: This paper considers the downlink communication of a massive multiuser MIMO (MU-MIMO) system and proposes a low-complexity hybrid block diagonalization (Hy-BD) scheme to approach the capacity performance of the traditional BD processing method.
Abstract: For a massive multiple-input multiple-output (MIMO) system, restricting the number of RF chains to far less than the number of antenna elements can significantly reduce the implementation cost compared to the full complexity RF chain configuration. In this paper, we consider the downlink communication of a massive multiuser MIMO (MU-MIMO) system and propose a low-complexity hybrid block diagonalization (Hy-BD) scheme to approach the capacity performance of the traditional BD processing method. We aim to harvest the large array gain through the phase-only RF precoding and combining, and then digital BD processing is performed on the equivalent baseband channel. The proposed Hy-BD scheme is examined in both the large Rayleigh fading channels and millimeter wave (mmWave) channels. A performance analysis is further conducted for single-path channels and a large number of transmit and receive antennas. Finally, simulation results demonstrate that our Hy-BD scheme, with a lower implementation and computational complexity, achieves a capacity performance that is close to (sometimes even higher than) that of the traditional high-dimensional BD processing.

305 citations
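The array-gain harvesting through phase-only combining can be illustrated for a single receive array (a sketch assuming i.i.d. Rayleigh channel entries; the constant-modulus weights keep only the channel phases, as analog phase shifters would):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                   # receive antennas (hypothetical)
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Constant-modulus combining: keep only the channel phases, then normalize.
w = np.exp(1j * np.angle(h)) / np.sqrt(n)
gain_phase_only = np.abs(np.vdot(w, h)) ** 2

# Unconstrained matched filter (MRC) for comparison.
gain_mrc = np.linalg.norm(h) ** 2
```

For Rayleigh fading, the phase-only gain approaches a fixed fraction (pi/4) of the unconstrained MRC gain as the array grows, which is why phase-only RF processing can still harvest most of the large array gain.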


Journal ArticleDOI
TL;DR: A new lens antenna array enabled mmWave multiple-input multiple-output (MIMO) communication system is studied and it is shown that the proposed design achieves significant throughput gains as well as complexity and cost reductions, thus leading to a promising new paradigm for mmWave MIMO communications.
Abstract: Millimeter wave (mmWave) communication is a promising technology for future wireless systems, while one practical challenge is to achieve its large-antenna gains with only limited radio frequency (RF) chains for cost-effective implementation. To this end, we study in this paper a new lens antenna array enabled mmWave multiple-input multiple-output (MIMO) communication system. We first show that the array response of lens antenna arrays follows a “sinc” function, where the antenna element with the peak response is determined by the angle of arrival (AoA)/departure (AoD) of the received/transmitted signal. By exploiting this unique property along with the multi-path sparsity of mmWave channels, we propose a novel low-cost and capacity-achieving spatial multiplexing scheme for both narrow-band and wide-band mmWave communications, termed path division multiplexing (PDM) , where parallel data streams are transmitted over different propagation paths with simple per-path processing. We further propose a simple path grouping technique with group-based small-scale MIMO processing to effectively mitigate the inter-stream interference due to similar AoAs/AoDs. Numerical results are provided to compare the performance of the proposed mmWave lens MIMO against the conventional MIMO with uniform planar arrays (UPAs) and hybrid analog/digital processing. It is shown that the proposed design achieves significant throughput gains as well as complexity and cost reductions, thus leading to a promising new paradigm for mmWave MIMO communications.

Journal ArticleDOI
TL;DR: In this article, an access threshold-based secrecy mobile association policy was proposed to associate each user with the BS providing the maximum truncated average received signal power beyond a threshold, and the connection probability and secrecy probability of a randomly located user were investigated.
Abstract: The heterogeneous cellular network (HCN) is a promising approach to the deployment of 5G cellular networks. This paper comprehensively studies physical layer security in a multitier HCN where base stations (BSs), authorized users, and eavesdroppers are all randomly located. We first propose an access threshold-based secrecy mobile association policy that associates each user with the BS providing the maximum truncated average received signal power beyond a threshold. Under the proposed policy, we investigate the connection probability and secrecy probability of a randomly located user and provide tractable expressions for the two metrics. Asymptotic analysis reveals that setting a larger access threshold increases the connection probability while decreasing the secrecy probability. We further evaluate the network-wide secrecy throughput and the minimum secrecy throughput per user with both connection and secrecy probability constraints. We show that introducing a properly chosen access threshold significantly enhances the secrecy throughput performance of an HCN.

Journal ArticleDOI
TL;DR: In this article, the uplink performance of a quantized massive MIMO system that deploys orthogonal frequency division multiplexing (OFDM) for wideband communication is investigated.
Abstract: Coarse quantization at the base station (BS) of a massive multi-user (MU) multiple-input multiple-output (MIMO) wireless system promises significant power and cost savings. Coarse quantization also enables significant reductions of the raw analog-to-digital converter data that must be transferred from a spatially separated antenna array to the baseband processing unit. The theoretical limits as well as practical transceiver algorithms for such quantized MU-MIMO systems operating over frequency-flat, narrowband channels have been studied extensively. However, the practically relevant scenario where such communication systems operate over frequency-selective, wideband channels is less well understood. This paper investigates the uplink performance of a quantized massive MU-MIMO system that deploys orthogonal frequency-division multiplexing (OFDM) for wideband communication. We propose new algorithms for quantized maximum a posteriori channel estimation and data detection, and we study the associated performance/quantization tradeoffs. Our results demonstrate that coarse quantization (e.g., four to six bits, depending on the ratio between the number of BS antennas and the number of users) in massive MU-MIMO-OFDM systems entails virtually no performance loss compared with the infinite-precision case at no additional cost in terms of baseband processing complexity.

Journal ArticleDOI
TL;DR: A transfer learning-based approach is proposed to improve the estimate of the cost function from which an optimal random caching strategy is devised, with the popularity profile of cached content modeled using a parametric family of distributions.
Abstract: A heterogeneous network with base stations (BSs), small base stations (SBSs), and users distributed according to independent Poisson point processes is considered. SBS nodes are assumed to possess high storage capacity and to form a distributed caching network. Popular files are stored in local caches of SBSs, so that a user can download the desired files from one of the SBSs in its vicinity. The offloading-loss is captured via a cost function that depends on the random caching strategy proposed here. The popularity profile of cached content is unknown and estimated using instantaneous demands from users within a specified time interval. An estimate of the cost function is obtained from which an optimal random caching strategy is devised. The training time to achieve an $\epsilon > 0$ difference between the achieved and optimal costs is finite provided the user density is greater than a predefined threshold, and scales as $N^2$, where $N$ is the support of the popularity profile. A transfer learning-based approach to improve this estimate is proposed. The training time is reduced when the popularity profile is modeled using a parametric family of distributions; the delay is independent of $N$ and scales linearly with the dimension of the distribution parameter.
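The estimate-then-cache pipeline can be sketched with an empirical popularity estimate and a deterministic top-M cache (a simplified baseline; the paper instead optimizes a random caching strategy and accelerates estimation via transfer learning):

```python
from collections import Counter

def estimate_popularity(requests, support_size):
    """Empirical popularity profile from demands seen in a training window."""
    counts = Counter(requests)
    total = len(requests)
    return [counts.get(f, 0) / total for f in range(support_size)]

def cache_top(profile, capacity):
    """Deterministic baseline: cache the `capacity` most popular files."""
    ranked = sorted(range(len(profile)), key=lambda f: -profile[f])
    return set(ranked[:capacity])

requests = [0, 0, 0, 2, 1, 0, 2, 2, 1, 0]   # hypothetical demand trace
profile = estimate_popularity(requests, support_size=4)
cached = cache_top(profile, capacity=2)
```

The quality of any such strategy hinges on the popularity estimate, which is why the training time needed for an accurate profile is the paper's central quantity.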

Journal ArticleDOI
TL;DR: A genetic algorithm-based method for solving the VNF scheduling problem efficiently is developed and it is shown that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.
Abstract: To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs’ schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators’ revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%–20% in the simulated scenarios.
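The genetic-algorithm approach can be illustrated on a stripped-down version of the problem — assigning jobs to machines to minimize makespan (all parameters below are hypothetical; the paper's GA additionally handles VNF chaining, transmission delays, and traffic steering):

```python
import random

def makespan(assign, job_times, n_machines):
    """Completion time of the most loaded machine."""
    loads = [0.0] * n_machines
    for job, machine in enumerate(assign):
        loads[machine] += job_times[job]
    return max(loads)

def genetic_schedule(job_times, n_machines, pop=30, gens=200, seed=0):
    """Evolve job-to-machine assignments with elitist selection,
    one-point crossover, and random mutation."""
    rng = random.Random(seed)
    n = len(job_times)
    population = [[rng.randrange(n_machines) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, job_times, n_machines))
        survivors = population[: pop // 2]       # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.3:               # mutation
                child[rng.randrange(n)] = rng.randrange(n_machines)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, job_times, n_machines))

best = genetic_schedule([3, 3, 2, 2, 2], n_machines=2)
```

As in the paper, the fitness function is the schedule makespan; the GA trades the exactness of the mixed integer linear program for tractable runtime on larger instances.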

Journal ArticleDOI
TL;DR: A structured compressive sensing (SCS)-based spatio-temporal joint channel estimation scheme is proposed to reduce the required pilot overhead; it is capable of approaching the optimal oracle least squares estimator.
Abstract: Massive MIMO is a promising technique for future 5G communications due to its high spectrum and energy efficiency. To realize its potential performance gain, accurate channel estimation is essential. However, due to massive number of antennas at the base station (BS), the pilot overhead required by conventional channel estimation schemes will be unaffordable, especially for frequency division duplex (FDD) massive MIMO. To overcome this problem, we propose a structured compressive sensing (SCS)-based spatio-temporal joint channel estimation scheme to reduce the required pilot overhead, whereby the spatio-temporal common sparsity of delay-domain MIMO channels is leveraged. Particularly, we first propose the nonorthogonal pilots at the BS under the framework of CS theory to reduce the required pilot overhead. Then, an adaptive structured subspace pursuit (ASSP) algorithm at the user is proposed to jointly estimate channels associated with multiple OFDM symbols from the limited number of pilots, whereby the spatio-temporal common sparsity of MIMO channels is exploited to improve the channel estimation accuracy. Moreover, by exploiting the temporal channel correlation, we propose a space-time adaptive pilot scheme to further reduce the pilot overhead. Additionally, we discuss the proposed channel estimation scheme in multicell scenario. Simulation results demonstrate that the proposed scheme can accurately estimate channels with the reduced pilot overhead, and it is capable of approaching the optimal oracle least squares estimator.

Journal ArticleDOI
TL;DR: Finite-blocklength, finite-SNR upper and lower bounds on the maximum coding rate achievable over multiple-antenna Rayleigh block-fading channels are obtained; they reveal a tradeoff between the rate gain from spreading each codeword over all available time-frequency-spatial degrees of freedom and the rate loss from having to estimate the fading coefficients over those degrees of freedom.
Abstract: Motivated by the current interest in ultra-reliable, low-latency, machine-type communication systems, we investigate the tradeoff between reliability, throughput, and latency in the transmission of information over multiple-antenna Rayleigh block-fading channels. Specifically, we obtain finite-blocklength, finite-SNR upper and lower bounds on the maximum coding rate achievable over such channels for a given constraint on the packet error probability. Numerical evidence suggests that our bounds delimit tightly the maximum coding rate already for short blocklengths (packets of about 100 symbols). Furthermore, our bounds reveal the existence of a tradeoff between the rate gain obtainable by spreading each codeword over all available time-frequency-spatial degrees of freedom, and the rate loss caused by the need of estimating the fading coefficients over these degrees of freedom. In particular, our bounds allow us to determine the optimal number of transmit antennas and the optimal number of time-frequency diversity branches that maximize the rate. Finally, we show that infinite-blocklength performance metrics such as the ergodic capacity and the outage capacity yield inaccurate throughput estimates.

Journal ArticleDOI
TL;DR: Novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound of secure outage probability (SOP) for independent and non-identically distributed channel coefficients without parameter constraints are obtained.
Abstract: In this paper, we consider the transmission of confidential information over a $\kappa$–$\mu$ fading channel in the presence of an eavesdropper who also experiences $\kappa$–$\mu$ fading. In particular, we obtain novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound of secure outage probability (SOP$^L$) for independent and non-identically distributed channel coefficients without parameter constraints. We also provide a closed-form expression for the probability of SPSC when the $\mu$ parameter is assumed to take positive integer values. Monte-Carlo simulations are performed to verify the derived results. The versatility of the $\kappa$–$\mu$ fading model means that the results presented in this paper can be used to determine the probability of SPSC and SOP$^L$ for a large number of other fading scenarios, such as Rayleigh, Rice (Nakagami-$n$), Nakagami-$m$, One-Sided Gaussian, and mixtures of these common fading models. In addition, due to the duality of the analysis of secrecy capacity and co-channel interference (CCI), the results presented here will have immediate applicability in the analysis of outage probability in wireless systems affected by CCI and background noise (BN). To demonstrate the efficacy of the novel formulations proposed here, we use the derived equations to provide a useful insight into the probability of SPSC and SOP$^L$ for a range of emerging wireless applications, such as cellular device-to-device, peer-to-peer, vehicle-to-vehicle, and body centric communications using data obtained from real channel measurements.

Journal ArticleDOI
TL;DR: A new recursive approach for bounding the capacity of the channel based on sphere-packing is proposed, which leads to new capacity upper bounds for a channel with a peak intensity constraint or an average intensity constraint.
Abstract: The capacity of the free-space optical channel is studied. A new recursive approach for bounding the capacity of the channel based on sphere-packing is proposed. This approach leads to new capacity upper bounds for a channel with a peak intensity constraint or an average intensity constraint. Under an average constraint only, the derived bound is tighter than an existing sphere-packing bound derived earlier by Farid and Hranilovic. The achievable rate of a truncated-Gaussian input distribution is also derived. It is shown that under both average and peak constraints, this achievable rate and the sphere-packing bounds are within a small gap at high SNR, leading to a simple high-SNR capacity approximation. Simple fitting functions that capture the best known achievable rate for the channel are provided. These functions can be of practical importance especially for the study of systems operating under atmospheric turbulence and misalignment conditions.

Journal ArticleDOI
TL;DR: In this article, the authors considered a full-duplex (FD) decode-and-forward system where the time-switching protocol is employed by the multiantenna relay to receive energy from the source and transmit information to the destination.
Abstract: We consider a full-duplex (FD) decode-and-forward system in which the time-switching protocol is employed by the multiantenna relay to receive energy from the source and transmit information to the destination. The instantaneous throughput is maximized by optimizing receive and transmit beamformers at the relay and the time-split parameter. We study both optimum and suboptimum schemes. The reformulated problem in the optimum scheme achieves closed-form solutions in terms of transmit beamformer for some scenarios. In other scenarios, the optimization problem is formulated as a semidefinite relaxation problem and a rank-one optimum solution is always guaranteed. In the suboptimum schemes, the beamformers are obtained using maximum ratio combining, zero-forcing, and maximum ratio transmission. When beamformers have closed-form solutions, the achievable instantaneous and delay-constrained throughput are analytically characterized. Our results reveal that beamforming increases both the energy harvesting and loop interference suppression capabilities at the FD relay. Moreover, simulation results demonstrate that the choice of the linear processing scheme as well as the time-split plays a critical role in determining the FD gains.
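The core trade-off in the time-split parameter can be seen with a deliberately simplified model: a fraction τ of each block harvests energy, the remainder forwards data, and the relay SNR scales with the harvested energy. The constants and functional form below are illustrative assumptions, not the paper's beamforming formulation.

```python
import numpy as np

def ts_throughput(tau, eta=0.5, snr_src=10.0, snr_gain=2.0):
    """Toy throughput of a time-switching relay: a fraction tau of each
    block harvests energy and the remaining 1 - tau forwards data, with
    the relay SNR scaling as the harvested energy spread over 1 - tau.
    The constants and the functional form are illustrative assumptions."""
    p_relay = eta * snr_src * tau / (1.0 - tau)
    return (1.0 - tau) * np.log2(1.0 + snr_gain * p_relay)

# Grid search over the time split: too small a tau starves the relay of
# energy, too large a tau leaves no time to transmit.
taus = np.linspace(0.01, 0.99, 981)
best_tau = taus[np.argmax(ts_throughput(taus))]
```

Even in this toy model the optimum is interior, mirroring the abstract's point that the choice of time split is critical.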

Journal ArticleDOI
TL;DR: Comparisons between the proposed and traditional BS association policies show the significant effect of backhaul on the network performance, which demonstrates the importance of joint system design for radio access and backhaul networks.
Abstract: With the foreseeable explosive growth of small cell deployment, backhaul has become the next big challenge in the next generation wireless networks. Heterogeneous backhaul deployment using different wired and wireless technologies may be a potential solution to meet this challenge. Therefore, it is of cardinal importance to evaluate and compare the performance characteristics of various backhaul technologies to understand their effect on the network aggregate performance. In this paper, we propose relevant backhaul models and study the delay performance of various backhaul technologies with different capabilities and characteristics, including fiber, xDSL, millimeter wave (mmWave), and sub–6 GHz. Using these models, we aim at optimizing the base station (BS) association so as to minimize the mean network packet delay in a macrocell network overlaid with small cells. Numerical results are presented to show the delay performance characteristics of different backhaul solutions. Comparisons between the proposed and traditional BS association policies show the significant effect of backhaul on the network performance, which demonstrates the importance of joint system design for radio access and backhaul networks.
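A first-order feel for why backhaul technologies with different capacities yield different mean packet delays comes from the classic M/M/1 result T = 1/(μ − λ). The snippet below is exactly that textbook formula, a toy stand-in rather than the paper's richer backhaul models.

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean packet sojourn time T = 1/(mu - lambda) of an M/M/1 queue,
    a toy stand-in for the delay of one backhaul link whose capacity
    sets the service rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# A faster backhaul (e.g., fiber-like capacity) yields lower mean delay
# than a slower one (e.g., xDSL-like capacity) at the same load.
delay_fast = mm1_delay(2.0, 100.0)
delay_slow = mm1_delay(2.0, 4.0)
```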

Journal ArticleDOI
TL;DR: A framework to model the downlink rate coverage probability of a user in a given SCN with massive multiple-input-multiple-output (MIMO)-enabled wireless backhauls with self-interference cancellation capability is developed and a few remedial solutions are proposed that can potentially mitigate the backhaul interference and in turn improve the performance of in-band FD wirelessBackhauling.
Abstract: Recent advancements in self-interference (SI) cancellation capability of low-power wireless devices motivate in-band full-duplex (FD) wireless backhauling in small cell networks (SCNs). In-band FD wireless backhauling concurrently allows the use of the same frequency spectrum for the backhaul as well as access links of the small cells. In this paper, using tools from stochastic geometry, we develop a framework to model the downlink rate coverage probability of a user in a given SCN with massive multiple-input–multiple-output (MIMO)-enabled wireless backhauls. The considered SCN is composed of a mixture of small cells that are configured in either in-band or out-of-band backhaul modes with a certain probability. The performance of the user in the considered hierarchical network is limited by several sources of interference, such as the backhaul interference, small cell base station (SBS)-to-SBS interference, and the SI. Moreover, due to the channel hardening effect in massive MIMO, the backhaul links only experience long term channel effects, whereas the access links experience both the long term and the short term channel effects. Consequently, the developed framework is flexible to characterize different sources of interference while capturing the heterogeneity of the access and backhaul channels. In specific scenarios, the framework enables deriving closed-form coverage probability expressions. Under perfect backhaul coverage, the simplified expressions are utilized to optimize the proportion of in-band and out-of-band small cells in the SCN in the closed form. Finally, a few remedial solutions are proposed that can potentially mitigate the backhaul interference and in turn improve the performance of in-band FD wireless backhauling. Numerical results investigate the scenarios in which in-band wireless backhauling is useful and demonstrate that maintaining a correct proportion of in-band and out-of-band FD small cells is crucial in wireless backhauled SCNs.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an orthogonal chirp-division multiplexing (OCDM) for high-speed communication, which can efficiently exploit multipath diversity and thus outperform the OFDM, and that it is more resilient against the interference due to insufficient guard interval than single-carrier frequency domain equalization.
Abstract: Chirp waveform plays a significant role in radar and communication systems for its ability of pulse compression and spread spectrum. This paper presents a principle of multiplexing a bank of orthogonal chirps, termed orthogonal chirp-division multiplexing (OCDM) for high-speed communication. As Fourier transform is the kernel of orthogonal frequency-division multiplexing (OFDM), which achieves the maximum spectral efficiency (SE) of frequency-division multiplexing, Fresnel transform underlies the proposed OCDM system, which achieves the maximum SE of chirp spread spectrum. By using discrete Fresnel transform, digital implementation of OCDM is introduced. According to the properties of Fresnel transform, the transmission of OCDM signal in linear time-invariant channel is studied. Efficient digital signal processing is proposed for channel dispersion compensation. The implementation of the OCDM system is discussed with the emphasis on its compatibility to the OFDM system; it is shown that it can be easily integrated into the existing OFDM systems. Finally, the simulations are provided to validate the feasibility of the proposed OCDM. It is shown that the OCDM system can efficiently exploit multipath diversity and thus outperforms the OFDM, and that it is more resilient against the interference due to insufficient guard interval than single-carrier frequency-domain equalization.
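The discrete Fresnel transform underlying OCDM can be checked numerically. For even N, one common form of the DFnT matrix has entries (1/√N)·e^{−jπ/4}·e^{jπ(m−n)²/N}, and its rows form an orthonormal bank of chirps, playing the role the DFT matrix plays in OFDM. A small sketch, assuming this even-N form:

```python
import numpy as np

def dfnt_matrix(N):
    """One common form of the discrete Fresnel transform (DFnT) matrix
    for even N: entry (m, n) is exp(-1j*pi/4)/sqrt(N) * exp(1j*pi*(m-n)**2/N).
    Its rows are discrete chirps, the OCDM counterpart of the DFT rows
    used by OFDM."""
    assert N % 2 == 0, "this form assumes an even transform size"
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-1j * np.pi / 4) / np.sqrt(N) * np.exp(1j * np.pi * (m - n) ** 2 / N)

Phi = dfnt_matrix(8)
# Unitarity: the bank of chirp subcarriers is orthonormal.
err = np.max(np.abs(Phi @ Phi.conj().T - np.eye(8)))
```

Unitarity is what lets an OCDM receiver demultiplex the chirps losslessly, exactly as the inverse DFT does for OFDM subcarriers.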

Journal ArticleDOI
TL;DR: This paper performs an analytical study of the latency performance of redundant requests, with the primary goals of characterizing under what scenarios sending redundant requests will help (and under what scenario it will not), and of designing optimal redundant-requesting policies.
Abstract: Many systems possess the flexibility to serve requests in more than one way, such as distributed storage systems that store multiple copies of the data. In such systems, the latency of serving the requests may potentially be reduced by sending redundant requests : a request may be sent to more servers than needed and deemed served when the requisite number of servers complete service. Such a mechanism trades off the possibility of faster execution of the request with the increase in the load on the system. Several recent works empirically evaluate the latency performance of redundant requests in diverse settings. In this paper, we perform an analytical study of the latency performance of redundant requests, with the primary goals of characterizing under what scenarios sending redundant requests will help (and under what scenarios it will not), and of designing optimal redundant-requesting policies. We show that when service times are i.i.d. memoryless or “heavier,” and when the additional copies of already-completed jobs can be removed instantly, maximally scheduling redundant requests achieves the optimal average latency. On the other hand, when service times are i.i.d. “lighter” or when service times are memoryless and removal of jobs is not instantaneous, then not having any redundancy in the requests is optimal under high loads. Our results are applicable to arbitrary arrival processes.
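The memoryless case discussed above is easy to reproduce: with i.i.d. exponential service and instantaneous removal of extra copies, the first of r redundant copies finishes in Exp(r·μ) time, so the mean latency is 1/(r·μ). A quick simulation (the rate and sample-size parameters are illustrative assumptions):

```python
import numpy as np

def mean_latency_with_redundancy(r, mu=1.0, n=200_000, seed=0):
    """Mean latency of one request sent redundantly to r servers with
    i.i.d. exponential (memoryless) service of rate mu, assuming the
    extra copies are cancelled instantly when the first one finishes.
    The minimum of r Exp(mu) variables is Exp(r*mu), so the mean
    should approach 1/(r*mu)."""
    rng = np.random.default_rng(seed)
    service = rng.exponential(1.0 / mu, size=(n, r))
    return float(service.min(axis=1).mean())  # first completion wins
```

This is the regime in which the paper shows maximal redundancy is latency-optimal; under "lighter" service distributions or non-instantaneous removal, the same experiment would favor no redundancy at high load.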

Journal ArticleDOI
TL;DR: The results reveal the impact of the imperfect CSI on the secrecy performance of MISO SWIPT systems in the presence of multiple wiretap channels.
Abstract: In this paper, a multiple-input single-output (MISO) simultaneous wireless information and power transfer (SWIPT) system, including one base station (BS) equipped with multiple antennas, one desired single-antenna information receiver (IR), and $N$ ( $N > 1$ ) single-antenna energy-harvesting receivers (ERs) is considered. Assuming that the information signal to the desired IR may be eavesdropped by ERs if ERs are malicious, we investigate the secrecy performance of the target MISO SWIPT system when imperfect channel state information (CSI) is available and adopted for transmit antenna selection at the BS. Considering that each eavesdropping link experiences independent but not necessarily identically distributed Rayleigh fading, the closed-form expressions for the exact and the asymptotic secrecy outage probability, and the average secrecy capacity are derived and verified by simulations. Furthermore, the optimal power splitting factor is derived for each ER to realize the tradeoff between the energy harvesting and the information eavesdropping. Our results reveal the impact of the imperfect CSI on the secrecy performance of MISO SWIPT systems in the presence of multiple wiretap channels.

Journal ArticleDOI
TL;DR: A new generative model for cluster-centric D2D networks is developed that allows one to study the effect of intra-cluster interfering devices, which are more likely to lie closer to the cluster center.
Abstract: This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster, such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we define and analyze the performance for three general cases: 1) $k$ -Tx case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at the $k{\mathrm{ th}}$ closest device to the cluster center; 2) $\ell $ -Rx case: the receiver of interest is the $\ell {\mathrm{ th}}$ closest device to the cluster center and its content of interest is available at a device chosen uniformly at random from the same cluster; and 3) baseline case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at a device chosen independently and uniformly at random from the same cluster. Easy-to-use expressions for the key performance metrics, such as coverage probability and area spectral efficiency of the whole network, are derived for all three cases. Our analysis concretely demonstrates significant improvement in the network performance when the device on which content is cached or the device requesting content from the cache is biased to lie closer to the cluster center compared with the baseline case. Based on this insight, we develop and analyze a new generative model for cluster-centric D2D networks that allows one to study the effect of intra-cluster interfering devices that are more likely to lie closer to the cluster center.

Journal ArticleDOI
TL;DR: The results reveal that a slight change of the arrival rate may greatly affect the fraction of unstable queues in the network and the gap between the sufficient conditions and the necessary conditions is small when the access probability, the density of transmitters, or the SINR threshold is small.
Abstract: We investigate the stable packet arrival rate region of a discrete-time slotted random access network, where the sources are distributed as a Poisson point process. Each of the sources in the network has a destination at a given distance and a buffer of infinite capacity. The network is assumed to be random but static, i.e., the sources and the destinations are placed randomly and remain static during all the time slots. We employ tools from queueing theory as well as point process theory to study the stability of this system using the concept of dominance. The problem is an instance of the interacting queues problem, further complicated by the Poisson spatial distribution. We obtain sufficient conditions and necessary conditions for stability. Numerical results show that the gap between the sufficient conditions and the necessary conditions is small when the access probability, the density of transmitters, or the SINR threshold is small. The results also reveal that a slight change of the arrival rate may greatly affect the fraction of unstable queues in the network.

Journal ArticleDOI
TL;DR: Numerical results indicate that both the number of the scheduled D2D links and the system throughput can be improved simultaneously with the Zipf-distribution caching scheme, the proposed D2D link-scheduling algorithm, and the proposed optimal power allocation algorithm compared with the state of the art.
Abstract: We study a one-hop device-to-device (D2D)-assisted wireless caching network, where popular files are randomly and independently cached in the memory of end users. Each user may obtain the requested files from its own memory without any transmission, or from a helper through a one-hop D2D transmission, or from the base station. We formulate a joint D2D link scheduling and power allocation problem to maximize the system throughput. However, the problem is non-convex, and obtaining an optimal solution is computationally hard. Alternatively, we decompose the problem into a D2D link-scheduling problem and an optimal power allocation problem. To solve the two subproblems, we first develop a D2D link-scheduling algorithm to select the largest number of D2D links satisfying both the signal-to-interference-plus-noise ratio and the transmit power constraints. Then, we develop an optimal power allocation algorithm to maximize the minimum transmission rate of the scheduled D2D links. Numerical results indicate that both the number of the scheduled D2D links and the system throughput can be improved simultaneously with the Zipf-distribution caching scheme, the proposed D2D link-scheduling algorithm, and the proposed optimal power allocation algorithm compared with the state of the art.
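For intuition on what SINR-constrained link selection looks like, here is a generic greedy removal heuristic (an illustrative stand-in, not the paper's algorithm): repeatedly drop the link with the worst SINR until every remaining link meets the threshold.

```python
import numpy as np

def greedy_link_removal(G, p, noise, sinr_min):
    """Generic greedy admission heuristic (an illustrative stand-in, not
    the paper's algorithm): drop the link with the worst SINR until all
    remaining links meet the threshold. G[i, j] is the channel gain from
    transmitter j to receiver i, and p[j] is transmitter j's power."""
    active = list(range(len(p)))
    while active:
        sig = np.array([G[i, i] * p[i] for i in active])
        intf = np.array([sum(G[i, j] * p[j] for j in active if j != i)
                         for i in active])
        sinr = sig / (intf + noise)
        if sinr.min() >= sinr_min:
            return active
        active.pop(int(np.argmin(sinr)))  # remove the worst link
    return active
```

With no cross-interference all links survive; with strong mutual interference the heuristic thins the set until the survivors are feasible, which is the qualitative behavior the scheduling subproblem must manage.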

Journal ArticleDOI
TL;DR: This paper uses results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise, and develops strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
Abstract: Device-to-device ( $ \sf {D2D}$ ) communication is a promising approach to optimize the utilization of air interface resources in 5G networks, since it allows decentralized opportunistic short-range communication. For $ \sf {D2D}$ to be useful, mobile nodes must possess content that other mobiles want. Thus, intelligent caching techniques are essential for $ \sf {D2D}$ . In this paper, we use results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise. We employ a general transmission strategy, where multiple files are cached at the users and different files can be transmitted simultaneously throughout the network. We then formulate an optimization problem, and find the caching distribution that maximizes the density of successful receptions (DSR) under a simple transmission strategy, where a single file is transmitted at a time throughout the network. We model file requests by a Zipf distribution with exponent $\gamma _{r}$ , which results in an optimal caching distribution that is also a Zipf distribution with exponent $\gamma _{c}$ , which is related to $\gamma _{r}$ through a simple expression involving the path loss exponent. We solve the optimal content placement problem for more general demand profiles under Rayleigh, Ricean, and Nakagami small-scale fading distributions. Our results suggest that it is required to flatten the request distribution to optimize the caching performance. We also develop strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
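The Zipf request and caching distributions mentioned above are straightforward to write down. The snippet below builds both pmfs, with a caching exponent chosen smaller than the request exponent to illustrate the "flattening" the abstract refers to; the specific exponent values and library size are assumptions, not the paper's derived relationship.

```python
import numpy as np

def zipf_pmf(n_files, gamma):
    """Zipf popularity over n_files ranked files: p_i proportional to
    i**(-gamma), normalized to sum to one."""
    w = np.arange(1, n_files + 1, dtype=float) ** (-gamma)
    return w / w.sum()

requests = zipf_pmf(100, 1.2)  # what users ask for (exponent assumed)
caching = zipf_pmf(100, 0.6)   # a flatter placement, gamma_c < gamma_r
```

A flatter caching pmf puts less mass on the single most popular file and spreads it over the tail, which is the qualitative shape of the optimal placement described in the abstract.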

Journal ArticleDOI
TL;DR: The use of the OFDM technique reduces the integration complexity of the system, since the parallel low-pass filters needed to recover the transmitted data in the multicarrier DCSK scheme are no longer required, and the advantages of this new hybrid design are shown.
Abstract: In this paper, a multiuser OFDM-based chaos shift keying (MU OFDM-DCSK) modulation is presented. In this system, the spreading operation is performed in the time domain over the multicarrier frequencies. To allow the multiple-access scenario without using excessive bandwidth, each user has $N_P$ predefined private frequencies from the $N$ available frequencies to transmit its reference signal and shares the remaining frequencies with the other users to transmit its $M$ spread bits. In this new design, $N_P$ duplicated chaotic reference signals are used to transmit $M$ bits instead of using $M$ different chaotic reference signals as done in DCSK systems. Moreover, given that $N_P < M$, the MU OFDM-DCSK scheme increases spectral efficiency, uses less energy, and allows a multiple-access scenario. Furthermore, the use of the OFDM technique reduces the integration complexity of the system, as the parallel low-pass filters are no longer needed to recover the transmitted data, unlike in the multicarrier DCSK scheme. Finally, the bit error rate performance is investigated under multipath Rayleigh fading channels in the presence of multiuser and additive white Gaussian noise interferences. Simulation results confirm the accuracy of our analysis and show the advantages of this new hybrid design.
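For intuition on the underlying DCSK principle (a deliberately simplified single-user, single-carrier relative of the MU OFDM-DCSK scheme, not the scheme itself), the sketch below generates a chaotic reference with the logistic map, transmits [reference | bit × reference], and recovers the bit from the sign of the correlation between the two halves. All parameters are illustrative assumptions.

```python
import numpy as np

def chaotic_ref(length, x0=0.3):
    """Chaotic spreading sequence from the logistic map x <- 4x(1-x),
    shifted to zero mean (parameters are illustrative)."""
    x = np.empty(length)
    x[0] = x0
    for k in range(1, length):
        x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
    return x - 0.5

def dcsk_detect(bit, spread=64, noise_std=0.1, seed=0):
    """Send [reference | bit * reference], add white Gaussian noise, and
    decide the bit from the sign of the correlation of the two halves."""
    rng = np.random.default_rng(seed)
    ref = chaotic_ref(spread)
    rx = np.concatenate([ref, bit * ref]) + rng.normal(0.0, noise_std, 2 * spread)
    corr = np.dot(rx[:spread], rx[spread:])
    return 1 if corr >= 0 else -1
```

The receiver needs no synchronized chaos generator, only the correlation of the two received halves, which is the property the MU OFDM-DCSK design exploits while amortizing reference signals across $M$ bits.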