
Showing papers in "IEEE Transactions on Wireless Communications in 2018"


Journal ArticleDOI
TL;DR: In this paper, the minimum throughput over all ground users in the downlink communication was maximized by optimizing the multiuser communication scheduling and association jointly with the UAV's trajectory and power control.
Abstract: Due to their high maneuverability, flexible deployment, and low cost, unmanned aerial vehicles (UAVs) have attracted significant interest recently in assisting wireless communication. This paper considers a multi-UAV enabled wireless communication system, where multiple UAV-mounted aerial base stations are employed to serve a group of users on the ground. To achieve fair performance among users, we maximize the minimum throughput over all ground users in the downlink communication by optimizing the multiuser communication scheduling and association jointly with the UAVs’ trajectories and power control. The formulated problem is a mixed integer nonconvex optimization problem that is challenging to solve. As such, we propose an efficient iterative algorithm for solving it by applying the block coordinate descent and successive convex optimization techniques. Specifically, the user scheduling and association, UAV trajectory, and transmit power are alternately optimized in each iteration. In particular, for the nonconvex UAV trajectory and transmit power optimization problems, two approximate convex optimization problems are solved, respectively. We further show that the proposed algorithm is guaranteed to converge. To speed up the algorithm convergence and achieve good throughput, a low-complexity and systematic initialization scheme is also proposed for the UAV trajectory design based on the simple circular trajectory and the circle packing scheme. Extensive simulation results are provided to demonstrate the significant throughput gains of the proposed design as compared to other benchmark schemes.

1,361 citations
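The alternating structure described in this abstract (block coordinate descent over scheduling, trajectory, and power) can be sketched generically. The toy objective and closed-form block updates below are hypothetical stand-ins for the paper's convex subproblems; only the loop structure reflects the algorithm.

```python
# Generic block coordinate descent: optimize one block of variables at a
# time while holding the others fixed, stopping when the objective stops
# improving. The blocks here are scalars and the updates are closed-form
# maximizers of a toy concave objective f(x, y) = -(x - y)^2 - (x - 1)^2,
# standing in for the paper's scheduling / trajectory / power subproblems.

def bcd_maximize(update_fns, blocks, objective, tol=1e-9, max_iter=1000):
    prev = objective(blocks)
    for _ in range(max_iter):
        for i, update in enumerate(update_fns):
            blocks[i] = update(blocks)      # solve subproblem i, others fixed
        cur = objective(blocks)
        if cur - prev < tol:                # monotone improvement => converges
            break
        prev = cur
    return blocks, objective(blocks)

# Toy blocks: argmax over x with y fixed is x = (y + 1) / 2;
# argmax over y with x fixed is y = x.
f = lambda b: -(b[0] - b[1]) ** 2 - (b[0] - 1) ** 2
updates = [lambda b: (b[1] + 1) / 2, lambda b: b[0]]
blocks, value = bcd_maximize(updates, [0.0, 0.0], f)   # converges to x = y = 1
```

Because each block update cannot decrease the objective, the iterate sequence is monotone and bounded above, which is the essence of the convergence guarantee such papers invoke.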


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a unified MEC-WPT design by considering a wireless powered multiuser MEC system, where a multiantenna access point (AP) integrated with an MEC server broadcasts wireless power to charge multiple users and each user node relies on the harvested energy to execute computation tasks.
Abstract: Mobile-edge computing (MEC) and wireless power transfer (WPT) have been recognized as promising techniques in the Internet of Things era to provide massive low-power wireless devices with enhanced computation capability and sustainable energy supply. In this paper, we propose a unified MEC-WPT design by considering a wireless powered multiuser MEC system, where a multiantenna access point (AP) (integrated with an MEC server) broadcasts wireless power to charge multiple users and each user node relies on the harvested energy to execute computation tasks. With MEC, these users can execute their respective tasks locally by themselves or offload all or part of them to the AP based on a time-division multiple access protocol. Building on the proposed model, we develop an innovative framework to improve the MEC performance, by jointly optimizing the energy transmit beamforming at the AP, the central processing unit frequencies and the numbers of offloaded bits at the users, as well as the time allocation among users. Under this framework, we address a practical scenario where latency-limited computation is required. In this case, we develop an optimal resource allocation scheme that minimizes the AP’s total energy consumption subject to the users’ individual computation latency constraints. Leveraging the state-of-the-art optimization techniques, we derive the optimal solution in a semiclosed form. Numerical results demonstrate the merits of the proposed design over alternative benchmark schemes.

752 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered a multi-user MEC network powered by the WPT, where each energy-harvesting WD follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole either locally or remotely at the MEC server via task offloading.
Abstract: Finite battery lifetime and low computing capability of size-constrained wireless devices (WDs) have been longstanding performance limitations of many low-power wireless networks, e.g., wireless sensor networks and Internet of Things. The recent development of radio frequency-based wireless power transfer (WPT) and mobile edge computing (MEC) technologies provide a promising solution to fully remove these limitations so as to achieve sustainable device operation and enhanced computational capability. In this paper, we consider a multi-user MEC network powered by the WPT, where each energy-harvesting WD follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole either locally or remotely at the MEC server via task offloading. In particular, we are interested in maximizing the (weighted) sum computation rate of all the WDs in the network by jointly optimizing the individual computing mode selection (i.e., local computing or offloading) and the system transmission time allocation (on WPT and task offloading). The major difficulty lies in the combinatorial nature of the multi-user computing mode selection and its strong coupling with the transmission time allocation. To tackle this problem, we first consider a decoupled optimization, where we assume that the mode selection is given and propose a simple bi-section search algorithm to obtain the conditional optimal time allocation. On top of that, a coordinate descent method is devised to optimize the mode selection. The method is simple in implementation but may suffer from high computational complexity in a large-size network. To address this problem, we further propose a joint optimization method based on the alternating direction method of multipliers (ADMM) decomposition technique, which enjoys a much slower increase of computational complexity as the network’s size increases. Extensive simulations show that both the proposed methods can efficiently achieve a near-optimal performance under various network setups, and significantly outperform the other representative benchmark methods considered.

563 citations
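The bi-section step in the decoupled optimization above — finding the optimal time allocation once the computing modes are fixed — reduces to a one-dimensional root search on a monotone optimality condition. A generic sketch; the condition function in the example is a hypothetical stand-in, not the paper's rate expression.

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Root of a monotonically increasing f on [lo, hi] with
    f(lo) <= 0 <= f(hi): the shape of the conditional time-allocation
    subproblem once the per-user computing modes are fixed."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid          # root lies to the right of mid
        else:
            hi = mid          # root lies at or left of mid
    return 0.5 * (lo + hi)

# Hypothetical optimality condition: find t with t^3 = 2 on [0, 2].
t_star = bisect_root(lambda t: t ** 3 - 2.0, 0.0, 2.0)
```

Each iteration halves the search interval, so reaching tolerance `tol` from an initial interval of width `w` costs about `log2(w / tol)` function evaluations, which is why such searches are cheap once the mode selection is fixed.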


Journal ArticleDOI
TL;DR: This paper derives the explicit input–output relation describing OTFS modulation and demodulation (mod/demod) and analyzes the cases of ideal pulse-shaping waveforms that satisfy the bi-orthogonality conditions and those which do not.
Abstract: The recently proposed orthogonal time–frequency–space (OTFS) modulation technique was shown to provide significant error performance advantages over orthogonal frequency division multiplexing (OFDM) over delay-Doppler channels. In this paper, we first derive the explicit input–output relation describing OTFS modulation and demodulation (mod/demod). We then analyze the cases of: 1) ideal pulse-shaping waveforms that satisfy the bi-orthogonality conditions and 2) rectangular waveforms which do not. We show that while only inter-Doppler interference (IDI) is present in the former case, additional inter-carrier interference (ICI) and inter-symbol interference (ISI) occur in the latter case. We next characterize the interferences and develop a novel low-complexity yet efficient message passing (MP) algorithm for joint interference cancellation (IC) and symbol detection. While ICI and ISI are eliminated through appropriate phase shifting, IDI can be mitigated by adapting the MP algorithm to account for only the largest interference terms. The MP algorithm can effectively compensate for a wide range of channel Doppler spreads. Our results indicate that OTFS using practical rectangular waveforms can achieve the performance of OTFS using ideal but non-realizable pulse-shaping waveforms. Finally, simulation results demonstrate the superior error performance gains of the proposed uncoded OTFS schemes over OFDM under various channel conditions.

539 citations


Journal ArticleDOI
TL;DR: Numerical results show that the shared deployment outperforms the separated case significantly, and the proposed weighted optimizations achieve a similar performance to the original optimizations, despite their significantly lower computational complexity.
Abstract: Beamforming techniques are proposed for a joint multi-input-multi-output (MIMO) radar-communication (RadCom) system, where a single device acts as radar and a communication base station (BS) by simultaneously communicating with downlink users and detecting radar targets. Two operational options are considered, where we first split the antennas into two groups, one for radar and the other for communication. Under this deployment, the radar signal is designed to fall into the null-space of the downlink channel. The communication beamformer is optimized such that the beampattern obtained matches the radar’s beampattern while satisfying the communication performance requirements. To reduce the optimizations’ constraints, we consider a second operational option, where all the antennas transmit a joint waveform that is shared by both radar and communications. In this case, we formulate an appropriate probing beampattern, while guaranteeing the performance of the downlink communications. By incorporating the SINR constraints into objective functions as penalty terms, we further simplify the original beamforming designs to weighted optimizations, and solve them by efficient manifold algorithms. Numerical results show that the shared deployment outperforms the separated case significantly, and the proposed weighted optimizations achieve a similar performance to the original optimizations, despite their significantly lower computational complexity.

458 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated how the UAV should optimally exploit its mobility via trajectory design to maximize the amount of energy transferred to all ERs during a finite charging period.
Abstract: This paper studies a new unmanned aerial vehicle (UAV)-enabled wireless power transfer system, where a UAV-mounted mobile energy transmitter is dispatched to deliver wireless energy to a set of energy receivers (ERs) at known locations on the ground. We investigate how the UAV should optimally exploit its mobility via trajectory design to maximize the amount of energy transferred to all ERs during a finite charging period. First, we consider the maximization of the sum energy received by all ERs by optimizing the UAV’s trajectory subject to its maximum speed constraint. Although this problem is non-convex, we obtain its optimal solution, which shows that the UAV should hover at one single fixed location during the whole charging period. However, the sum-energy maximization incurs a “near-far” fairness issue, where the received energy by the ERs varies significantly with their distances to the UAV’s optimal hovering location. To overcome this issue, we consider a different problem to maximize the minimum received energy among all ERs, which, however, is more challenging to solve than the sum-energy maximization. To tackle this problem, we first consider an ideal case by ignoring the UAV’s maximum speed constraint, and show that the relaxed problem can be optimally solved via the Lagrange dual method. The obtained trajectory solution implies that the UAV should hover over a set of fixed locations with optimal hovering time allocations among them. Then, for the general case with the UAV’s maximum speed constraint considered, we propose a new successive hover-and-fly trajectory motivated by the optimal trajectory in the ideal case and obtain efficient trajectory designs by applying the successive convex programing optimization technique. Finally, numerical results are provided to evaluate the performance of the proposed designs under different setups, as compared with benchmark schemes.

420 citations


Journal ArticleDOI
TL;DR: Numerical results show that the proposed UAV-enabled multicasting with optimized trajectory design achieves significant performance gains over other benchmark schemes.
Abstract: This paper studies an unmanned aerial vehicle (UAV)-enabled multicasting system, where a UAV is dispatched to disseminate a common file to a set of ground terminals (GTs). We aim to design the UAV trajectory to minimize its mission completion time, while ensuring that each GT successfully recovers the file with a desired high probability. The formulated problem is nonconvex and difficult to solve in its original form. Therefore, we first derive an effective lower bound for the success file recovery probability of each GT. The problem is then reformulated in a more tractable form, where the UAV trajectory only needs to be designed to ensure the minimum connection time constraint with each GT, during which their distance is below a certain threshold. We show that without loss of optimality, the UAV trajectory consists of connected line segments only, which can be obtained by determining the optimal set of waypoints as well as the UAV speed along the path connecting the waypoints. We propose efficient schemes for the waypoint design based on a novel concept of virtual base station placement and by applying convex optimization. Furthermore, for fixed waypoints, the optimal UAV speed is efficiently obtained by solving a linear programming problem. Numerical results show that the proposed UAV-enabled multicasting with optimized trajectory design achieves significant performance gains over other benchmark schemes.

369 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that with multicell MMSE precoding/combining and a tiny amount of spatial channel correlation or large-scale fading variations over the array, the capacity increases without bound as the number of antennas increases, even under pilot contamination.
Abstract: The capacity of cellular networks can be improved by the unprecedented array gain and spatial multiplexing offered by Massive MIMO. Since its inception, the coherent interference caused by pilot contamination has been believed to create a finite capacity limit, as the number of antennas goes to infinity. In this paper, we prove that this is incorrect and an artifact from using simplistic channel models and suboptimal precoding/combining schemes. We show that with multicell MMSE precoding/combining and a tiny amount of spatial channel correlation or large-scale fading variations over the array, the capacity increases without bound as the number of antennas increases, even under pilot contamination. More precisely, the result holds when the channel covariance matrices of the contaminating users are asymptotically linearly independent, which is generally the case. If the diagonals of the covariance matrices are also linearly independent, it is sufficient to know these diagonals (and not the full covariance matrices) to achieve an unlimited asymptotic capacity.

358 citations
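The multicell MMSE combining driving this result has a compact single-cell analogue: regularized inversion against the sum of user channel outer products. A minimal numpy sketch of that toy single-cell form, ignoring the inter-cell covariance terms the paper's M-MMSE scheme actually exploits:

```python
import numpy as np

def mmse_combiners(H, noise_var):
    """Columns of H are the M-antenna channels h_k of the K users.
    Returns V whose k-th column is (H H^H + noise_var * I)^(-1) h_k,
    i.e., the classic MMSE receive combiner for each user."""
    M = H.shape[0]
    A = H @ H.conj().T + noise_var * np.eye(M)
    return np.linalg.solve(A, H)

# Single user on antenna 1 of a 2-antenna array, unit noise power:
H = np.array([[1.0], [0.0]], dtype=complex)
V = mmse_combiners(H, 1.0)   # (h h^H + I)^(-1) h = [0.5, 0]^T
```

The regularizing term is what lets the combiner trade interference suppression against noise enhancement; the paper's multicell version additionally feeds in the contaminating users' covariance structure, which is where the unbounded-capacity result comes from.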


Journal ArticleDOI
TL;DR: The novel partial compression offloading model can significantly reduce the end-to-end latency in a multi-user time-division multiple access MECO system with joint communication and computation resource allocation.
Abstract: By offloading intensive computation tasks to the edge cloud located at the cellular base stations, mobile-edge computation offloading (MECO) has been regarded as a promising means to accomplish the ambitious millisecond-scale end-to-end latency requirement of fifth-generation networks. In this paper, we investigate the latency-minimization problem in a multi-user time-division multiple access MECO system with joint communication and computation resource allocation. Three different computation models are studied, i.e., local compression, edge cloud compression, and partial compression offloading. First, closed-form expressions of optimal resource allocation and minimum system delay for both local and edge cloud compression models are derived. Then, for the partial compression offloading model, we formulate a piecewise optimization problem and prove that the optimal data segmentation strategy has a piecewise structure. Based on this result, an optimal joint communication and computation resource allocation algorithm is developed. To gain more insights, we also analyze a specific scenario where communication resource is adequate while computation resource is limited. In this special case, the closed-form solution of the piecewise optimization problem can be derived. Our proposed algorithms are finally verified by numerical results, which show that the novel partial compression offloading model can significantly reduce the end-to-end latency.

337 citations


Journal ArticleDOI
TL;DR: This paper investigates the optimal policy for user scheduling and resource allocation in HetNets powered by hybrid energy with the purpose of maximizing energy efficiency of the overall network and demonstrates the convergence property of the proposed algorithm.
Abstract: Dense deployment of various small-cell base stations in cellular networks to increase capacity leads to heterogeneous networks (HetNets); meanwhile, embedding energy-harvesting capabilities in base stations as an alternative energy supply is becoming a reality. Making efficient use of radio resources and renewable energy is a new challenge. This paper investigates the optimal policy for user scheduling and resource allocation in HetNets powered by hybrid energy with the purpose of maximizing energy efficiency of the overall network. Since wireless channel conditions and renewable energy arrival rates have stochastic properties and the environment’s dynamics are unknown, the model-free reinforcement learning approach is used to learn the optimal policy through interactions with the environment. To solve our problem with continuous-valued state and action variables, a policy-gradient-based actor-critic algorithm is proposed. The actor part uses the Gaussian distribution as the parameterized policy to generate continuous stochastic actions, and the policy parameters are updated with the gradient ascent method. The critic part uses compatible function approximation to estimate the performance of the policy and helps the actor learn the gradient of the policy. The advantage function is used to further reduce the variance of the policy gradient. Using numerical simulations, we demonstrate the convergence property of the proposed algorithm and analyze network energy efficiency.

256 citations


Journal ArticleDOI
TL;DR: Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network; while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control.
Abstract: In conventional terrestrial cellular networks, mobile terminals (MTs) at the cell edge often pose a performance bottleneck due to their long distances from the serving ground base station (GBS), especially in the hotspot period when the GBS is heavily loaded. This paper proposes a new hybrid network architecture that leverages an unmanned aerial vehicle (UAV) as an aerial mobile base station, which flies cyclically along the cell edge to offload data traffic for cell-edge MTs. We aim to maximize the minimum throughput of all MTs by jointly optimizing the UAV’s trajectory, bandwidth allocation, and user partitioning. We first consider orthogonal spectrum sharing between the UAV and GBS, and then extend to spectrum reuse where the total bandwidth is shared by both the GBS and UAV with their mutual interference effectively avoided. Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network; while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control. Moreover, compared with the conventional small-cell offloading scheme, the proposed UAV offloading scheme is shown to outperform in terms of throughput, besides saving the infrastructure cost.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a frequency-selective mm-wave channel and proposed compressed sensing-based strategies to estimate the channel in the frequency domain, and evaluated different algorithms and computed their complexity to expose tradeoffs in complexity overhead performance as compared with those of previous approaches.
Abstract: Channel estimation is useful in millimeter wave (mm-wave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics, such as mutual information or signal-to-interference noise ratio. At mm-wave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade-off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage of the received signal. Most of the prior work on channel estimation for hybrid architectures assumes a frequency-flat mm-wave channel model. In this paper, we consider a frequency-selective mm-wave channel and propose compressed sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose tradeoffs in complexity overhead performance as compared with those of previous approaches.
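Compressed-sensing recovery of a sparse channel of the kind described above is typically done with a greedy pursuit. Below is a minimal orthogonal matching pursuit sketch; the measurement matrix `A` is a generic stand-in, whereas in the paper it would encode the hybrid combining network and a frequency-domain dictionary.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedily recover a sparse x with y ~= A @ x: pick the column most
    correlated with the residual, then least-squares refit on the support."""
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    residual = y.astype(complex)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy: a 1-sparse "channel" observed through an identity dictionary.
A = np.eye(3)
x_hat = omp(A, np.array([0.0, 2.0, 0.0]), sparsity=1)
```

The per-iteration least-squares refit over the whole support (rather than keeping earlier coefficients fixed) is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to the selected columns.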

Journal ArticleDOI
TL;DR: The effects of important system parameters on the optimum UAV positions and relaying performance are revealed to provide useful guidelines, and the dual-hop multi-link option is shown to be better than the multi-hop single-link option when the air-to-ground path loss parameters depend on the UAV locations.
Abstract: Unmanned aerial vehicles (UAVs) have found many important applications in communications. They can serve as either aerial base stations or mobile relays to improve the quality of services. In this paper, we study the use of multiple UAVs in relaying. Considering two typical uses of multiple UAVs as relays that form either a single multi-hop link or multiple dual-hop links, we first optimize the placement of the UAVs by maximizing the end-to-end signal-to-noise ratio for three useful channel models and two common relaying protocols. Based on the optimum placement, the two relaying setups are then compared in terms of outage and bit error rate. Numerical results show that the dual-hop multi-link option is better than the multi-hop single-link option when the air-to-ground path loss parameters depend on the UAV positions. Otherwise, the dual-hop option is only better when the source-to-destination distance is small. Also, decode-and-forward UAVs provide better performance than amplify-and-forward UAVs. The investigation also reveals the effects of important system parameters on the optimum UAV positions and relaying performance, providing useful guidelines.
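The placement step — maximizing end-to-end SNR over relay positions — can be illustrated with a toy one-dimensional decode-and-forward example, where the end-to-end SNR is the minimum of the two hop SNRs. The free-space path-loss model and grid search below are simplifications for illustration, not the paper's air-to-ground channel models or its analytical optimization:

```python
def best_relay_position(total_dist, alpha=2.0, p=1.0, grid=10000):
    """Grid-search the relay position x on the source-destination line that
    maximizes the DF end-to-end SNR min(p / x^alpha, p / (D - x)^alpha)."""
    best_x, best_snr = None, -1.0
    for i in range(1, grid):
        x = total_dist * i / grid
        snr = min(p / x ** alpha, p / (total_dist - x) ** alpha)
        if snr > best_snr:
            best_x, best_snr = x, snr
    return best_x, best_snr

x_opt, snr_opt = best_relay_position(10.0)   # symmetric hops -> midpoint
```

With identical path-loss exponents on both hops the bottleneck-hop SNR is maximized at the midpoint; asymmetric exponents (as with position-dependent air-to-ground parameters) pull the optimum toward the hop with the harsher channel, which is the qualitative effect the paper quantifies.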

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the optimized MEC system utilizing cooperation has significant performance improvement over systems without cooperation.
Abstract: This paper studies a mobile edge computing (MEC) system in which two mobile devices are energized by the wireless power transfer (WPT) from an access point (AP) and they can offload part or all of their computation-intensive latency-critical tasks to the AP connected with an MEC server or an edge cloud. This harvest-then-offload protocol operates in an optimized time-division manner. To overcome the doubly near-far effect for the farther mobile device, cooperative communications in the form of relaying via the nearer mobile device is considered for offloading. Our aim is to minimize the AP’s total transmit energy subject to the constraints of the computational tasks. We illustrate that the optimization is equivalent to a min–max problem, which can be optimally solved by a two-phase method. The first phase obtains the optimal offloading decisions by solving a sum-energy-saving maximization problem for a given energy transmit power. In the second phase, the optimal minimum energy transmit power is obtained by a bisection search method. Numerical results demonstrate that the optimized MEC system utilizing cooperation has significant performance improvement over systems without cooperation.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation.
Abstract: This paper investigates the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation. Random beamforming is invoked for reducing the feedback overhead of the considered system. A non-convex optimization problem for maximizing the sum rate is formulated, which is proved to be NP-hard. The branch and bound approach is invoked to obtain the ε-optimal power allocation policy, which is proved to converge to a global optimal solution. To elaborate further, a low-complexity suboptimal approach is developed for striking a good computational complexity-optimality tradeoff, where the matching theory and successive convex approximation techniques are invoked for tackling the user scheduling and power allocation problems, respectively. Simulation results reveal that: 1) the proposed low complexity solution achieves a near-optimal performance and 2) the proposed mm-Wave NOMA system is capable of outperforming conventional mm-Wave orthogonal multiple access systems in terms of sum rate and the number of served users.

Journal ArticleDOI
TL;DR: The results show that the proposed sub-optimal solution achieves close-to-bound sum-rate performance, which is significantly better than that of time-division multiple access.
Abstract: In this paper, we explore non-orthogonal multiple access (NOMA) in millimeter-wave (mm-wave) communications (mm-wave-NOMA). In particular, we consider a typical problem, i.e., maximization of the sum rate of a 2-user mm-wave-NOMA system. In this problem, we need to find the beamforming vector to steer towards the two users simultaneously subject to an analog beamforming structure, while allocating appropriate power to them. As the problem is non-convex and may not be converted to a convex problem with simple manipulations, we propose a suboptimal solution to this problem. The basic idea is to decompose the original joint beamforming and power allocation problem into two sub-problems which are relatively easy to solve: one is a power and beam gain allocation problem, and the other is a beamforming problem under a constant-modulus constraint. Extension of the proposed solution from 2-user mm-wave-NOMA to multi-user mm-wave-NOMA is also discussed. Extensive performance evaluations are conducted to verify the rationale of the proposed solution, and the results also show that the proposed sub-optimal solution achieves close-to-bound sum-rate performance, which is significantly better than that of time-division multiple access.

Journal ArticleDOI
TL;DR: Simulation results reveal that the proposed machine learning framework enhances the performance of mm-wave-NOMA systems compared to the conventional user clustering algorithms, and the proposed K-means-based online user clustering algorithm provides a comparable performance to the conventional K-means algorithm and strikes a good balance between performance and computational complexity.
Abstract: Millimeter-wave non-orthogonal multiple access (mm-wave-NOMA) systems exploit the power domain for multiple accesses to further enhance the spectral efficiency. User clustering and power allocation can effectively exploit the potential of NOMA in mm-wave systems. This paper investigates the sum rate maximization problem of mm-wave-NOMA systems under the constraints of the total transmission power and users’ predefined rate requirements. The formulated optimization problem is a non-linear programming problem and, thus, is non-convex and challenging to solve, especially when the number of users becomes large. Sparked by the correlation features of the users’ channels in mm-wave-NOMA systems, we develop a K-means-based machine learning algorithm for user clustering. Moreover, for a practical dynamic scenario where the new users keep arriving in a continuous fashion, we propose a K-means-based online user clustering algorithm to reduce the computational complexity. Furthermore, to further enhance the performance of the proposed mm-wave-NOMA system, we derive the optimal power allocation policy in a closed form by exploiting the successive decoding feature. Simulation results reveal that: 1) the proposed machine learning framework enhances the performance of mm-wave-NOMA systems compared to the conventional user clustering algorithms and 2) the proposed K-means-based online user clustering algorithm provides a comparable performance to the conventional K-means algorithm and strikes a good balance between performance and computational complexity.
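The user-clustering step above can be illustrated with a bare-bones k-means on scalar features. In the paper the features would be channel-correlation statistics; the one-dimensional values and deterministic initialization below are simplifications for illustration.

```python
def kmeans(points, k, iters=50):
    """Plain k-means: assign each point to the nearest center, then move
    each center to the mean of its cluster. Deterministic init from the
    first k points (a real system would use k-means++ or similar)."""
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of toy "channel features":
centers, clusters = kmeans([0.1, 5.0, 0.2, 5.2, 0.15, 4.9], k=2)
```

The online variant the paper proposes would assign each newly arriving user to its nearest existing center and update only that center incrementally, avoiding a full re-clustering per arrival; that is the source of the complexity saving the TL;DR mentions.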

Journal ArticleDOI
TL;DR: A novel deep learning approach is proposed for modeling the resource allocation problem of LTE-LAA small base stations (SBSs) and it is shown that the proposed scheme can yield up to 28% and 11% gains over a conventional reactive approach and a proportional fair coexistence mechanism, respectively.
Abstract: Performing cellular long term evolution (LTE) communications in unlicensed spectrum using licensed assisted access LTE (LTE-LAA) is a promising approach to overcome wireless spectrum scarcity. However, to reap the benefits of LTE-LAA, a fair coexistence mechanism with other incumbent WiFi deployments is required. In this paper, a novel deep learning approach is proposed for modeling the resource allocation problem of LTE-LAA small base stations (SBSs). The proposed approach enables multiple SBSs to proactively perform dynamic channel selection, carrier aggregation, and fractional spectrum access while guaranteeing fairness with existing WiFi networks and other LTE-LAA operators. Adopting a proactive coexistence mechanism enables future delay-tolerant LTE-LAA data demands to be served within a given prediction window ahead of their actual arrival time thus avoiding the underutilization of the unlicensed spectrum during off-peak hours while maximizing the total served LTE-LAA traffic load. To this end, a noncooperative game model is formulated in which SBSs are modeled as homo egualis agents that aim at predicting a sequence of future actions and thus achieving long-term equal weighted fairness with wireless local area network and other LTE-LAA operators over a given time horizon. The proposed deep learning algorithm is then shown to reach a mixed-strategy Nash equilibrium, when it converges. Simulation results using real data traces show that the proposed scheme can yield up to 28% and 11% gains over a conventional reactive approach and a proportional fair coexistence mechanism, respectively. The results also show that the proposed framework prevents WiFi performance degradation for a densely deployed LTE-LAA network.

Journal ArticleDOI
TL;DR: In this article, the authors examined the achievable performance of covert communication in amplify-and-forward one-way relay networks, where the relay is greedy and opportunistically transmits its own information to the destination covertly on top of forwarding the source's message, while the source tries to detect this covert transmission to discover the illegitimate usage of the resource (e.g., power and spectrum).
Abstract: Covert wireless communication aims to hide the very existence of wireless transmissions in order to guarantee strong security in wireless networks. In this paper, we examine the possibility and achievable performance of covert communication in amplify-and-forward one-way relay networks. Specifically, the relay is greedy and opportunistically transmits its own information to the destination covertly on top of forwarding the source’s message, while the source tries to detect this covert transmission to discover the illegitimate usage of the resource (e.g., power and spectrum) allocated only for the purpose of forwarding the source’s information. We propose two strategies for the relay to transmit its covert information, namely rate-control and power-control transmission schemes, for which the source’s detection limits are analyzed in terms of detection error probability and the achievable effective covert rates from the relay to destination are derived. Our examination determines the conditions under which the rate-control transmission scheme outperforms the power-control transmission scheme, and vice versa, which enables the relay to achieve the maximum effective covert rate. Our analysis indicates that the relay has to forward the source’s message to shield its covert transmission, and the effective covert rate increases with its forwarding ability (e.g., its maximum transmit power).
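The detection problem faced by the source in this abstract is, at its core, a binary hypothesis test on received power. As a generic illustration (not the paper's analysis), the sketch below estimates a radiometer's detection error probability, the sum of false-alarm and missed-detection probabilities, by Monte Carlo simulation; the blocklength, covert SNR, and threshold values are all hypothetical.

```python
import random, math

def detection_error(n=200, snr_covert=0.5, threshold=1.25, trials=4000, seed=1):
    """Monte Carlo estimate of a radiometer's detection error probability.

    The detector averages the received power over n symbols and compares it
    against a threshold. Under H0 only unit-variance noise is present; under
    H1 the covert signal raises the mean power by snr_covert. The detection
    error probability is P(false alarm) + P(missed detection). All parameter
    values here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)

    def avg_power(extra):
        # average power of n complex Gaussian samples (total power 1 + extra)
        s = 0.0
        for _ in range(n):
            re = rng.gauss(0.0, math.sqrt((1.0 + extra) / 2.0))
            im = rng.gauss(0.0, math.sqrt((1.0 + extra) / 2.0))
            s += re * re + im * im
        return s / n

    false_alarm = sum(avg_power(0.0) > threshold for _ in range(trials)) / trials
    missed = sum(avg_power(snr_covert) <= threshold for _ in range(trials)) / trials
    return false_alarm + missed
```

A detection error probability near 1 means the covert link is essentially undetectable at that threshold; the relay's power-control scheme in the abstract works by keeping the detector in that regime.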

Journal ArticleDOI
TL;DR: Simulation results show that the proposed radio resource management scheme can reduce the interference from V2V communication to CUEs and ensure the latency and reliability requirements of V2V communication.
Abstract: By leveraging direct device-to-device interaction, LTE vehicle-to-vehicle (V2V) communication becomes a promising solution to meet the stringent requirements of vehicular communication. In this paper, we propose jointly optimizing the radio resource, power allocation, and modulation/coding schemes of the V2V communications, in order to guarantee the latency and reliability requirements of vehicular user equipments (VUEs) while maximizing the information rate of the cellular user equipment (CUE). To ensure the solvability of this optimization problem, the packet latency constraint is first transformed into a data rate constraint based on random network analysis by adopting the Poisson distribution model for the packet arrival process of each VUE. Then, utilizing the Lagrange dual decomposition and binary search, a resource management algorithm is proposed to find the optimal solution of the joint optimization problem with reasonable complexity. Simulation results show that the proposed radio resource management scheme can reduce the interference from V2V communication to CUEs and ensure the latency and reliability requirements of V2V communication.
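The binary-search component mentioned in this abstract exploits monotonicity: the achievable rate grows monotonically with transmit power, so the boundary of a rate constraint can be bisected. A minimal, self-contained sketch with an assumed simple SINR model (not the paper's full Lagrange dual decomposition):

```python
import math

def min_power_bisect(g, interference, noise, r_min, p_max=10.0, tol=1e-9):
    """Find the smallest transmit power meeting a data-rate constraint.

    The rate log2(1 + p*g/(interference + noise)) is monotone in p, so
    bisection converges to the constraint boundary. The variable names and
    the single-link SINR model are illustrative assumptions, standing in
    for the rate constraint derived from the packet latency requirement.
    """
    def rate(p):
        return math.log2(1.0 + p * g / (interference + noise))

    if rate(p_max) < r_min:
        return None  # constraint infeasible within the power budget
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if rate(mid) >= r_min:
            hi = mid
        else:
            lo = mid
    return hi
```

For this toy model a closed form exists (p = (2^r_min - 1)(interference + noise)/g), which makes the bisection easy to sanity-check; in the paper's joint problem the search runs over a dual variable without such a closed form.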

Journal ArticleDOI
TL;DR: In this article, the uplink and downlink localization limits in terms of 3D position and orientation error bounds for mm-wave multipath channels were analyzed, and a detailed analysis of the dependence of the bounds on different system parameters was carried out.
Abstract: Location-aware communication systems are expected to play a pivotal part in the next generation of mobile communication networks. Therefore, there is a need to understand the localization limits in these networks, particularly when using millimeter-wave (mm-wave) technology. Toward that end, we address the uplink and downlink localization limits in terms of 3D position and orientation error bounds for mm-wave multipath channels. We also carry out a detailed analysis of the dependence of the bounds on different system parameters. Our key findings indicate that the uplink and downlink behave differently in two distinct ways. First, the error bounds have different scaling factors with respect to the number of antennas in the uplink and downlink. Second, uplink localization is sensitive to the orientation angle of the user equipment (UE), whereas downlink localization is not. Moreover, in the considered outdoor scenarios, the non-line-of-sight paths generally improve localization when a line-of-sight path exists. Finally, our numerical results show that mm-wave systems are capable of localizing a UE with sub-meter position error and sub-degree orientation error.

Journal ArticleDOI
TL;DR: A threshold-based approach for detecting the first peak of the channel impulse response is proposed in which the threshold adapts to the environmental noise level and is demonstrated to be robust against noise and interference in the environment.
Abstract: Exploiting cellular long-term evolution (LTE) downlink signals for navigation purposes is considered. First, the transmitted LTE signal model is presented and relevant positioning and timing information that can be extracted from these signals are identified. Second, a software-defined receiver (SDR) that is capable of acquiring, tracking, and producing pseudoranges from LTE signals is designed. Third, a threshold-based approach for detecting the first peak of the channel impulse response is proposed in which the threshold adapts to the environmental noise level. This method is demonstrated to be robust against noise and interference in the environment. Fourth, an approach for estimating pseudoranges of multiple base stations by tracking only one base station is proposed. Fifth, a navigation framework based on an extended Kalman filter is proposed to produce the navigation solution using the pseudorange measurements obtained by the proposed SDR. Finally, the proposed SDR is evaluated experimentally on an unmanned aerial vehicle (UAV) and a ground vehicle. The root mean squared error (RMSE) between the GPS navigation solution and the solution produced by the proposed SDR from LTE signals of three base stations for the UAV is shown to be 8.15 m with a standard deviation of 2.83 m. The RMSE between the GPS navigation solution and the LTE-based solution from six base stations in a severe multipath environment for the ground vehicle is shown to be 5.80 m with a standard deviation of 3.02 m.
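The noise-adaptive first-peak detector described in this abstract can be sketched in a few lines: estimate the noise floor, scale it into a threshold, and return the earliest channel-impulse-response tap above it (typically the line-of-sight arrival, which may be weaker than the strongest multipath peak). The scale factor and the use of a separate noise-only sample window are illustrative assumptions, not the paper's exact rule.

```python
import math

def first_peak_index(cir_mag, noise_samples, factor=4.0):
    """Noise-adaptive first-peak detection on a channel impulse response.

    cir_mag: tap magnitudes of the estimated channel impulse response.
    noise_samples: samples assumed to contain noise only, used to estimate
    the environmental noise level. The threshold is factor * noise RMS,
    so it adapts to the noise level as in the abstract; the factor value
    is a hypothetical choice.
    """
    noise_rms = math.sqrt(sum(x * x for x in noise_samples) / len(noise_samples))
    threshold = factor * noise_rms
    for i, m in enumerate(cir_mag):
        if m > threshold:
            return i
    return None  # no tap exceeded the adaptive threshold
```

The returned tap index, scaled by the sampling interval and the speed of light, would give the pseudorange contribution of the earliest arrival.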

Journal ArticleDOI
TL;DR: In this article, the authors derived the Cramer-Rao bound (CRB) on position and rotation angle estimation uncertainty from a single transmitter, in the presence of scatterers.
Abstract: Millimeter-wave (mm-wave) signals and large antenna arrays are considered enabling technologies for future 5G networks. While their benefits for achieving high-data rate communications are well-known, their potential advantages for accurate positioning are largely undiscovered. We derive the Cramer-Rao bound (CRB) on position and rotation angle estimation uncertainty using mm-wave signals from a single transmitter, in the presence of scatterers. We also present a novel two-stage algorithm for position and rotation angle estimation that attains the CRB for average-to-high signal-to-noise ratios. The algorithm is based on multiple measurement vectors matching pursuit for coarse estimation, followed by a refinement stage based on the space-alternating generalized expectation maximization algorithm. We find that accurate position and rotation angle estimation is possible using signals from a single transmitter, in either line-of-sight, non-line-of-sight, or obstructed-line-of-sight conditions.

Journal ArticleDOI
TL;DR: It is shown that for the considered power allocation scheme, random and optimum user pairing perform similarly in the large system limit, but optimum pairing is significantly better in finite dimensions.
Abstract: User pairing in non-orthogonal multiple-access (NOMA) uplink is investigated considering some predefined power allocation schemes. The base station divides the set of users into disjoint pairs and assigns the available resources to these pairs. The combinatorial problem of user pairing to achieve the maximum sum rate is analyzed in the large system limit for various scenarios, and some optimum and sub-optimum algorithms with a polynomial-time complexity are proposed. In the first scenario, $2M$ users and the base station each have a single antenna and communicate over $M$ subcarriers. The performance of optimum pairing is derived for $M\rightarrow \infty$ and shown to be superior to random pairing and orthogonal multiple access techniques. In the second setting, a novel NOMA scheme for a multi-antenna base station and single carrier communication is proposed. In this case, the users need not be aware of the pairing strategy. Furthermore, the proposed NOMA scheme is generalized to multi-antenna users. It is shown that for the considered power allocation scheme, random and optimum user pairing perform similarly in the large system limit, but optimum pairing is significantly better in finite dimensions. It is shown that NOMA with the proposed user pairing scheme outperforms a previously proposed NOMA scheme based on signal alignment.
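For small instances, the combinatorial pairing problem in this abstract can be solved by exhaustive search, which makes the objective concrete (the paper itself gives polynomial-time algorithms for the large-system regime). Here pair_rate is any symmetric function mapping two channel gains to a pair's rate; the concrete NOMA rate model is left to the caller as an assumption.

```python
def best_pairing(gains, pair_rate):
    """Exhaustively find the user pairing maximizing the sum rate.

    gains: channel gains of an even number of users. The search fixes the
    lowest-indexed unpaired user, tries every partner, and recurses on the
    rest, enumerating all perfect matchings. Exponential in the number of
    users, so only a sanity-check sketch, not the paper's algorithm.
    """
    users = list(range(len(gains)))

    def search(remaining):
        if not remaining:
            return 0.0, []
        first, rest = remaining[0], remaining[1:]
        best_val, best_pairs = float("-inf"), None
        for j in rest:
            others = [u for u in rest if u != j]
            val, pairs = search(others)
            val += pair_rate(gains[first], gains[j])
            if val > best_val:
                best_val, best_pairs = val, [(first, j)] + pairs
        return best_val, best_pairs

    return search(users)
```

With a concave pair-rate function, the optimum tends to pair strong users with weak ones, which is the intuition behind NOMA pairing gains over random pairing.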

Journal ArticleDOI
TL;DR: In this article, a two-user downlink NOMA system with finite blocklength constraints is considered, and a 1-D search algorithm is proposed to resolve the challenges arising mainly from the finite-blocklength achievable rate and the unguaranteed successive interference cancellation.
Abstract: This paper introduces downlink non-orthogonal multiple access (NOMA) into short-packet communications. NOMA has great potential to improve fairness and spectral efficiency with respect to orthogonal multiple access (OMA) for low-latency downlink transmission, thus making it attractive for the emerging Internet of Things. We consider a two-user downlink NOMA system with finite blocklength constraints, in which the transmission rates and power allocation are optimized. To this end, we investigate the trade-off among the transmission rate, decoding error probability, and the transmission latency measured in blocklength. Then, a 1-D search algorithm is proposed to resolve the challenges arising mainly from the finite-blocklength achievable rate and the unguaranteed successive interference cancellation. We also analyze the performance of OMA as a benchmark to fully demonstrate the benefit of NOMA. Our simulation results show that NOMA significantly outperforms OMA in terms of achieving a higher effective throughput subject to the same finite blocklength constraint, or incurring a lower latency to achieve the same effective throughput target. Interestingly, we further find that with the finite blocklength, the advantage of NOMA relative to OMA is more prominent when the effective throughput targets of the two users become more comparable.
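The finite-blocklength effects discussed in this abstract are commonly captured by the normal approximation, in which the Shannon rate is reduced by a dispersion penalty. The sketch below combines that approximation with a simple 1-D grid search over the two-user power split; the SIC-based rate expressions and the grid resolution are illustrative assumptions rather than the paper's exact formulation.

```python
import math
from statistics import NormalDist

def fbl_rate(snr, blocklength, error_prob):
    """Normal-approximation achievable rate (bits/channel use) at finite
    blocklength: log2(1 + snr) minus a dispersion penalty that vanishes
    as the blocklength grows (Polyanskiy-style approximation)."""
    dispersion = 1.0 - 1.0 / (1.0 + snr) ** 2
    qinv = NormalDist().inv_cdf(1.0 - error_prob)  # Q^{-1}(error_prob)
    return (math.log2(1.0 + snr)
            - math.sqrt(dispersion / blocklength) * qinv * math.log2(math.e))

def best_power_split(g1, g2, p_total, n, eps, grid=200):
    """1-D grid search over the NOMA power split between two users.

    A simplified stand-in for the paper's 1-D search: user 1 (weak, gain
    g1) treats user 2's signal as noise; user 2 (strong, gain g2) cancels
    user 1 via SIC. The sum-rate objective and the grid size are
    illustrative assumptions. Returns (best sum rate, power fraction a)."""
    best = (float("-inf"), None)
    for k in range(1, grid):
        a = k / grid                      # fraction of power to user 1
        snr1 = a * p_total * g1 / (1.0 + (1.0 - a) * p_total * g1)
        snr2 = (1.0 - a) * p_total * g2   # after successful SIC
        total = fbl_rate(snr1, n, eps) + fbl_rate(snr2, n, eps)
        if total > best[0]:
            best = (total, a)
    return best
```

As the blocklength n grows, fbl_rate approaches the Shannon capacity, recovering the classical NOMA power-allocation problem.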

Journal ArticleDOI
TL;DR: This paper outlines a strategy to extract spatial information at sub-6 GHz and use it in mmWave compressed beam-selection, and outlines a structured precoder/combiner design to tailor the training to out-of-band information.
Abstract: Millimeter wave (mmWave) communication is one feasible solution for high data-rate applications like vehicular-to-everything communication and next generation cellular communication. Configuring mmWave links, which can be done through channel estimation or beam-selection, however, is a source of significant overhead. In this paper, we propose using spatial information extracted at sub-6 GHz to help establish the mmWave link. Assuming a fully digital architecture at sub-6 GHz and an analog architecture at mmWave, we outline a strategy to extract spatial information at sub-6 GHz and use it in mmWave compressed beam-selection. Specifically, we formulate compressed beam-selection as a weighted sparse signal recovery problem, and obtain the weighting information from sub-6 GHz channels. In addition, we outline a structured precoder/combiner design to tailor the training to out-of-band information. We also extend the proposed out-of-band aided compressed beam-selection approach to leverage information from all active subcarriers at mmWave. To simulate multi-band frequency dependent channels, we review the prior work on frequency dependent channel behavior and outline a multi-frequency channel model. The simulation results for achievable rate show that out-of-band aided beam-selection can considerably reduce the training overhead of in-band only beam-selection.
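The weighted sparse recovery idea can be illustrated with a tiny matching-pursuit loop in which per-beam weights, standing in for the sub-6 GHz spatial prior, bias the atom (beam) selection. Real-valued unit-norm atoms and plain (non-orthogonal) matching pursuit are simplifying assumptions relative to the paper's formulation.

```python
def weighted_mp(y, atoms, weights, n_iter=2):
    """Weighted matching pursuit sketch for compressed beam-selection.

    y: measurement vector; atoms: list of unit-norm dictionary vectors
    (candidate beams); weights: per-atom priors, e.g. derived from sub-6
    GHz channel energy per direction. Each iteration selects the atom
    maximizing weight * |correlation| and subtracts its contribution, so
    higher-weighted beams are favored when correlations are comparable.
    """
    residual = list(y)
    support = []
    for _ in range(n_iter):
        best = None  # (score, atom index, correlation coefficient)
        for i, a in enumerate(atoms):
            corr = sum(r * x for r, x in zip(residual, a))
            score = weights[i] * abs(corr)
            if best is None or score > best[0]:
                best = (score, i, corr)
        _, idx, coeff = best
        support.append(idx)
        residual = [r - coeff * x for r, x in zip(residual, atoms[idx])]
    return support, residual
```

With uniform weights this reduces to ordinary matching pursuit; informative weights shrink the effective search space, which is the source of the training-overhead reduction reported in the abstract.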

Journal ArticleDOI
TL;DR: This paper investigates the resource allocation problem in device-to-device-based vehicular communications, based on slow fading statistics of channel state information (CSI), to alleviate signaling overhead for reporting rapidly varying accurate CSI of mobile links and proposes a suite of algorithms to address the performance-complexity tradeoffs.
Abstract: This paper investigates the resource allocation problem in device-to-device-based vehicular communications, based on slow fading statistics of channel state information (CSI), to alleviate signaling overhead for reporting rapidly varying accurate CSI of mobile links. We consider the case when each vehicle-to-infrastructure (V2I) link shares spectrum with multiple vehicle-to-vehicle (V2V) links. Leveraging the slow fading statistical CSI of mobile links, we maximize the sum V2I capacity while guaranteeing the reliability of all V2V links. We use graph partitioning tools to divide highly interfering V2V links into different clusters before formulating the spectrum sharing problem as a weighted 3-D matching problem. We propose a suite of algorithms, including a baseline graph-based resource allocation algorithm, a greedy resource allocation algorithm, and a randomized resource allocation algorithm, to address the performance-complexity tradeoffs. We further investigate resource allocation adaptation in response to slow fading CSI of all vehicular links and develop a low-complexity randomized algorithm.
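Of the three algorithms listed, the greedy one is the simplest to sketch: repeatedly commit the cluster/link pair with the highest remaining weight. The one-to-one assignment structure and the meaning of weights[c][v] (sum V2I capacity when V2V cluster c reuses V2I link v's spectrum) are illustrative assumptions, not the paper's exact 3-D matching formulation.

```python
def greedy_assign(weights):
    """One-to-one greedy assignment of V2V clusters to V2I spectrum.

    weights[c][v] is the capacity weight when cluster c reuses link v's
    spectrum; pairs that would violate a V2V reliability constraint can be
    encoded as float('-inf'). Pairs are visited in decreasing weight order
    and committed if both sides are still free. Returns the assignment
    dict {cluster: link} and the achieved total weight.
    """
    n_c, n_v = len(weights), len(weights[0])
    pairs = sorted(((weights[c][v], c, v)
                    for c in range(n_c) for v in range(n_v)), reverse=True)
    used_c, used_v = set(), set()
    assignment, total = {}, 0.0
    for w, c, v in pairs:
        if c not in used_c and v not in used_v and w != float("-inf"):
            used_c.add(c)
            used_v.add(v)
            assignment[c] = v
            total += w
    return assignment, total
```

Greedy assignment is not optimal in general (the optimal matching may sacrifice the single best pair), which is why the paper also studies graph-based and randomized alternatives to trade complexity against performance.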

Journal ArticleDOI
TL;DR: This paper proposes a hybrid content caching design that does not require the knowledge of content popularity and proposes practical and heuristic CU/BS caching algorithms to address a general caching scenario by inheriting the design rationale of the aforementioned performance-guaranteed algorithms.
Abstract: Most existing content caching designs require accurate estimation of content popularity, which can be challenging in the dynamic mobile network environment. Moreover, the emerging hierarchical network architecture enables us to enhance the content caching performance by opportunistically exploiting both cloud-centric and edge-centric caching. In this paper, we propose a hybrid content caching design that does not require the knowledge of content popularity. Specifically, our design optimizes the content caching locations, which can be original content servers, central cloud units (CUs), and base stations (BSs), where the design objective is to support the highest possible average requested content data rates subject to finite service latency. We fulfill this design by employing the Lyapunov optimization approach to tackle an NP-hard caching control problem with the tight coupling between CU caching and BS caching control decisions. Toward this end, we propose algorithms in three specific caching scenarios by exploiting the submodularity property of the sum-weight objective function and the hierarchical caching structure. Moreover, we prove that the proposed algorithms achieve finite content service delay for all arrival rates within a constant fraction of the capacity region using the Lyapunov optimization technique. Furthermore, we propose practical and heuristic CU/BS caching algorithms to address a general caching scenario by inheriting the design rationale of the aforementioned performance-guaranteed algorithms. Trace-driven simulation demonstrates that our proposed hybrid CU/BS caching algorithms outperform the general popularity-based caching algorithm and the independent caching algorithm in terms of average end-to-end service latency and backhaul/fronthaul load reduction ratios.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered.
Abstract: In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting the three packet loss probabilities as equal causes marginal power loss.

Journal ArticleDOI
TL;DR: A new channel estimation scheme for TDD/FDD massive MIMO systems by reconstructing uplink/downlink channel covariance matrices with the aid of array signal processing techniques, which is applicable to various PAS distributions.
Abstract: In this paper, we propose a new channel estimation scheme for TDD/FDD massive MIMO systems by reconstructing (sometimes also referred to as covariance computing or covariance fitting) uplink/downlink channel covariance matrices (CCMs) with the aid of array signal processing techniques. Specifically, the angle parameters and power angular spectrum (PAS) of channel are extracted from the instantaneous uplink channel state information (CSI). Then, the uplink CCM is reconstructed and can be used to improve the uplink channel estimation without any additional training cost. By virtue of angle reciprocity as well as PAS reciprocity between uplink and downlink channels, the downlink CCM could also be inferred with a similar approach even for the FDD massive MIMO systems. Then, the downlink instantaneous CSI can be obtained by training toward the dominant eigen-directions of each user. The proposed strategy is applicable to various PAS distributions. Numerical results are provided to demonstrate the superiority of the proposed methods over the existing ones.
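The covariance reconstruction step in this abstract can be sketched directly from its definition: sum weighted rank-one outer products of steering vectors over the estimated angles and powers, then extract the dominant eigen-direction toward which downlink training would be steered. A half-wavelength uniform linear array (ULA) and a discrete set of paths (in place of a continuous PAS) are simplifying assumptions.

```python
import cmath
import math

def ula_steering(theta, n_ant, spacing=0.5):
    """Steering vector of a half-wavelength ULA; theta from broadside."""
    return [cmath.exp(2j * math.pi * spacing * k * math.sin(theta))
            for k in range(n_ant)]

def reconstruct_ccm(angles, powers, n_ant):
    """Rebuild a channel covariance matrix from angle/PAS estimates:
    R = sum_k p_k a(theta_k) a(theta_k)^H, the covariance-fitting step
    of the abstract with discrete paths standing in for the PAS."""
    r = [[0j] * n_ant for _ in range(n_ant)]
    for theta, p in zip(angles, powers):
        a = ula_steering(theta, n_ant)
        for i in range(n_ant):
            for j in range(n_ant):
                r[i][j] += p * a[i] * a[j].conjugate()
    return r

def dominant_direction(r, iters=200):
    """Power iteration for the dominant eigenvector of R, i.e. the
    eigen-direction the downlink training would be steered toward."""
    n = len(r)
    v = [1.0 + 0j] * n
    for _ in range(iters):
        w = [sum(r[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in w))
        v = [x / norm for x in w]
    return v
```

For a single path the reconstructed CCM is rank one and the dominant eigen-direction coincides with that path's steering vector, which is the intuition behind training only along dominant eigen-directions in FDD massive MIMO.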