
Showing papers on "Throughput published in 2018"


Proceedings Article
15 Feb 2018
TL;DR: Deep Gradient Compression (DGC) employs momentum correction, local gradient clipping, momentum factor masking, and warm-up training to preserve accuracy during compression, achieving gradient compression ratios from 270x to 600x without losing accuracy.
Abstract: Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. In these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep Gradient Compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.

630 citations
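The core of DGC is local accumulation plus top-k sparsification of gradients, with momentum applied before accumulation and masked wherever an update is transmitted. Below is a minimal NumPy sketch of one worker-side step; the function and variable names are illustrative assumptions, warm-up training and local gradient clipping are omitted, and this is not the authors' code.

```python
import numpy as np

def dgc_step(grad, residual, velocity, momentum=0.9, sparsity=0.999):
    """One worker-side step of top-k gradient sparsification in the spirit of DGC.

    residual: gradient mass accumulated locally but not yet transmitted.
    velocity: local momentum accumulator; applying momentum before
              accumulation is DGC's momentum correction.
    """
    velocity = momentum * velocity + grad              # momentum correction
    residual = residual + velocity                     # local accumulation
    k = max(1, int(residual.size * (1.0 - sparsity)))  # e.g. keep top 0.1%
    thr = np.partition(np.abs(residual).ravel(), -k)[-k]
    mask = np.abs(residual) >= thr                     # coordinates to transmit
    sparse_update = np.where(mask, residual, 0.0)      # all that goes on the wire
    residual = np.where(mask, 0.0, residual)           # keep the rest for later
    velocity = np.where(mask, 0.0, velocity)           # momentum factor masking
    return sparse_update, residual, velocity

# toy usage on a 100k-parameter tensor
g = np.random.randn(100_000)
r = np.zeros_like(g)
v = np.zeros_like(g)
update, r, v = dgc_step(g, r, v)
print(f"transmitted {np.count_nonzero(update)} of {g.size} values")
```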


Journal ArticleDOI
TL;DR: In this article, the authors consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients through unreliable channels and formulate a discrete-time decision problem to find a transmission scheduling policy that minimizes the expected weighted sum AoI of the clients in the network.
Abstract: In this paper, we consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients through unreliable channels. The Age of Information (AoI), namely the amount of time that elapsed since the most recently delivered packet was generated, captures the freshness of the information. We formulate a discrete-time decision problem to find a transmission scheduling policy that minimizes the expected weighted sum AoI of the clients in the network. We first show that in symmetric networks, a greedy policy, which transmits the packet for the client with the highest current age, is optimal. For general networks, we develop three low-complexity scheduling policies: a randomized policy, a Max-Weight policy and a Whittle’s Index policy, and derive performance guarantees as a function of the network configuration. To the best of our knowledge, this is the first work to derive performance guarantees for scheduling policies that attempt to minimize AoI in wireless networks with unreliable channels. Numerical results show that both the Max-Weight and Whittle’s Index policies outperform the other scheduling policies in every configuration simulated, and achieve near optimal performance.

379 citations
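For intuition, the greedy policy that the paper proves optimal for symmetric networks is easy to simulate. The sketch below uses illustrative names and unit weights, and assumes a fresh packet is available each slot so a successful delivery resets a client's age to one; it is not the authors' code.

```python
import random

def greedy_aoi(num_clients, p_success, horizon, seed=0):
    """Each slot, transmit to the client with the highest current age;
    delivery succeeds with probability p_success (symmetric channels)."""
    rng = random.Random(seed)
    ages = [1] * num_clients
    avg_sum_age = 0.0
    for _ in range(horizon):
        i = max(range(num_clients), key=lambda j: ages[j])
        delivered = rng.random() < p_success
        # a delivered packet is fresh, so the chosen client's age resets to 1
        ages = [1 if (j == i and delivered) else a + 1
                for j, a in enumerate(ages)]
        avg_sum_age += sum(ages) / horizon
    return avg_sum_age

print(greedy_aoi(num_clients=4, p_success=0.8, horizon=100_000))
```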


Proceedings ArticleDOI
20 May 2018
TL;DR: An analytical characterization of the achievable throughput of three different communication modes, namely instantaneous transmission, delay-constrained transmission, and delay-tolerant transmission, is provided, and it is shown that the instantaneous transmission mode attains the highest throughput.
Abstract: In this paper, we propose an innovative spatial-modulation (SM) based full-duplex (FD) decode-and-forward (DF) relaying protocol where the energy-constrained dual-antenna relay is powered by the radio frequency (RF) energy from the single-antenna source using the time-switching (TS) architecture. In this system, either one or both of the relay antennas receive the energy signal from the source in the energy harvesting phase. In the information transmission phase, one of the two relay antennas is selected to be active to decode and forward the information transmitted from the source, and the other relay antenna receives the information from the source at the same time. In this way, the throughput of the information transmission between the relay and the destination can be significantly improved by the additional information mapped to the active antenna index, which consequently leads to the improvement of the overall system throughput. Since the current SM capacity solution is not in closed form, we propose two tight SM capacity upper bounds and present the solution for the optimal time split ratio that maximizes the system throughput according to the proposed upper bound. Monte Carlo simulations are conducted to verify the analysis and reveal the throughput gain of the proposed SM-FD relaying protocol in comparison with the conventional FD relaying protocol.

267 citations


Proceedings ArticleDOI
16 Apr 2018
TL;DR: This paper develops three low-complexity transmission scheduling policies—a randomized policy, a Max-Weight policy, and a Whittle's Index policy—that attempt to minimize AoI subject to minimum throughput requirements, and evaluates their performance against the optimal policy.
Abstract: Age of Information (AoI) is a performance metric that captures the freshness of the information from the perspective of the destination. The AoI measures the time that elapsed since the generation of the packet that was most recently delivered to the destination. In this paper, we consider a single-hop wireless network with a number of nodes transmitting time-sensitive information to a Base Station and address the problem of minimizing the Expected Weighted Sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes. We develop three low-complexity transmission scheduling policies that attempt to minimize AoI subject to minimum throughput requirements and evaluate their performance against the optimal policy. In particular, we develop a randomized policy, a Max-Weight policy and a Whittle's Index policy, and show that they are guaranteed to be within a factor of two, four and eight, respectively, away from the minimum AoI possible. In contrast, simulation results show that Max-Weight outperforms the other policies, both in terms of AoI and throughput, in every network configuration simulated, and achieves near optimal performance.

258 citations


Journal ArticleDOI
TL;DR: A new Q-learning-based transmission scheduling mechanism using deep learning for the CIoT is proposed to find an appropriate strategy for transmitting packets from different buffers through multiple channels so as to maximize the system throughput.
Abstract: Cognitive networks (CNs) are one of the key enablers for the Internet of Things (IoT), where CNs will play an important role in the future Internet in several application scenarios, such as healthcare, agriculture, environment monitoring, and smart metering. However, the IoT currently suffers from low packet transmission efficiency because the spectrum is crowded by the rapidly increasing popularity of various wireless applications. Hence, an IoT that exploits the advantages of cognitive technology, namely the cognitive radio-based IoT (CIoT), is a promising solution for IoT applications. A major challenge in CIoT is the packet transmission efficiency using CNs. Therefore, a new Q-learning-based transmission scheduling mechanism using deep learning for the CIoT is proposed to find an appropriate strategy for transmitting packets from different buffers through multiple channels so as to maximize the system throughput. A Markov decision process-based model is formulated to describe the state transformation of the system. A relay is used to transmit packets to the sink for the other nodes. To maximize the system utility in different system states, a reinforcement learning method, the Q-learning algorithm, is introduced to help the relay find the optimal strategy. In addition, the stacked auto-encoders deep learning model is used to establish the mapping between states and actions to accelerate the solution of the problem. Finally, the experimental results demonstrate that the new action selection method converges after a certain number of iterations. Compared with other algorithms, the proposed method transmits packets with less power consumption and packet loss.

240 citations
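The scheduling loop above rests on the standard tabular Q-learning update; a minimal skeleton follows, in which the environment interface (`step`), the toy two-channel example, and all names are illustrative assumptions. The stacked auto-encoder approximation of the Q-function described in the abstract is omitted.

```python
import random

def q_learning_schedule(n_states, n_actions, step, steps=5000,
                        alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning skeleton for a transmission-scheduling MDP:
    state = buffer/channel condition, action = which buffer/channel to serve.
    `step(s, a) -> (reward, next_state)` is a user-supplied environment."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        a = (rng.randrange(n_actions) if rng.random() < eps
             else max(range(n_actions), key=lambda x: Q[s][x]))
        r, s2 = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

env_rng = random.Random(1)
def toy_step(s, a):
    # two channels with different success rates; reward = delivered packet
    ok = env_rng.random() < (0.9 if a == 0 else 0.5)
    return (1.0 if ok else 0.0), 0      # single-state toy MDP
print(q_learning_schedule(n_states=1, n_actions=2, step=toy_step)[0])
```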


Journal ArticleDOI
TL;DR: Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network, while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control.
Abstract: In conventional terrestrial cellular networks, mobile terminals (MTs) at the cell edge often pose a performance bottleneck due to their long distances from the serving ground base station (GBS), especially in the hotspot period when the GBS is heavily loaded. This paper proposes a new hybrid network architecture that leverages the use of an unmanned aerial vehicle (UAV) as an aerial mobile base station, which flies cyclically along the cell edge to offload data traffic for cell-edge MTs. We aim to maximize the minimum throughput of all MTs by jointly optimizing the UAV’s trajectory, bandwidth allocation, and user partitioning. We first consider orthogonal spectrum sharing between the UAV and GBS, and then extend to spectrum reuse where the total bandwidth is shared by both the GBS and UAV with their mutual interference effectively avoided. Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network, while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control. Moreover, compared with the conventional small-cell offloading scheme, the proposed UAV offloading scheme is shown to achieve higher throughput, besides saving the infrastructure cost.

234 citations


Journal ArticleDOI
TL;DR: In this paper, a UAV-enabled orthogonal frequency division multiple access (OFDMA) network is considered, where a UAV is dispatched as the mobile base station (BS) to serve a group of users on the ground.
Abstract: The use of unmanned aerial vehicles (UAVs) as communication platforms is of great significance in future wireless networks, especially for on-demand deployment in temporary events and emergency situations. Although prior works have shown the performance improvement by exploiting the UAV’s mobility, they mainly focus on delay-tolerant applications. As delay requirements fundamentally limit the UAV’s mobility, it remains unknown whether the UAV is able to provide any performance gain in delay-constrained communication scenarios. Motivated by the above, we study, in this paper, a UAV-enabled orthogonal frequency-division multiple access (OFDMA) network where a UAV is dispatched as the mobile base station (BS) to serve a group of users on the ground. We consider a minimum-rate ratio (MRR) for each user, defined as the minimum instantaneous rate required over the average achievable throughput, to flexibly adjust the percentage of its delay-constrained data traffic. Under a given set of constraints on the users’ MRRs, we aim to maximize the minimum average throughput of all users by jointly optimizing the UAV trajectory and OFDMA resource allocation. First, we show that the max–min throughput in general decreases as the users’ MRRs become larger, which reveals a fundamental throughput-delay tradeoff in UAV-enabled communications. Next, we propose an iterative parameter-assisted block coordinate descent method to optimize the UAV trajectory and OFDMA resource allocation alternately, by applying successive convex optimization and Lagrange duality, respectively. Furthermore, an efficient and systematic UAV trajectory initialization scheme is proposed based on the simple circular trajectory. Finally, simulation results are provided to verify our theoretical findings and demonstrate the effectiveness of our proposed designs.

207 citations


Journal ArticleDOI
TL;DR: This work considers the incorporation of a globally centralized software-defined network (SDN) and edge computing (EC) in the IIoT and demonstrates that the proposed scheme outperforms related methods in terms of average time delay, goodput, throughput, PDD, and download time.
Abstract: In recent years, smart factory in the context of Industry 4.0 and the industrial Internet of Things (IIoT) has become a hot topic for both academia and industry. In an IIoT system, there is an increasing requirement for exchanging data flows with different delay requirements among different smart devices. However, there are few studies on this topic. To overcome the limitations of traditional methods and address this problem, we consider the incorporation of a globally centralized software-defined network (SDN) and edge computing (EC) in the IIoT. We propose an adaptive transmission architecture with SDN and EC for the IIoT. Then, according to their latency constraints, data streams are divided into two groups: 1) ordinary and 2) emergent streams. In the low-deadline situation, a coarse-grained transmission path algorithm finds all paths that meet the time constraints in the hierarchical Internet of Things (IoT). After that, by employing the path difference degree (PDD), an optimum routing path is selected considering the aggregation of time deadline, traffic load balance, and energy consumption. In the high-deadline situation, if the coarse-grained strategy cannot satisfy the time constraints, a fine-grained scheme is adopted to establish an effective transmission path using an adaptive power method to achieve low latency. Finally, the performance of the proposed strategy is evaluated by simulation. The results demonstrate that the proposed scheme outperforms related methods in terms of average time delay, goodput, throughput, PDD, and download time. Thus, the proposed method provides a better solution for IIoT data transmission.

204 citations


Posted Content
TL;DR: In this paper, the authors investigated the resource allocation algorithm design for multicarrier solar-powered UAV communication systems, where the UAV is powered by solar energy enabling sustainable communication services to multiple ground users.
Abstract: In this paper, we investigate the resource allocation algorithm design for multicarrier solar-powered unmanned aerial vehicle (UAV) communication systems. In particular, the UAV is powered by solar energy enabling sustainable communication services to multiple ground users. We study the joint design of the three-dimensional (3D) aerial trajectory and the wireless resource allocation for maximization of the system sum throughput over a given time period. As a performance benchmark, we first consider an offline resource allocation design assuming non-causal knowledge of the channel gains. The algorithm design is formulated as a mixed-integer non-convex optimization problem taking into account the aerodynamic power consumption, solar energy harvesting, a finite energy storage capacity, and the quality-of-service (QoS) requirements of the users. Despite the non-convexity of the optimization problem, we solve it optimally by applying monotonic optimization to obtain the optimal 3D trajectory and the optimal power and subcarrier allocation policy. Subsequently, we focus on online algorithm design which only requires real-time and statistical knowledge of the channel gains. The optimal online resource allocation algorithm is motivated by the offline scheme and entails a high computational complexity. Hence, we also propose a low-complexity iterative suboptimal online scheme based on successive convex approximation. Our results unveil the tradeoff between solar energy harvesting and power-efficient communication. In particular, the solar-powered UAV first climbs up to a high altitude to harvest a sufficient amount of solar energy and then descends again to a lower altitude to reduce the path loss of the communication links to the users it serves.

190 citations


Journal ArticleDOI
TL;DR: This paper proposes a new MAC layer—RS-LoRa—to improve the reliability and scalability of LoRa wide-area networks (LoRaWANs), implements it in NS-3, and demonstrates the benefit of RS-LoRa over the legacy LoRaWAN in terms of packet error ratio, throughput, and fairness.
Abstract: Providing low-power and long-range (LoRa) connectivity is the goal of most Internet of Things networks, e.g., LoRa, but keeping communication reliable is challenging. LoRa networks are vulnerable to the capture effect. Cell-edge nodes have a high chance of losing packets due to collisions, especially when high spreading factors (SFs) are used that increase time on air. Moreover, LoRa networks face the problem of scalability when they connect thousands of nodes that access the shared channels randomly. In this paper, we propose a new MAC layer—RS-LoRa—to improve the reliability and scalability of LoRa wide-area networks (LoRaWANs). The key innovation is a two-step lightweight scheduling: 1) a gateway schedules nodes in a coarse-grained manner by dynamically specifying the allowed transmission powers and SFs on each channel and 2) based on the coarse-grained scheduling information, a node determines its own transmission power, SF, and when and on which channel to transmit. Through the proposed lightweight scheduling, nodes are divided into different groups, and within each group, nodes use similar transmission power to alleviate the capture effect. The nodes are also guided to select different SFs to increase the network reliability and scalability. We have implemented RS-LoRa in NS-3 and evaluated its performance through extensive simulations. Our results demonstrate the benefit of RS-LoRa over the legacy LoRaWAN in terms of packet error ratio, throughput, and fairness. For instance, in a single-cell scenario with 1000 nodes, RS-LoRa can reduce the packet error ratio of the legacy LoRaWAN by nearly 20%.

187 citations
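To make the two-step scheduling concrete, here is a toy sketch: the gateway publishes per-channel groups of allowed transmission powers and SFs (step 1), and each node then picks its own channel, power, and SF from a group compatible with its link budget (step 2). The channel frequencies, power levels, and feasibility rule below are invented for illustration and do not come from the paper or its NS-3 implementation.

```python
import random

# step 1 (gateway, coarse-grained): allowed TX powers (dBm) and SFs per channel
COARSE_SCHEDULE = {
    868.1: {"powers": [14], "sfs": [7, 8]},    # near nodes: low SF, high power
    868.3: {"powers": [11], "sfs": [9, 10]},
    868.5: {"powers": [8],  "sfs": [11, 12]},  # far nodes grouped together
}

def node_pick(required_power_dbm, rng=random.Random(0)):
    """Step 2 (node, fine-grained): choose a channel whose group supports the
    node's required power, then a concrete power and SF within that group.
    Grouping nodes with similar powers alleviates the capture effect."""
    feasible = [(ch, g) for ch, g in COARSE_SCHEDULE.items()
                if max(g["powers"]) >= required_power_dbm]
    ch, group = rng.choice(feasible)
    return ch, max(group["powers"]), rng.choice(group["sfs"])

print(node_pick(required_power_dbm=10))   # e.g. (868.1, 14, 7)
```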


Journal ArticleDOI
TL;DR: A novel communication framework to enable CST in DSS systems by employing a power control-based SI mitigation scheme is proposed and a throughput performance analysis of this proposed framework is carried out.
Abstract: Full-duplex (FD) wireless technology enables a radio to transmit and receive on the same frequency band at the same time, and it is considered to be one of the candidate technologies for the fifth generation (5G) and beyond wireless communication systems due to its advantages, including potential doubling of the capacity and increased spectrum utilization efficiency. However, one of the main challenges of FD technology is the mitigation of strong self-interference (SI). Recent advances in different SI cancellation techniques, such as antenna cancellation, analog cancellation, and digital cancellation methods, have led to the feasibility of using FD technology in different wireless applications. Among potential applications, one important application area is dynamic spectrum sharing (DSS) in wireless systems particularly 5G networks, where FD can provide several benefits and possibilities such as concurrent sensing and transmission (CST), concurrent transmission and reception, improved sensing efficiency and secondary throughput, and the mitigation of the hidden terminal problem. In this direction, first, starting with a detailed overview of FD-enabled DSS, we provide a comprehensive survey of recent advances in this domain. We then highlight several potential techniques for enabling FD operation in DSS wireless systems. Subsequently, we propose a novel communication framework to enable CST in DSS systems by employing a power control-based SI mitigation scheme and carry out the throughput performance analysis of this proposed framework. Finally, we discuss some open research issues and future directions with the objective of stimulating future research efforts in the emerging FD-enabled DSS wireless systems.

Journal ArticleDOI
Rongfei Fan, Jiannan Cui, Song Jin, Kai Yang, Jianping An
TL;DR: The problems of UAV node placement and communication resource allocation are investigated jointly for a UAV relaying system for the first time and the global optimal solution is achieved.
Abstract: Utilizing an unmanned aerial vehicle (UAV) as a relay is an effective technical solution for wireless communication between ground terminals that are far apart or obstructed. In this letter, the problems of UAV node placement and communication resource allocation are investigated jointly for a UAV relaying system for the first time. Multiple communication pairs on the ground, with one rotary-wing UAV serving as relay, are considered. Transmission power, bandwidth, transmission rate, and the UAV’s position are optimized jointly to maximize the system throughput. The formulated optimization problem is non-convex. The global optimal solution is achieved by transforming it into a monotonic optimization problem.

Posted Content
TL;DR: Numerical results show the merits of the proposed approach, and in particular that the use of PIM increases the system throughput by at least 40%, without requiring any additional energy consumption.
Abstract: This paper investigates the use of Passive Intelligent Mirrors (PIM) to operate a multi-user MISO downlink communication. The transmit powers and the mirror reflection coefficients are designed for sum-rate maximization subject to individual QoS guarantees for the mobile users. The resulting problem is non-convex, and is tackled by combining alternating maximization with the majorization-minimization method. Numerical results show the merits of the proposed approach, and in particular that the use of PIM increases the system throughput by at least $40\%$, without requiring any additional energy consumption.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the energy-efficient power allocation and wireless backhaul bandwidth allocation in OFDMA heterogeneous small cell networks and proposed a near optimal iterative resource allocation algorithm to solve the resource allocation problem.
Abstract: The widespread application of wireless services and dense device access have triggered huge energy consumption. Because of environmental and financial considerations, energy-efficient design in wireless networks has become an inevitable trend. To the best of our knowledge, energy-efficient orthogonal frequency division multiple access (OFDMA) heterogeneous small cell optimization that comprehensively considers energy efficiency maximization, power allocation, wireless backhaul bandwidth allocation, and user quality of service is a novel approach and research direction that has not been investigated. In this paper, we study energy-efficient power allocation and wireless backhaul bandwidth allocation in OFDMA heterogeneous small cell networks. Different from existing resource allocation schemes that maximize throughput, the studied scheme maximizes energy efficiency by allocating both the transmit power of each small cell base station to users and the bandwidth for backhauling, according to the channel state information and the circuit power consumption. The problem is first formulated as a non-convex nonlinear programming problem and then decomposed into two convex subproblems. A near-optimal iterative resource allocation algorithm is designed to solve the resource allocation problem. A suboptimal low-complexity approach is also developed by exploring the inherent structure and properties of the energy-efficient design. Simulation results demonstrate the effectiveness of the proposed algorithms by comparison with existing schemes.
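The energy-efficiency objective here is a ratio of throughput to power, a classic fractional program. The paper solves it by decomposition into two convex subproblems; as a simpler, generic illustration of the same kind of objective, the sketch below applies Dinkelbach's algorithm (a standard fractional-programming tool, not the authors' method) to a single-link toy instance with invented parameters.

```python
import math

def rate(p, g=5.0):
    # Shannon spectral efficiency with channel gain g (bits/s/Hz)
    return math.log2(1.0 + g * p)

def dinkelbach(p_max=1.0, p_circuit=0.2, iters=20):
    """Maximize EE = rate(p) / (p_circuit + p) over transmit power p."""
    lam = 0.0                      # current energy-efficiency estimate
    p = 0.0
    for _ in range(iters):
        # inner problem: maximize rate(p) - lam * (p_circuit + p) over p;
        # a fine grid keeps the sketch simple (closed forms exist)
        p = max((k * p_max / 1000 for k in range(1001)),
                key=lambda q: rate(q) - lam * (p_circuit + q))
        lam = rate(p) / (p_circuit + p)
    return p, lam

print(dinkelbach())   # (EE-optimal power, bits/s/Hz per watt)
```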

Journal ArticleDOI
Haichao Wang, Guoru Ding, Feifei Gao, Jin Chen, Jinlong Wang, Le Wang
TL;DR: In this article, the authors present a vision of UAV-supported ultra dense networks (UDNs) and highlight efficient power control in UAV-supported UDNs by discussing the main design considerations and methods in a comprehensive manner.
Abstract: By means of network densification, ultra dense networks (UDNs) can efficiently broaden the network coverage and enhance the system throughput. In parallel, unmanned aerial vehicle (UAV) communications and networking have attracted increasing attention recently due to their high agility and numerous applications. In this article, we present a vision of UAV-supported UDNs. First, we present four representative scenarios to show the broad applications of UAV-supported UDNs in communications, caching, and energy transfer. Then we highlight the efficient power control in UAV-supported UDNs by discussing the main design considerations and methods in a comprehensive manner. Furthermore, we demonstrate the performance superiority of UAV-supported UDNs via case study simulations, compared to traditional fixed-infrastructure-based networks. In addition, we discuss the dominating technical challenges and open issues ahead.

Journal ArticleDOI
TL;DR: A quadratic energy-trading-based Stackelberg game, a linear energy-trading-based Stackelberg game, and a social welfare scheme are proposed, and the Stackelberg equilibrium for the formulated games and the optimal solution for the social welfare scheme are derived.
Abstract: This paper investigates a wireless powered sensor network, where multiple sensor nodes are deployed to monitor a certain external environment. A multiantenna power station (PS) provides power to these sensor nodes during the wireless energy transfer phase, and the sensor nodes then employ the harvested energy to transmit their own monitoring information to a fusion center during the wireless information transfer (WIT) phase. The goal is to maximize the system sum throughput of the sensor network, where two different scenarios are considered, i.e., the PS and the sensor nodes belong to the same or to different service operator(s). For the first scenario, we propose a global optimal solution to jointly design the energy beamforming and time allocation, and we further develop a closed-form solution for the proposed sum throughput maximization. For the second scenario, in which the PS and the sensor nodes belong to different service operators, energy incentives are required for the PS to assist the sensor network. Specifically, the sensor network needs to pay in order to purchase the energy services released by the PS to support WIT. In this case, this paper exploits this hierarchical energy interaction, which is known as energy trading. We propose a quadratic energy-trading-based Stackelberg game, a linear energy-trading-based Stackelberg game, and a social welfare scheme, and we derive the Stackelberg equilibrium for the formulated games and the optimal solution for the social welfare scheme. Finally, numerical results are provided to validate the performance of our proposed schemes.

Journal ArticleDOI
TL;DR: This paper transforms the original optimization problem for NOMA to an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm, and shows that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput.
Abstract: This paper studies energy-efficient resource allocation for a machine-to-machine enabled cellular network with nonlinear energy harvesting, focusing on two different multiple access strategies, namely nonorthogonal multiple access (NOMA) and time division multiple access (TDMA). Our goal is to minimize the total energy consumption of the network via joint power control and time allocation while taking into account circuit power consumption. For both NOMA and TDMA strategies, we show that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput, and that the energy consumption of each MTCD is a convex function with respect to the allocated transmission time. Based on the derived optimal conditions for the transmission power of MTCDs, we transform the original optimization problem for NOMA into an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm. Through an appropriate variable transformation, we also transform the original optimization problem for TDMA into an equivalent tractable problem, which can be solved iteratively. Numerical results verify the theoretical findings and demonstrate that NOMA consumes less total energy than TDMA in the low circuit power regime of MTCDs, while in the high circuit power regime TDMA achieves better network energy efficiency than NOMA.
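For reference, the throughput expressions underlying such NOMA/TDMA comparisons are standard: with uplink NOMA and successive interference cancellation at the base station, the rate of the k-th decoded MTCD is typically modeled as below. These are textbook expressions; the paper's exact model, which also includes nonlinear energy harvesting and circuit power, is richer.

```latex
R_k^{\mathrm{NOMA}} = B\log_2\!\Bigl(1+\frac{p_k g_k}{\sum_{j>k} p_j g_j+\sigma^2}\Bigr),
\qquad
R_k^{\mathrm{TDMA}} = \frac{t_k}{T}\,B\log_2\!\Bigl(1+\frac{p_k g_k}{\sigma^2}\Bigr),
```

where $B$ is the bandwidth, $p_k$ and $g_k$ are the transmit power and channel gain of MTCD $k$, $\sigma^2$ is the noise power, and $t_k/T$ is the fraction of the frame allocated to MTCD $k$ under TDMA.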

Journal ArticleDOI
TL;DR: In this article, a two-user downlink NOMA system with finite blocklength constraints is considered and a 1-D search algorithm is proposed to resolve the challenges mainly due to the achievable rate affected by the finite block length and the unguaranteed successive interference cancellation.
Abstract: This paper introduces downlink non-orthogonal multiple access (NOMA) into short-packet communications. NOMA has great potential to improve fairness and spectral efficiency with respect to orthogonal multiple access (OMA) for low-latency downlink transmission, thus making it attractive for the emerging Internet of Things. We consider a two-user downlink NOMA system with finite blocklength constraints, in which the transmission rates and power allocation are optimized. To this end, we investigate the trade-off among the transmission rate, decoding error probability, and the transmission latency measured in blocklength. Then, a 1-D search algorithm is proposed to resolve the challenges mainly due to the achievable rate affected by the finite blocklength and the unguaranteed successive interference cancellation. We also analyze the performance of OMA as a benchmark to fully demonstrate the benefit of NOMA. Our simulation results show that NOMA significantly outperforms OMA in terms of achieving a higher effective throughput subject to the same finite blocklength constraint, or incurring a lower latency to achieve the same effective throughput target. Interestingly, we further find that with the finite blocklength, the advantage of NOMA relative to OMA is more prominent when the effective throughput targets at the two users become more comparable.
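The rate-latency-reliability trade-off discussed above is commonly captured by the finite-blocklength normal approximation of Polyanskiy et al., which is the usual starting point for such analyses (the paper's exact achievable-rate expression may differ in details):

```latex
R(m,\epsilon) \;\approx\; \log_2(1+\gamma) \;-\; \sqrt{\frac{V(\gamma)}{m}}\,\frac{Q^{-1}(\epsilon)}{\ln 2},
\qquad
V(\gamma) \;=\; 1-\frac{1}{(1+\gamma)^2},
```

where $m$ is the blocklength, $\epsilon$ the decoding error probability, $\gamma$ the received SINR (after successive interference cancellation for the stronger user), and $Q^{-1}$ the inverse Gaussian Q-function; the rate penalty vanishes as $m \to \infty$.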

Journal ArticleDOI
TL;DR: This paper investigates the NOMA downlink relay-transmission, proposes an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU, and proposes a hybrid NOMA (HB-NOMA) relay that adaptively exploits the benefit of the NOMA relay and that of the interference-free TDMA relay.
Abstract: The emerging non-orthogonal multiple access (NOMA), which enables mobile users (MUs) to share the same frequency channel simultaneously, has been considered as a spectrum-efficient multiple access scheme to accommodate tremendous traffic growth in future cellular networks. In this paper, we investigate the NOMA downlink relay-transmission, in which the macro base station (BS) first uses NOMA to transmit to a group of relays, and all relays then use NOMA to transmit their respectively received data to an MU. Specifically, we propose an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU. Despite the non-convexity of the problem, we adopt the vertical decomposition and propose a layered algorithm to efficiently compute the optimal power allocation solution. Numerical results show that the proposed NOMA relay-transmission can increase the throughput by up to 30 percent compared with the conventional time division multiple access (TDMA) scheme, and we find that increasing the relays’ power capacity can increase the throughput gain of the NOMA relay against the TDMA relay. Furthermore, to improve the throughput under weak channel power gains, we propose a hybrid NOMA (HB-NOMA) relay that adaptively exploits the benefit of the NOMA relay and that of the interference-free TDMA relay. By using the throughput provided by the HB-NOMA relay for each individual MU, we study the multi-MUs scenario and investigate the multi-MUs scheduling problem over a long-term period to maximize the overall utility of all MUs. Numerical results demonstrate the performance advantage of the proposed multi-MUs scheduling that adopts the HB-NOMA relay-transmission.

Journal ArticleDOI
TL;DR: The performance analysis of the proposed offloading control scheme based on the SDNi-MEC server architecture shows that it achieves better throughput in both the cellular networking link and the V2V paths when the vehicle density is moderate.
Abstract: Data offloading plays an important role in addressing the mobile data explosion problem that occurs in cellular networks. This paper proposes a control scheme for offloading vehicular communication traffic in the cellular network to vehicle-to-vehicle (V2V) paths that can exist in vehicular ad hoc networks (VANETs). A software-defined network (SDN) inside the mobile edge computing (MEC) architecture, abbreviated as the SDNi-MEC server, is devised in this paper to tackle the complicated issues of VANET V2V offloading. Using the proposed SDNi-MEC architecture, each vehicle reports its contextual information to the context database of the SDNi-MEC server, and the SDN controller of the SDNi-MEC server calculates whether there is a V2V path between two vehicles that are currently communicating with each other through the cellular network. The proposed method: 1) uses each vehicle’s context; 2) adopts a centralized management strategy for calculation and notification; and 3) tries to establish a VANET routing path for paired vehicles that are currently communicating with each other using a cellular network. The performance analysis of the proposed offloading control scheme based on the SDNi-MEC server architecture shows that it achieves better throughput in both the cellular networking link and the V2V paths when the vehicle density is moderate.

Journal ArticleDOI
17 Sep 2018
TL;DR: The existing low-energy adaptive clustering hierarchy (LEACH) clustering protocol is modified by introducing a threshold limit for cluster head selection while simultaneously switching the power level between the nodes; the modified protocol outperforms the existing LEACH protocol.
Abstract: Wireless sensor networks (WSNs) have a wide range of applications in various fields. One of the most recent emerging applications is in the Internet of Things (IoT), which allows the inter-connection of different objects or devices through the Internet. However, limited battery power is a major concern in WSNs compared with mobile ad-hoc networks, as it affects the longevity of the network. Hence, much research has focused on minimising the energy consumption of WSNs. Designing a hierarchical clustering algorithm is one of numerous approaches to minimising the energy consumption of WSNs. In this study, the existing low-energy adaptive clustering hierarchy (LEACH) clustering protocol is modified by introducing a threshold limit for cluster head selection while simultaneously switching the power level between the nodes. The proposed modified LEACH protocol outperforms the existing LEACH protocol, with a 67% rise in throughput and nodes remaining alive up to 1750 rounds, which can be used to enhance the WSN lifetime. When compared with other energy-efficient protocols, the proposed algorithm is found to perform better in terms of stability period and network lifetime in different scenarios of area, energy, and node density.
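For context, in legacy LEACH a node $n$ elects itself cluster head in round $r$ when a uniform random draw in $[0,1)$ falls below the threshold below; the paper's modification adds a further threshold limit on this selection (plus power-level switching), whose exact form is not reproduced here:

```latex
T(n)=
\begin{cases}
\dfrac{p}{1-p\left(r \bmod \dfrac{1}{p}\right)}, & n\in G,\\[1.5ex]
0, & \text{otherwise},
\end{cases}
```

where $p$ is the desired fraction of cluster heads and $G$ is the set of nodes that have not served as cluster head in the last $1/p$ rounds, so every node serves exactly once per cycle of $1/p$ rounds in expectation.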

Posted Content
TL;DR: A principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology, and decision-making under uncertainty is sorely lacking; this article is a first step toward filling that void.
Abstract: Ensuring ultra-reliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this article is to take a first step toward filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a plethora of techniques and methodologies pertaining to the requirements of ultra-reliable and low-latency communication, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and highly reliable wireless networks.

Journal ArticleDOI
TL;DR: This article addresses the issue and discusses how to leverage the deep LSTM learning technique to make localized prediction of the traffic load at the UDN base station (i.e., the eNB) and proposes the appropriate action policy a priori to avoid/alleviate the congestion in an intelligent fashion.
Abstract: Recently, deep learning has emerged as a state-of-the-art machine learning technique with promising potential to drive significant breakthroughs in a wide range of research areas. The application of deep learning for network traffic control, however, remains immature due to the difficulty in uniquely characterizing the network traffic features as an appropriate input and output dataset to the learning structures. The network traffic features are anticipated to be even more dynamic and complex in the UDNs of the emerging 5G networks with high traffic demands coupled with beamforming and massive MIMO technologies. Therefore, it is critical for 5G network operators to carry out radio resource control in an efficient manner instead of adopting the simple conventional F/TDD. This is because the conventional uplink-downlink configuration change in the existing dynamic TDD method, typically used for resource assignment in beamforming and massive-MIMO-based UDNs, is prone to repeated congestion. In this article, we address this issue and discuss how to leverage the deep LSTM learning technique to make localized prediction of the traffic load at the UDN base station (i.e., the eNB). Based on localized prediction, our proposed algorithm executes the appropriate action policy a priori to avoid/alleviate the congestion in an intelligent fashion. Simulation results demonstrate that our proposal outperforms the conventional method in terms of packet loss rate, throughput, and MOS.
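As a minimal illustration of the localized load prediction the article builds on, the sketch below trains a small LSTM with tf.keras to predict the next-slot load from a sliding window of past loads. The synthetic sinusoidal trace, window length, layer sizes, and features are invented here; the article's actual model and action policy are not reproduced.

```python
import numpy as np
import tensorflow as tf

window = 24                                    # past slots fed to the predictor
t = np.arange(2000, dtype=np.float32)
# synthetic daily-periodic load trace standing in for real eNB measurements
load = 0.5 + 0.4 * np.sin(2 * np.pi * t / 96) + 0.05 * np.random.randn(t.size)

# sliding windows: X has shape (samples, window, 1), y is the next-slot load
X = np.stack([load[i:i + window] for i in range(load.size - window)])[..., None]
y = load[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),                  # next-slot load estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

next_load = model.predict(X[-1:], verbose=0)[0, 0]
print("predicted next load:", next_load)
```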

Posted Content
TL;DR: In this paper, the authors revisited some fundamental tradeoffs in UAV-enabled communication and trajectory design, and showed that communication throughput, delay, and (propulsion) energy consumption can be traded off among each other by adopting different UAV trajectory designs, which sheds new light on their traditional tradeoff in terrestrial communication.
Abstract: The use of unmanned aerial vehicles (UAVs) as aerial communication platforms is of high practical value for future wireless systems such as 5G, especially for swift and on-demand deployment in temporary events and emergency situations. Compared to traditional terrestrial base stations (BSs) in cellular network, UAV-mounted aerial BSs possess stronger line-of-sight (LoS) links with the ground users due to their high altitude as well as high and flexible mobility in three-dimensional (3D) space, which can be exploited to enhance the communication performance. On the other hand, unlike terrestrial BSs that have reliable power supply, aerial BSs in practice have limited on-board energy, but require significant propulsion energy to stay airborne and support high mobility. Motivated by the above new considerations, this article aims to revisit some fundamental tradeoffs in UAV-enabled communication and trajectory design. Specifically, it is shown that communication throughput, delay, and (propulsion) energy consumption can be traded off among each other by adopting different UAV trajectory designs, which sheds new light on their traditional tradeoffs in terrestrial communication. Promising directions for future research are also discussed.

Journal ArticleDOI
TL;DR: The experimental results show that DROM has good convergence and effectiveness and provides better routing configurations than existing solutions to improve network performance, such as reducing delay and improving throughput.
Abstract: This paper proposes DROM, a deep reinforcement learning mechanism for Software-Defined Networks (SDN) to achieve universal and customizable routing optimization. DROM simplifies network operation and maintenance by improving network performance, such as delay and throughput, with a black-box optimization in continuous time. We evaluate DROM with experiments. The experimental results show that DROM has good convergence and effectiveness and provides better routing configurations than existing solutions to improve network performance, such as reducing delay and improving throughput.

Journal ArticleDOI
TL;DR: A dynamic network slicing scheme for multitenant H-CRANs is proposed, which takes into account tenants’ priority, baseband resources, fronthaul and backhaul capacities, quality of service (QoS), and interference.
Abstract: Multitenant cellular network slicing has been gaining huge interest recently. However, it is not well explored under the heterogeneous cloud radio access network (H-CRAN) architecture. This paper proposes a dynamic network slicing scheme for multitenant H-CRANs, which takes into account tenants’ priority, baseband resources, fronthaul and backhaul capacities, quality of service (QoS), and interference. The framework of the network slicing scheme consists of an upper level, which manages admission control, user association, and baseband resource allocation, and a lower level, which performs radio resource allocation among users. Simulation results show that the proposed scheme can achieve higher network throughput, fairness, and QoS performance compared with several baseline schemes.

Journal ArticleDOI
TL;DR: A joint link adaptation and resource allocation policy is proposed that dynamically adjusts the block error probability of URLLC small payload transmissions in accordance with the instantaneous experienced load per cell, and the conditions most appropriate for dynamic multiplexing of URLLC and eMBB traffic in the upcoming 5G systems are identified.
Abstract: This paper presents solutions for efficient multiplexing of ultra-reliable low-latency communications (URLLC) and enhanced mobile broadband (eMBB) traffic on a shared channel. This scenario presents multiple challenges in terms of radio resource scheduling, link adaptation, and inter-cell interference, which are identified and addressed throughout this paper. We propose a joint link adaptation and resource allocation policy that dynamically adjusts the block error probability of URLLC small payload transmissions in accordance with the instantaneous experienced load per cell. Extensive system-level simulations of the downlink performance show promising gains of this technique, reducing the URLLC latency from 1.3 to 1 ms at the 99.999th percentile, with less than 10% degradation of the eMBB throughput performance as compared with conventional scheduling policies. Moreover, an exhaustive sensitivity analysis is conducted to determine the URLLC and eMBB performance under different offered loads, URLLC payload sizes, and link adaptation and scheduling strategies. The presented results give valuable insights on the maximum URLLC offered traffic load that can be tolerated while still satisfying the URLLC requirements, as well as what conditions are more appropriate for dynamic multiplexing of URLLC and eMBB traffic in the upcoming 5G systems.

Journal ArticleDOI
TL;DR: Numerical results reveal that the cooperative relay strategy of the backscatter radios significantly improves the throughput performance, and an iterative algorithm with reduced complexity and communication overhead is proposed to decompose the original problem into two sub-problems.
Abstract: The integration of wireless power transfer (WPT) with low-power backscatter communications provides a promising way to sustain battery-less wireless networks. In this paper, we consider a backscatter communication network wirelessly powered by a power beacon station (PBS). Each backscatter radio uses the harvested energy to power its data transmissions, in which some other radios can help as wireless relays with an aim to improve throughput performance by cooperative transmission. Under this setting, we formulate a throughput maximization problem to jointly optimize WPT and the relay strategy of the backscatter radios. An iterative algorithm with reduced complexity and communication overhead is proposed to decompose the original problem into two sub-problems distributed at the PBS and the backscatter receiver. Moreover, we take uncertain channel information into consideration and formulate robust counterparts of the throughput maximization problem when either the backscatter or relay channel is subject to estimation errors. The difficulty of the robust counterpart lies in the coupling of the PBS’ power allocation and relay strategy in matrix inequalities, which is addressed by alternating optimization with guaranteed convergence. Numerical results reveal that the cooperative relay strategy of the backscatter radios significantly improves the throughput performance.

Proceedings ArticleDOI
16 Mar 2018
TL;DR: A new hierarchical 5G Next generation VANET architecture is proposed to integrate the centralization and flexibility of Software Defined Networking and Cloud-RAN, with 5G communication technologies, to effectively allocate resources with a global view.
Abstract: The growth of the technical revolution towards 5G next generation networks is expected to meet various communication requirements of future Intelligent Transportation Systems (ITS). Motivated by consumer needs for a variety of ITS applications, bandwidth, high speed, and ubiquity, researchers are currently exploring different network architectures and techniques which could be employed in next generation ITS. To provide flexible network management, control, and high resource utilization in Vehicular Ad-hoc Networks (VANETs) on a large scale, a new hierarchical 5G next generation VANET architecture is proposed. The key idea of this holistic architecture is to integrate the centralization and flexibility of Software Defined Networking (SDN) and Cloud-RAN (CRAN) with 5G communication technologies, to effectively allocate resources with a global view. Moreover, a fog computing framework (comprising zones and clusters) has been proposed at the edge to avoid frequent handovers between vehicles and RSUs. The transmission delay, throughput, and control overhead on the controller are analyzed and compared with other architectures. Simulation results indicate reduced transmission delay and minimized control overhead on controllers. Moreover, the throughput of the proposed system is also improved.

Journal ArticleDOI
TL;DR: Performance evaluations based on synthetic and real trace simulations show that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency compared with existing solutions.
Abstract: Due to the heterogeneous and resource-constrained characteristics of the Internet of Things (IoT), guaranteeing ubiquitous network connectivity is challenging. Although LTE cellular technology is the most promising solution for providing network connectivity in the IoT, information diffusion over the cellular network not only occupies its saturating bandwidth, but also incurs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, has been designed for low-power massive devices, with the intention of refarming the wireless spectrum and increasing network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperation among user equipments (UEs) and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively, following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE cell, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition of the UEs by jointly considering relay method selection and spectrum reuse for NB-IoT. Since the formulated optimization problem has high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations show that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency compared with existing solutions.