
Showing papers on "Channel allocation schemes published in 2019"


Journal ArticleDOI
TL;DR: This paper studies an unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) architecture, in which a UAV roaming around the area may serve as a computing server to help user equipment (UEs) compute their tasks or act as a relay for further offloading their computation tasks to the access point (AP).
Abstract: In this paper, we study an unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) architecture, in which a UAV roaming around the area may serve as a computing server to help user equipment (UEs) compute their tasks or act as a relay for further offloading their computation tasks to the access point (AP). We aim to minimize the weighted sum energy consumption of the UAV and UEs subject to the task constraints, the information-causality constraints, the bandwidth allocation constraints, and the UAV's trajectory constraints. The required optimization is nonconvex, and an alternating optimization algorithm is proposed to jointly optimize the computation resource scheduling, bandwidth allocation, and the UAV's trajectory in an iterative fashion. The numerical results demonstrate that a significant performance gain is obtained over conventional methods, and the advantages of the proposed algorithm are more prominent when handling computation-intensive, latency-critical tasks.
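
The alternating (block-coordinate) optimization pattern the paper relies on is easy to illustrate in isolation. Below is a minimal sketch on a toy bi-convex objective, where each block update has a closed form; the objective is an illustrative stand-in, not the paper's actual scheduling/bandwidth/trajectory subproblems.

```python
# Alternating optimization on a toy bi-convex problem:
#   min_{x,y} (x*y - 3)^2 + 0.1*(x^2 + y^2)
# The objective is convex in x for fixed y and vice versa, so each block
# update below is the exact minimizer of its subproblem.
x, y = 1.0, 1.0
for _ in range(100):
    x = 3.0 * y / (y * y + 0.1)   # argmin over x with y fixed
    y = 3.0 * x / (x * x + 0.1)   # argmin over y with x fixed
print(x, y, (x * y - 3.0) ** 2 + 0.1 * (x * x + y * y))  # converges near x = y ~ 1.70
```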

231 citations


Journal ArticleDOI
TL;DR: This paper proves that a global optimal solution can be found in a convex subset of the original feasible region for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short.
Abstract: In this paper, we aim to find the global optimal resource allocation for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short. The achievable rate in the short blocklength regime is neither convex nor concave in bandwidth and transmit power. Thus, a non-convex constraint is inevitable in optimizing resource allocation for URLLC. We first consider a general resource allocation problem with constraints on the transmission delay and decoding error probability, and prove that a global optimal solution can be found in a convex subset of the original feasible region. Then, we illustrate how to find the global optimal solution for an example problem, where the energy efficiency (EE) is maximized by optimizing antenna configuration, bandwidth allocation, and power control under the latency and reliability constraints. To improve the battery life of devices and the EE of communication systems, both uplink and downlink resources are optimized. The simulation and numerical results validate the analysis and show that the circuit power dominates the total power consumption when the average inter-arrival time between packets is much larger than the required delay bound. Therefore, optimizing antenna configuration and bandwidth allocation without power control leads to only minor EE loss.
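
The short-blocklength rate that drives the non-convexity is usually taken from the normal approximation of Polyanskiy et al., which subtracts a dispersion penalty from the Shannon rate. A sketch, with all link parameters (received power, noise density, frame duration, reliability target) assumed for illustration:

```python
import numpy as np
from scipy.stats import norm

def achievable_rate(bw_hz, p_rx_w, n0_w_per_hz, duration_s, eps):
    """Normal approximation: R = B*[log2(1+SNR) - sqrt(V/(B*T)) * Qinv(eps)/ln 2]."""
    snr = p_rx_w / (n0_w_per_hz * bw_hz)
    dispersion = 1.0 - 1.0 / (1.0 + snr) ** 2        # channel dispersion V
    penalty = np.sqrt(dispersion / (bw_hz * duration_s)) * norm.isf(eps) / np.log(2)
    return bw_hz * (np.log2(1.0 + snr) - penalty)    # bits per second

# Sweeping bandwidth shows the rate is neither convex nor concave in it,
# which is exactly why the paper needs the convex-subset argument.
for bw in (1e4, 1e5, 1e6):
    print(f"{bw:9.0f} Hz -> {achievable_rate(bw, 1e-13, 4e-21, 1e-4, 1e-7):.3e} bit/s")
```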

166 citations


Journal ArticleDOI
TL;DR: A deep reinforcement learning framework is proposed to allocate resources to users in a near-optimal way, exploiting an attention-based neural network (ANN) to perform channel assignment in the multi-carrier NOMA system.
Abstract: Non-orthogonal multiple access (NOMA) has been considered as a significant candidate technique for the next generation wireless communication to support high throughput and massive connectivity. It allows different users to be multiplexed on one channel through applying superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver. To fully utilize the benefit of the NOMA technique, the key problem is how to optimally allocate resources, such as power and channels, to users to maximize the system performance. There have been some existing works on the power allocation for the single-carrier NOMA system. However, how to optimally assign channels in the multi-carrier NOMA system is still unclear. In this paper, we propose a deep reinforcement learning framework to allocate resources to users in a near optimal way. Specifically, we exploit an attention-based neural network (ANN) to perform the channel assignment. Simulation results show that the proposed framework can achieve better system performance, compared with the state-of-the-art approaches.
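
As a rough, self-contained illustration of attention-based channel scoring (the paper's actual network architecture and training loop are not reproduced here; the feature dimensions and random embeddings below are pure assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_users, n_channels, d = 4, 3, 8
user_feat = rng.normal(size=(n_users, d))     # stand-in embeddings of user CSI
chan_feat = rng.normal(size=(n_channels, d))  # stand-in embeddings of channel state

# Scaled dot-product attention: each user attends over the channels, and the
# attention weights are read out as soft channel-assignment probabilities.
scores = user_feat @ chan_feat.T / np.sqrt(d)
assign_prob = softmax(scores, axis=1)
print(assign_prob.round(2), assign_prob.argmax(axis=1))
```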

146 citations


Journal ArticleDOI
TL;DR: This paper studies the UAV access selection and BS bandwidth allocation problems in a UAV-assisted IoT communication network, for which a hierarchical game framework is presented, and the proposed algorithms are evaluated via simulations.
Abstract: The growing popularity of the Internet of Things (IoT), with its requirements of high reliability and low latency, has imposed huge challenges on current cellular networks. Using small aerial platforms such as unmanned aerial vehicles (UAVs) to assist terrestrial base stations (BSs) is attractive, but it is often hampered by the lack of UAV access selection and resource allocation algorithms that balance network performance against service cost. In this paper, we study the UAV access selection and BS bandwidth allocation problems in a UAV-assisted IoT communication network, for which a hierarchical game framework is presented. The complicated interactions among UAVs and BSs, as well as their cyclic dependency, are studied by applying Stackelberg game theory. Within this framework, the access competition among groups of UAVs is formulated as a dynamic evolutionary game and solved by an evolutionary equilibrium. On the other hand, the problem of how much bandwidth the BSs should allocate to the UAVs is modeled as a noncooperative game, for which the existence and uniqueness of the Nash equilibrium are analyzed. Stochastic geometry tools are used to model the position distribution of network nodes and to derive the payoff expressions, taking into account different network parameters. The analytical results for the proposed hierarchical game model and the corresponding solutions are evaluated via simulations, which verify both the validity of our analysis and the effectiveness of the proposed algorithms.

107 citations


Journal ArticleDOI
TL;DR: By exploiting device-to-device (D2D) communication to enable user collaboration and reduce the edge server's load, this paper investigates the D2D-assisted and NOMA-based MEC system and proposes a scheduling-based joint computing resource, power, and channel allocation algorithm to achieve the joint optimization.
Abstract: Mobile edge computing (MEC) and non-orthogonal multiple access (NOMA) have been considered as the promising techniques to address the explosively growing computation-intensive applications and accomplish the requirement of massive connectivity in the fifth-generation networks. Moreover, since the computing resources of the edge server are limited, the computing load of the edge server needs to be effectively alleviated. In this paper, by exploiting device-to-device (D2D) communication for enabling user collaboration and reducing the edge server's load, we investigate the D2D-assisted and NOMA-based MEC system. In order to minimize the weighted sum of the energy consumption and delay of all users, we jointly optimize the computing resource, power, and channel allocations. Regarding the computing resource allocation, we propose an adaptive algorithm to find the optimal solution. Regarding the power allocation, we present a novel power allocation algorithm based on the particle swarm optimization (PSO) for the single NOMA group comprised of multiple cellular users. Then, for the matching group comprised of a NOMA group and D2D pairs, we theoretically derive the interval of optimal power allocation and propose a PSO-based algorithm to solve it. Regarding the channel allocation, we propose a one-to-one matching algorithm based on the Pareto improvement and swapping operations and extend the one-to-one matching algorithm to a many-to-one matching scenario. Finally, we propose a scheduling-based joint computing resource, power, and channel allocations algorithm to achieve the joint optimization. The simulation results show that the proposed solution can effectively reduce the weighted sum of the energy consumption and delay of all users.
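
A generic particle swarm optimization loop of the kind the power-allocation step builds on is sketched below; the two-user uplink NOMA cost (channel gains, rate targets, decoding order, penalty weights) is an assumed toy objective, not the paper's weighted energy-plus-delay expression.

```python
import numpy as np

def pso_minimize(cost, dim, lo, hi, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO: particles track personal bests and the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(cost, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, cost(g)

# Toy uplink NOMA group: minimize total power with per-user rate targets
# enforced by a penalty; the receiver decodes user 0 first (SIC).
h = np.array([1.0, 0.25])
def cost(p):
    r0 = np.log2(1 + h[0] * p[0] / (h[1] * p[1] + 1.0))  # sees user 1 as interference
    r1 = np.log2(1 + h[1] * p[1])                        # decoded interference-free
    return p.sum() + 1e3 * (max(0.0, 0.5 - r0) + max(0.0, 0.5 - r1))

print(pso_minimize(cost, dim=2, lo=0.0, hi=10.0))
```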

85 citations


Journal ArticleDOI
Deyu Zhang, Ying Qiao, Liang She, Ruyin Shen, Ju Ren, Yaoxue Zhang
TL;DR: A Lyapunov-based framework is proposed to decompose the problem into different time scales, based on which an online two-time-scale resource allocation algorithm is developed that determines the harvested and purchased energy on a large time scale, and the channel allocation and data collection on a small time scale.
Abstract: It is expected that billions of objects will be connected through sensors and embedded devices for pervasive intelligence in the coming era of the Internet of Things (IoT). However, the performance of such ubiquitous interconnection highly depends on the supply of network resources in terms of both energy and spectrum. To liberate IoT devices from this resource deficiency, we consider a green IoT network in which the IoT devices transmit data to a fusion node over multihop relaying. To achieve sustainable operation, IoT devices obtain energy from both ambient energy sources and the power grid, while opportunistically accessing the licensed spectrum for data transmission. We formulate a stochastic problem to optimize the network utility minus the cost of on-grid energy purchasing. The problem formulation takes into account the different granularities at which the harvested energy, power price, and primary user activities change. To address the problem, we propose a Lyapunov-based framework to decompose the problem into different time scales, based on which an online two-time-scale resource allocation algorithm is developed that determines the harvested and purchased energy on a large time scale, and the channel allocation and data collection on a small time scale. Furthermore, we analyze the data buffer and energy buffer sizes required to support the proposed algorithm. Extensive simulation results validate the correctness of the analysis and the efficiency of the proposed algorithm.
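
The flavor of the Lyapunov technique can be shown with a single-queue drift-plus-penalty loop: each slot, pick the control that minimizes queue drift minus V times utility. Everything below (rate curve, arrival and harvest statistics, the energy price, the one-time-scale simplification) is assumed for illustration; the paper's algorithm additionally splits decisions across two time scales.

```python
import numpy as np

rng = np.random.default_rng(1)
V = 50.0          # drift-plus-penalty weight: larger V favors utility over backlog
price = 2.0       # unit cost of spent energy (stand-in for on-grid purchase cost)
Q, E = 0.0, 5.0   # data queue (bits) and energy buffer (J), illustrative units

for t in range(8):
    arrivals = rng.uniform(0, 3)          # data sensed this slot
    harvest = rng.uniform(0, 2)           # energy harvested this slot
    # Greedily minimize the drift-plus-penalty bound over candidate energies:
    #     -V * (rate - price*e) + Q * (arrivals - rate)
    e_cand = np.linspace(0.0, min(E, 2.0), 41)
    rate = np.log2(1.0 + 4.0 * e_cand)    # toy rate-energy curve
    dpp = -V * (rate - price * e_cand) + Q * (arrivals - rate)
    e = float(e_cand[dpp.argmin()])
    Q = max(Q + arrivals - np.log2(1.0 + 4.0 * e), 0.0)
    E = max(E - e, 0.0) + harvest
    print(f"t={t}  e={e:.2f}  Q={Q:.2f}  E={E:.2f}")
```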

70 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed LBLP can work with the existing routing protocols to improve the network throughput substantially and balance the load even when the switching delay is large.
Abstract: Cooperative channel allocation and scheduling are key issues in wireless mesh networks with multiple interfaces and multiple channels. In this paper, we propose a load balance link layer protocol (LBLP) aiming to cooperatively manage the interfaces and channels to improve network throughput. In LBLP, an interface can work in a sending or receiving mode. For the receiving interfaces, the channel assignment is proposed considering the number, position and status of the interfaces, and a task allocation algorithm based on the Huffman tree is developed to minimize the mutual interference. A dynamic link scheduling algorithm is designed for the sending interfaces, making the tradeoff between the end-to-end delay and the interface utilization. A portion of the interfaces can adjust their modes for load balancing according to the link status and the interface load. Simulation results show that the proposed LBLP can work with the existing routing protocols to improve the network throughput substantially and balance the load even when the switching delay is large.
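
The Huffman-tree construction underlying the task allocation step is the classic greedy merge of the two smallest weights; here the weights are assumed to stand for per-interface load/interference metrics, which is only one plausible reading of how LBLP applies it.

```python
import heapq
import itertools

def huffman_tree(weights):
    """Repeatedly merge the two lightest subtrees; returns a nested tuple."""
    tie = itertools.count()  # tie-breaker so the heap never compares subtrees
    heap = [(w, next(tie), f"if{i}") for i, w in enumerate(weights)]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tie), (left, right)))
    return heap[0][2]

print(huffman_tree([5, 9, 12, 13, 16, 45]))  # lighter weights end up deeper
```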

65 citations


Journal ArticleDOI
Liu Sihan, Yucheng Wu, Liang Li, Xiaocui Liu, Weiyang Xu
TL;DR: A power control algorithm based on the Lambert W function is proposed to maximize the energy efficiency of a single D2D pair, and simulation results show that the proposed algorithm not only guarantees the transmission rate of cellular users but also improves the energy efficiency of the system and the D2D pairs.
Abstract: Large numbers of mobile multimedia terminals are a prominent feature of smart cities. Device-to-device (D2D) communication exploits the limited bandwidth resources of cellular networks to accommodate more mobile devices. However, when D2D pairs reuse cellular users' channels, serious interference leads to wasted energy, which conflicts with the requirements of green communication. This paper focuses on maximizing the energy efficiency of D2D communication under quality-of-service constraints for both D2D pairs and cellular users. The formulated resource allocation problem is NP-hard and thus intractable in polynomial time. To make the problem tractable, we divide it into power control and channel allocation sub-problems. In particular, we propose a power control algorithm based on the Lambert W function to maximize the energy efficiency of a single D2D pair. The preference values of D2D pairs and cellular users are then calculated from the power control results. A channel allocation scheme based on the Gale–Shapley algorithm matches the two sides using these preference values, aiming to maximize the signal-to-interference-plus-noise ratio of cellular users and the energy efficiency of D2D pairs. The simulation results show that the proposed algorithm not only guarantees the transmission rate of cellular users but also improves the energy efficiency of the system and the D2D pairs.
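
For a single link with energy efficiency log(1 + g*p)/(p + P_c), the EE-optimal transmit power has a well-known closed form in the Lambert W function; the sketch below covers that textbook single-link case only (gain and circuit power are assumed values, and interference from other links is ignored).

```python
import numpy as np
from scipy.special import lambertw

def ee_optimal_power(g, p_circ):
    """Maximize ln(1 + g*p) / (p + p_circ); valid when g * p_circ > 1."""
    c = g * p_circ - 1.0
    x = c / np.real(lambertw(c / np.e))   # x = 1 + g * p_opt
    return (x - 1.0) / g

g, p_circ = 10.0, 1.0
p_opt = ee_optimal_power(g, p_circ)
# Sanity check against a brute-force grid search over p.
grid = np.linspace(1e-4, 10.0, 100_000)
p_grid = grid[np.argmax(np.log(1.0 + g * grid) / (grid + p_circ))]
print(p_opt, p_grid)  # both close to 0.718
```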

57 citations


Journal ArticleDOI
TL;DR: This paper proposes a joint power and bandwidth allocation (JPBA) scheme for the sake of maximizing the energy efficiency of small cells under the constraint of a guaranteed quality-of-service requirement for the macro cell, and presents a new two-tier iterative algorithm to obtain the optimal solution.
Abstract: This paper investigates the problem of energy efficiency maximization (EEM) for the small cells that coexist with a macro cell in an underlay heterogeneous cellular network, where a macro base station and a number of small base stations transmit signals to a macro user and small users through their shared spectrum. We propose a joint power and bandwidth allocation (JPBA) scheme for the sake of maximizing the energy efficiency (EE) of small cells under the constraint of a guaranteed quality-of-service requirement for the macro cell. Considering that our formulated EEM-based JPBA (EEM-JPBA) problem is non-convex, we convert the original fractional problem into an equivalent subtractive form by adopting the Dinkelbach’s method, which is addressed through the augmented Lagrange multiplier approach. Moreover, a new two-tier iterative algorithm is presented to obtain the optimal solution of our EEM-JPBA scheme. Simulation results demonstrate that the proposed two-tier iterative algorithm can quickly converge to the optimal EE solution. In addition, it is shown that the proposed EEM-JPBA scheme significantly outperforms the conventional power and bandwidth allocation methods in terms of their EE performance.
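
Dinkelbach's method, which the paper uses to convert the fractional EE objective into subtractive form, is compact enough to sketch end-to-end on a single-link toy problem; the inner solver's closed form below is specific to this assumed objective, not to the EEM-JPBA problem itself.

```python
import numpy as np

def dinkelbach(num, den, solve_inner, tol=1e-9, max_iter=100):
    """Maximize num(x)/den(x) via max_x num(x) - lam*den(x), updating lam."""
    lam, x = 0.0, None
    for _ in range(max_iter):
        x = solve_inner(lam)
        if abs(num(x) - lam * den(x)) < tol:
            break
        lam = num(x) / den(x)
    return x, lam

# Toy EE problem: maximize log(1 + g*p) / (p + pc) over 0 <= p <= pmax.
g, pc, pmax = 10.0, 1.0, 10.0
num = lambda p: np.log(1.0 + g * p)
den = lambda p: p + pc
def solve_inner(lam):
    # Stationary point of log(1 + g*p) - lam*p is p = 1/lam - 1/g.
    p = pmax if lam <= 0 else 1.0 / lam - 1.0 / g
    return float(np.clip(p, 0.0, pmax))

print(dinkelbach(num, den, solve_inner))  # same optimum as a grid search
```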

54 citations


Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed scheme enhances the sum rate and provides guaranteed QoS for CR-NOMA-based femtocell users in comparison with existing conventional OMA-based femtocell techniques.
Abstract: In this paper, we propose a joint channel allocation and power control algorithm using cognitive radio non-orthogonal multiple access (CR-NOMA) for femtocell users (FUs). The aim is to maximize the sum rate of the FUs while guaranteeing quality of service (QoS). To provide guaranteed QoS for FUs, we use CR-NOMA at the femto base station (FBS). We then propose an algorithm that pairs strong and weak users according to their channel gain difference. With this pairing, the NOMA interference between paired users is reduced, which results in better channel utilization. Moreover, we distinguish between even and odd numbers of FUs in a femtocell so that QoS is also provided for weak users; for this purpose, OMA is used to deliver a predefined data rate via a greedy channel allocation algorithm. The power of each FBS is controlled by using the successive convex approximation for low complexity (SCALE) protocol with Karush-Kuhn-Tucker (KKT) conditions. Numerical results demonstrate that the proposed scheme enhances the sum rate and provides guaranteed QoS for CR-NOMA-based femtocell users in comparison with existing conventional OMA-based femtocell techniques.
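
The gain-difference pairing can be illustrated with the usual max-min rule: strongest with weakest, second strongest with second weakest, and so on, with any odd leftover user falling back to OMA as the abstract describes. Treating this specific rule as the paper's is an assumption; the code is only a sketch.

```python
import numpy as np

def pair_users(gains):
    """Pair users so each pair has a large channel-gain difference."""
    order = np.argsort(gains)[::-1]   # indices from strongest to weakest
    n = len(order)
    pairs = [(int(order[i]), int(order[n - 1 - i])) for i in range(n // 2)]
    leftover = int(order[n // 2]) if n % 2 else None  # odd count: serve via OMA
    return pairs, leftover

print(pair_users(np.array([0.9, 0.1, 0.5, 2.3, 1.4])))
# -> pairs like (strongest, weakest), (2nd strongest, 2nd weakest), one leftover
```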

53 citations


Journal ArticleDOI
TL;DR: A machine learning-based predictive dynamic bandwidth allocation algorithm, termed MLP-DBA, to address the uplink bandwidth contention and latency bottleneck of such networks, with results showing reduced uplink latency and packet drop ratio as compared to conventional predictive DBA algorithms.
Abstract: Human-to-machine (H2M) communications in emerging tactile-haptic applications are characterized by stringent low-latency transmission. To achieve low-latency transmissions over existing optical and wireless access networks, this paper proposes a machine learning-based predictive dynamic bandwidth allocation (DBA) algorithm, termed MLP-DBA, to address the uplink bandwidth contention and latency bottleneck of such networks. The proposed algorithm utilizes an artificial neural network (ANN) at the central office (CO) to predict H2M packet bursts arriving at each optical network unit wireless access point (ONU-AP), thereby enabling the uplink bandwidth demand of each ONU-AP to be estimated. As such, arriving packet bursts at the ONU-APs can be allocated bandwidth for transmission by the CO without having to wait for the following transmission cycles. Extensive simulations show that the ANN-based prediction of H2M packet bursts achieves >90% accuracy, significantly improving bandwidth demand estimation over existing prediction algorithms. MLP-DBA also makes adaptive bandwidth allocation decisions by classifying each ONU-AP according to its estimated bandwidth, with results showing reduced uplink latency and packet drop ratio as compared to conventional predictive DBA algorithms.

Journal ArticleDOI
TL;DR: A distributed joint source, routing, and channel selection scheme that can effectively improve the network aggregate throughput, as well as reduce delay and packet loss probability.
Abstract: A node can provide a file to other nodes after downloading the file or data from the Internet. When more than one node has obtained the same file, this is considered a multisource transmission, in which all these nodes can act as candidate providers (sources) and transmit the file to a new requesting node (destination) together. In cases where there is negligible or no interference, multisource transmission can improve the download throughput because of parallel transmissions through multiple paths. However, this improvement is not guaranteed due to wireless interference among different paths. Wireless interference can be alleviated by the multiradio and multichannel technique. Because the source and multipath routing selections interact with the channel assignment, the multisource transmission problem with multiradio and multichannel presents a significant challenge. In this paper, we propose a distributed joint source, routing, and channel selection scheme. The source selection issue can be concurrently solved via multipath finding. There are three sub-algorithms in our scheme, namely, an interference-aware routing algorithm, a channel assignment algorithm, and a local routing adjustment algorithm. The interference-aware routing algorithm is used to find paths sequentially and is jointly executed with the channel assignment algorithm. After finding a new path, the local routing adjustment algorithm may be executed to locally adjust the selected paths so as to further reduce wireless interference. Extensive simulations have been conducted to demonstrate that our algorithms can effectively improve the network aggregate throughput, as well as reduce delay and packet loss probability.

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed ECAA algorithm achieves near-optimal performance with much lower computational complexity and is superior to random channel assignment.
Abstract: In this paper, efficient resource allocation for the uplink transmission of wireless powered Internet of Things (IoT) networks is investigated. We adopt LoRa technology as an example in the IoT network, but this paper is still applicable to other communication technologies. Allocating limited resources, such as spectrum and energy, among a massive number of users faces critical challenges. We consider first grouping wireless powered IoT users into the available channels and then investigating power allocation for users grouped in the same channel to improve the network throughput. Specifically, the user grouping problem is formulated as a many-to-one matching game, obtained by treating IoT users and channels as selfish players belonging to two disjoint sets, each focused on maximizing its own utility. We then propose an efficient channel allocation algorithm (ECAA) with low complexity for user grouping. Additionally, a Markov decision process is used to model the unpredictable energy arrivals and channel-condition uncertainty at each user, and a power allocation algorithm is proposed to maximize the accumulated network throughput over a finite horizon of time slots. By doing so, channel access and dynamic power allocation decisions can be made locally at the IoT users. Numerical results demonstrate that our proposed ECAA algorithm achieves near-optimal performance with much lower computational complexity and is superior to random channel assignment. Moreover, simulations show that the distributed power allocation policy for each user achieves better performance than a centralized offline scheme.
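
The many-to-one matching at the heart of ECAA can be sketched with a generic deferred-acceptance loop: users propose to channels in preference order, and each channel keeps its quota of best proposers. The toy preference lists and quotas below are assumptions; in the paper they would be induced by throughput utilities.

```python
def deferred_acceptance(user_prefs, chan_prefs, quota):
    """Many-to-one deferred acceptance (user-proposing)."""
    matched = {c: [] for c in chan_prefs}
    next_choice = {u: 0 for u in user_prefs}
    free = list(user_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                        # user has exhausted all channels
        c = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[c].append(u)
        if len(matched[c]) > quota[c]:
            matched[c].sort(key=chan_prefs[c].index)   # channel keeps its favorites
            free.append(matched[c].pop())              # reject the worst proposer
    return matched

user_prefs = {"u1": ["c1", "c2"], "u2": ["c1", "c2"], "u3": ["c1", "c2"]}
chan_prefs = {"c1": ["u2", "u1", "u3"], "c2": ["u3", "u1", "u2"]}
print(deferred_acceptance(user_prefs, chan_prefs, quota={"c1": 1, "c2": 2}))
```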

Journal ArticleDOI
TL;DR: This paper proposes a Joint Power control and Channel allocation based on Reinforcement Learning (JPCRL) algorithm, combined with statistical CSI, to reduce interference adaptively, and shows that the proposed algorithm can effectively improve throughput compared with the existing scheme.
Abstract: In dense Wireless Local Area Networks (WLANs), high-density Access Points (APs) create severe interference that seriously degrades the user experience, resulting in lower throughput and poor connection quality. Owing to the heavy computation workload of sizable networking systems and the difficulty of estimating instantaneous Channel State Information (CSI), existing works struggle to solve the interference problem. In this paper, we propose a Joint Power control and Channel allocation based on Reinforcement Learning (JPCRL) algorithm, combined with statistical CSI, to reduce interference adaptively. First, we analyze the correlation between transmit power and channel, and formulate the interference optimization as a Mixed Integer Nonlinear Programming (MINLP) problem. Second, using the statistical CSI method, we take the power and channel states as the state and action spaces and the overall throughput increment as the reward function of Q-learning, and obtain the optimal joint optimization strategy through offline training. Moreover, since a periodically repeated reinforcement learning process consumes computing resources, we design an event-driven Q-learning mechanism that triggers online learning to refresh the optimal policy only when the event-driven condition is met, reducing the consumption of computing resources. The evaluation results show that the proposed algorithm can effectively improve throughput compared with the existing scheme.
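
A stripped-down Q-learning loop over joint (channel, power) actions conveys the idea; the stateless bandit-style simplification and the synthetic reward (standing in for the measured throughput increment under statistical CSI) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_power_levels = 3, 4
actions = [(c, p) for c in range(n_channels) for p in range(n_power_levels)]

Q = np.zeros(len(actions))   # one Q-value per joint (channel, power) action
alpha, eps = 0.1, 0.2        # learning rate, epsilon-greedy exploration rate

def throughput_increment(c, p):
    """Synthetic reward with a fixed optimum at channel 1, power level 2."""
    return -abs(c - 1) - abs(p - 2) + rng.normal(0.0, 0.1)

for _ in range(2000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q.argmax())
    r = throughput_increment(*actions[a])
    Q[a] += alpha * (r - Q[a])   # Q-learning update (no next-state term here)

print("learned (channel, power):", actions[int(Q.argmax())])  # expect (1, 2)
```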

Journal ArticleDOI
01 Mar 2019
TL;DR: An IoT-based architecture for two Greenhouses, built on Networked Control Systems on top of switched Ethernet and Wi-Fi, is proposed, together with the introduction of fault tolerance at the controller level.
Abstract: Wireless Sensor Networks have often been used in the context of Greenhouse architectures. In this paper, an architecture is proposed for two Greenhouses based on Networked Control Systems. This architecture is IoT-based and built on top of switched Ethernet and Wi-Fi. Some sensors in the proposed architecture require a one-second real-time deadline. Riverbed simulations show that there is zero packet loss and no over-delayed packets. An important contribution of this work is the design of a channel allocation scheme that prevents interference in this relatively large Greenhouse system. Another contribution of this work is the introduction of fault tolerance at the controller level. If one controller fails in one of the Greenhouses, the other controller automatically takes over the entire operation of the two-Greenhouse system. Riverbed simulations again show that this fault-tolerant system does not suffer any packet loss or over-delayed packets. Continuous Time Markov Chains are then developed to calculate the reliability as well as the steady-state availability of the two-Greenhouse system. The coverage parameter is taken into account. Finally, a case study is presented to quantitatively assess the advantage of fault tolerance in terms of downtime reduction; this is expected to be attractive especially in developing countries.

Journal ArticleDOI
TL;DR: This paper proposes a joint mode selection, channel allocation, and power control algorithm using particle swarm optimization to manage the interference and improve the overall throughput of cellular and D2D users such that the minimum data rate requirement of WiFi users is guaranteed.
Abstract: Device-to-Device communications (D2D) are being used to improve spectral efficiency and to reduce the load on the base station (BS) by reutilizing the licensed spectrum. However, the primary cellular users may face high interference from D2D users, and licensed spectrum is scarce due to the dense deployment of smart devices. To alleviate this problem, in this paper we extend D2D communication to the unlicensed band. In this scenario, the D2D users are allowed to reuse the licensed spectrum or share the unlicensed spectrum with the incumbent WiFi users. However, cellular and WiFi users experience high interference if an efficient interference management scheme is not used. In this paper, we propose a joint mode selection, channel allocation, and power control algorithm using particle swarm optimization to manage the interference and improve the overall throughput of cellular and D2D users such that the minimum data rate requirement of WiFi users is guaranteed. Through numerical simulations, we show that the proposed algorithm can significantly mitigate the interference and improve the throughput of the system.

Journal ArticleDOI
TL;DR: Real-time packet transmission at up to 50 Gb/s without packet loss over 48 hours is experimentally demonstrated through channel bonding in a single optical distribution network (ODN) with 20 km reach and a 1:64 split ratio.
Abstract: To meet the latency and bandwidth requirements of 5G mobile services and next-generation residential/business services, we present a high-speed and low-latency passive optical network (PON) enabled by time controlled-tactile optical access (TIC-TOC) technology. The TIC-TOC technology can support bandwidth-intensive as well as low-latency services for 5G mobile networks by using channel bonding and low-latency oriented dynamic bandwidth allocation (DBA). In order to confirm the technical feasibility of TIC-TOC, FPGA-based OLT and multi-speed ONU prototypes are implemented. We experimentally demonstrate real-time packet transmission at up to 50 Gb/s without packet loss over 48 hours through channel bonding in a single optical distribution network (ODN) with 20 km reach and a 1:64 split ratio. Less than 400 μs latency for 5G mobile services is also demonstrated. Furthermore, we confirm that the total throughput can be expanded up to 100 Gb/s by adding more channels.

Journal ArticleDOI
TL;DR: A novel routing scheme is developed that selects the best path along with the channel assignment such that the highest capacity is achieved and is presented as a near-optimal solution based on a sequential fixing procedure.
Abstract: Routing and channel assignment schemes for cognitive radio networks (CRNs) are often designed assuming the half-duplex (HD) transmission capability per user. However, recent advances in full-duplex (FD) communications and self-interference suppression techniques challenge the traditional HD transmission capability, in which FD communication can significantly improve spectrum utilization. In this work, we investigate the routing and channel assignment problem in FD-based CRNs. Two types of FD communications are considered. The first type only allows for simultaneous transmission and reception over different channels, while the second type allows for simultaneous transmission and reception over the same channel. Specifically, for a given cognitive radio (CR) source–destination pair, we first formulate the channel assignment problem for each path between the communicating pair as an optimization problem with the main objective of minimizing the number of distinct assigned channels for that path such that the number of simultaneous active hops across the path is maximized. We show that the optimization problem is a binary linear programming problem, which is, in general, nondeterministic polynomial time-hard. Thus, we present a near-optimal solution based on a sequential fixing procedure, where the binary variables are iteratively determined by solving a sequence of relaxed programs. Accordingly, we develop a novel routing scheme that selects the best path along with the channel assignment such that the highest capacity is achieved. Simulation results are provided, which show that a careful routing and channel assignment scheme for FD CRNs can significantly improve the network performance.
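
Sequential fixing itself is simple to sketch: solve the LP relaxation, fix the most nearly-integral free variable to its rounded value, and re-solve until all binaries are fixed. The toy constraint set below is an illustrative stand-in, not the paper's channel-assignment formulation.

```python
import numpy as np
from scipy.optimize import linprog

def sequential_fixing(c, A_ub, b_ub, n):
    """Near-optimal heuristic for: min c@x s.t. A_ub@x <= b_ub, x in {0,1}^n."""
    fixed = {}
    while len(fixed) < n:
        bounds = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(n)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success:
            return None
        free = [i for i in range(n) if i not in fixed]
        j = min(free, key=lambda i: min(res.x[i], 1.0 - res.x[i]))
        fixed[j] = int(round(res.x[j]))     # fix the most nearly-integral variable
    return np.array([fixed[i] for i in range(n)])

# Toy instance: choose at most 2 of 4 channels, with a coverage-style constraint.
c = np.array([3.0, 2.0, 4.0, 1.0])
A_ub = np.array([[1, 1, 1, 1],      # at most two channels selected
                 [-1, -1, 0, -1]])  # channels 0, 1, 3 must cover >= 1 in total
b_ub = np.array([2, -1])
print(sequential_fixing(c, A_ub, b_ub, n=4))  # -> [0 0 0 1]
```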

Journal ArticleDOI
TL;DR: In this paper, a mixed-integer non-linear joint optimization for the power control and allocation of resource blocks (RBs) to each group is proposed to optimize the energy efficiency of D2D wireless communications.
Abstract: In this paper, we optimize the energy efficiency (bits/s/Hz/J) of device-to-multi-device (D2MD) wireless communications. While the device-to-device scenario has been extensively studied as a means of improving spectral efficiency in cellular networks, the use of multicast communications opens the possibility of reusing spectrum resources inside the groups as well. The optimization problem is formulated as a mixed-integer non-linear joint optimization of the power control and the allocation of resource blocks (RBs) to each group. Our model explicitly considers resource sharing by allowing co-channel transmission over an RB (up to a maximum of $r$ transmitters) and/or transmission through $s$ different channels in each group. We use an iterative decomposition approach: first, matching theory is used to find a stable, possibly sub-optimal channel allocation; then the transmission power vectors in each group are optimized via fractional programming. In addition, within this framework, both the network energy efficiency and the max-min individual energy efficiency are investigated. We characterize the energy-efficient capacity region numerically, and our results show that the normalized energy efficiency is nearly optimal (above 90% of the network capacity) for a wide range of minimum-rate constraints. This performance is better than that of other matching-based techniques previously proposed.

Journal ArticleDOI
TL;DR: This paper proposes a novel DBA approach that employs deep learning to predict the bandwidth demand of end-users so that the control overhead due to the request-grant mechanism in NG-EPON is reduced, thereby increasing the bandwidth utilization.
Abstract: Over the last decade, Passive Optical Networks (PONs) have emerged as an ideal candidate for next-generation broadband access networks. Meanwhile, machine learning and more specifically deep learning has been regarded as a star technology for solving complex classification and prediction problems. Recent advances in hardware and cloud technologies offer all the necessary capabilities for employing deep learning to enhance Next-Generation Ethernet PON’s (NG-EPON) performance. In NG-EPON systems, control messages are exchanged in every cycle between the optical line terminal and optical network units to enable dynamic bandwidth allocation (DBA) in the upstream direction. In this paper, we propose a novel DBA approach that employs deep learning to predict the bandwidth demand of end-users so that the control overhead due to the request-grant mechanism in NG-EPON is reduced, thereby increasing the bandwidth utilization. The extensive simulations highlight the merits of the new DBA approach and offer insights for this new line of research.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a distributed channel allocation algorithm that each user runs and that converges to the optimal allocation while achieving an order-optimal regret of $O(\log T)$, where $T$ denotes the length of the time horizon.
Abstract: Channel allocation is the task of assigning channels to users such that some objective (e.g., sum-rate) is maximized. In centralized networks such as cellular networks, this task is carried out by the base station (BS), which gathers the channel state information (CSI) from the users and computes the optimal solution. In distributed networks such as ad-hoc and device-to-device (D2D) networks, no BS exists, and conveying global CSI between users is costly or simply impractical. When the CSI is time-varying and unknown to the users, the users face the challenge of both learning the channel statistics online and converging to a good channel allocation. This introduces a multi-armed bandit (MAB) scenario with multiple decision makers. If two or more users choose the same channel, a collision occurs and they all receive zero reward. We propose a distributed channel allocation algorithm that each user runs and that converges to the optimal allocation while achieving an order-optimal regret of $\mathcal{O}(\log T)$, where $T$ denotes the length of the time horizon. The algorithm is based on a carrier sensing multiple access (CSMA) implementation of the distributed auction algorithm. It does not require any exchange of information between users. Users need only observe a single channel at a time and sense whether there is a transmission on that channel, without decoding the transmissions or identifying the transmitting users. We demonstrate the performance of our algorithm using simulated LTE and 5G channels.
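
The distributed auction underneath the algorithm follows Bertsekas' scheme: each unassigned user bids for its best channel, raising the price by the gap to its second-best option plus a small epsilon. The centralized loop below is only a sketch of those dynamics; in the paper the "prices" are realized implicitly through CSMA sensing rather than a shared vector.

```python
import numpy as np

def auction_assignment(value, eps=0.01):
    """Bertsekas-style auction for a square user-by-channel value matrix."""
    n_users, n_chans = value.shape
    prices = np.zeros(n_chans)
    owner = -np.ones(n_chans, dtype=int)     # current winner per channel
    assigned = -np.ones(n_users, dtype=int)  # current channel per user
    while (assigned < 0).any():
        u = int(np.flatnonzero(assigned < 0)[0])
        net = value[u] - prices
        best, second = np.argsort(net)[::-1][:2]
        prices[best] += net[best] - net[second] + eps   # outbid by the margin
        if owner[best] >= 0:
            assigned[owner[best]] = -1                  # evict previous owner
        owner[best], assigned[u] = u, best
    return assigned, prices

value = np.array([[4.0, 1.0, 2.0],
                  [3.0, 4.0, 1.0],
                  [2.0, 3.0, 5.0]])   # e.g., per-user achievable rates per channel
print(auction_assignment(value))      # -> assignment [0 1 2]
```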

Journal ArticleDOI
TL;DR: A redundant transmission strategy is proposed to meet the reliability requirement for state estimation; it incorporates the ISM channels with opportunistically harvested channels to provide adequate spectrum opportunities for redundant transmissions and increases the sum rate of the WSN.
Abstract: State estimation over wireless sensor networks (WSNs) plays an important role in ubiquitous monitoring for industrial cyber-physical systems (ICPSs). However, unreliable wireless channels cause the transmitted measurements to arrive at the remote estimator intermittently, which deteriorates the estimation performance. The question of how to improve transmission reliability in hostile industrial environments so as to guarantee a pre-defined estimation performance for ICPSs is largely unexplored. This paper is concerned with a redundant transmission strategy that meets the reliability requirement for state estimation. This strategy incorporates the ISM channels with opportunistically harvested channels to provide adequate spectrum opportunities for redundant transmissions. First, we explore the relationship between the estimation performance and the transmission reliability, based on which a joint optimization of channel allocation and power control is developed to guarantee the estimation performance and maximize the sum rate of the WSN. Second, we formulate the optimization as a mixed-integer nonlinear programming problem, which is solved efficiently by decomposing it into channel allocation and power control subproblems. Ultimately, a simulation study demonstrates that the proposed strategy not only ensures the required state estimation performance but also increases the sum rate of the WSN.

Journal ArticleDOI
TL;DR: This work proposes a novel technique to analyze the Nash equilibria of a random interference game, determined by the random channel gains, and proves that even its worst equilibrium has asymptotically optimal weighted sum rate for any interference regime and even for correlated channels.
Abstract: We consider the problem of distributed channel allocation in large networks under the frequency-selective interference channel. Performance is measured by the weighted sum of achievable rates. Our proposed algorithm is a modified Fictitious Play algorithm that can be implemented distributedly, and its stable points are the pure Nash equilibria of a given game. Our goal is to design a utility function for a non-cooperative game such that all of its pure Nash equilibria have close to optimal global performance. This makes the algorithm close to optimal while requiring no communication between users. We propose a novel technique to analyze the Nash equilibria of a random interference game, determined by the random channel gains. Our analysis is asymptotic in the number of users. First, we present a natural non-cooperative game in which the utility of each user is his achievable rate. It is shown that, asymptotically in the number of users and for strong enough interference, this game exhibits many bad equilibria. Then, we propose a novel non-cooperative M frequency-selective interference channel game as a slight modification of the former, in which the utility of each user is artificially limited. We prove that even its worst equilibrium has an asymptotically optimal weighted sum rate for any interference regime, even for correlated channels. This is based on an order-statistics analysis of the fading channels that is valid for a broad class of fading distributions (including Rayleigh, Rician, m-Nakagami, and more). We carry out simulations that show fast convergence of our algorithm to the proven asymptotically optimal pure Nash equilibria.
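
Fictitious play itself, independent of the paper's designed utilities, is a short loop: each player best-responds to the empirical frequencies of the opponent's past actions. The zero-sum matching-pennies toy below is an assumption chosen because FP's convergence there is classical; the paper's modified FP instead runs distributedly on the designed interference game.

```python
import numpy as np

def fictitious_play(payoff, iters=5000):
    """Two-player FP on a zero-sum matrix game (row player maximizes payoff)."""
    n, m = payoff.shape
    row_counts, col_counts = np.ones(n), np.ones(m)   # pseudo-counts of past play
    for _ in range(iters):
        a = int(np.argmax(payoff @ (col_counts / col_counts.sum())))   # row BR
        b = int(np.argmax(-(row_counts / row_counts.sum()) @ payoff))  # col BR
        row_counts[a] += 1
        col_counts[b] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: empirical frequencies converge to the (0.5, 0.5) equilibrium.
print(fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```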

Journal ArticleDOI
TL;DR: Through computer simulations it was confirmed that the number of active wavelength channels can be reduced by 50% with the proposed algorithm, and thus more RUs can be efficiently accommodated using TWDM-PON.
Abstract: Time- and wavelength-division multiplexed passive optical network (TWDM-PON) has attracted considerable attention for next-generation optical access systems. Among the potential applications of TWDM-PON, a major one is the support of mobile fronthaul streams between radio units (RUs) and distributed units (DUs) in the centralized radio access network (C-RAN) architecture, which consists of central units (CUs), DUs, and RUs. The upstream fronthaul traffic that an optical line terminal (OLT) receives is expected to become highly bursty due to the variable data rate generated by new functional split options and the synchronization of data transmission between neighboring RUs caused by time-division duplexing (TDD). However, there has been no wavelength and bandwidth allocation scheme for TWDM-PON designed to efficiently accommodate fronthaul streams while satisfying the strict delay requirement. Therefore, in this paper we propose a novel wavelength and bandwidth allocation algorithm that can minimize the number of active wavelength channels while accounting for the high burstiness and delay requirement of fronthaul data transmission. Computer simulations confirmed that the number of active wavelength channels can be reduced by 50% with the proposed algorithm, so that more RUs can be efficiently accommodated using TWDM-PON.

Journal ArticleDOI
TL;DR: A joint D2D cluster formation and radio resource allocation problem to maximize spectral efficiency in terms of uplink sum-rate, subject to constraints of guaranteeing a minimum throughput to each AMI entity is formulated.
Abstract: In smart grid, an advanced metering infrastructure (AMI) facilitates two-way communications between smart meters (SMs) and a meter data management unit (MDMU). We use cellular-assisted device-to-device (D2D) communications to connect a large number of SMs to the MDMU via cluster heads (CHs). Improper clustering can affect spectral efficiency. In addition, each SM and CH must be provided with a throughput guarantee for reliable smart grid operations. In this paper, we formulate a joint D2D cluster formation and radio resource allocation problem to maximize spectral efficiency in terms of uplink sum-rate, subject to constraints of guaranteeing a minimum throughput to each AMI entity. The optimization problem is a mixed-integer non-linear program, which is non-deterministic polynomial-time hard. For practical implementation and real-time decision making, we divide the original problem into four sub-problems: cluster formation, initial power allocation, channel allocation, and power allocation improvement, respectively. In the sub-problem for power allocation improvement, we exploit the properties of the difference of convex functions in finding the optimal transmission power. Simulation results confirm superiority of the proposed algorithm compared with other existing algorithms.

Journal ArticleDOI
TL;DR: A joint power and sub-channel allocation for secrecy capacity (JPSASC) algorithm is proposed to obtain a suboptimal solution of the joint problem, and simulation results verify that the JPSASC algorithm outperforms other algorithms in maximizing secrecy capacity.
Abstract: In this paper, we consider the physical layer security of non-orthogonal multiple access (NOMA)-based uplink massive machine type communication (mMTC) networks. Aiming at maximizing the system secrecy capacity in the presence of eavesdroppers, we propose a joint power and sub-channel allocation for secrecy capacity (JPSASC) algorithm to obtain a suboptimal solution of the joint problem. In particular, the power allocation problem is modeled from a distributed perspective as a non-cooperative game, where each MTC device selfishly optimizes its power allocation over multiple channels to maximize its own secrecy capacity. The existence of a Nash equilibrium (NE) is proved, and a sufficient condition ensuring the uniqueness of the NE is given. Moreover, distributed power allocation and preference secrecy capacity maximum (PSCM) algorithms are proposed for the power allocation and sub-channel allocation problems, respectively. Simulation results verify that the JPSASC algorithm outperforms other algorithms in maximizing secrecy capacity. Furthermore, the secrecy capacity of NOMA-based mMTC is improved compared with that of orthogonal multiple access schemes.

Journal ArticleDOI
TL;DR: This letter investigates the capacity performance of backscatter communication with frequency shift in Rician fading channels, and demonstrates the superiority of the optimal bandwidth allocation and the sub-optimality of the uniform bandwidth allocation scheme when the number of BackCom users is small.
Abstract: This letter investigates the capacity performance of backscatter communication (BackCom) with frequency shift in Rician fading channels, where each BackCom user passively modulates its own signal over an externally generated continuous wave (CW) carrier. Besides, a sub-carrier is actively generated in each BackCom user for frequency shift, so that BackCom users are allocated to non-overlapping sub-channels for interference-free transmission. The ergodic capacity with the optimal bandwidth allocation is analyzed, where the exact closed-form expression is not attainable and an approximation is obtained for convenience. For fair comparison, uniform bandwidth allocation and no bandwidth allocation schemes are analyzed, and their approximated ergodic capacities are derived in closed-form expressions. Simulation results confirm the tightness of the analytical results, and demonstrate the superiority of the optimal bandwidth allocation and the sub-optimality of the uniform bandwidth allocation scheme when the number of BackCom users is small.
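
The quantity under study is easy to estimate by Monte Carlo: ergodic sum capacity of users on non-overlapping sub-channels under Rician fading, as a function of the bandwidth split. The K-factor, per-user SNRs, and candidate splits below are assumed values; the letter instead derives closed-form approximations.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_power_gain(K, size):
    """|h|^2 samples for Rician fading with K-factor K and unit mean power."""
    s = np.sqrt(K / (K + 1.0))
    sigma = np.sqrt(1.0 / (2.0 * (K + 1.0)))
    h = s + sigma * (rng.normal(size=size) + 1j * rng.normal(size=size))
    return np.abs(h) ** 2

N, samples = 4, 200_000
snr = np.array([20.0, 10.0, 5.0, 2.0])        # per-user mean SNR at unit bandwidth
g = rician_power_gain(K=5.0, size=(samples, N))

def ergodic_sum_capacity(b):                  # b: bandwidth shares (sum to 1)
    return float(np.mean(np.sum(b * np.log2(1.0 + snr * g / b), axis=1)))

print("uniform split:", ergodic_sum_capacity(np.full(N, 0.25)))
print("skewed split :", ergodic_sum_capacity(np.array([0.4, 0.3, 0.2, 0.1])))
```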

Journal ArticleDOI
TL;DR: A robust energy-efficient resource allocation scheme is proposed to guarantee the outage probability requirements of controllers and actuators while maximizing the system energy efficiency.
Abstract: To promote future intelligent systems, a novel cyber-physical-social smart system (CPS3) powered by the Internet of Things is presented in this paper, where wireless network virtualization is adopted to enhance the diversity and flexibility of service operation and system management. Based on the presented system, a robust energy-efficient resource allocation scheme is proposed to guarantee the outage probability requirements of controllers and actuators while maximizing the system energy efficiency. Different from existing works, imperfect channel state information is considered for energy-efficient resource allocation in CPS3. To effectively handle the formulated optimization problem, the concept of virtual devices is introduced to equivalently reformulate the original problem. Afterward, the probabilistic mixed problem is approximately transformed into a nonprobabilistic problem through outage probability analyses. After the transformation, the optimization problem can be decomposed into power allocation and channel allocation, where an iterative algorithm for power allocation is adopted to maximize the system energy efficiency, and a heuristic greedy algorithm is presented to schedule sensors and actuators on different subchannels based on the obtained power allocation results. Simulation results demonstrate the convergence of the proposed algorithm and the advantages of the proposed scheme.

Posted Content
TL;DR: A deep reinforcement learning-based channel allocation scheme using graph convolutional networks (GCNs) is proposed in the context of the IEEE 802.11 Extremely High Throughput (EHT) Study Group.
Abstract: Last year, the IEEE 802.11 Extremely High Throughput Study Group (EHT Study Group) was established to initiate discussions on new IEEE 802.11 features. Coordinated control methods for the access points (APs) in wireless local area networks (WLANs) are discussed in the EHT Study Group. The present study proposes a deep reinforcement learning-based channel allocation scheme using graph convolutional networks (GCNs). As the deep reinforcement learning method, we use the well-known double deep Q-network. In densely deployed WLANs, the number of available AP topologies is extremely large, so we extract the features of the topological structures with GCNs. We apply GCNs to a contention graph in which APs within each other's carrier sensing ranges are connected, in order to extract the features of the carrier sensing relationships. Additionally, to improve the learning speed, especially in the early stage of learning, we employ a game theory-based method to collect training data independently of the neural network model. The simulation results indicate that the proposed method controls the channels appropriately when compared with existing methods.

Journal ArticleDOI
TL;DR: A greedy-based mode selection and channel allocation algorithm, which can not only allocate the appropriate channel resource for D2D links, but also select the optimal communication mode for them, is proposed.
Abstract: Device-to-device (D2D) communications, as one of the promising technologies for future networks, allow two devices in proximity to communicate directly with each other to alleviate the data traffic load on the base station (BS). In this paper, we analyze a joint relay selection and resource allocation scheme for relay-aided D2D communication networks. The objective is to maximize the total transmission data rate of the system while guaranteeing the minimum quality-of-service (QoS) requirements of both cellular users (CUs) and D2D users (DUs). We first propose a social-aware relay selection algorithm with low computational complexity to obtain suitable relay nodes (RNs) for D2D links. Then, power control schemes are considered for a D2D link working in the D2D direct or relay mode to maximize the system transmission data rate. On this basis, we propose a greedy-based mode selection and channel allocation algorithm, which can not only allocate the appropriate channel resource for D2D links but also select the optimal communication mode for them. Numerical results demonstrate that the proposed scheme substantially improves system performance compared with the other algorithms.