
Showing papers on "Channel allocation schemes published in 2020"


Journal ArticleDOI
TL;DR: In this article, the authors investigated the downlink communications of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) systems and formulated a joint optimization problem over the channel assignment, decoding order of NOMA users, power allocation, and reflection coefficients.
Abstract: This article investigates the downlink communications of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) systems. To maximize the system throughput, we formulate a joint optimization problem over the channel assignment, decoding order of NOMA users, power allocation, and reflection coefficients. The formulated problem is proved to be NP-hard. To tackle this problem, a novel three-step resource allocation algorithm is proposed. Firstly, the channel assignment problem is solved by a many-to-one matching algorithm. Secondly, by considering the IRS reflection coefficient design, a low-complexity decoding order optimization algorithm is proposed. Thirdly, given a channel assignment and decoding order, a joint optimization algorithm is proposed for solving the joint power allocation and reflection coefficient design problem. Numerical results illustrate that: i) with the aid of the IRS, the proposed IRS-NOMA system outperforms the conventional NOMA system without the IRS in terms of system throughput; ii) the proposed IRS-NOMA system achieves higher system throughput than the IRS assisted orthogonal multiple access (IRS-OMA) system; and iii) the performance gains of the IRS-NOMA and IRS-OMA systems can be enhanced by carefully choosing the location of the IRS.
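
As a rough illustration of the kind of many-to-one matching used in the first step, the sketch below greedily assigns users to channels under a per-channel quota and then applies pairwise swaps until no swap improves the sum rate. The rate matrix, the quota of two users per channel, and the swap rule are invented for this example and are not taken from the paper.

```python
# Illustrative many-to-one matching of users to channels (not the paper's exact algorithm).
# Each channel hosts at most `quota` users; users are greedily assigned to the channel
# offering the highest rate, then pairwise swaps are applied until no swap improves the
# total rate (a simple notion of swap stability).
import itertools
import random

random.seed(0)
num_users, num_channels, quota = 6, 3, 2
# Hypothetical achievable rate of each user on each channel.
rate = [[random.uniform(1.0, 5.0) for _ in range(num_channels)] for _ in range(num_users)]

# Greedy initial assignment respecting quotas.
assign = {}
load = [0] * num_channels
for u in sorted(range(num_users), key=lambda u: -max(rate[u])):
    feasible = [c for c in range(num_channels) if load[c] < quota]
    best = max(feasible, key=lambda c: rate[u][c])
    assign[u] = best
    load[best] += 1

def total_rate():
    return sum(rate[u][assign[u]] for u in range(num_users))

# Swap phase: exchange the channels of two users if it increases the total rate.
improved = True
while improved:
    improved = False
    for u, v in itertools.combinations(range(num_users), 2):
        if assign[u] != assign[v]:
            before = rate[u][assign[u]] + rate[v][assign[v]]
            after = rate[u][assign[v]] + rate[v][assign[u]]
            if after > before + 1e-9:
                assign[u], assign[v] = assign[v], assign[u]
                improved = True

print("assignment:", assign, "total rate: %.2f" % total_rate())
```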

190 citations


Proceedings ArticleDOI
06 Jul 2020
TL;DR: This paper proposes an efficient online algorithm, called JCAB, which jointly optimizes configuration adaptation and bandwidth allocation to address a number of key challenges in edge-based video analytics systems, including edge capacity limitation, unknown network variation, and the intrinsic dynamics of video contents.
Abstract: Real-time analytics on video data demands intensive computation resources and high energy consumption. Traditional cloud-based video analytics relies on large centralized clusters to ingest video streams. With edge computing, we can offload compute-intensive analysis tasks to a nearby server, thus mitigating the long latency incurred by data transmission over wide area networks. When offloading frames from the front-end device to the edge server, the application configuration (frame sampling rate and frame resolution) impacts several metrics, such as energy consumption, analytics accuracy, and user-perceived latency. In this paper, we study configuration adaptation and bandwidth allocation for multiple video streams that are connected to the same edge node sharing an upload link. We propose an efficient online algorithm, called JCAB, which jointly optimizes configuration adaptation and bandwidth allocation to address a number of key challenges in edge-based video analytics systems, including edge capacity limitation, unknown network variation, and the intrinsic dynamics of video contents. Our algorithm is developed based on Lyapunov optimization and Markov approximation, works online without requiring future information, and achieves a provable performance bound. Simulation results show that JCAB can effectively balance analytics accuracy and energy consumption while keeping system latency low.

126 citations


Journal ArticleDOI
TL;DR: This article first optimizes the bandwidth allocation by presenting three schemes for the second-hop wireless relaying, then optimizes the computation offloading based on the discrete particle swarm optimization algorithm, and presents three relay selection criteria that take into account the tradeoff between system performance and implementation complexity.
Abstract: In this article, we investigate a communication and computation problem for industrial Internet of Things (IoT) networks, where $K$ relays can help accomplish the computation tasks with the assistance of $M$ computational access points. In industrial IoT networks, latency and energy consumption are two important metrics for measuring the system performance. To enhance the system performance, a three-hierarchical optimization framework is proposed to reduce the latency and energy consumption, which involves bandwidth allocation, offloading, and relay selection. Specifically, we first optimize the bandwidth allocation by presenting three schemes for the second-hop wireless relaying. We then optimize the computation offloading based on the discrete particle swarm optimization algorithm. We further present three relay selection criteria by taking into account the tradeoff between the system performance and implementation complexity. Simulation results finally demonstrate the effectiveness of the proposed three-hierarchical optimization framework.
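
To picture how a discrete particle swarm can search over binary offloading decisions, here is a minimal sketch using the standard sigmoid-based binary PSO update. The cost model (per-task local versus offload costs) and all constants are hypothetical and only stand in for the latency/energy objective described above.

```python
# Minimal binary PSO over offloading decisions (1 = offload task, 0 = compute locally).
# A generic sketch of the technique named in the abstract, with a toy cost model.
import math
import random

random.seed(1)
num_tasks, num_particles, iters = 8, 12, 60
local_cost = [random.uniform(2.0, 5.0) for _ in range(num_tasks)]    # toy latency+energy cost
offload_cost = [random.uniform(1.0, 4.0) for _ in range(num_tasks)]

def cost(x):
    return sum(offload_cost[i] if x[i] else local_cost[i] for i in range(num_tasks))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Initialize particles (binary positions) and velocities.
pos = [[random.randint(0, 1) for _ in range(num_tasks)] for _ in range(num_particles)]
vel = [[0.0] * num_tasks for _ in range(num_particles)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=cost)[:]

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    for p in range(num_particles):
        for i in range(num_tasks):
            vel[p][i] = (w * vel[p][i]
                         + c1 * random.random() * (pbest[p][i] - pos[p][i])
                         + c2 * random.random() * (gbest[i] - pos[p][i]))
            # Sigmoid maps velocity to the probability that the bit is 1.
            pos[p][i] = 1 if random.random() < sigmoid(vel[p][i]) else 0
        if cost(pos[p]) < cost(pbest[p]):
            pbest[p] = pos[p][:]
            if cost(pbest[p]) < cost(gbest):
                gbest = pbest[p][:]

print("offloading decision:", gbest, "cost: %.2f" % cost(gbest))
```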

122 citations


Journal ArticleDOI
TL;DR: A two-stage joint optimal offloading algorithm is proposed, optimizing computation and communication resource allocation under limited energy and latency sensitivity, and numerical simulation results show the effectiveness of the proposed algorithm.
Abstract: The ever-increasing growth in maritime activities, with large numbers of Maritime Internet-of-Things (M-IoT) devices and the exploration of the ocean network, leads to a great challenge in dealing with a massive amount of maritime data in a cost-effective and energy-efficient way. However, resource-constrained maritime users cannot meet the high requirements on transmission delay and energy consumption, due to the excessive traffic and limited resources in maritime networks. To solve this problem, mobile edge computing is taken as a promising paradigm in which mobile devices offload computation to edge servers, accounting for the different quality-of-service (QoS) requirements in complex ocean environments; offloading saves energy but increases transmission latency. To investigate the tradeoff between latency and energy consumption in low-cost large-scale maritime communication, we formulate the offloading optimization problem and propose a two-stage joint optimal offloading algorithm, optimizing computation and communication resource allocation under limited energy and sensitive latency. At the first stage, the maritime users decide whether to offload a computation considering their demands and environments. At the second stage, channel allocation and power allocation problems are formulated to optimize the offloading policy in coordination with the central cloud servers, considering the dynamic tradeoff between latency and energy consumption. Finally, numerical simulation results show the effectiveness of the proposed algorithm.

68 citations


Journal ArticleDOI
TL;DR: This paper develops and evaluates a resource allocation approach for cognitive radios implemented with SDR technology over two testbeds of the ORCA federation, based on a Markov Random Field framework realizing a distributed cross-layer computation for the secondary nodes of the cognitive radio network.
Abstract: Software Defined Radio (SDR)-enabled cognitive radio network architectures are expected to play an important role in future 5G networks. Despite the increased research interest, current implementations are small-scale and provide limited functionality. In this paper, we contribute towards the alleviation of the limitations in SDR deployments by developing and evaluating a resource allocation approach for cognitive radios implemented with SDR technology over two testbeds of the ORCA federation. Resource allocation is based on a Markov Random Field (MRF) framework realizing a distributed cross-layer computation for the secondary nodes of the cognitive radio network. The proposed framework implementation consists of self-contained modules developed in GNU Radio realizing cognitive functionalities, such as spectrum sensing, collision detection, etc. We demonstrate the feasibility of the MRF based resource allocation approach and provide extensive results and performance analysis that highlight its key features. The latter provide useful insights about the advantages of our framework, while allowing us to pinpoint current technological barriers of broader interest.

66 citations


Journal ArticleDOI
TL;DR: Simulation results show that the UAVs-aided self-organized D2D network can achieve a high capacity via the joint optimization of relay deployment, channel allocation, and relay assignment.
Abstract: Unmanned aerial vehicles (UAVs) can be deployed in the air to provide high probabilities of line-of-sight (LoS) transmission, and thus UAVs bring considerable gains for wireless communication systems. In this paper, we study a UAVs-aided self-organized device-to-device (D2D) network. Relay deployment, channel allocation, and relay assignment are jointly optimized, aiming to maximize the capacity of the relay network. On account of the coupled relationship among the three optimization variables, an alternating optimization approach is proposed to solve this problem. The original problem is divided into two sub-problems. The first one is that of optimizing the channel allocation and relay assignment with fixed relay deployment. Since there is no central controller, a reinforcement learning algorithm is proposed to solve this sub-problem. The second sub-problem is that of optimizing the relay deployment with fixed channel allocation and relay assignment. Assuming no knowledge of the channel model or the exact positions of the communication nodes, an online learning algorithm based on real-time capacity is proposed to solve this sub-problem. By solving the two sub-problems alternately and iteratively, the original problem is finally solved. Simulation results show that the UAVs-aided D2D network can achieve a high capacity via the joint optimization of relay deployment, channel allocation, and relay assignment.

58 citations


Journal ArticleDOI
TL;DR: A priority-based secondary user (SU) call admission and channel allocation scheme that outperforms the greedy non-priority and fair proportion schemes and reduces the blocking probability of higher-priority SU calls while maintaining a sufficient level of channel utilization.
Abstract: The Internet of Things (IoT) is a network of interconnected objects, in which every object in the world seeks to communicate and exchange information actively. This exponential growth of interconnected objects increases the demand for wireless spectrum. However, providing wireless channel access to every communicating object while ensuring its guaranteed quality of service (QoS) requirements is challenging and has not yet been fully explored, especially for IoT-enabled mission-critical applications and services. Meanwhile, Cognitive Radio-enabled Internet of Things (CR-IoT) is an emerging field that is considered the future of IoT. The combination of CR technology and IoT can better handle the increasing demands of various applications such as manufacturing, logistics, retail, environment, public safety, healthcare, food, and drugs. However, due to the limited and dynamic resource availability, CR-IoT cannot accommodate all types of users. In this paper, we first examine the availability of a licensed channel on the basis of its primary users’ activities (e.g., traffic patterns). Second, we propose a priority-based secondary user (SU) call admission and channel allocation scheme, which builds on a priority-based dynamic channel reservation scheme. The objective of our study is to reduce the blocking probability of higher-priority SU calls while maintaining a sufficient level of channel utilization. The arrival rates of SU calls of all priority classes are estimated using a Markov chain model, and channels for each priority class are then reserved based on this analysis. We compare the performance of the proposed scheme with the greedy non-priority and fair proportion schemes in terms of the SU call-blocking probability, SU call-dropping probability, channel utilization, and throughput. Numerical results show that the proposed priority scheme outperforms the greedy non-priority and fair proportion schemes.
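
A toy version of priority-based admission with channel reservation is sketched below: a class is blocked once admitting it would eat into channels reserved for higher-priority classes. The channel count and reservation thresholds are made up for illustration and are not the values derived from the paper's Markov chain analysis.

```python
# Toy admission control with priority-based channel reservation (illustrative only;
# the thresholds and traffic below are hypothetical, not from the paper).
TOTAL_CHANNELS = 10
# Channels that must remain free for higher-priority traffic (class 0 = highest priority).
# A class-2 call is blocked unless more than 4 channels are idle, and so on.
reserved_for_higher = {0: 0, 1: 2, 2: 4}
busy = 0

def admit(priority_class):
    """Return True and occupy a channel if the call can be admitted."""
    global busy
    idle = TOTAL_CHANNELS - busy
    if idle > reserved_for_higher[priority_class]:
        busy += 1
        return True
    return False

def release():
    """Free one channel when a call ends."""
    global busy
    busy = max(0, busy - 1)

# Example: fill the system with low-priority calls, then check a high-priority arrival.
admitted_low = sum(admit(2) for _ in range(10))
print("low-priority calls admitted:", admitted_low)   # stops once 4 channels remain
print("high-priority call admitted:", admit(0))       # still succeeds
```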

56 citations


Journal ArticleDOI
Bodong Shang, Lingjia Liu
TL;DR: This article introduces a coordinate descent algorithm that decomposes the UEs’ energy consumption minimization problem into several subproblems which can be efficiently solved and demonstrates the advantages of the proposed algorithm in terms of the reduced total energy consumption of UEs.
Abstract: Unmanned aerial vehicles (UAVs) are expected to be deployed as aerial base stations (BSs) in future wireless networks to provide extensive coverage and additional computational capabilities for user equipments (UEs). In this article, we study mobile-edge computing (MEC) in air–ground integrated wireless networks, including ground computational access points (GCAPs), UAVs, and UEs, where UAVs and GCAPs cooperatively provide computing resources for UEs. Our goal is to minimize the total energy consumption of UEs by jointly optimizing users’ association, uplink power control, channel allocation, computation capacity allocation, and UAV 3-D placement, subject to the constraints on deterministic binary offloading, UEs’ latency requirements, computation capacity, UAV power consumption, and available bandwidth. Due to the nonconvexity of the primary problem and the coupling of variables, we introduce a coordinate descent algorithm that decomposes the UEs’ energy consumption minimization problem into several subproblems which can be efficiently solved. The simulation results demonstrate the advantages of the proposed algorithm in terms of the reduced total energy consumption of UEs.
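
The coordinate (block) descent idea, re-optimizing one block of variables at a time while the others are held fixed, can be sketched generically as below. The toy objective and candidate grids are placeholders; they are not the UE energy model or the actual subproblem solvers from the article.

```python
# Generic block-coordinate-descent skeleton of the kind described above (illustrative only):
# the joint problem is split into blocks, and each block is re-optimized in turn while the
# others are held fixed, until the objective stops improving.
def optimize_block(x, block, objective, candidates):
    """Re-optimize one block of variables by exhaustive search over candidate values."""
    best_val, best_choice = objective(x), x[block]
    for c in candidates:
        trial = dict(x, **{block: c})
        if objective(trial) < best_val:
            best_val, best_choice = objective(trial), c
    return dict(x, **{block: best_choice})

def coordinate_descent(x0, blocks, objective, candidates, tol=1e-9, max_rounds=100):
    x = dict(x0)
    for _ in range(max_rounds):
        prev = objective(x)
        for b in blocks:
            x = optimize_block(x, b, objective, candidates[b])
        if prev - objective(x) < tol:   # stop once a full sweep no longer improves
            break
    return x

# Toy example: minimize (power - 2)^2 + (bandwidth - 5)^2 over discrete candidate grids.
obj = lambda x: (x["power"] - 2) ** 2 + (x["bandwidth"] - 5) ** 2
cands = {"power": [i * 0.5 for i in range(10)], "bandwidth": list(range(10))}
sol = coordinate_descent({"power": 0.0, "bandwidth": 0}, ["power", "bandwidth"], obj, cands)
print(sol)  # {'power': 2.0, 'bandwidth': 5}
```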

51 citations


Journal ArticleDOI
TL;DR: In this article, a Mixed-Integer Non-Linear Programming (MINLP) formulation is developed, in which the location of a single UAV-BS and bandwidth allocations to users are jointly determined.
Abstract: Unmanned Aerial Vehicle Base Stations (UAV-BSs) are envisioned to be an integral component of next-generation Wireless Communications Networks (WCNs), with the potential to enhance network capacity by dynamically moving the supply towards the demand while facilitating services that cannot be provided efficiently by other means. A significant drawback of the state of the art has been designing a WCN in which service-oriented performance measures (e.g., throughput) are optimized without jointly considering other relevant decisions, such as determining the location and allocating the resources. In this study, we address the UAV-BS location and bandwidth allocation problems together to optimize the total network profit. In particular, a Mixed-Integer Non-Linear Programming (MINLP) formulation is developed, in which the location of a single UAV-BS and the bandwidth allocations to users are jointly determined. The objective is to maximize the total profit without exceeding the backhaul and access capacities. The profit gained from a specific user is assumed to be a piecewise-linear function of the provided data rate level, where higher data rate levels yield higher profit. Due to the high complexity of the MINLP, we propose an efficient heuristic algorithm with lower computational complexity. We show that, when the UAV-BS location is determined, the resource allocation problem reduces to a Multidimensional Binary Knapsack Problem (MBKP), which can be solved in pseudo-polynomial time. To exploit this structure, the optimal bandwidth allocations are determined by solving several MBKPs in a search algorithm. We compare the performance of our algorithm against two heuristics and against the MINLP model solved by a commercial solver. Our numerical results show that the proposed algorithm outperforms the alternative solution approaches and is a promising tool for improving the total network profit.
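
Assuming the two knapsack dimensions correspond to the backhaul and access budgets, a pseudo-polynomial dynamic program for a two-dimensional binary knapsack looks roughly like the sketch below; the item weights and profits are invented, and the paper's outer search over UAV-BS locations is omitted.

```python
# Pseudo-polynomial DP for a 2-dimensional binary knapsack (illustrative; the two budgets
# loosely play the role of backhaul and access capacity, and all numbers are made up).
def knapsack_2d(items, cap1, cap2):
    """items: list of (weight1, weight2, profit) with integer weights."""
    # dp[a][b] = best profit using at most a units of budget 1 and b units of budget 2.
    dp = [[0] * (cap2 + 1) for _ in range(cap1 + 1)]
    for w1, w2, p in items:
        # Iterate budgets downwards so each item is used at most once (0/1 knapsack).
        for a in range(cap1, w1 - 1, -1):
            for b in range(cap2, w2 - 1, -1):
                dp[a][b] = max(dp[a][b], dp[a - w1][b - w2] + p)
    return dp[cap1][cap2]

# Each tuple = (backhaul units, access units, profit) for serving one user at some rate level.
users = [(2, 3, 5), (3, 2, 6), (4, 4, 9), (1, 2, 3), (2, 1, 4)]
print(knapsack_2d(users, cap1=6, cap2=6))
```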

49 citations


Journal ArticleDOI
TL;DR: The MSCA (Multi-Strategy Channel Allocation algorithm for edge computing) presented in this paper can minimize channel interference and overall network energy consumption while satisfying throughput and end-to-end delay requirements.
Abstract: Wireless mesh networks (WMNs) are a kind of wireless network technology that can transmit information over multiple hops, and they have been regarded as a key technology for configuring wireless networks. In WMNs, wireless routers can provide multi-hop wireless connections between nodes in the network and access the Internet through gateway devices. Multicast is a communication technology that uses best effort to send information from a source node to multiple destinations. In this paper, we study the problems of channel interference and time-slot multi-user collisions caused by radio transmissions in WMNs. Through innovative use of edge computing technology to construct a node data cache model and a step-by-step calculation of node channel separation, a multi-strategy channel allocation algorithm for edge computing is proposed. The mechanism of this algorithm is to use edge computing technology to pre-store the data in the nodes of the multicast tree and calculate the channel separation between the nodes, and then select the transmission channel number of the node with the least interference to avoid mutual interference between node transmissions. Through experimental tests and comparisons, the MSCA (Multi-Strategy Channel Allocation algorithm for edge computing) presented in this paper is shown to minimize channel interference and overall network energy consumption while satisfying throughput and end-to-end delay requirements.

48 citations


Journal ArticleDOI
Ju Ren, Kadir Md Mahfujul, Feng Lyu, Sheng Yue, Yaoxue Zhang
TL;DR: This paper investigates the computation offloading problem in a hierarchical network architecture and proposes a scheme of joint channel allocation and resource management, named JCRM, to make offloading decisions and maximize the long-term network utility while considering stochastic task arrivals/dispatches and dynamic changes in available resources.
Abstract: To accommodate ever-increasing computational workloads while satisfying the requirements of delay-sensitive tasks, mobile edge computing (MEC) is proposed to offload the tasks to nearby edge servers. Nevertheless, it can introduce new technical issues in terms of transmission and computation overheads affected by the underlying offloading decisions. In this paper, we investigate the computation offloading problem in a hierarchical network architecture, where tasks can be offloaded to a nearby micro-BS and further forwarded to a macro-BS equipped with an MEC server. Specifically, we propose a scheme of joint channel allocation and resource management, named JCRM, to make offloading decisions and maximize the long-term network utility while considering stochastic task arrivals/dispatches and dynamic changes in available resources. As the formulated utility maximization problem is a mixed-integer non-linear stochastic programming problem that is not directly tractable, we leverage the Lyapunov optimization technique to decouple the original problem into three separate sub-problems. Based on the solutions to those sub-problems, our proposed scheme can make optimal offloading-downloading decisions while maximizing the overall task offloading rate. Finally, we verify the long-term network stability and near-optimal performance of JCRM via both theoretical analysis and extensive simulations.
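
The Lyapunov-style decoupling mentioned above can be pictured with a bare-bones drift-plus-penalty loop: a virtual queue tracks the task backlog, and in each slot a greedy per-slot decision trades an energy penalty against queue drift through the parameter V. The arrival process, costs, and decision rule below are deliberately simplified stand-ins, not the JCRM sub-problems.

```python
# Bare-bones drift-plus-penalty sketch (synthetic numbers; not the JCRM algorithm itself).
# Each slot we choose how many tasks to offload; offloading costs energy (the "penalty")
# but drains the task queue (the "drift"). V trades the two off.
import random

random.seed(2)
V = 5.0                      # penalty weight: larger V favors lower energy, longer queues
energy_per_task = 1.0
max_offload_per_slot = 3
Q = 0.0                      # virtual queue (task backlog)

for t in range(20):
    arrivals = random.randint(0, 3)
    # Per-slot decision: pick the offload amount minimizing V*energy - Q*service.
    best_b, best_score = 0, float("inf")
    for b in range(max_offload_per_slot + 1):
        score = V * energy_per_task * b - Q * b
        if score < best_score:
            best_b, best_score = b, score
    Q = max(Q + arrivals - best_b, 0.0)
    print(f"slot {t:2d}: arrivals={arrivals} offloaded={best_b} queue={Q:.0f}")
```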

Journal ArticleDOI
TL;DR: DeepCA is a novel reinforcement learning based approach for energy-efficient channel allocation in SIoT, in which a new sliding block scheme is introduced to facilitate the modeling of the dynamic features of LEO satellites and a deep reinforcement learning algorithm is proposed for optimal channel allocation.
Abstract: Recently, the Satellite Internet of Things (SIoT), a space network that consists of numerous Low Earth Orbit (LEO) satellites, has been regarded as a promising technique since it is the only solution that provides 100% global coverage of the whole earth without any additional terrestrial infrastructure support. However, compared with Geostationary Earth Orbit (GEO) satellites, LEO satellites move very fast and cover an area for only 5-12 minutes per pass, bringing high dynamics to network access. Furthermore, to reduce cost, the power and spectrum channel resources of each LEO satellite are very limited, i.e., less than 10% of those of a GEO satellite. Therefore, to take full advantage of the limited resources, it is very challenging to design an efficient resource allocation scheme for SIoT. Current resource allocation schemes for satellites are mostly designed for GEO and do not consider many LEO-specific concerns, including the constrained energy, the mobility characteristic, and the dynamics of connections and transmissions. Towards this end, we propose DeepCA, a novel reinforcement learning based approach for energy-efficient channel allocation in SIoT. In DeepCA, we first introduce a new sliding block scheme to facilitate the modeling of the dynamic features of LEO satellites, and formulate the dynamic channel allocation problem in SIoT as a Markov decision process (MDP). We then propose a deep reinforcement learning algorithm for optimal channel allocation. To accelerate the learning process of DeepCA, we represent user requests in image form to reduce the input size, and carefully divide an action into multiple mini-actions to reduce the size of the action set. Extensive simulations show that our proposed DeepCA approach can save at least 67.86% of energy consumption compared with traditional algorithms.
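
DeepCA itself uses a deep network with image-form inputs and mini-actions; as a much simpler stand-in for the reinforcement-learning idea, the sketch below runs tabular Q-learning on a toy channel-allocation task where the state is the set of occupied channels and collisions are penalized. All sizes and rewards are invented for illustration.

```python
# Simplified tabular Q-learning for channel allocation, as a toy stand-in for the deep RL
# approach described above (state space, rewards, and sizes are invented).
import random

random.seed(3)
NUM_CHANNELS = 4
EPISODES, REQUESTS_PER_EPISODE = 500, 6
alpha, gamma, eps = 0.2, 0.9, 0.1
Q = {}  # Q[(occupied_mask, channel)] -> value

def q(state, a):
    return Q.get((state, a), 0.0)

for _ in range(EPISODES):
    occupied = random.randrange(1 << NUM_CHANNELS)   # random initial channel occupancy
    for _ in range(REQUESTS_PER_EPISODE):
        # epsilon-greedy channel choice for the incoming request
        if random.random() < eps:
            a = random.randrange(NUM_CHANNELS)
        else:
            a = max(range(NUM_CHANNELS), key=lambda c: q(occupied, c))
        reward = -5.0 if occupied & (1 << a) else 1.0    # collision vs. clean assignment
        nxt = occupied | (1 << a)
        best_next = max(q(nxt, c) for c in range(NUM_CHANNELS))
        Q[(occupied, a)] = q(occupied, a) + alpha * (reward + gamma * best_next - q(occupied, a))
        occupied = nxt

# The greedy policy learned for a few example states: it should now avoid busy channels.
for mask in (0b0000, 0b0001, 0b0011):
    print(f"occupied={mask:04b} -> channel {max(range(NUM_CHANNELS), key=lambda c: q(mask, c))}")
```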

Journal ArticleDOI
TL;DR: This article envisions a joint computation and URLLC resource allocation strategy for collaborative MEC-assisted cellular V2X networks and formulates a joint power consumption optimization problem while guaranteeing network stability.
Abstract: By leveraging 5G-enabled V2X networks, vehicles connected through cellular base stations can support a wide variety of computation-intensive services. To address the challenges of end-to-end low-latency transmission and backhaul resources, mobile edge computing (MEC) is now regarded as a promising paradigm for 5G-V2X communications. Considering the importance of both reliability and delay in vehicular communication, this article envisions a joint computation and URLLC resource allocation strategy for collaborative MEC-assisted cellular V2X networks and formulates a joint power consumption optimization problem while guaranteeing network stability. To solve this NP-hard problem, we decouple it into two sub-problems: URLLC resource allocation from multiple cells to multiple vehicles, and computation resource decisions among the local vehicle, the serving MEC server, and the collaborative MEC server. Secondly, a non-cooperative game and a bipartite graph are introduced to reduce the inter-cell interference and decide the channel allocation, aiming to maximize the throughput with a reliability guarantee in URLLC V2X communication. Then, an online Lyapunov optimization method is proposed to solve the computation resource allocation and obtain a trade-off between the average weighted power consumption and delay, where CPU frequencies are calculated using the Gauss-Seidel method. Finally, the simulation results demonstrate that our proposed strategy achieves a better trade-off among power consumption, overflow probability, and execution delay than a centralized MEC-assisted V2X baseline.
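
One way to picture the bipartite-graph step for channel allocation is as a maximum-weight assignment between vehicles and channels, which the Hungarian method solves; the sketch below uses scipy.optimize.linear_sum_assignment on a synthetic throughput matrix and ignores the game-theoretic interference part of the scheme.

```python
# Channel allocation as a maximum-weight bipartite assignment (illustrative only; the
# throughput matrix is synthetic and this is not the paper's full game-theoretic scheme).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
num_vehicles, num_channels = 4, 5
throughput = rng.uniform(1.0, 10.0, size=(num_vehicles, num_channels))  # Mbps, made up

# linear_sum_assignment minimizes cost, so negate to maximize total throughput.
rows, cols = linear_sum_assignment(-throughput)
for v, c in zip(rows, cols):
    print(f"vehicle {v} -> channel {c} ({throughput[v, c]:.2f} Mbps)")
print("total throughput: %.2f Mbps" % throughput[rows, cols].sum())
```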

Journal ArticleDOI
TL;DR: This paper builds a game-theoretic model for bandwidth allocation in the forward link and proposes a cooperative multi-agent deep reinforcement learning method to achieve the optimal bandwidth allocation strategy.
Abstract: Information dissemination in mobile networks turns out to be a problem when the network is sparse. Mobile networks begin to establish separate clusters attributable to the limited communication range of terminals. Multi-beam satellite communication systems can play a significant role in providing direct-to-user satellite mobile services and connecting the separated clusters. This paper focuses on how to efficiently schedule limited satellite-based radio resources to enhance transmission efficiency and meet the requested traffic with low complexity. Taking the inter-beam interference and resource utilization variance into consideration, we build a game-theoretic model for bandwidth allocation in the forward link. As the number of satellite beams increases, the size of the action space for single-agent deep reinforcement learning becomes large, resulting in high time complexity. Thus, we extend single-agent deep reinforcement learning to the multi-agent context and then propose a cooperative multi-agent deep reinforcement learning method to achieve the optimal bandwidth allocation strategy. Each beam acts as a player that aims to satisfy the requested traffic with flexible payloads. We built a multi-beam satellite platform using real historical data. The experimental results show that this approach is capable of enhancing transmission efficiency and can flexibly achieve the desired goal with low complexity.

Journal ArticleDOI
TL;DR: This paper investigates the computation offloading and subcarrier allocation problem in Multi-carrier (MC) NOMA based MEC systems and addresses it using the Deep Reinforcement Learning for Online Computation Offloading (DRLOCO-MNM) algorithm, which considerably improves the computation rates of MEC systems.
Abstract: One of the missions of fifth generation (5G) wireless networks is to provide massive connectivity for the fast-growing number of Internet of Things (IoT) devices. To satisfy this mission, non-orthogonal multiple access (NOMA) has been recognized as a promising solution for 5G networks to significantly improve the network capacity. Considered a booster of IoT devices, and in parallel with the development of NOMA techniques, multi-access edge computing (MEC) is also becoming one of the key emerging technologies for 5G networks. In this paper, with the objective of maximizing the computation rate of an MEC system, we investigate the computation offloading and subcarrier allocation problem in Multi-carrier (MC) NOMA based MEC systems and address it using the Deep Reinforcement Learning for Online Computation Offloading (DRLOCO-MNM) algorithm. In particular, DRLOCO-MNM helps each of the user equipments (UEs) decide between local and remote computation modes, and also assigns the appropriate subcarrier to the UEs in the case of the remote computation mode. The DRLOCO-MNM algorithm is especially advantageous over other machine learning techniques applied to NOMA because it does not require labeled data for training or a complete definition of the channel environment. DRLOCO-MNM also avoids the complexity found in many optimization algorithms used to solve channel allocation in existing NOMA-related studies. Numerical simulations and comparisons with other algorithms show that our proposed module and its algorithm considerably improve the computation rates of MEC systems.

Journal ArticleDOI
TL;DR: This work considers the problem of radio resource sharing between enhanced mobile broadband (eMBB) and ultra-reliable and low latency communications (URLLC), two heterogeneous 5G services, and proposes the use of a max-matching diversity (MMD) algorithm to properly allocate the channels to eMBB users.
Abstract: This work considers the problem of radio resource sharing between enhanced mobile broadband (eMBB) and ultra-reliable and low latency communications (URLLC), two heterogeneous 5G services. More specifically, we propose the use of a max-matching diversity (MMD) algorithm to properly allocate the channels to the eMBB users, considering both heterogeneous orthogonal multiple access (H-OMA) and heterogeneous non-orthogonal multiple access (H-NOMA) network slicing strategies. Our results indicate that MMD can simultaneously improve the eMBB achievable rate and the URLLC reliability regardless of the network slicing strategy adopted.

Journal ArticleDOI
TL;DR: This article surveys research addressing channel allocation and packet scheduling when merging cognitive radio networks with IoT technology.

Journal ArticleDOI
TL;DR: This article addresses the social-awareness property and unmanned-aerial-vehicle (UAV)-assisted information diffusion in emergency scenarios, where UAVs can disseminate alert messages to a set of terrestrial users within their coverage, and then these users can continuously disseminate the received data packets to their socially connected users in a device-to-device (D2D) multicast manner.
Abstract: In this article, we address the social-awareness property and unmanned-aerial-vehicle (UAV)-assisted information diffusion in emergency scenarios, where UAVs can disseminate alert messages to a set of terrestrial users within their coverage, and then these users can continuously disseminate the received data packets to their socially connected users in a device-to-device (D2D) multicast manner. In this regard, we have to solve both the dynamic cluster formation and spectrum sharing problems in stochastic environments, since both UAVs and terrestrial users may arrive or depart suddenly. For the cluster formation problem, considering that the data rate of a multicast cluster is determined by the member with the worst link condition, we formulate it as a many-to-one matching game and adopt the rotation-swap algorithm to maximize the expected number of users receiving the alerting messages in each time slot. For the dynamic spectrum sharing problem, aiming at eliminating the interference while minimizing the channel switching cost, we propose a dynamic hypergraph coloring approach to model the cumulative interference and maintain the mutual interference at a low level by exploring a small number of vertices, when the graph is dynamically updated, i.e., the insertion/deletion of vertex/edge. Moreover, we prove some crucial properties, including global stability, convergence, and complexity. Finally, simulation results show that our proposed approach can achieve a better tradeoff among the information diffusion speed, channel switch cost, and complexity.
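
The hypergraph-coloring idea can be approximated, in a much simpler pairwise form, by coloring a conflict graph so that no two interfering clusters share a channel. The sketch below performs plain greedy coloring on a made-up conflict graph; the paper's dynamic hypergraph updates and channel-switching-cost handling are not modeled.

```python
# Greedy coloring of a pairwise conflict graph as a simplified stand-in for the
# hypergraph-coloring channel assignment described above (graph is made up).
conflicts = {                       # edge (u, v) means clusters u and v interfere
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B", "E"],
    "E": ["D"],
}

channel_of = {}
# Color highest-degree clusters first (a common greedy ordering).
for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
    used = {channel_of[nbr] for nbr in conflicts[node] if nbr in channel_of}
    ch = 0
    while ch in used:               # smallest channel index not used by a neighbor
        ch += 1
    channel_of[node] = ch

print(channel_of)   # no two interfering clusters share a channel
```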

Journal ArticleDOI
TL;DR: Simulation results reveal the effectiveness of the proposed schemes in terms of EE compared to existing NOMA and orthogonal multiple access schemes.
Abstract: In this paper, we consider a downlink non-orthogonal multiple access (NOMA) enabled heterogeneous small cell network (HSCN), where the macro base station simultaneously communicates with multiple small cell base stations (SBSs) through wireless backhaul. In each small cell, users are grouped on a NOMA basis and then served by their respective SBS. The proposed framework considers realistic imperfect channel state information and the quality of service requirements of users. The goal is to investigate an energy-efficient joint power and bandwidth allocation scheme that maximizes the energy efficiency (EE) of the small cells in the downlink NOMA-HSCN, constrained by the maximum transmit power and the minimum required data rate. The optimization problem is non-convex due to the fractional objective function and a non-convex constraint, and thus it is challenging to obtain an exact solution efficiently. To this end, the joint optimization is first decomposed into two subproblems. Then, an iterative algorithm with guaranteed convergence is proposed to solve the power optimization subproblem. Furthermore, we derive a closed-form solution for the bandwidth allocation subproblem. Simulation results reveal the effectiveness of the proposed schemes in terms of EE compared to existing NOMA and orthogonal multiple access schemes.

Journal ArticleDOI
TL;DR: The tracking performance limitation of networked control systems (NCSs), modeled as continuous-time linear multi-input multi-output (MIMO) systems with random reference noises, is studied under communication constraints including additive white noise, quantization noise, bandwidth, and encoder-decoder.
Abstract: In this paper, the tracking performance limitation of networked control systems (NCSs) is studied. The NCSs are considered as continuous-time linear multi-input multi-output (MIMO) systems with random reference noises. The controlled plants include unstable poles and nonminimum phase (NMP) zeros. The output feedback path is affected by multiple communication constraints. We focus on some basic communication constraints, including additive white noise (AWN), quantization noise, bandwidth, and encoder-decoder. The system performance is evaluated with the tracking error energy, using a two-degree-of-freedom (2DOF) controller. The explicit expression of the tracking performance is given in this paper. The results indicate that the tracking performance limitations depend on the internal characteristics of the plant (unstable poles and NMP zeros), the reference noises (the reference noise power distribution (RNPD) and its directions), and the characteristics of the communication constraints. The latter include the communication noise power distribution (CNPD), the quantization noise power distribution (QNPD), and their distribution directions; the transform bandwidth allocation (TBA), the transform encoder-decoder allocation (TEA), and their allocation directions; and the NMP zeros and the MP part of the bandwidth. Moreover, the tracking performance limitations are also affected by the angles between each transform NMP zero direction and the RNPD direction, and by the angles between each transform unstable pole direction and the direction of the communication constraint distribution/allocation. In addition, for MIMO NCSs, bandwidth (when no two channels are identical) can always affect the direction of unstable poles, and the channel allocation of bandwidth and encoder-decoder may be used as a feasible method for the performance allocation of each channel. Finally, an example is given to verify the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: This work focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation (for downloading and uploading) and demonstrates that integrating PABA can substantially improve the performance of partitioned edge learning in terms of latency and accuracy.
Abstract: To leverage the data and computation capabilities of mobile devices, machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models, resulting in the new paradigm of edge learning. In this paper, we consider the framework of partitioned edge learning for iteratively training a large-scale model using many resource-constrained devices (called workers). To this end, in each iteration, the model is dynamically partitioned into parametric blocks, which are downloaded to worker groups for updating using data subsets. Then, the local updates are uploaded to and cascaded by the server for updating a global model. To reduce resource usage by minimizing the total learning-and-communication latency, this work focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation (for downloading and uploading). Two design approaches are adopted. First, a practical sequential approach, called partially integrated parameter-and-bandwidth allocation (PABA), yields two schemes, namely bandwidth aware parameter allocation and parameter aware bandwidth allocation. The former minimizes the load for the slowest (in computing) of the worker groups, each training the same parametric block. The latter allocates the largest bandwidth to the worker that is the latency bottleneck. Second, parameter and bandwidth allocation are jointly optimized. Despite this being a nonconvex problem, an efficient and optimal solution algorithm is derived by intelligently nesting a bisection search and the solution of a convex problem. Experimental results using real data demonstrate that integrating PABA can substantially improve the performance of partitioned edge learning in terms of latency (by, e.g., 46%) and accuracy (by, e.g., 4% given a latency of 100 seconds).
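
The "nesting a bisection search around a convex subproblem" pattern can be sketched as below: bisection over a scalar latency target with an inner feasibility check, which here is a trivial stand-in for the paper's convex problem. The workload/bandwidth model and all numbers are invented for illustration.

```python
# Generic "bisection over a scalar + inner feasibility check" pattern (illustrative only;
# the inner check is a toy stand-in for a convex subproblem). We search for the smallest
# latency T such that the workload can be finished within the bandwidth budget.
def feasible(T, loads, speeds, total_bandwidth):
    # Toy model: each worker needs bandwidth proportional to load / (speed * T); the
    # allocation is feasible if the required total fits in the budget. (Invented model.)
    required = sum(load / (speed * T) for load, speed in zip(loads, speeds))
    return required <= total_bandwidth

def min_latency(loads, speeds, total_bandwidth, lo=1e-6, hi=1e6, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid, loads, speeds, total_bandwidth):
            hi = mid        # feasible: try a smaller latency target
        else:
            lo = mid        # infeasible: relax the target
    return hi

print(round(min_latency(loads=[4.0, 2.0, 6.0], speeds=[1.0, 0.5, 2.0], total_bandwidth=10.0), 4))
```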

Journal ArticleDOI
TL;DR: This work proposes the use of node-based channel allocation to improve the performance of the best scheme, and demonstrates its practicality and reliability, with up to 6 percentage points better packet delivery ratio than the second best option while retaining a similar radio duty cycle.
Abstract: Time Slotted Channel Hopping (TSCH) is a link layer protocol defined in the IEEE 802.15.4 standard. Although it is designed to provide highly reliable and efficient service targeting industrial automation systems, scheduling TSCH transmissions in the time and frequency dimensions is left to the implementers. We evaluate the performance of existing autonomous scheduling approaches for TSCH on various traffic patterns and network configurations. We thoroughly investigate the pros and cons of each scheme; moreover, we propose the use of node-based channel allocation to improve the performance of the best scheme, and demonstrate its practicality and reliability, with up to 6 percentage points better packet delivery ratio than the second best option while retaining a similar radio duty cycle. Finally, based on our extensive performance evaluation, we provide some guidelines on how to select a scheduler for a given network.
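
The node-based channel allocation idea can be pictured as each node deriving its own channel offset from its identifier, so neighbors spread across the hopping sequence without any negotiation. The sketch below uses a placeholder modulo rule and example channel list; it is not the exact scheduler evaluated in the paper.

```python
# Illustrative node-based channel-offset derivation for a TSCH-like schedule
# (placeholder rule and constants; not the exact scheduler from the paper).
NUM_CHANNEL_OFFSETS = 4
HOPPING_SEQUENCE = [15, 20, 25, 26]   # example IEEE 802.15.4 channels

def channel_offset(node_id: int) -> int:
    """Each node derives its own channel offset from its ID, with no negotiation."""
    return node_id % NUM_CHANNEL_OFFSETS

def channel_for_slot(node_id: int, asn: int) -> int:
    """TSCH-style frequency translation: hop with the slot counter, shifted by the node's offset."""
    return HOPPING_SEQUENCE[(asn + channel_offset(node_id)) % len(HOPPING_SEQUENCE)]

for node in (0x11, 0x12, 0x13):
    print(f"node {node:#x}:", [channel_for_slot(node, asn) for asn in range(4)])
```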

Journal ArticleDOI
TL;DR: A deep reinforcement learning-based channel allocation scheme that enables the efficient use of experience in densely deployed wireless local area networks and achieves greater reward channel allocation or realizes the optimal channel allocation while reducing the number of changes.
Abstract: For densely deployed wireless local area networks (WLANs), this paper proposes a deep reinforcement learning-based channel allocation scheme that enables the efficient use of experience. The central idea is that an objective function is modeled relative to communication quality as a parametric function of a pair of observed topologies and channels. This is because communication quality in WLANs is significantly influenced by the carrier sensing relationship between access points. The features of the proposed scheme can be summarized by two points. First, we adopt graph convolutional layers in the model to extract the features of the channel vectors with topology information, which is the adjacency matrix of the graph dependent on the carrier sensing relationships. Second, we filter experiences to reduce the duplication of data for learning, which can often adversely influence the generalization performance. Because fixed experiences tend to be repeatedly observed in WLAN channel allocation problems, the duplication of experiences must be avoided. The simulation results demonstrate that the proposed method enables the allocation of channels in densely deployed WLANs such that the system throughput increases. Moreover, improved channel allocation, compared to other existing methods, is achieved in terms of the system throughput. Furthermore, compared to the immediate reward maximization method, the proposed method successfully achieves greater reward channel allocation or realizes the optimal channel allocation while reducing the number of changes.

Journal ArticleDOI
TL;DR: In this article, the problem of jointly determining backhaul-aware 3D placement for UAVs, user-BS associations and corresponding bandwidth allocations while minimizing total downlink transmit power was studied.
Abstract: Use of aerial base stations (ABSs) is a promising approach to enhance the agility and flexibility of future wireless networks. ABSs can improve the coverage and/or capacity of a network by moving supply towards demand. Deploying ABSs in a network presents several challenges, such as finding an efficient 3D placement of ABSs that takes network objectives into account. Another challenge is the limited wireless backhaul capacity of ABSs and, consequently, the potentially higher latency incurred. Content caching is proposed to alleviate the backhaul congestion and decrease the latency. We consider a limited backhaul capacity for ABSs due to varying position-dependent path loss values and define two groups of users (delay-tolerant and delay-sensitive) with different data rate requirements. We study the problem of jointly determining backhaul-aware 3D placement for ABSs, user-BS associations, and corresponding bandwidth allocations while minimizing total downlink transmit power. The proposed iterative algorithm applies a decomposition method. First, the 3D locations of ABSs are found using semi-definite relaxation and coordinate descent methods, and then user-BS associations and bandwidth allocations are optimized. The simulation results demonstrate the effectiveness of the proposed algorithm and provide insights about the impact of traffic distribution and content caching on the transmit power and backhaul usage of ABSs.

Journal ArticleDOI
TL;DR: This paper investigates the joint problem of resource allocation and EH time slot allocation of DUEs in EH-DCNs, and proposes a joint iterative algorithm based on Lagrangian dual decomposition for higher energy efficiency and spectral efficiency for different network parameter settings.
Abstract: Energy Harvesting (EH) technology enables Device-to-Device (D2D) User Equipments (DUEs) to harvest energy from ambient sources, contributing to green communications and breaking the single-battery-power and regional limitations of device deployment. The joint problem of energy harvesting and resource allocation in EH-based D2D Communication Networks (EH-DCNs) is a challenging issue. In this paper, we investigate the joint problem of resource allocation and EH time slot allocation of DUEs in EH-DCNs, where the DUEs harvest energy and multiplex the uplink resources of Cellular User Equipments (CUEs). A channel assignment, power allocation, and EH time slot allocation problem in EH-DCNs is formulated. The goal is to maximize energy and spectral efficiency with $\alpha$-fairness while guaranteeing the EH constraints of DUEs and the quality of service of CUEs. The formulated problem is a non-convex mixed-integer multi-objective optimization problem. In order to solve it, the multi-objective optimization problem is transformed into a single-objective optimization problem based on the weighted-sum method. We propose a joint iterative algorithm based on Lagrangian dual decomposition for $\alpha > 0$ and $\alpha = 0$, respectively. Numerical results illustrate that the proposed algorithm achieves higher energy efficiency and spectral efficiency for different network parameter settings.

Journal ArticleDOI
TL;DR: Simulation results verify the effectiveness of the proposed randomized auction scheme in bandwidth trading, which is proved to be approximately truthful, individually rational, and computationally efficient.
Abstract: Inspired by the shared infrastructure of colocation data centers and the growth of Mobile Edge Computing (MEC), colocation MEC businesses have thrived to offer an economical and low-latency solution for MEC-based tenants. In colocation MEC, a colocation service provider leases space at the base stations (BSs) to tenants for housing their servers. Although the tenants fully manage their own servers, they still need to purchase bandwidth from the network provider via a bandwidth purchasing market to serve their users. As a result, tenants should consider the trade-off between delay and energy consumption, which are related to bandwidth and server computing speed, respectively. In this paper, an auction framework is used to model the interaction between the network provider and tenants in the bandwidth market. The network provider determines the bandwidth amount and the corresponding payment by tenants based on their submitted bids, in which the valuation of the bandwidth amount in each bid is based on the coupling between bandwidth and server computing speed. The network provider also determines the winners by solving a social welfare maximization problem, which is NP-hard. To solve this problem, we propose a solution based on a randomized auction mechanism, which is proved to be approximately truthful, individually rational, and computationally efficient. Simulation results verify the effectiveness of our proposed randomized auction scheme in bandwidth trading.

Journal ArticleDOI
TL;DR: A cooperative multi-agent deep reinforcement learning (CMDRL) framework is proposed to achieve the radio resource management strategy, and the experimental results show that this approach is capable of enhancing transmission efficiency and flexibly achieving the desired goal with low complexity.
Abstract: Centralized radio resource management places the entire computational burden on a single agent, which becomes intractable as the data dimensionality increases. This letter focuses on how to schedule limited satellite-based radio resources efficiently to enhance transmission efficiency and extend broadband coverage with low complexity. We propose a cooperative multi-agent deep reinforcement learning (CMDRL) framework to derive the radio resource management strategy. The bandwidth allocation problem is taken as an example to analyze the proposed method in simulation. The experimental results show that this approach is capable of enhancing transmission efficiency and flexibly achieving the desired goal with low complexity.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed adaptive channel allocation and AP selection in enterprise WLANs using reinforcement learning techniques, where the Thompson sampling algorithm is used to explore and learn which channel to use and which AP to associate with, respectively.
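
A minimal Bernoulli Thompson-sampling loop for choosing among channels is sketched below, keeping a Beta posterior per channel over its transmission success probability; the simulated success rates are arbitrary stand-ins, and AP selection would follow the same bandit pattern.

```python
# Minimal Bernoulli Thompson sampling over channels (illustrative; the success
# probabilities below are arbitrary stand-ins for observed transmission outcomes).
import random

random.seed(5)
true_success = [0.55, 0.70, 0.40]          # unknown per-channel success probabilities
alpha = [1.0] * len(true_success)          # Beta posterior parameters per channel
beta = [1.0] * len(true_success)
picks = [0] * len(true_success)

for _ in range(2000):
    # Sample a success probability for each channel from its posterior, pick the best.
    samples = [random.betavariate(alpha[c], beta[c]) for c in range(len(true_success))]
    c = samples.index(max(samples))
    picks[c] += 1
    if random.random() < true_success[c]:  # observe a transmission outcome
        alpha[c] += 1
    else:
        beta[c] += 1

print("channel pick counts:", picks)       # concentrates on the best channel over time
```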

Proceedings ArticleDOI
06 Jul 2020
TL;DR: From simulation results, this paper identifies the tradeoff points of the optimal AoI and throughput, demonstrating that one performance metric improves at the expense of degrading the other, with the routing path found as one of the key factors in determining such a tradeoff.
Abstract: The Age-of-Information (AoI) is a newly introduced metric for capturing information updating timeliness, as opposed to the network throughput, which is a conventional performance metric to measure the network transmission speed and robustness as a whole. While considerable work has addressed either optimal AoI or throughput individually, the inherent relationships between the two performance metrics are yet to be explored, especially in multi-hop networks. In this paper, we explore their relationships in multi-hop networks for the very first time, particularly focusing on the impacts of flexible routes on the two metrics. By developing a rigorous mathematical model with interference, channel allocation, link scheduling, and routing path selection taken into consideration, we build the interrelation between AoI and throughput in multi-hop networks. A multi-criteria optimization problem is formulated with the goal of simultaneously minimizing AoI and maximizing network throughput. To solve this problem, we resort to a novel approach by transforming the multi-criteria problem into a single objective one so as to find the weakly Pareto-optimal points iteratively, thereby allowing us to screen all Pareto-optimal points for the solution. A new algorithm based on the piece-wise linearization technique is then developed to closely linearize the non-linear terms in the single objective problem via their linear approximation segments to make it solvable. We formally prove that our algorithms can find all Pareto-optimal points in a finite number of iterations. From simulation results, we identify the tradeoff points of the optimal AoI and throughput, demonstrating that one performance metric improves at the expense of degrading the other, with the routing path found as one of the key factors in determining such a tradeoff.

Journal ArticleDOI
TL;DR: A two-stage resource allocation optimization algorithm is developed by analyzing the interdependence of various resources to achieve the trade-off between energy efficiency and network capacity maximization under the constraints of link interference and load balance.