
Showing papers on "Resource management" published in 2019


Journal ArticleDOI
TL;DR: In this article, a decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning is proposed, which can be applied to both unicast and broadcast scenarios.
Abstract: In this paper, we develop a novel decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning, which can be applied to both unicast and broadcast scenarios. According to the decentralized resource allocation mechanism, an autonomous “agent,” a V2V link or a vehicle, makes its decisions to find the optimal sub-band and power level for transmission without requiring or having to wait for global information. Since the proposed method is decentralized, it incurs only limited transmission overhead. From the simulation results, each agent can effectively learn to satisfy the stringent latency constraints on V2V links while minimizing the interference to vehicle-to-infrastructure communications.
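The sketch below illustrates, under simplifying assumptions, the kind of per-agent decision rule described above: a Q-network maps a local observation to a joint (sub-band, power level) action. The observation contents, network size, and epsilon-greedy exploration are hypothetical stand-ins, not the authors' exact design.

```python
# Sketch of a decentralized V2V agent picking a (sub-band, power level) action
# from its local observation. Sizes and observation contents are assumptions.
import random
import torch
import torch.nn as nn

N_SUBBANDS, N_POWER_LEVELS = 4, 3           # hypothetical action space
N_ACTIONS = N_SUBBANDS * N_POWER_LEVELS
OBS_DIM = 10                                # local CSI, interference, latency budget, ...

q_net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))

def select_action(local_obs, epsilon=0.1):
    """Epsilon-greedy choice of a joint sub-band / power-level action."""
    if random.random() < epsilon:
        a = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            a = int(q_net(torch.as_tensor(local_obs, dtype=torch.float32)).argmax())
    return divmod(a, N_POWER_LEVELS)        # (sub-band index, power-level index)

# Example: one decision from a dummy local observation
subband, power = select_action([0.0] * OBS_DIM)
```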

438 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey on the literature involving machine learning algorithms applied to SDN, from the perspective of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security.
Abstract: In recent years, with the rapid development of current Internet and mobile communication technologies, the infrastructure, devices and resources in networking systems are becoming more complex and heterogeneous. In order to efficiently organize, manage, maintain and optimize networking systems, more intelligence needs to be deployed. However, due to the inherently distributed nature of traditional networks, machine learning techniques are difficult to apply and deploy for controlling and operating networks. Software-defined networking (SDN) brings new opportunities to provide intelligence inside networks. The capabilities of SDN (e.g., logically centralized control, global view of the network, software-based traffic analysis, and dynamic updating of forwarding rules) make it easier to apply machine learning techniques. In this paper, we provide a comprehensive survey on the literature involving machine learning algorithms applied to SDN. First, the related works and background knowledge are introduced. Then, we present an overview of machine learning algorithms. In addition, we review how machine learning algorithms are applied in the realm of SDN, from the perspective of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security. Finally, challenges and broader perspectives are discussed.

436 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the recent advances of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in network layer, and localization in the application layer.
Abstract: As a key technique for enabling artificial intelligence, machine learning (ML) is capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks like image recognition, both industry and the research community have advocated the applications of ML in wireless communication. This paper comprehensively surveys the recent advances of the applications of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer. The applications in resource management further include power control, spectrum management, backhaul management, cache management, and beamformer design and computation resource management, while ML-based networking focuses on the applications in clustering, base station switching control, user association, and routing. Moreover, the literature on each aspect is organized according to the adopted ML techniques. In addition, several conditions for applying ML to wireless communication are identified to help readers decide whether to use ML and which kind of ML techniques to use. Traditional approaches are also summarized, together with a performance comparison with ML-based approaches, based on which the motivations of the surveyed works to adopt ML are clarified. Given the extensiveness of the research area, challenges and unresolved issues are presented to facilitate future studies. Specifically, ML-based network slicing, infrastructure update to support ML-based paradigms, open data sets and platforms for researchers, theoretical guidance for ML implementation, and so on are discussed.

330 citations


Journal ArticleDOI
TL;DR: This paper investigates the spectrum sharing problem in vehicular networks based on multi-agent reinforcement learning and demonstrates that with a proper reward design and training mechanism, the multiple V2V agents successfully learn to cooperate in a distributed way to simultaneously improve the sum capacity of V2I links and payload delivery rate of V2V links.
Abstract: This paper investigates the spectrum sharing problem in vehicular networks based on multi-agent reinforcement learning, where multiple vehicle-to-vehicle (V2V) links reuse the frequency spectrum preoccupied by vehicle-to-infrastructure (V2I) links. Fast channel variations in high mobility vehicular environments preclude the possibility of collecting accurate instantaneous channel state information at the base station for centralized resource management. In response, we model the resource sharing as a multi-agent reinforcement learning problem, which is then solved using a fingerprint-based deep Q-network method that is amenable to a distributed implementation. The V2V links, each acting as an agent, collectively interact with the communication environment, receive distinctive observations yet a common reward, and learn to improve spectrum and power allocation through updating Q-networks using the gained experiences. We demonstrate that with a proper reward design and training mechanism, the multiple V2V agents successfully learn to cooperate in a distributed way to simultaneously improve the sum capacity of V2I links and payload delivery rate of V2V links.
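A minimal sketch of the fingerprint idea commonly used to stabilize multi-agent deep Q-learning, which the fingerprint-based method above builds on: each V2V agent appends a low-dimensional fingerprint of the non-stationary training process (here the episode index and exploration rate) to its local observation before feeding it to its Q-network. The specific observation fields are illustrative assumptions.

```python
# Sketch: building a per-agent Q-network input with a training "fingerprint".
import numpy as np

def build_agent_input(local_csi, interference, remaining_payload,
                      time_left, episode, epsilon, total_episodes):
    fingerprint = [episode / total_episodes, epsilon]
    return np.concatenate([np.ravel(local_csi),
                           np.ravel(interference),
                           [remaining_payload, time_left],
                           fingerprint]).astype(np.float32)

# Example: the input one V2V agent would feed to its Q-network
x = build_agent_input(local_csi=np.random.rand(4), interference=np.random.rand(4),
                      remaining_payload=0.6, time_left=0.3,
                      episode=100, epsilon=0.05, total_episodes=3000)
```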

315 citations


Journal ArticleDOI
TL;DR: This paper investigates the resource allocation algorithm design for multicarrier solar-powered unmanned aerial vehicle (UAV) communication systems and proposes a low-complexity iterative suboptimal online scheme based on the successive convex approximation.
Abstract: In this paper, we investigate the resource allocation algorithm design for multicarrier solar-powered unmanned aerial vehicle (UAV) communication systems. In particular, the UAV is powered by the solar energy enabling sustainable communication services to multiple ground users. We study the joint design of the 3D aerial trajectory and the wireless resource allocation for maximization of the system sum throughput over a given time period. As a performance benchmark, we first consider an off-line resource allocation design assuming non-causal knowledge of the channel gains. The algorithm design is formulated as a mixed-integer non-convex optimization problem taking into account the aerodynamic power consumption, solar energy harvesting, a finite energy storage capacity, and the quality-of-service requirements of the users. Despite the non-convexity of the optimization problem, we solve it optimally by applying monotonic optimization to obtain the optimal 3D-trajectory and the optimal power and subcarrier allocation policy. Subsequently, we focus on the online algorithm design that only requires real-time and statistical knowledge of the channel gains. The optimal online resource allocation algorithm is motivated by the off-line scheme and entails a high computational complexity. Hence, we also propose a low-complexity iterative suboptimal online scheme based on the successive convex approximation. Our simulation results reveal that both the proposed online schemes closely approach the performance of the benchmark off-line scheme and substantially outperform two baseline schemes. Furthermore, our results unveil the tradeoff between solar energy harvesting and power-efficient communication. In particular, the solar-powered UAV first climbs up to a high altitude to harvest a sufficient amount of solar energy and then descends again to a lower altitude to reduce the path loss of the communication links to the users it serves.
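The snippet below sketches the successive convex approximation idea behind the proposed low-complexity online scheme, on a toy two-link power control problem rather than the paper's joint trajectory and resource allocation design: the non-concave sum rate is lower-bounded by linearizing the interference term around the previous iterate, and the convexified problem is solved repeatedly. All gains and bounds are made-up illustration values.

```python
# Successive convex approximation (SCA) on a toy two-link power control problem.
import numpy as np
from scipy.optimize import minimize

h = np.array([1.0, 0.8])      # direct channel gains (hypothetical)
g = np.array([0.3, 0.2])      # cross-interference gains (hypothetical)
p_max = 1.0                   # per-link power budget

def rate(p):
    """True (non-concave) sum rate: sum of log2(1 + S_k / (1 + I_k))."""
    s = h * p
    i = np.array([g[1] * p[1], g[0] * p[0]])
    return np.sum(np.log2(1.0 + s / (1.0 + i)))

def surrogate_neg(p, p_ref):
    """Negative of a concave lower bound, obtained by linearizing log2(1+I)
    around the previous iterate p_ref (standard SCA step)."""
    s = h * p
    i = np.array([g[1] * p[1], g[0] * p[0]])
    i_ref = np.array([g[1] * p_ref[1], g[0] * p_ref[0]])
    # log2(1+I) <= log2(1+I_ref) + (I - I_ref) / ((1+I_ref) ln 2)
    lin = np.log2(1.0 + i_ref) + (i - i_ref) / ((1.0 + i_ref) * np.log(2.0))
    return -np.sum(np.log2(1.0 + i + s) - lin)

p = np.full(2, 0.5 * p_max)   # feasible starting point
for _ in range(20):
    res = minimize(surrogate_neg, p, args=(p.copy(),), bounds=[(0.0, p_max)] * 2)
    p = res.x
print("SCA power allocation:", p, "sum rate:", rate(p))
```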

273 citations


Journal ArticleDOI
TL;DR: A deep recurrent neural network-based algorithm is proposed to solve the energy efficient resource allocation (RA) problem for the NOMA-based heterogeneous IoT with fast convergence and low computational complexity.
Abstract: The Internet of Things (IoT) has attracted significant attention in fifth-generation mobile networks and smart cities. However, considering the large number of connectivity demands, it is vital to improve the spectrum efficiency (SE) of the IoT with affordable power consumption. To improve the SE, non-orthogonal multiple access (NOMA) technology has been proposed to accommodate multiple users in the same spectrum. Accordingly, in this paper, an energy-efficient resource allocation (RA) problem is introduced for the NOMA-based heterogeneous IoT. First, we assume that successive interference cancellation (SIC) is imperfect in practical implementations. Then, based on analysis methods for cognitive radio networks, we present a stepwise RA scheme for the mobile users and the IoT users with mutual interference management. Third, we propose a deep recurrent neural network-based algorithm to solve the problem optimally and rapidly. Moreover, a user scheduling method based on priorities and rate demands is added to coordinate the access of the heterogeneous users with the limited radio resources. Finally, the simulation results verify that the deep learning-based scheme is able to provide optimal RA results for the NOMA heterogeneous IoT with fast convergence and low computational complexity. Compared with the conventional orthogonal frequency division multiple access system, the NOMA system with imperfect SIC yields better performance in terms of SE and scale of connectivity, at the cost of higher power consumption and lower energy efficiency.
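A minimal sketch of a recurrent-network allocator in the spirit of the algorithm above, assuming hypothetical dimensions: a GRU reads a short window of channel observations and outputs per-user power fractions. It is not the authors' trained architecture.

```python
# Sketch: a GRU-based allocator mapping an observation window to power fractions.
import torch
import torch.nn as nn

N_USERS, OBS_DIM, HIDDEN = 6, 8, 32          # illustrative sizes

class RecurrentAllocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=OBS_DIM, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_USERS)

    def forward(self, obs_seq):                      # (batch, time, OBS_DIM)
        out, _ = self.gru(obs_seq)
        logits = self.head(out[:, -1, :])            # use the last time step
        return torch.softmax(logits, dim=-1)         # power fractions, sum to 1

alloc = RecurrentAllocator()
power_fractions = alloc(torch.randn(1, 5, OBS_DIM))  # one 5-step observation window
```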

236 citations


Journal ArticleDOI
TL;DR: This paper considers a cognitive vehicular network that uses the TVWS band, formulates a dual-side optimization problem to minimize the cost of VTs and that of the MEC server at the same time, and designs an algorithm called DDORV to tackle the joint optimization problem.
Abstract: The proliferation of smart vehicular terminals (VTs) and their resource hungry applications impose serious challenges to the processing capabilities of VTs and the delivery of vehicular services. Mobile Edge Computing (MEC) offers a promising paradigm to solve this problem by offloading VT applications to proximal MEC servers, while TV white space (TVWS) bands can be used to supplement the bandwidth for computation offloading. In this paper, we consider a cognitive vehicular network that uses the TVWS band, and formulate a dual-side optimization problem, to minimize the cost of VTs and that of the MEC server at the same time. Specifically, the dual-side cost minimization is achieved by jointly optimizing the offloading decision and local CPU frequency on the VT side, and the radio resource allocation and server provisioning on the server side, while guaranteeing network stability. Based on Lyapunov optimization, we design an algorithm called DDORV to tackle the joint optimization problem, where only current system states, such as channel states and traffic arrivals, are needed. The closed-form solution to the VT-side problem is obtained easily by derivation and comparing two values. For MEC server side optimization, we first obtain server provisioning independently, and then devise a continuous relaxation and Lagrangian dual decomposition based iterative algorithm for joint radio resource and power allocation. Simulation results demonstrate that DDORV converges fast, can balance the cost-delay tradeoff flexibly, and can obtain more performance gains in cost reduction as compared with existing schemes.
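The sketch below illustrates the per-slot, drift-plus-penalty style decision hinted at by the abstract ("comparing two values" using only current states): given the current queue backlog, the VT compares the Lyapunov-style value of executing locally versus offloading. The cost and service models are toy assumptions, not the DDORV formulas.

```python
# Sketch: per-slot drift-plus-penalty comparison between local execution and offloading.
def vt_decision(queue_backlog, arrival_bits, local_energy, offload_energy,
                local_service_bits, offload_service_bits, V=50.0):
    """Return 'local' or 'offload' by comparing two drift-plus-penalty values."""
    def dpp(energy, served_bits):
        next_q = max(queue_backlog + arrival_bits - served_bits, 0.0)
        return V * energy + queue_backlog * (next_q - queue_backlog)
    return ("local"
            if dpp(local_energy, local_service_bits) <= dpp(offload_energy, offload_service_bits)
            else "offload")

# Example slot: a large backlog pushes the decision toward the option serving more bits
choice = vt_decision(queue_backlog=8e6, arrival_bits=1e6,
                     local_energy=0.8, offload_energy=0.5,
                     local_service_bits=5e5, offload_service_bits=2e6)
```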

231 citations


Journal ArticleDOI
TL;DR: DNNs are trained here with a model-free primal-dual method that simultaneously learns a DNN parameterization of the resource allocation policy and optimizes the primal and dual variables.
Abstract: This paper considers the design of optimal resource allocation policies in wireless communication systems, which are generically modeled as a functional optimization problem with stochastic constraints. These optimization problems have the structure of a learning problem in which the statistical loss appears as a constraint, motivating the development of learning methodologies to attempt their solution. To handle stochastic constraints, training is undertaken in the dual domain. It is shown that this can be done with small loss of optimality when using near-universal learning parameterizations. In particular, since deep neural networks (DNNs) are near universal, their use is advocated and explored. DNNs are trained here with a model-free primal-dual method that simultaneously learns a DNN parameterization of the resource allocation policy and optimizes the primal and dual variables. Numerical simulations demonstrate the strong performance of the proposed approach on a number of common wireless resource allocation problems.
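A minimal sketch of the primal-dual training loop described above, under illustrative assumptions: an interference-free rate utility stands in for the true (model-free) objective, and a single average-power constraint plays the role of the stochastic constraint. The DNN parameters are updated on the Lagrangian while the dual variable is updated on the constraint slack.

```python
# Sketch: primal-dual learning of a resource allocation policy with one constraint.
import torch
import torch.nn as nn

n_links, p_avg_max = 4, 1.0
policy = nn.Sequential(nn.Linear(n_links, 32), nn.ReLU(),
                       nn.Linear(32, n_links), nn.Sigmoid())  # powers in [0, 1]
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lam = torch.tensor(0.0)       # dual variable for the average-power constraint
dual_lr = 0.01

for step in range(2000):
    h = torch.rand(64, n_links)                     # sampled channel states
    p = policy(h)
    utility = torch.log2(1.0 + h * p).sum(dim=1)    # toy interference-free rate
    constraint = p.sum(dim=1).mean() - p_avg_max    # E[sum power] <= p_avg_max
    lagrangian = utility.mean() - lam * constraint
    opt.zero_grad()
    (-lagrangian).backward()                        # primal step on the Lagrangian
    opt.step()
    lam = torch.clamp(lam + dual_lr * constraint.detach(), min=0.0)  # dual step
```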

223 citations


Journal ArticleDOI
TL;DR: A profound view of IoT and NBIoT is presented, subsuming their technical features, resource allocation, and energy-efficiency techniques and applications, and two novel energy-efficient techniques "zonal thermal pattern analysis" and "energy-efficient adaptive health monitoring system" have been proposed towards green IoT.
Abstract: The advancement of technologies over the years has positioned the Internet of Things (IoT) to tap previously unexploited information and communication technology opportunities. It is anticipated that IoT will handle a gigantic network of billions of devices to deliver plenty of smart services to users. Undoubtedly, this will make our lives more convenient, but at the cost of high energy consumption and a large carbon footprint. Consequently, there is a high demand for green communication to reduce energy consumption, which requires optimal resource availability and controlled power levels. In contrast, IoT devices are constrained in terms of resources: memory, power, and computation. Low-power wide-area (LPWA) technology responds to the need for efficient use of the power resource, as it can provide low-power connectivity to a huge number of devices spread over wide geographical areas at low cost. Various LPWA technologies, such as LoRa and SigFox, exist in the market, offering proficient solutions to users. However, to avoid the new infrastructure (such as base stations) required by proprietary technologies, a new cellular-based licensed technology, narrowband IoT (NBIoT), was introduced by 3GPP in Rel-13. This technology is a good candidate for the LPWA market because of characteristics such as enhanced indoor coverage, low power consumption, latency insensitivity, and massive connection support. This survey presents an in-depth view of IoT and NBIoT, covering their technical features, resource allocation, energy-efficiency techniques, and applications. The challenges that hinder NBIoT's path to success are also identified and discussed. In this paper, two novel energy-efficient techniques, “zonal thermal pattern analysis” and an energy-efficient adaptive health monitoring system, are proposed towards green IoT.

214 citations


Journal ArticleDOI
TL;DR: This article focuses on deep reinforcement learning (DRL)-based approaches that allow network entities to learn and build knowledge about the networks and thus make optimal decisions locally and independently and presents an application of DRL for 5G network slicing optimization.
Abstract: Future-generation wireless networks (5G and beyond) must accommodate surging growth in mobile data traffic and support an increasingly high density of mobile users involving a variety of services and applications. Meanwhile, the networks become increasingly dense, heterogeneous, decentralized, and ad hoc in nature, and they encompass numerous and diverse network entities. Consequently, different objectives, such as high throughput and low latency, need to be achieved in terms of service, and resource allocation must be designed and optimized accordingly. However, considering the dynamics and uncertainty that inherently exist in wireless network environments, conventional approaches for service and resource management that require complete and perfect knowledge of the systems are inefficient or even inapplicable. Inspired by the success of machine learning in solving complicated control and decision-making problems, in this article we focus on deep reinforcement learning (DRL)-based approaches that allow network entities to learn and build knowledge about the networks and thus make optimal decisions locally and independently. We first overview fundamental concepts of DRL and then review related works that use DRL to address various issues in 5G networks. Finally, we present an application of DRL for 5G network slicing optimization. The numerical results demonstrate that the proposed approach achieves superior performance compared with baseline solutions.

200 citations


Journal ArticleDOI
TL;DR: This correspondence considers non-orthogonal multiple access (NOMA) assisted mobile edge computing (MEC), where the power and time allocation is jointly optimized to reduce the energy consumption of computation offloading.
Abstract: This correspondence considers non-orthogonal multiple access (NOMA) assisted mobile edge computing (MEC), where the power and time allocation is jointly optimized to reduce the energy consumption of computation offloading. Closed-form expressions for the optimal power and time allocation solutions are obtained and used to establish the conditions for determining whether the conventional orthogonal multiple access (OMA), pure NOMA or hybrid NOMA should be used for MEC offloading.

Journal ArticleDOI
TL;DR: A deep reinforcement learning (DRL)-based joint mode selection and resource management approach is proposed, aiming at minimizing long-term system power consumption under the dynamics of edge cache states and transfer learning is integrated with DRL to accelerate learning process.
Abstract: Fog radio access networks (F-RANs) are seen as potential architectures to support Internet of Things services by leveraging edge caching and edge computing. However, current works studying resource management in F-RANs mainly consider a static system with only one communication mode. Given network dynamics, resource diversity, and the coupling of resource management with mode selection, resource management in F-RANs becomes very challenging. Motivated by the recent development of artificial intelligence, a deep reinforcement learning (DRL)-based joint mode selection and resource management approach is proposed. Each user equipment (UE) can operate either in cloud RAN (C-RAN) mode or in device-to-device mode, and the resources managed include both radio and computing resources. The core idea is that the network controller makes intelligent decisions on UE communication modes and processors’ on–off states, with precoding for UEs in C-RAN mode optimized subsequently, aiming at minimizing long-term system power consumption under the dynamics of edge cache states. Simulations demonstrate the impacts of several parameters, such as the learning rate and edge caching service capability, on system performance, and the proposal is compared with other schemes to show its effectiveness. Moreover, transfer learning is integrated with DRL to accelerate the learning process.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed publications as early as 1991, with 85% of the publications between 2013 and 2018, to identify and classify the architectures, infrastructure, and underlying algorithms for managing resources in fog/edge computing.
Abstract: Contrary to using distant and centralized cloud data center resources, employing decentralized resources at the edge of a network for processing data closer to user devices, such as smartphones and tablets, is an upcoming computing paradigm, referred to as fog/edge computing. Fog/edge resources are typically resource-constrained, heterogeneous, and dynamic compared to the cloud, thereby making resource management an important challenge that needs to be addressed. This article reviews publications as early as 1991, with 85% of the publications between 2013 and 2018, to identify and classify the architectures, infrastructure, and underlying algorithms for managing resources in fog/edge computing.

Journal ArticleDOI
26 Nov 2019-Sensors
TL;DR: This article provides a detailed survey of all relevant research works, in which ML techniques have been used on UAV-based communications for improving various design and functional aspects such as channel modeling, resource management, positioning, and security.
Abstract: Unmanned aerial vehicles (UAVs) will be an integral part of the next generation wireless communication networks. Their adoption in various communication-based applications is expected to improve coverage and spectral efficiency, as compared to traditional ground-based solutions. However, this new degree of freedom that will be included in the network will also add new challenges. In this context, the machine-learning (ML) framework is expected to provide solutions for the various problems that have already been identified when UAVs are used for communication purposes. In this article, we provide a detailed survey of all relevant research works, in which ML techniques have been used on UAV-based communications for improving various design and functional aspects such as channel modeling, resource management, positioning, and security.

Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this paper, various enhancements of the resource allocation schemes used in Device-to-Device (D2D) communications are examined, and their merits and demerits are compared to identify areas of improvement for future resource allocation schemes based on autonomous direct D2D communication.
Abstract: For wireless devices, sharing information directly from one device to another is a significant task, and discovering the right devices and connecting them is critical. In data communication networks, D2D represents a direct connection between two nodes that does not depend on the base station (BS). In this type of D2D, communication links are non-transparent and can operate in-band or out-of-band of the spectrum. Data communication via the BS can reduce the data rate while providing proper resource utilization, including voice and text services. Direct communication between two nodes may not follow the resource strategy and can consume more power. In this paper, various enhancements of the resource allocation schemes used in Device-to-Device (D2D) communications are examined. These resource allocation schemes are compared in terms of their merits and demerits to identify areas of improvement for future resource allocation schemes based on autonomous direct D2D communication.

Journal ArticleDOI
Rui Dong, Changyang She, Wibowo Hardjawana, Yonghui Li, Branka Vucetic
TL;DR: In this paper, the authors proposed a deep learning (DL) architecture, where a digital twin of the real network environment is used to train the DL algorithm off-line at a central server.
Abstract: In this paper, we consider a mobile edge computing system with both ultra-reliable and low-latency communications services and delay tolerant services. We aim to minimize the normalized energy consumption, defined as the energy consumption per bit, by optimizing user association, resource allocation, and offloading probabilities subject to the quality-of-service requirements. The user association is managed by the mobility management entity (MME), while resource allocation and offloading probabilities are determined by each access point (AP). We propose a deep learning (DL) architecture, where a digital twin of the real network environment is used to train the DL algorithm off-line at a central server. From the pre-trained deep neural network (DNN), the MME can obtain the user association scheme in a real-time manner. Considering that real networks are not static, the digital twin monitors the variation of real networks and updates the DNN accordingly. For a given user association scheme, we propose an optimization algorithm to find the optimal resource allocation and offloading probabilities at each AP. The simulation results show that our method can achieve lower normalized energy consumption with less computational complexity than an existing method, and approaches the performance of the globally optimal solution.
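A minimal sketch of the offline-training idea described above, with a toy simulator standing in for the digital twin: the twin labels the best access point for synthetic network states, a DNN is trained on those pairs offline, and the trained network is then queried in real time. The state features, labelling rule, and sizes are illustrative assumptions.

```python
# Sketch: train a user-association DNN offline on data from a toy "digital twin".
import torch
import torch.nn as nn

N_APS, N_FEATURES = 3, 2 * 3          # per-AP SNR and load as the toy state

def twin_sample(batch):
    """Toy twin: label = AP with the best SNR-minus-load score."""
    snr, load = torch.rand(batch, N_APS), torch.rand(batch, N_APS)
    state = torch.cat([snr, load], dim=1)
    label = (snr - load).argmax(dim=1)
    return state, label

net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_APS))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):                       # offline training on twin data
    x, y = twin_sample(128)
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

# Real-time use at the MME: one forward pass per arriving user state
association = int(net(twin_sample(1)[0]).argmax())
```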

Journal ArticleDOI
TL;DR: This work focuses on the trading between the cloud/fog computing service provider and miners, and proposes an auction-based market model for efficient computing resource allocation, and designs an approximate algorithm which guarantees the truthfulness, individual rationality and computational efficiency.
Abstract: As an emerging decentralized secure data management platform, blockchain has gained much popularity recently. To maintain a canonical state of blockchain data record, proof-of-work based consensus protocols provide the nodes, referred to as miners, in the network with incentives for confirming new block of transactions through a process of “block mining” by solving a cryptographic puzzle. Under the circumstance of limited local computing resources, e.g., mobile devices, it is natural for rational miners, i.e., consensus nodes, to offload computational tasks for proof of work to the cloud/fog computing servers. Therefore, we focus on the trading between the cloud/fog computing service provider and miners, and propose an auction-based market model for efficient computing resource allocation. In particular, we consider a proof-of-work based blockchain network, which is constrained by the computing resource and deployed as an infrastructure for decentralized data management applications. Due to the competition among miners in the blockchain network, the allocative externalities are particularly taken into account when designing the auction mechanisms. Specifically, we consider two bidding schemes: the constant-demand scheme where each miner bids for a fixed quantity of resources, and the multi-demand scheme where the miners can submit their preferable demands and bids. For the constant-demand bidding scheme, we propose an auction mechanism that achieves optimal social welfare. In the multi-demand bidding scheme, the social welfare maximization problem is NP-hard. Therefore, we design an approximate algorithm which guarantees the truthfulness, individual rationality and computational efficiency. Through extensive simulations, we show that our proposed auction mechanisms with the two bidding schemes can efficiently maximize the social welfare of the blockchain network and provide effective strategies for the cloud/fog computing service provider.
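For the constant-demand case, the sketch below shows a simplified allocation-and-pricing flow: with k identical units and unit-demand bidders, the top-k bidders win and pay the (k+1)-th bid, a uniform critical-value price that keeps unit-demand bidding truthful. This is a stand-in for intuition, not the paper's social-welfare-optimal mechanism with allocative externalities.

```python
# Sketch: (k+1)-th price allocation for constant-demand (one unit per miner) bids.
def constant_demand_auction(bids, k):
    """bids: {miner_id: bid}; k: identical units the provider can serve."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [m for m, _ in ranked[:k]]
    price = ranked[k][1] if len(ranked) > k else 0.0   # critical (k+1)-th bid
    return winners, price

winners, price = constant_demand_auction({"m1": 9.0, "m2": 4.0, "m3": 7.5, "m4": 2.0}, k=2)
# winners == ["m1", "m3"], each pays 4.0
```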

Proceedings ArticleDOI
04 Apr 2019
TL;DR: This work presents PARTIES, a QoS-aware resource manager that enables an arbitrary number of interactive, latency-critical services to share a physical node without QoS violations, and shows that PARTIES improves throughput under QoS by 61% on average, compared to existing resource managers.
Abstract: Multi-tenancy in modern datacenters is currently limited to a single latency-critical, interactive service, running alongside one or more low-priority, best-effort jobs. This limits the efficiency gains from multi-tenancy, especially as an increasing number of cloud applications are shifting from batch jobs to services with strict latency requirements. We present PARTIES, a QoS-aware resource manager that enables an arbitrary number of interactive, latency-critical services to share a physical node without QoS violations. PARTIES leverages a set of hardware and software resource partitioning mechanisms to adjust allocations dynamically at runtime, in a way that meets the QoS requirements of each co-scheduled workload, and maximizes throughput for the machine. We evaluate PARTIES on state-of-the-art server platforms across a set of diverse interactive services. Our results show that PARTIES improves throughput under QoS by 61% on average, compared to existing resource managers, and that the rate of improvement increases with the number of co-scheduled applications per physical host.
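A minimal sketch of the feedback loop behind a PARTIES-style manager, assuming hypothetical monitoring and actuation hooks (measure_tail_latency, apply_core_allocation): each epoch, cores are shifted from services with latency slack to services violating their QoS target. The real system also partitions cache, memory capacity and bandwidth, and other resources.

```python
# Sketch: per-epoch core rebalancing driven by tail-latency feedback.
def rebalance_cores(services, allocations, qos_targets,
                    measure_tail_latency, apply_core_allocation,
                    slack_margin=0.8, min_cores=1):
    latencies = {s: measure_tail_latency(s) for s in services}   # hypothetical hook
    violators = [s for s in services if latencies[s] > qos_targets[s]]
    donors = [s for s in services
              if latencies[s] < slack_margin * qos_targets[s]
              and allocations[s] > min_cores]
    for v, d in zip(violators, donors):      # shift one core per pair this epoch
        allocations[d] -= 1
        allocations[v] += 1
    apply_core_allocation(allocations)       # hypothetical actuation hook
    return allocations
```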

Journal ArticleDOI
TL;DR: This paper proves that a global optimal solution can be found in a convex subset of the original feasible region for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short.
Abstract: In this paper, we aim to find the global optimal resource allocation for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short. The achievable rate in the short blocklength regime is neither convex nor concave in bandwidth and transmit power. Thus, a non-convex constraint is inevitable in optimizing resource allocation for URLLC. We first consider a general resource allocation problem with constraints on the transmission delay and decoding error probability, and prove that a global optimal solution can be found in a convex subset of the original feasible region. Then, we illustrate how to find the global optimal solution for an example problem, where the energy efficiency (EE) is maximized by optimizing antenna configuration, bandwidth allocation, and power control under the latency and reliability constraints. To improve the battery life of devices and EE of communication systems, both uplink and downlink resources are optimized. The simulation and numerical results validate the analysis and show that the circuit power dominates the total power consumption when the average inter-arrival time between packets is much larger than the required delay bound. Therefore, optimizing antenna configuration and bandwidth allocation without power control leads to minor EE loss.
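For reference, the short-blocklength achievable rate that makes this resource allocation non-convex is commonly evaluated with the normal approximation R(n, eps) ~ C - sqrt(V/n) * Q^{-1}(eps); the helper below computes it for an AWGN channel. This reproduces the standard approximation from the finite-blocklength literature, not the paper's specific optimization model.

```python
# Normal approximation of the achievable rate in the short-blocklength regime.
import numpy as np
from scipy.stats import norm

def achievable_rate(snr, blocklength, error_prob):
    """Approximate rate in bits per channel use for a given SNR, n and epsilon."""
    capacity = np.log2(1.0 + snr)
    dispersion = (1.0 - 1.0 / (1.0 + snr) ** 2) * (np.log2(np.e) ** 2)
    return capacity - np.sqrt(dispersion / blocklength) * norm.isf(error_prob)

# Example: 100-symbol blocklength, 1e-5 decoding error probability, SNR = 10 (linear)
r = achievable_rate(snr=10.0, blocklength=100, error_prob=1e-5)
```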

Journal ArticleDOI
TL;DR: The proposed intelligent resource allocation framework (iRAF) is a multitask deep reinforcement learning algorithm for making resource allocation decisions based on network states and task characteristics, such as the computing capability of edge servers and devices, communication channel quality, resource utilization, and latency requirement of the services, etc.
Abstract: Recently, with the development of artificial intelligence (AI), data-driven AI methods have shown impressive performance in solving complex problems to support the Internet of Things (IoT) world with massive resource-consuming and delay-sensitive services. In this paper, we propose an intelligent resource allocation framework (iRAF) to solve the complex resource allocation problem for the collaborative mobile edge computing (CoMEC) network. The core of iRAF is a multitask deep reinforcement learning algorithm for making resource allocation decisions based on network states and task characteristics, such as the computing capability of edge servers and devices, communication channel quality, resource utilization, and latency requirement of the services, etc. The proposed iRAF can automatically learn the network environment and generate resource allocation decisions to maximize performance in terms of latency and power consumption with self-play training. iRAF becomes its own teacher: a deep neural network (DNN) is trained to predict iRAF’s resource allocation action in a self-supervised learning manner, where the training data is generated from the searching process of the Monte Carlo tree search (MCTS) algorithm. A major advantage of MCTS is that it simulates trajectories into the future, starting from a root state, to obtain the best action by evaluating the reward value. Numerical results show that our proposed iRAF achieves 59.27% and 51.71% improvement on service latency performance compared with the greedy-search and the deep Q-learning-based methods, respectively.

Proceedings ArticleDOI
24 Jun 2019
TL;DR: A straightforward way to improve resource utilization is to co-locate different workloads on the same hardware; to assess the resulting resource efficiency and understand the key characteristics of workloads in a co-located cluster, an 8-day trace from Alibaba's production cluster is analyzed.
Abstract: Cloud platforms provide great flexibility and cost-efficiency for end-users and cloud operators. However, low resource utilization in modern datacenters brings huge wastes of hardware resources and infrastructure investment. To improve resource utilization, a straightforward way is to co-locate different workloads on the same hardware. To assess the resource efficiency and understand the key characteristics of workloads in a co-located cluster, we analyze an 8-day trace from Alibaba's production cluster. We reveal three key findings as follows. First, memory becomes the new bottleneck and limits the resource efficiency in Alibaba's datacenter. Second, in order to protect latency-critical applications, batch-processing applications are treated as second-class citizens and restricted to limited resources. Third, more than 90% of latency-critical applications are written in Java. Massive self-contained JVMs further complicate resource management and limit the resource efficiency in datacenters.
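The snippet below sketches the kind of utilization analysis described above on a small, made-up stand-in for the trace (the real Alibaba trace has its own schema and columns): average CPU versus memory utilization per machine, and how often memory rather than CPU is the binding resource.

```python
# Sketch: per-machine utilization summary on hypothetical co-location samples.
import pandas as pd

usage = pd.DataFrame({
    "machine_id": ["m1", "m1", "m2", "m2", "m3", "m3"],
    "cpu_util":   [0.35, 0.40, 0.55, 0.50, 0.30, 0.45],
    "mem_util":   [0.80, 0.85, 0.60, 0.75, 0.90, 0.88],
})

per_machine = usage.groupby("machine_id")[["cpu_util", "mem_util"]].mean()
print(per_machine)

# How often is memory, rather than CPU, the tighter resource in a sample?
mem_bound_share = (usage["mem_util"] > usage["cpu_util"]).mean()
print("fraction of samples where memory is the binding resource:", mem_bound_share)
```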

Journal ArticleDOI
TL;DR: An average energy efficiency problem in EH-DHNs is formulated, taking into consideration EH time slot allocation, power and spectrum RB allocation for the D2D links, which is a nonconvex problem, and the original problem is transformed into a tractable convex optimization problem.
Abstract: Energy harvesting (EH) from ambient energy sources can potentially reduce the dependence on grid or battery energy, providing many benefits to green communications. In this paper, we investigate the problem of device-to-device (D2D) user equipments (DUEs) multiplexing the downlink spectrum resources of cellular user equipments (CUEs) in EH-based D2D communication heterogeneous networks (EH-DHNs). Our goal is to maximize the average energy efficiency of all D2D links while guaranteeing the quality of service of the CUEs and the EH constraints of the D2D links. The resource allocation problem comprises the EH time slot allocation of the DUEs and the power and spectrum resource block (RB) allocation. To tackle these issues, we formulate an average energy efficiency problem in EH-DHNs, taking into consideration EH time slot allocation and power and spectrum RB allocation for the D2D links, which is a nonconvex problem. Furthermore, we transform the original problem into a tractable convex optimization problem. We propose a joint EH time slot, power, and spectrum RB allocation iterative algorithm based on Dinkelbach and Lagrangian constrained optimization. Numerical results demonstrate that the proposed iterative algorithm achieves higher energy efficiency for different network parameter settings.
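The sketch below illustrates the Dinkelbach iteration that underlies the proposed algorithm, on a single-link toy energy-efficiency problem: maximize log2(1 + h*p) / (p_c + p) over 0 <= p <= p_max. The channel gain, circuit power, and interference-free rate model are illustrative assumptions, not the EH-DHN formulation.

```python
# Sketch: Dinkelbach iteration for a fractional (energy-efficiency) objective.
import numpy as np

h, p_c, p_max = 2.0, 0.1, 1.0     # hypothetical gain, circuit power, power budget

def dinkelbach(tol=1e-6, max_iter=50):
    q = 0.0                                    # current energy-efficiency estimate
    for _ in range(max_iter):
        # Inner problem: maximize log2(1 + h*p) - q * (p_c + p); its stationary
        # point is p = 1/(q ln 2) - 1/h, clipped to the feasible interval.
        p = p_max if q <= 0 else float(np.clip(1.0 / (q * np.log(2.0)) - 1.0 / h,
                                               0.0, p_max))
        f = np.log2(1.0 + h * p) - q * (p_c + p)
        q = np.log2(1.0 + h * p) / (p_c + p)
        if abs(f) < tol:
            break
    return p, q                                # optimal power and energy efficiency

p_star, ee_star = dinkelbach()
```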

Proceedings ArticleDOI
29 Apr 2019
TL;DR: Comparative evaluations with real-world measurement data prove that DeepCog’s tight integration of machine learning into resource orchestration allows for substantial (50% or above) reduction of operating expenses with respect to resource allocation solutions based on state-of-the-art mobile traffic predictors.
Abstract: Network slicing is a new paradigm for future 5G networks where the network infrastructure is divided into slices devoted to different services and customized to their needs. With this paradigm, it is essential to allocate to each slice the needed resources, which requires the ability to forecast their respective demands. To this end, we present DeepCog, a novel data analytics tool for the cognitive management of resources in 5G systems. DeepCog forecasts the capacity needed to accommodate future traffic demands within individual network slices while accounting for the operator’s desired balance between resource overprovisioning (i.e., allocating resources exceeding the demand) and service request violations (i.e., allocating less resources than required). To achieve its objective, DeepCog hinges on a deep learning architecture that is explicitly designed for capacity forecasting. Comparative evaluations with real-world measurement data prove that DeepCog’s tight integration of machine learning into resource orchestration allows for substantial (50% or above) reduction of operating expenses with respect to resource allocation solutions based on state-of-the-art mobile traffic predictors. Moreover, we leverage DeepCog to carry out an extensive first analysis of the trade-off between capacity overdimensioning and unserviced demands in adaptive, sliced networks and in presence of real-world traffic.
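A minimal sketch of the kind of asymmetric training objective implied by the overprovisioning-versus-violation balance described above: forecast shortfalls (unserviced demand) are penalized more heavily than surpluses. The weights and DeepCog's exact cost function are not reproduced here; alpha and beta are illustrative knobs for the operator's desired balance.

```python
# Sketch: asymmetric loss for capacity forecasting (shortfalls cost more than surpluses).
import torch

def capacity_loss(predicted_capacity, demand, alpha=5.0, beta=1.0):
    shortfall = demand - predicted_capacity
    return torch.where(shortfall > 0,
                       alpha * shortfall,            # under-provisioning: costly
                       beta * (-shortfall)).mean()   # over-provisioning: cheaper

# Example: a 10-unit shortfall is penalized five times more than a 10-unit surplus
loss = capacity_loss(torch.tensor([90.0, 110.0]), torch.tensor([100.0, 100.0]))
```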

Journal ArticleDOI
TL;DR: In this paper, an interdisciplinary conceptual approach to manage species coexistence over the long-term is presented, highlighting the importance of including anthropological and geographical knowledge to find sustainable solutions to managing human-elephant conflict.
Abstract: Human-elephant conflict is a major conservation concern in elephant range countries. A variety of management strategies have been developed and are practiced at different scales for preventing and mitigating human-elephant conflict. However, human-elephant conflict remains pervasive as the majority of existing prevention strategies are driven by site-specific factors that only offer short-term solutions, while mitigation strategies frequently transfer conflict risk from one place to another. Here, we review current human-elephant conflict management strategies and describe an interdisciplinary conceptual approach to manage species coexistence over the long-term. Our proposed model identifies shared resource use between humans and elephants at different spatial and temporal scales for development of long-term solutions. The model also highlights the importance of including anthropological and geographical knowledge to find sustainable solutions to managing human-elephant conflict.

Journal ArticleDOI
TL;DR: A distributed algorithm based on the machine learning framework of liquid state machine (LSM) is proposed that enables the UAVs to autonomously choose the optimal resource allocation strategies that maximize the number of users with stable queues depending on the network states.
Abstract: In this paper, the problem of joint caching and resource allocation is investigated for a network of cache-enabled unmanned aerial vehicles (UAVs) that service wireless ground users over the LTE licensed and unlicensed bands. The considered model focuses on users that can access both licensed and unlicensed bands while receiving contents from either the cache units at the UAVs directly or via content server-UAV-user links. This problem is formulated as an optimization problem, which jointly incorporates user association, spectrum allocation, and content caching. To solve this problem, a distributed algorithm based on the machine learning framework of liquid state machine (LSM) is proposed. Using the proposed LSM algorithm, the cloud can predict the users’ content request distribution while having only limited information on the network’s and users’ states. The proposed algorithm also enables the UAVs to autonomously choose the optimal resource allocation strategies that maximize the number of users with stable queues depending on the network states. Based on the users’ association and content request distributions, the optimal contents that need to be cached at UAVs and the optimal resource allocation are derived. Simulation results using real datasets show that the proposed approach yields up to 17.8% and 57.1% gains, respectively, in terms of the number of users that have stable queues compared with two baseline algorithms: Q-learning with cache and Q-learning without cache. The results also show that the LSM significantly improves the convergence time by up to 20% compared with conventional learning algorithms such as Q-learning.

Journal ArticleDOI
TL;DR: The proposed VM consolidation approach uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs, and the experimental results show that it provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing the energy consumption, the number of VM migrations, and the number of SLA violations.
Abstract: Virtual Machine (VM) consolidation provides a promising approach to save energy and improve resource utilization in data centers. Many heuristic algorithms have been proposed to tackle the VM consolidation as a vector bin-packing problem. However, the existing algorithms have focused mostly on the number of active Physical Machines (PMs) minimization according to their current resource requirements and neglected the future resource demands. Therefore, they generate unnecessary VM migrations and increase the rate of Service Level Agreement (SLA) violations in data centers. To address this problem, we propose a VM consolidation approach that takes into account both the current and future utilization of resources. Our approach uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs. We investigate the effectiveness of virtual and physical resource utilization prediction in VM consolidation performance using Google cluster and PlanetLab real workload traces. The experimental results show that our approach provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing the energy consumption, the number of VM migrations, and the number of SLA violations.
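A minimal sketch of the prediction step described above, under illustrative assumptions (window length, threshold, and a plain linear regression): recent CPU samples are used to predict the next interval's utilization, and a PM qualifies as a consolidation target only if both current and predicted utilization stay below the threshold.

```python
# Sketch: regression-based utilization prediction feeding a consolidation check.
import numpy as np
from sklearn.linear_model import LinearRegression

def predict_next_util(history, window=12):
    """Predict the next utilization sample from the last `window` samples."""
    y = np.asarray(history[-window:], dtype=float)
    x = np.arange(len(y)).reshape(-1, 1)
    model = LinearRegression().fit(x, y)
    return float(model.predict([[len(y)]])[0])

def is_consolidation_target(cpu_history, threshold=0.6):
    current = cpu_history[-1]
    predicted = predict_next_util(cpu_history)
    return current < threshold and predicted < threshold

# Example: rising utilization disqualifies the PM even though it is below the
# threshold right now
ok = is_consolidation_target([0.2, 0.25, 0.3, 0.35, 0.4, 0.45,
                              0.5, 0.52, 0.55, 0.57, 0.58, 0.59])
```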

Journal ArticleDOI
TL;DR: This paper proposes a software-defined STN to manage and orchestrate networking, caching, and computing resources jointly, formulates the joint resource allocation problem as a joint optimization problem, and uses a deep Q-learning approach to solve it.
Abstract: With the development of satellite networks, there is an emerging trend to integrate satellite networks with terrestrial networks, called satellite-terrestrial networks (STNs). The improvements of STNs need innovative information and communication technologies, such as networking, caching, and computing. In this paper, we propose a software-defined STN to manage and orchestrate networking, caching, and computing resources jointly. We formulate the joint resource allocation problem as a joint optimization problem, and use a deep Q-learning approach to solve it. Simulation results show the effectiveness of our proposed scheme.

Journal ArticleDOI
TL;DR: Numerical results show that the convergence of an iRSS satisfies online resource scheduling requirements and can significantly improve resource utilization while guaranteeing performance isolation between slices, compared with other benchmark algorithms.
Abstract: It is widely acknowledged that network slicing can tackle the diverse use cases and connectivity services of the forthcoming next-generation mobile networks (5G). Resource scheduling is of vital importance for improving resource-multiplexing gain among slices while meeting specific service requirements for radio access network (RAN) slicing. Unfortunately, due to the performance isolation, diversified service requirements, and network dynamics (including user mobility and channel states), resource scheduling in RAN slicing is very challenging. In this paper, we propose an intelligent resource scheduling strategy (iRSS) for 5G RAN slicing. The main idea of an iRSS is to exploit a collaborative learning framework that consists of deep learning (DL) in conjunction with reinforcement learning (RL). Specifically, DL is used to perform large time-scale resource allocation, whereas RL is used to perform online resource scheduling for tackling small time-scale network dynamics, including inaccurate prediction and unexpected network states. Depending on the amount of available historical traffic data, an iRSS can flexibly adjust the significance between the prediction and online decision modules for assisting RAN in making resource scheduling decisions. Numerical results show that the convergence of an iRSS satisfies online resource scheduling requirements and can significantly improve resource utilization while guaranteeing performance isolation between slices, compared with other benchmark algorithms.


Journal ArticleDOI
TL;DR: This paper provides a survey-style introduction to resource allocation approaches in UDNs and a taxonomy to classify the resource allocation methods in the existing literature.
Abstract: Driven by the explosive data traffic and new quality of service requirements of mobile users, the communication industry has been experiencing a new evolution by means of network infrastructure densification. With the increase of the density as well as the variety of access points (APs), the network benefits from proximal transmissions and increased spatial reuse of system resources, thus introducing a new paradigm named ultra-dense networks (UDNs). Since the limited available resources are shared by ubiquitous APs in UDNs, the demand for efficient resource allocation schemes becomes even more compelling. However, the large scale of UDNs impedes the exploration of effective resource allocation approaches, particularly in terms of computational complexity and signaling overhead or feedback. In this paper, we provide a survey-style introduction to resource allocation approaches in UDNs. Specifically, we first present some common scenarios of UDNs with the relevant special issues. Second, we provide a taxonomy to classify the resource allocation methods in the existing literature. Then, to alleviate the main difficulties of UDNs, some prevailing and feasible solutions are elaborated. Next, we present some emerging technologies that advance UDNs, and discuss their special RA features. Additionally, the challenges and open research directions in this field are outlined.