Showing papers on "Shared resource" published in 2018


Posted Content
TL;DR: In this paper, a comprehensive literature review on applications of deep reinforcement learning in communications and networking is presented, which includes dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation.
Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of deep reinforcement learning from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying deep reinforcement learning.

332 citations


Journal ArticleDOI
TL;DR: This paper develops and investigates a new method for building cloud-based digital twins (CBDT), which can be adapted to the CPCM platform, and introduces a knowledge resource center (KRC) built on a cloud server for information intensive applications.

138 citations


Journal ArticleDOI
TL;DR: This paper investigates the resource allocation problem in device-to-device-based vehicular communications, based on slow fading statistics of channel state information (CSI), to alleviate signaling overhead for reporting rapidly varying accurate CSI of mobile links and proposes a suite of algorithms to address the performance-complexity tradeoffs.
Abstract: This paper investigates the resource allocation problem in device-to-device-based vehicular communications, based on slow fading statistics of channel state information (CSI), to alleviate signaling overhead for reporting rapidly varying accurate CSI of mobile links. We consider the case when each vehicle-to-infrastructure (V2I) link shares spectrum with multiple vehicle-to-vehicle (V2V) links. Leveraging the slow fading statistical CSI of mobile links, we maximize the sum V2I capacity while guaranteeing the reliability of all V2V links. We use graph partitioning tools to divide highly interfering V2V links into different clusters before formulating the spectrum sharing problem as a weighted 3-D matching problem. We propose a suite of algorithms, including a baseline graph-based resource allocation algorithm, a greedy resource allocation algorithm, and a randomized resource allocation algorithm, to address the performance-complexity tradeoffs. We further investigate resource allocation adaption in response to slow fading CSI of all vehicular links and develop a low-complexity randomized algorithm.
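
As a rough illustration of the greedy flavor of the proposed algorithms, the sketch below assigns V2V clusters to V2I spectrum bands by repeatedly picking the feasible pair with the largest capacity gain. The capacity and reliability values are random placeholders rather than the paper's slow-fading CSI model, and the one-to-one matching is a simplification of the full weighted 3-D matching.

# Hedged sketch (not the paper's algorithm verbatim): greedily match V2V clusters
# to V2I spectrum bands. Capacity gains and reliability indicators below are
# random stand-ins for the statistics derived from slow-fading CSI.
import random

random.seed(0)
n_v2i, n_clusters = 4, 4   # toy sizes: four V2I bands and four V2V clusters

# placeholder sum-capacity gain and V2V reliability indicator for each (band, cluster) pair
capacity = {(i, c): random.uniform(1.0, 5.0) for i in range(n_v2i) for c in range(n_clusters)}
reliable = {(i, c): random.random() > 0.2 for i in range(n_v2i) for c in range(n_clusters)}

assignment, used = {}, set()
# Greedy: repeatedly pick the feasible (band, cluster) pair with the largest capacity gain.
for i, c in sorted(capacity, key=capacity.get, reverse=True):
    if i not in assignment and c not in used and reliable[(i, c)]:
        assignment[i] = c
        used.add(c)

print("V2I band -> V2V cluster:", assignment)
print("sum V2I capacity:", sum(capacity[(i, c)] for i, c in assignment.items()))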

131 citations


Journal ArticleDOI
TL;DR: This paper presents an experimental system that can exploit a variety of online QoS aware adaptive task allocation schemes, and three such schemes are designed and compared.
Abstract: The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Thus, meeting the QoS requirements of so many diverse applications in such shared resource environments has become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online QoS aware adaptive task allocation schemes, and three such schemes are designed and compared. These are a measurement driven algorithm that uses reinforcement learning, secondly a “sensible” allocation algorithm that assigns tasks to sub-systems that are observed to provide a lower response time, and then an algorithm that splits the task arrival stream into sub-streams at rates computed from the hosts’ processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogenous and heterogenous hosts having different processing capacities.
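
A minimal sketch of the "sensible" allocation idea described above: each incoming task goes to the host whose observed response time is currently lowest, with a round-robin baseline for comparison. The host capacities, smoothing factor, and exponential response-time model are assumptions made only for this example.

# Hedged illustration of the "sensible" allocation scheme from the abstract:
# tasks are sent to the host with the lowest observed response time.
import itertools, random

random.seed(1)
hosts = {"h1": 1.0, "h2": 0.5, "h3": 0.25}      # relative processing capacities (assumed)
observed = {h: 1.0 for h in hosts}               # running estimate of response time per host

def sensible_pick():
    return min(observed, key=observed.get)

rr = itertools.cycle(hosts)                      # round-robin baseline for comparison

for task in range(30):
    h = sensible_pick()
    response = random.expovariate(hosts[h])      # placeholder "measurement": faster hosts respond sooner
    observed[h] = 0.8 * observed[h] + 0.2 * response   # exponential smoothing (assumed)

print("estimated response times:", {h: round(t, 2) for h, t in observed.items()})
print("next sensible choice:", sensible_pick(), "vs round-robin:", next(rr))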

117 citations


Journal ArticleDOI
TL;DR: This paper proposes a NEtwork Slicing (NES) framework combining: 1) admission control; 2) resource allocation; and 3) user dropping, analyzes its performance in equilibrium, showing that it achieves the same or better utility than static resource partitioning, and bounds the difference between NES and the socially optimal performance.
Abstract: Technologies that enable network slicing are expected to be a key component of next generation mobile networks. Their promise lies in enabling tenants (such as mobile operators and/or services) to reap the cost and performance benefits of sharing resources while retaining the ability to customize their own allocations. When employing dynamic sharing mechanisms, tenants may exhibit strategic behavior, optimizing their choices in response to those of other tenants. This paper analyzes dynamic sharing in network slicing when tenants support inelastic users with minimum rate requirements . We propose a NEtwork Slicing (NES) framework combining: 1) admission control; 2) resource allocation; and 3) user dropping. We model the network slicing system with admitted users as a NES game ; this is a new class of game where the inelastic nature of the traffic may lead to dropping users whose requirements cannot be met. We show that, as long as admission control guarantees that slices can satisfy the rate requirements of all their users, this game possesses a Nash equilibrium. Admission control policies (a conservative and an aggressive one) are considered, along with a resource allocation scheme and a user dropping algorithm, geared at maintaining the system in Nash equilibria. We analyze our NES framework’s performance in equilibrium, showing that it achieves the same or better utility than static resource partitioning, and bound the difference between NES and the socially optimal performance. Simulation results confirm the effectiveness of the proposed approach.
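
The admission control step can be illustrated with a toy check in the spirit of the conservative policy: a new inelastic user is admitted to a tenant's slice only if the slice's share of capacity still covers the minimum rates of all its admitted users. The capacity, shares, and rate requirements below are invented numbers, not taken from the paper.

# Hedged sketch of a conservative admission test: admit a user only if the
# tenant's slice share can still cover every admitted user's minimum rate.
capacity = 100.0                                      # total resource (e.g., Mb/s) at a base station (assumed)
share = {"tenant_A": 0.6, "tenant_B": 0.4}            # static slice shares (assumed)
admitted = {"tenant_A": [20.0, 25.0], "tenant_B": [15.0]}   # min-rate requirements of admitted users

def can_admit(tenant, min_rate):
    return sum(admitted[tenant]) + min_rate <= share[tenant] * capacity

for req in [10.0, 10.0]:
    if can_admit("tenant_A", req):
        admitted["tenant_A"].append(req)

print(admitted)   # tenant_A admits the first 10.0 (45+10 <= 60) but rejects the second (55+10 > 60)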

114 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: The efficiency gap introduced by non-reconfigurable allocation strategies of different kinds of resources, from radio access to the core of the network, is quantified and insights are provided on the achievable efficiency of network slicing architectures, their dimensioning, and their interplay with resource management algorithms.
Abstract: By providing especially tailored instances of a virtual network, network slicing allows for a strong specialization of the offered services on the same shared infrastructure. Network slicing has profound implications on resource management, as it entails an inherent trade-off between: (i) the need for fully dedicated resources to support service customization, and (ii) the dynamic resource sharing among services to increase resource efficiency and cost-effectiveness of the system. In this paper, we provide a first investigation of this trade-off via an empirical study of resource management efficiency in network slicing. Building on substantial measurement data collected in an operational mobile network, (i) we quantify the efficiency gap introduced by non-reconfigurable allocation strategies of different kinds of resources, from radio access to the core of the network, and (ii) we quantify the advantages of their dynamic orchestration at different timescales. Our results provide insights on the achievable efficiency of network slicing architectures, their dimensioning, and their interplay with resource management algorithms.

97 citations


Journal ArticleDOI
TL;DR: This paper proposes an application of DLTs as a mechanism for dynamic deposit pricing, wherein the deposit of digital currency is used to orchestrate access to a network of shared resources.
Abstract: This paper describes how distributed ledger technologies (DLTs) can be used to enforce social contracts and to orchestrate the behavior of agents trying to access a shared resource. The first part of this paper analyzes the advantages and disadvantages of using DLTs architectures to implement certain control systems in an Internet of Things (IoT) setting and then focuses on a specific type of DLT based on a directed acyclic graph. In this setting, we propose a set of delay differential equations to describe the dynamical behavior of the Tangle, an IoT-inspired directed acyclic graph designed for the cryptocurrency IOTA. The second part proposes an application of DLTs as a mechanism for dynamic deposit pricing , wherein the deposit of digital currency is used to orchestrate access to a network of shared resources. The pricing signal is used as a mechanism to enforce the desired level of compliance according to a predetermined set of rules. After presenting an illustrative example, we analyze the control system and provide sufficient conditions for the stability of the network.

92 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: The experimental results demonstrate the feasibility of the proposed BlendCAC approach to offer a decentralized, scalable, lightweight and fine-grained AC solution to IoT systems.
Abstract: The prevalence of the Internet of Things (IoT) allows heterogeneous embedded smart devices to collaboratively provide smart services with or without human intervention. While enabling large-scale IoT-based applications such as the Smart Grid or Smart Cities, IoT also raises growing concerns about privacy and security. Among the top security challenges that IoT faces, access authorization is critical in resource sharing and information protection. One of the weaknesses of today's access control (AC) is the centralized authorization server, which can be the performance bottleneck or the single point of failure. In this paper, BlendCAC, a blockchain-enabled decentralized capability-based AC scheme, is proposed for the security of IoT systems. BlendCAC aims at an effective access control process for devices, services, and information in large-scale IoT systems. Based on the blockchain network, a capability delegation mechanism is suggested for access permission propagation. A robust identity-based capability token management strategy is proposed, which takes advantage of a smart contract for registration, propagation, and revocation of the access authorization. In the proposed BlendCAC scheme, IoT devices are their own masters, controlling their resources instead of being supervised by a centralized authority. Implemented and tested on a Raspberry Pi device and on a local private blockchain network, the experimental results demonstrate the feasibility of the proposed BlendCAC approach to offer a decentralized, scalable, lightweight, and fine-grained AC solution for IoT systems.
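
As a chain-free, heavily simplified illustration of capability-based access control in the spirit of BlendCAC, the sketch below issues a capability token and validates it at access time. In BlendCAC the registration, delegation, and revocation logic lives in a smart contract on the blockchain; here it is plain in-memory Python with invented field names.

# Hedged, chain-free illustration of capability-based access control:
# a token lists the actions a device may perform on a resource, and the
# check validates the token before granting access.
import time

tokens = {}   # token_id -> capability record (stand-in for on-chain storage)

def issue(token_id, device, resource, actions, ttl_s):
    tokens[token_id] = {"device": device, "resource": resource,
                        "actions": set(actions), "expires": time.time() + ttl_s}

def access(token_id, device, resource, action):
    cap = tokens.get(token_id)
    return (cap is not None
            and cap["device"] == device
            and cap["resource"] == resource
            and action in cap["actions"]
            and time.time() < cap["expires"])

issue("tok-1", "sensor-42", "camera-feed", ["read"], ttl_s=3600)
print(access("tok-1", "sensor-42", "camera-feed", "read"))    # True
print(access("tok-1", "sensor-42", "camera-feed", "write"))   # False: write was never delegated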

92 citations


Journal ArticleDOI
TL;DR: This paper advocates the absolute existence of a share-resource-based VNF assignment strategy that is capable of trading off all of the reliability, bandwidth, and computing resources consumption of a given service chain and proposes a heuristic to work around the complexity of the presently formulated integer linear programming (ILP).
Abstract: Network Function Virtualization (NFV) has revolutionized service provisioning in cloud datacenter networks. It enables the complete decoupling of Network Functions (NFs) from the physical hardware middle boxes that network operators deploy for implementing service-specific and strictly ordered NF chains. Precisely, NFV allows for dispatching NFs as instances of plain software called virtual network functions (VNFs) running on virtual machines hosted by one or more industry standard physical machines. Nevertheless, NF softwarization introduces processing vulnerability ( e.g. , failures caused by hardware or software, and so on). Since any failure of VNFs could break down an entire service chain, thus interrupting the service, the functionality of an NFV-enabled network will require a higher reliability compared with traditional networks. This paper encloses an in-depth investigation of a reliability-aware joint VNF chain placement and flow routing optimization. In order to guarantee the required reliability, an incremental approach is proposed to determine the number of required VNF backups. Through illustration, it is shown herein that the formulated single path routing model can be easily extended to support resource sharing between adjacent backup VNF instances. This paper advocates the absolute existence of a share-resource-based VNF assignment strategy that is capable of trading off all of the reliability, bandwidth, and computing resources consumption of a given service chain. A heuristic is proposed to work around the complexity of the presently formulated integer linear programming (ILP). Thorough numerical analysis and simulations are conducted in order to verify and assert the validity, correctness, and effectiveness of this proposed heuristic reflecting its ability to achieve very close results to those obtained through the resolution of the complex ILP within a negligible amount of time. Above and beyond, the proposed resource-sharing-based VNF placement scheme outperforms existing resource-sharing agnostic schemes by 15.6% and 14.7% in terms of bandwidth and CPU utilization respectively.
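
A back-of-the-envelope sketch of the incremental backup idea: treating a chain's reliability as the product of per-VNF reliabilities (with replicas of a VNF failing independently), backups are added one at a time where they raise end-to-end reliability the most, until the target is met. The reliability numbers and target are assumptions for illustration, not the paper's model or its ILP.

# Hedged sketch of incremental backup placement for a VNF chain.
vnf_reliability = [0.95, 0.90, 0.97]   # per-instance reliability of each VNF in the chain (assumed)
target = 0.999
replicas = [1] * len(vnf_reliability)

def vnf_rel(i):
    # probability that at least one replica of VNF i is up
    return 1 - (1 - vnf_reliability[i]) ** replicas[i]

def chain_rel():
    p = 1.0
    for i in range(len(vnf_reliability)):
        p *= vnf_rel(i)
    return p

while chain_rel() < target:
    # add a backup where it multiplies chain reliability the most
    best = max(range(len(replicas)),
               key=lambda i: (1 - (1 - vnf_reliability[i]) ** (replicas[i] + 1)) / vnf_rel(i))
    replicas[best] += 1

print("replicas per VNF:", replicas, "chain reliability:", round(chain_rel(), 5))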

90 citations


Journal ArticleDOI
TL;DR: It is found that for non-uniform call arrivals, the computation of the function blocks with resource sharing among operators increases a revenue rate measure by more than 25% compared to the conventional CRAN where each operator utilizes only its own resources.
Abstract: Existing radio access networks (RANs) allow only for very limited sharing of the communication and computation resources among wireless operators and heterogeneous wireless technologies. We introduce the LayBack architecture to facilitate communication and computation resource sharing among different wireless operators and technologies. LayBack organizes the RAN communication and multi-access edge computing (MEC) resources into layers, including a devices layer, a radio node (enhanced Node B and access point) layer, and a gateway layer. LayBack positions the coordination point between the different operators and technologies just behind the gateways and thus consistently decouples the fronthaul from the backhaul. The coordination point is implemented through a software defined networking (SDN) switching layer that connects the gateways to the backhaul (core) network layer. A unifying SDN orchestrator implements an SDN-based management framework that centrally manages the fronthaul and backhaul communication and computation resources and coordinates the cooperation between different wireless operators and technologies. We illustrate the capabilities of the introduced LayBack architecture and SDN-based management framework through a case study on a novel fluid cloud RAN (CRAN) function split. The fluid CRAN function split partitions the RAN functions into function blocks that are flexibly assigned to MEC nodes, effectively implementing the RAN functions through network function virtualization. We find that for non-uniform call arrivals, the computation of the function blocks with resource sharing among operators increases a revenue rate measure by more than 25% compared to the conventional CRAN where each operator utilizes only its own resources.

79 citations


Proceedings ArticleDOI
11 Oct 2018
TL;DR: This work analyzes a recently released 24-hour trace dataset from a production cluster in Alibaba and reveals three key findings which are significantly different from those from the Google trace.
Abstract: Cloud computing with large-scale datacenters provides great convenience and cost-efficiency for end users. However, the resource utilization of cloud datacenters is very low, which wastes a huge amount of infrastructure investment and energy to operate. To improve resource utilization, cloud providers usually co-locate workloads of different types on shared resources. However, resource sharing makes the quality of service (QoS) unguaranteed. In fact, improving resource utilization (IRU) and guaranteeing QoS at the same time in the cloud has been a dilemma, which we name the IRU-QoS curse. To tackle this issue, characterizing the workloads from real production cloud computing platforms is extremely important. In this work, we analyze a recently released 24-hour trace dataset from a production cluster in Alibaba. We reveal three key findings which are significantly different from those from the Google trace. First, each online service runs in a container while batch jobs run on physical servers. Further, they are concurrently managed by two different schedulers and co-located on the same servers, which we call semi-containerized co-location. Second, batch instances largely use the spare resources that containers reserved but did not use, which shows the elasticity feature of resource allocation in the Alibaba cluster. Moreover, through resource overprovisioning, overbooking, and overcommitment, the resource allocation of the Alibaba cluster achieves high elasticity. Third, as the high elasticity may hurt the performance of co-located online services, the Alibaba cluster sets bounds on the resources used by batch tasks to guarantee the steady performance of both online services and batch tasks, which we call plasticity of resource allocation.

Proceedings ArticleDOI
01 Aug 2018
TL;DR: Using DeepPicar, a low-cost deep neural network based autonomous car platform, the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles are analyzed and state-of-the-art cache partitioning and memory bandwidth throttling techniques are evaluated.
Abstract: We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz, depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.
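
For context, the sketch below builds a DAVE-2-style steering network of the kind DeepPicar replicates (five convolutional layers followed by four fully connected layers, roughly 250K parameters). The layer sizes follow NVIDIA's published DAVE-2 description; the input resolution, strides, and activations here are assumptions rather than details taken from the DeepPicar paper.

# Hedged sketch of a DAVE-2-style steering network (9 layers, ~250K parameters).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
    nn.Conv2d(48, 64, 3), nn.ReLU(),
    nn.Conv2d(64, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(100), nn.ReLU(),   # LazyLinear infers the flattened feature size
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 10), nn.ReLU(),
    nn.Linear(10, 1),                # steering angle output
)

frame = torch.randn(1, 3, 66, 200)   # one camera frame (assumed 66x200 RGB input)
angle = model(frame)
print("predicted steering angle:", angle.item())
print("parameters:", sum(p.numel() for p in model.parameters()))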

Journal ArticleDOI
TL;DR: These results show that the proposed solutions give nearly optimal performance under a wide range of parameter settings, and the addition of a CAP can significantly reduce the cost of multi-user task offloading compared with conventional mobile cloud computing where only the remote cloud server is available.
Abstract: We consider a mobile cloud computing system with multiple users, a remote cloud server, and a computing access point (CAP). The CAP serves both as the network access gateway and a computation service provider to the mobile users. It can either process the received tasks from mobile users or offload them to the cloud. We jointly optimize the offloading decisions of all users, together with the allocation of computation and communication resources, to minimize the overall cost of energy consumption, computation, and maximum delay among users. The joint optimization problem is formulated as a mixed-integer program. We show that the problem can be reformulated and transformed into a non-convex quadratically constrained quadratic program, which is NP-hard in general. We then propose an efficient solution to this problem by semidefinite relaxation and a novel randomization mapping method. Furthermore, when there is a strict delay constraint for processing each user's task, we further propose a three-step algorithm to guarantee the feasibility and local optimality of the obtained solution. Our numerical results show that the proposed solutions give nearly optimal performance under a wide range of parameter settings, and the addition of a CAP can significantly reduce the cost of multi-user task offloading compared with conventional mobile cloud computing where only the remote cloud server is available.
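
The decision structure can be seen in a toy brute-force version: each user's task is executed locally, at the CAP, or in the cloud, and the assignment minimizing total cost is picked subject to a CAP capacity limit. The per-user costs and the capacity limit are invented; the paper itself solves the full problem via semidefinite relaxation and randomization rather than enumeration.

# Hedged toy version of the joint offloading decision: brute-force over
# {local, CAP, cloud} for a handful of users with made-up costs.
from itertools import product

users = ["u1", "u2", "u3"]
cost = {                                           # assumed energy-plus-delay cost per option
    "u1": {"local": 5.0, "cap": 3.0, "cloud": 4.0},
    "u2": {"local": 2.0, "cap": 3.5, "cloud": 4.5},
    "u3": {"local": 6.0, "cap": 3.0, "cloud": 3.5},
}
cap_capacity = 1                                   # the CAP can serve at most one task here (assumed)

best = None
for choice in product(["local", "cap", "cloud"], repeat=len(users)):
    if choice.count("cap") > cap_capacity:
        continue
    total = sum(cost[u][c] for u, c in zip(users, choice))
    if best is None or total < best[0]:
        best = (total, dict(zip(users, choice)))

print("minimum total cost:", best[0], "decisions:", best[1])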

Journal ArticleDOI
TL;DR: This paper analytically characterize the optimal power allocation of the CUs and D2D links, and develops efficient methods for joint optimization of their channel assignments, and shows that the proposed resource management policies outperform several baseline schemes and can indeed achieve the desired twofold objective.
Abstract: As a promising technology for 5G networks, device-to-device (D2D) communication can improve spectrum utilization by sharing the resources of cellular users (CUs). However, this is at the cost of generating interference to the CUs. While most existing works focused on eliminating or suppressing the interference between the D2D links and the CUs, such interference could in fact be beneficial for improving the security of cellular communication. Specifically, D2D links may, in return for reusing cellular resources to achieve high spectral efficiency, act as friendly jammers and help the CUs against malicious wiretapping. To reach this win-win situation, D2D resource management has to be designed from a physical layer security perspective. In this paper, we consider the joint optimization of power allocation and channel assignment of the D2D links and the CUs with the aim to provide security to the CUs and improve the spectral efficiency of the D2D links simultaneously. We focus on the challenging downlink resource sharing problem and investigate both single-channel and multi-channel D2D communications. The resulting resource management design problems turn out to be difficult nonlinear mixed integer problems. Nevertheless, by exploiting the inherent properties of the formulated optimization problems, we are able to analytically characterize the optimal power allocation of the CUs and D2D links, and develop efficient methods for joint optimization of their channel assignments. Simulation results show that the proposed resource management policies outperform several baseline schemes and can indeed achieve the desired twofold objective.
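
Once the per-pair utilities (combining CU secrecy and D2D spectral efficiency) are fixed by the power-allocation step, the single-channel assignment can be viewed as a maximum-weight bipartite matching, sketched below with SciPy's Hungarian-algorithm solver. The utility matrix is random and merely stands in for the values derived analytically in the paper.

# Hedged sketch: channel assignment as maximum-weight bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
utility = rng.uniform(0.0, 1.0, size=(5, 5))   # rows: CU channels, cols: D2D links (placeholder values)

rows, cols = linear_sum_assignment(utility, maximize=True)
for cu, d2d in zip(rows, cols):
    print(f"CU channel {cu} <- D2D link {d2d} (utility {utility[cu, d2d]:.2f})")
print("total utility:", utility[rows, cols].sum())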

Journal ArticleDOI
TL;DR: This paper introduces a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing, and identifies relevant future directions in scheduling for distributed systems.

Journal ArticleDOI
TL;DR: This article presents an architectural framework based on a layered approach comprising network, data link, and physical layers together with a multimode user terminal that can ensure global service, support innovative 5G use cases, and reduce both capital investments and operational costs through efficiencies in network infrastructure deployment and spectrum utilization.
Abstract: 5G systems have started field trials, and deployment plans are being formulated, following completion of comprehensive standardization efforts and the introduction of multiple technological innovations for improving data rates and latency. Similar to earlier terrestrial wireless technologies, build-out of 5G systems will occur initially in higher population density areas offering the best business cases while not fully addressing airborne and marine applications. Satellite communications will thus continue to be indispensable as part of an integrated 5G/satellite architecture to achieve truly universal coverage. Such a unified architecture across terrestrial and satellite wireless technologies can ensure global service, support innovative 5G use cases, and reduce both capital investments and operational costs through efficiencies in network infrastructure deployment and spectrum utilization. This article presents an architectural framework based on a layered approach comprising network, data link, and physical layers together with a multimode user terminal. The network layer uses off-the-shelf building blocks based on 4G and 5G industry standards. The data link layer benefits from dynamic sharing of resources across multiple systems, enabled by intersystem knowledge of estimated and actual traffic demands, RF situational awareness, and resource availability. Communication resource sharing has traditionally comprised time, frequency, and power dimensions. Sharing can be enhanced by leveraging dynamic knowledge of communication platform location, trajectory, and antenna directivity. Logically centralized resource management provides a scalable approach for better utilization of spectrum, especially in higher bands that have traditionally been used by satellites and now are also being proposed for 5G systems. Resource sharing maximizes the utility of a multimode terminal that can access satellite or terrestrial RF links based on specific use cases, traffic demand, and QoS requirements.

Journal ArticleDOI
20 Oct 2018-Sensors
TL;DR: This work proposes an authorization system to facilitate access to consumer information and resource trading, based on blockchain technology, oriented to the Smart communities, an evolution of Community Energy Management Systems.
Abstract: Resource consumption in residential areas requires novel contributions in the field of consumer information management and collaborative mechanisms for the exchange of resources, in order to optimize the overall consumption of the community. We propose an authorization system to facilitate access to consumer information and resource trading, based on blockchain technology. Our proposal is oriented to the Smart communities, an evolution of Community Energy Management Systems, in which communities are involved in the monitoring and coordination of resource consumption. The proposed environment allows a more reliable management of monitoring and authorization functions, with secure data access and storage and delegation of controller functions among householders. We provide the definition of virtual assets for energy and water resource sharing as an auction, which encourages the optimization of global consumption and saves resources. The proposed solution is implemented and validated in application scenarios that demonstrate the suitability of the defined consensus mechanism, trustworthiness in the level of provided security for resource monitoring and delegation and reduction on resource consumption by the resource trading contribution.

Journal ArticleDOI
TL;DR: This paper proposes a joint caching and downlink resource sharing optimization framework (CSF) in 5G networks to assist WMSNs in efficiently delivering multimedia contents to the MUs.
Abstract: In smart cities, millions of things, systems, and people are interconnected and communicate with each other over wireless sensor networks, Internet of Things (IoT), and 5G networks. A tremendous amount of data traffic, which is frequently generated by the things in wireless multimedia sensor networks (WMSNs) and/or IoT, is accessed by a massive number of mobile users (MUs). These MUs are all competing to access the 5G network for data as well as urban applications and services. This can in turn cause exhaustion to the 5G network. In such cases, users can experience low data delivery and traffic congestions through backhaul links by macro base stations (MBSs). In this paper, we propose a joint caching and downlink resource sharing optimization framework (CSF) in 5G networks to assist WMSNs to efficiently deliver multimedia contents to the MUs. The CSF enables the MBSs to optimally decide how many replicas of each multimedia content to cache in which femto base stations for high multimedia content hit rate. It also optimally exploits the MUs that are willing to share their downlink resources and that have retrieved multimedia contents, for offloading with device-to-device communications. The objective is to eventually maximize the system delivery capacity. Simulation results demonstrate that the CSF provides the best performance in terms of hit rate and system delivery capacity.

Journal ArticleDOI
TL;DR: This paper proposes a general 3C resource sharing framework, which includes many existing 1C/2C sharing models in the literature as special cases and proposes a heuristic algorithm based on linear programming, which can further reduce the computation time and produce an empirically close-to-optimal solution.
Abstract: Tactile Internet often requires: 1) the ultra-reliable and ultra-responsive network connection and 2) the proactive and intelligent actuation at edge devices. A promising approach to address these requirements is to enable mobile edge devices to share their communication, computation, and caching (3C) resources via device-to-device connections. In this paper, we propose a general 3C resource sharing framework, which includes many existing 1C/2C sharing models in the literature as special cases. Comparing with the 1C/2C models, the proposed 3C framework can further improve the resource utilization efficiency by offering more flexibilities in terms of the device cooperation and resource scheduling. As a typical example, we focus on the energy utilization under the proposed 3C framework. Specifically, we formulate an energy consumption minimization problem, which is an integer non-convex optimization problem. To solve the problem, we first transform it into an equivalent integer linear programming problem that is much easier to solve. Then, we propose a heuristic algorithm based on linear programming, which can further reduce the computation time and produce an empirically close-to-optimal solution. Moreover, we evaluate the energy reduction due to the 3C sharing both analytically and numerically. Numerical results show that, comparing with the existing 1C/2C approaches, the proposed 3C sharing framework can reduce the total energy consumption by 83.8% when the D2D energy is negligible. The energy reduction is still 27.5% when the D2D transmission energy per unit time is twice as large as the cellular transmission energy per unit time.

Journal ArticleDOI
TL;DR: A list of challenges and open issues of the emerging technologies that realize the C-RAN concept is compiled, and comparative insights between the current and future state of the C-RAN concept are discussed.
Abstract: Achieving the fifth-generation (5G) vision will introduce new technology innovations and substantial changes in delivering cutting-edge applications and services in current mobile and cellular networks. The Cloud Radio Access Network (C-RAN) concept emerged as one of the most compelling architectures to meet the requirements of the 5G vision. In essence, C-RAN provides an advanced mobile network architecture which can leverage challenging features such as network resource slicing, statistical multiplexing, energy efficiency, and high capacity. The realization of C-RAN is achieved by innovative technologies such as the software-defined networking (SDN) and the network function virtualization (NFV). While SDN technology brings the separation of the control and data planes in the playground, supporting thus advanced traffic engineering techniques such as load balancing, the NFV concept offers high flexibility by allowing network resource sharing in a dynamic way. Although SDN and NFV have many advantages, a number of challenges have to be addressed before the commercial deployment of 5G implementation. In addition, C-RAN introduces a new layer in the mobile network, denoted as the fronthaul, which is adopted from the recent research efforts in the fiber-wireless (Fi-Wi) paradigm. As the fronthaul defines a link between a baseband unit (BBU) and a remote radio unit (RRU), various technologies can be used for this purpose such as optical fibers and millimeter-wave (mm-wave) radios. In this way, several challenges are highlighted which depend on the technology used. In the light of the aforementioned remarks, this paper compiles a list of challenges and open issues of the emerging technologies that realize the C-RAN concept. Moreover, comparative insights between the current and future state of the C-RAN concept are discussed. Trends and advances of those technologies are also examined towards shedding light on the proliferation of 5G through the C-RAN concept.

Proceedings ArticleDOI
Tian Li, Jie Zhong, Ji Liu, Wentao Wu, Ce Zhang 
01 Jan 2018
TL;DR: A novel algorithm is developed that combines multi-armed bandits with Bayesian optimization and proves a regret bound under the multi-tenant setting, aiming for minimizing the total regret of all users running automatic model selection tasks.
Abstract: We present ease.ml, a declarative machine learning service platform. With ease.ml, a user defines the high-level schema of an ML application and submits the task via a Web interface. The system then deals with the rest, such as model selection and data movement. The ultimate question we hope to understand is that, as a "service provider" that manages a shared cluster of machines running machine learning workloads, what is the resource sharing strategy that maximizes the global satisfaction of all our users? This paper does not completely answer this general question, but focuses on solving the first technical challenge we were facing when trying to build ease.ml. We observe that resource sharing is a critical yet subtle issue in this multi-tenant scenario, as we have to balance between efficiency and fairness. We first formalize the problem that we call multi-tenant model selection, aiming for minimizing the total regret of all users running automatic model selection tasks. We then develop a novel algorithm that combines multi-armed bandits with Bayesian optimization and prove a regret bound under the multi-tenant setting. Finally, we report our evaluation of ease.ml on synthetic data and on two services we are providing to our users, namely, image classification with deep neural networks and binary classification with Azure ML Studio. Our experimental evaluation results show that our proposed solution can be up to 9.8x faster in achieving the same global average accuracy for all users as the two popular heuristics used by our users before ease.ml, and 4.1x faster than state-of-the-art systems.
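
As a much-simplified, single-tenant illustration of the bandit view of model selection, the sketch below treats each candidate model as an arm and uses a UCB rule to decide which model to evaluate next. The paper's actual algorithm combines bandits with Bayesian optimization across multiple tenants; none of that machinery is reproduced here, and the model names and accuracies are synthetic.

# Hedged single-tenant UCB sketch of bandit-style model selection.
import math, random

random.seed(2)
true_acc = {"logreg": 0.72, "random_forest": 0.78, "cnn": 0.83}   # unknown to the bandit
pulls = {m: 0 for m in true_acc}
mean = {m: 0.0 for m in true_acc}

def ucb(m, t):
    if pulls[m] == 0:
        return float("inf")
    return mean[m] + math.sqrt(2 * math.log(t) / pulls[m])

for t in range(1, 101):
    m = max(true_acc, key=lambda arm: ucb(arm, t))
    reward = random.gauss(true_acc[m], 0.05)      # noisy validation accuracy (synthetic)
    pulls[m] += 1
    mean[m] += (reward - mean[m]) / pulls[m]

print("pulls per model:", pulls)
print("estimated accuracies:", {m: round(a, 3) for m, a in mean.items()})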

Journal ArticleDOI
TL;DR: This paper addresses some key factors in VANET time synchronization, such as requirements analysis, precision, accuracy, availability, scalability, and compatibility, and highlights the advantages of the Global Navigation Satellite System (GNSS) in VANET time synchronization.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This paper constructs an interference hypergraph (IHG) to model the interference relationships among different communication groups and proposes an IHG-based resource allocation scheme with cluster coloring algorithm, which can lead to both effective and efficient resource block assignment with low computational complexity for NOMA-V2X communications.
Abstract: Vehicular communication network is a core application scenario in the fifth generation (5G) mobile communication system which requires ultra high data rate and ultra low latency. Most recently, non-orthogonal multiple access (NOMA) has been regarded as a promising technique for future 5G systems due to its capability in significantly improving the spectral efficiency and reducing the data transmission latency. In this paper, we propose to introduce NOMA in D2D-enabled V2X networks, where resource sharing based on spatial reuse for different V2X communications are permitted through centralized resource management. Considering the complicated interference scenario caused by NOMA and spatial reuse-based resource sharing in the investigated NOMA-integrated V2X networks, we construct an interference hypergraph to model the interference relationships among different communication groups. In addition, based on the constructed hypergraph, we further propose an interference hypergraph-based resource allocation (IHG-RA) scheme with cluster coloring algorithm, which can lead to both effective and efficient QoS-guaranteed resource block (RB) assignment with low computational complexity. Simulation results verify the efficiency of our proposed IHG-RA scheme for NOMA-integrated V2X communications in improving the network sum rate.
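
The allocation step can be approximated, for intuition, by greedy coloring of a pairwise interference graph, where each color corresponds to one resource block; the paper's hypergraph model captures cumulative interference that this simplification ignores. The interference edges below are invented for illustration.

# Hedged sketch: largest-first greedy coloring of a pairwise interference graph,
# with each color standing for one resource block (RB).
interference = {
    "V2V-1": {"V2V-2", "V2V-3"},
    "V2V-2": {"V2V-1"},
    "V2V-3": {"V2V-1", "V2I-1"},
    "V2I-1": {"V2V-3"},
}

order = sorted(interference, key=lambda g: len(interference[g]), reverse=True)
rb_of = {}
for g in order:
    taken = {rb_of[n] for n in interference[g] if n in rb_of}
    rb = 0
    while rb in taken:
        rb += 1
    rb_of[g] = rb

print("resource block per group:", rb_of)
print("RBs used:", max(rb_of.values()) + 1)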

Journal ArticleDOI
TL;DR: The computational experiments show that the EA-MAS, with the integration of shared resource information, is able to provide shared scheduling scheme for distributed manufacturing with a good performance.
Abstract: This paper considers a shared scheduling environment for distributed manufacturing resources. We propose a multi-agent system based approach to promote competition and cooperation among multiple agents and to achieve globally optimal scheduling. We first build two multi-agent system (MAS) architectures. One is the enterprise multi-agent subsystem (Sub-EMAS) architecture, which comprises job agents, resource agents, and manager agents; the other is the enterprise alliance multi-agent system (EA-MAS) architecture, which adds a mediator agent and a scheduling agent. We then design a Shared Contract Net Protocol (SCNP) to support both the Sub-EMAS and the EA-MAS, and propose two heuristic algorithms to solve the scheduling model. The computational experiments show that the EA-MAS, with the integration of shared resource information, is able to provide a shared scheduling scheme for distributed manufacturing with good performance.

Journal ArticleDOI
TL;DR: This article proposes a system model for cooperative mobile edge computing where a device social graph model is developed to capture the social relationship among the devices and devise a socially- aware bipartite matching based cooperative task offloading algorithm.
Abstract: In this article we propose a novel paradigm of socially-motivated cooperative mobile edge computing, where the social tie structure among mobile and wearable device users is leveraged for achieving effective and trustworthy cooperation for collaborative computation task executions. We envision that a combination of local device computation and networked resource sharing empowers the devices with multiple flexible task execution approaches, including local mobile execution, D2D offloaded execution, direct cloud offloaded execution, and D2D-assisted cloud offloaded execution. Specifically, we propose a system model for cooperative mobile edge computing where a device social graph model is developed to capture the social relationship among the devices. We then devise a socially-aware bipartite matching based cooperative task offloading algorithm by integrating the social tie structure into the device computation and network resource sharing process. We evaluate the performance of socially-motivated cooperative mobile edge computing using both Erdos-Renyi and real-trace based social graphs, which corroborates the superior performance of the proposed socially-aware mechanism.

Posted Content
TL;DR: Containers, enabling lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity in better application management and deployment in addition to hardware virtualization as discussed by the authors.
Abstract: Containers, enabling lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity in better application management and deployment in addition to hardware virtualization. They are being widely used by organizations to deploy their increasingly diverse workloads derived from modern-day applications such as web services, big data, and IoT in either proprietary clusters or private and public cloud data centers. This has led to the emergence of container orchestration platforms, which are designed to manage the deployment of containerized applications in large-scale clusters. These systems are capable of running hundreds of thousands of jobs across thousands of machines. To do so efficiently, they must address several important challenges including scalability, fault-tolerance and availability, efficient resource utilization, and request throughput maximization among others. This paper studies these management systems and proposes a taxonomy that identifies different mechanisms that can be used to meet the aforementioned challenges. The proposed classification is then applied to various state-of-the-art systems leading to the identification of open research challenges and gaps in the literature intended as future directions for researchers.

Proceedings ArticleDOI
03 Jun 2018
TL;DR: Numerical results based on theoretical analysis show that the target requirements of URLLC can be achieved with up to 70% reduction in resource utilization compared to the conservative transmission scheme.
Abstract: Ultra-Reliable and Low Latency Communications (URLLC) is an important emerging area for the fifth generation (5G) cellular network. The requirements for URLLC are as stringent as 1 − 10^-5 reliability for a 32-byte packet with user plane latency of 1 ms. To achieve these requirements, grant-free transmission with repetitions of data packet has been recently proposed for uplink transmissions. However, the resource overhead and collisions among URLLC user equipments (UEs) become limiting factors to achieving optimal system performance. In this work, a hybrid resource allocation scheme is proposed to optimally allocate dedicated resource units and/or a shared resource pool to a group of UEs. Numerical results based on theoretical analysis show that the target requirements of URLLC can be achieved with up to 70% reduction in resource utilization compared to the conservative transmission scheme.
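
A back-of-the-envelope check of why packet repetitions help reach such reliability targets: if a single grant-free transmission succeeds with probability p and the K repetitions fail independently, the packet is delivered with probability 1 − (1 − p)^K. The value of p below is an assumption; the paper's analysis additionally accounts for collisions and the shared resource pool.

# Hedged reliability calculation for K independent repetitions of a packet.
p = 0.9                      # assumed single-transmission success probability
target = 1 - 1e-5            # URLLC reliability target from the abstract

k = 1
while 1 - (1 - p) ** k < target:
    k += 1

print(f"repetitions needed: {k}, achieved reliability: {1 - (1 - p) ** k:.6f}")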

Journal ArticleDOI
TL;DR: The conducted numerical study advocates for one of the allocation strategies – dynamic resource sharing with reservation – as the preferred solution for reliable collection of heterogeneous data in large-scale 5G-grade IoT deployments.

Journal ArticleDOI
11 Apr 2018
TL;DR: A new 5G wearable network based on a 5G ultra-dense cellular network and mobile edge computing is proposed to meet the access and latency requirements of wearable devices, together with data-driven network slicing management that adjusts network resources in accordance with the wearable service dynamics.
Abstract: With the popularization of wearable devices, designing a network architecture that can meet the latency requirements of different wearable devices and can also efficiently utilize network communication, storage, and computation resources will be a challenging problem. In this article, we first propose a new 5G wearable network based on 5G ultra-dense cellular network and mobile edge computing, to meet the access and latency requirements of wearable devices. Then we use network slicing technology in the proposed 5G wearable network to enhance the network resource sharing and energy-efficient utilization. In addition, based on the service cognitive engine and network cognitive engine deployed in the network, we introduce data-driven network slicing management to adjust the network resources in accordance with the wearable service dynamics. Finally, some challenges and open issues are discussed.

Journal ArticleDOI
TL;DR: The preliminary results of a distributed collaboration framework developed with the aim to facilitate the cooperation of various production sites are presented, to manage a network of manufacturers who can dynamically re-configure and share their resources within a pre-registered community.