
Showing papers on "Resource management published in 2017"


Journal ArticleDOI
TL;DR: This paper studies resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA), for which the optimal resource allocation is formulated as a mixed-integer problem.
Abstract: Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation.
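
The threshold structure described above can be illustrated with a minimal, hypothetical Python sketch: users are ranked by an assumed priority score combining channel gain and local computing energy cost, and those at or above a threshold offload fully while the rest perform only minimum offloading. The priority formula and threshold below are placeholders, not the paper's derived offloading priority function.

```python
# Hedged sketch of a threshold-based offloading policy (illustrative only; the
# priority score is a placeholder, not the paper's derived expression).

def offloading_decisions(users, threshold):
    """users: list of dicts with 'channel_gain' and 'local_energy_per_bit'."""
    decisions = {}
    for i, u in enumerate(users):
        # Assumed priority: favors good channels and costly local computing.
        priority = u["channel_gain"] * u["local_energy_per_bit"]
        decisions[i] = "full_offload" if priority >= threshold else "min_offload"
    return decisions

users = [
    {"channel_gain": 0.9, "local_energy_per_bit": 2.0},
    {"channel_gain": 0.2, "local_energy_per_bit": 0.5},
]
print(offloading_decisions(users, threshold=0.5))  # {0: 'full_offload', 1: 'min_offload'}
```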

1,180 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a simulator, called iFogSim, to model IoT and fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost.
Abstract: Internet of Things (IoT) aims to bring every object (e.g., smart cameras, wearables, environmental sensors, home appliances, and vehicles) online, hence generating massive volumes of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, there are applications, such as health monitoring and emergency response, that require low latency, and the delay caused by transferring data to the cloud and then back to the application can seriously impact their performance. To overcome this limitation, the Fog computing paradigm has been proposed, where cloud services are extended to the edge of the network to decrease latency and network congestion. To realize the full potential of the Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize latency and maximize throughput. To this end, we need an evaluation platform that enables the quantification of performance of resource management policies on an IoT or Fog computing infrastructure in a repeatable manner. In this paper we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, the scalability of the simulation toolkit in terms of RAM consumption and execution time is verified under different circumstances.

1,085 citations


Journal ArticleDOI
TL;DR: In this article, a logical architecture for network-slicing-based 5G systems is introduced, together with a scheme for managing mobility between different access networks and a joint power and subchannel allocation scheme for spectrum-sharing two-tier systems based on network slicing, in which both co-tier and cross-tier interference are taken into account.
Abstract: 5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies.

585 citations


Proceedings ArticleDOI
14 Oct 2017
TL;DR: An extensive characterization of Microsoft Azure's VM workload, including distributions of the VMs' lifetime, deployment size, and resource consumption is introduced, and Resource Central, a system that collects VM telemetry, learns these behaviors offline, and provides predictions online to various resource managers via a general client-side library is introduced.
Abstract: Cloud research to date has lacked data on the characteristics of the production virtual machine (VM) workloads of large cloud providers. A thorough understanding of these characteristics can inform the providers' resource management systems, e.g. VM scheduler, power manager, server health manager. In this paper, we first introduce an extensive characterization of Microsoft Azure's VM workload, including distributions of the VMs' lifetime, deployment size, and resource consumption. We then show that certain VM behaviors are fairly consistent over multiple lifetimes, i.e. history is an accurate predictor of future behavior. Based on this observation, we next introduce Resource Central (RC), a system that collects VM telemetry, learns these behaviors offline, and provides predictions online to various resource managers via a general client-side library. As an example of RC's online use, we modify Azure's VM scheduler to leverage predictions in oversubscribing servers (with oversubscribable VM types), while retaining high VM performance. Using real VM traces, we then show that the prediction-informed schedules increase utilization and prevent physical resource exhaustion. We conclude that providers can exploit their workloads' characteristics and machine learning to improve resource management substantially.
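
As a rough illustration of prediction-informed oversubscription (not Azure's actual scheduler or Resource Central's interface), the sketch below admits a new VM onto a server only if the sum of predicted 95th-percentile CPU demands stays within the physical core count scaled by an oversubscription margin. All names and numbers are assumptions.

```python
# Hedged sketch of prediction-informed oversubscription (illustrative only).

def can_place(server_predicted_p95, new_vm_predicted_p95, physical_cores,
              oversub_factor=1.2):
    """Admit the VM if predicted peak core usage fits an oversubscribed budget."""
    predicted_total = sum(server_predicted_p95) + new_vm_predicted_p95
    return predicted_total <= physical_cores * oversub_factor

# Example: a 16-core server whose current VMs are predicted to peak at 6 cores total.
print(can_place(server_predicted_p95=[4.0, 2.0], new_vm_predicted_p95=3.5,
                physical_cores=16))  # True
```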

479 citations


Journal ArticleDOI
TL;DR: The delay and packet loss components in URLLC and the network availability for supporting the quality of service of users are discussed, and tools for resource optimization in URLLC are presented.
Abstract: Supporting ultra-reliable and low-latency communications (URLLC) is one of the major goals in 5G communication systems. Previous studies focus on ensuring end-to-end delay requirement by reducing transmission delay and coding delay, and only consider reliability in data transmission. However, the reliability reflected by overall packet loss also includes other components such as queueing delay violation. Moreover, which tools are appropriate to design radio resource allocation under constraints on delay, reliability, and availability is not well understood. As a result, how to optimize resource allocation for URLLC is still unclear. In this article, we first discuss the delay and packet loss components in URLLC and the network availability for supporting the quality of service of users. Then we present tools for resource optimization in URLLC. Last, we summarize the major challenges related to resource management for URLLC, and perform a case study.
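
To make the decomposition concrete, here is a small numeric sketch (my own illustration with assumed numbers, not the article's case study): the overall packet loss is approximated as the sum of the decoding error probability and the queueing-delay violation probability, and the end-to-end delay budget is split across coding, transmission, and queueing.

```python
# Hedged sketch: decomposing URLLC reliability and delay budgets (assumed numbers).

transmission_error = 1e-6    # decoding error probability
queueing_violation = 5e-7    # probability that queueing delay exceeds its budget
overall_loss = transmission_error + queueing_violation  # union-bound style approximation
print(f"overall packet loss ~ {overall_loss:.1e}")       # ~1.5e-06

e2e_budget_ms = 1.0
coding_ms, transmission_ms = 0.2, 0.3
queueing_budget_ms = e2e_budget_ms - coding_ms - transmission_ms
print(f"queueing delay budget: {queueing_budget_ms:.1f} ms")  # 0.5 ms
```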

308 citations


Journal ArticleDOI
TL;DR: This paper attempts to maximize the ergodic capacity of the V2I connections while ensuring reliability guarantee for each V2V link, and proposes novel algorithms that yield optimal resource allocation and are robust to channel variations.
Abstract: The widely deployed cellular network, assisted with device-to-device (D2D) communications, can provide a promising solution to support efficient and reliable vehicular communications. Fast channel variations caused by high mobility in a vehicular environment need to be properly accounted for when designing resource allocation schemes for the D2D-enabled vehicular networks. In this paper, we perform spectrum sharing and power allocation based only on slowly varying large-scale fading information of wireless channels. Pursuant to differing requirements for different types of links, i.e., high capacity for vehicle-to-infrastructure (V2I) links and ultrareliability for vehicle-to-vehicle (V2V) links, we attempt to maximize the ergodic capacity of the V2I connections while ensuring reliability guarantee for each V2V link. Sum ergodic capacity of all V2I links is first taken as the optimization objective to maximize the overall V2I link throughput. Minimum ergodic capacity maximization is then considered to provide a more uniform capacity performance across all V2I links. Novel algorithms that yield optimal resource allocation and are robust to channel variations are proposed. Their desirable performance is confirmed by computer simulation.
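
One ingredient of such schemes, spectrum reuse pairing based only on slow fading, can be sketched as an assignment problem: build a utility matrix from large-scale channel gains and solve it with the Hungarian method. The random utilities below are placeholders for the paper's ergodic-capacity objective.

```python
# Hedged sketch: pairing V2V links with V2I spectrum using large-scale fading only.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_v2i, n_v2v = 4, 4
# Placeholder utilities: higher when the V2V link interferes less with the V2I link.
utility = rng.uniform(0.1, 1.0, size=(n_v2i, n_v2v))

# The Hungarian method maximizes total utility (minimize the negated matrix).
row, col = linear_sum_assignment(-utility)
for i, j in zip(row, col):
    print(f"V2I link {i} shares its resource block with V2V link {j}")
```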

302 citations


Journal ArticleDOI
TL;DR: In this article, the performance of MIMO-NOMA in terms of sum channel capacity and ergodic sum capacity is proved analytically, and a user admission scheme is proposed to maximize the sum rate and number of admitted users when the signal-to-interference-plus-noise ratio thresholds of the users are equal.
Abstract: In this paper, the performance of multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) is investigated, when multiple users are grouped into a cluster. The superiority of MIMO-NOMA over MIMO-OMA in terms of both sum channel capacity and ergodic sum capacity is proved analytically. Furthermore, it is demonstrated that the more users are admitted to a cluster, the lower is the achieved sum rate, which illustrates the tradeoff between the sum rate and maximum number of admitted users. On this basis, a user admission scheme is proposed, which is optimal in terms of both sum rate and the number of admitted users when the signal-to-interference-plus-noise ratio thresholds of the users are equal. When these thresholds are different, the proposed scheme still achieves good performance in balancing both criteria. Moreover, under certain conditions, it maximizes the number of admitted users. In addition, the complexity of the proposed scheme is linear in the number of users per cluster. Simulation results verify the superiority of MIMO-NOMA over MIMO-OMA in terms of both sum rate and user fairness, as well as the effectiveness of the proposed user admission scheme.
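
The admission idea can be sketched under strong simplifications (a toy greedy rule, not the paper's scheme): with equal SINR thresholds, candidate users are considered in order of decreasing channel gain and admitted to the cluster as long as their SINR, treating already admitted users' power as intra-cluster interference, stays above the threshold.

```python
# Hedged sketch: greedy admission into a NOMA cluster with equal SINR thresholds.
# Toy single-cluster model; not the paper's exact admission rule.

def admit_users(channel_gains, powers, noise, sinr_threshold):
    order = sorted(range(len(channel_gains)), key=lambda k: -channel_gains[k])
    admitted, interference = [], 0.0
    for k in order:
        sinr = powers[k] * channel_gains[k] / (interference * channel_gains[k] + noise)
        if sinr >= sinr_threshold:
            admitted.append(k)
            interference += powers[k]   # admitted users add intra-cluster interference
        else:
            break
    return admitted

print(admit_users([1.0, 0.6, 0.3, 0.1], [0.1, 0.2, 0.3, 0.4],
                  noise=0.01, sinr_threshold=2.0))  # [0]
```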

281 citations


Journal ArticleDOI
TL;DR: This dataset collates 350,985 individual occurrences of saltmarshes and presents the first global estimate of their known extent, and believes that, while incomplete, the global polygon data cover many of the important areas in Europe, the USA and Australia.
Abstract: BACKGROUND: Saltmarshes are extremely valuable but often overlooked ecosystems, contributing to livelihoods locally and globally through the associated ecosystem services they provide, including fish production, carbon storage and coastal protection. Despite their importance, knowledge of the current spatial distribution (occurrence and extent) of saltmarshes is incomplete. In light of increasing anthropogenic and environmental pressures on coastal ecosystems, global data on the occurrence and extent of saltmarshes are needed to draw attention to these critical ecosystems and to the benefits they generate for people. Such data can support resource management, strengthen decision-making and facilitate tracking of progress towards global conservation targets set by multilateral environmental agreements, such as the Aichi Biodiversity Targets of the United Nations' (UN's) Strategic Plan for Biodiversity 2011-2020, the Sustainable Development Goals of the UN's 2030 Agenda for Sustainable Development and the Ramsar Convention. NEW INFORMATION: Here, we present the most complete dataset on saltmarsh occurrence and extent at the global scale. This dataset collates 350,985 individual occurrences of saltmarshes and presents the first global estimate of their known extent. The dataset captures locational and contextual data for saltmarsh in 99 countries worldwide. A total of 5,495,089 hectares of mapped saltmarsh across 43 countries and territories are represented in a Geographic Information Systems polygon shapefile. This estimate is at the relatively low end of previous estimates (2.2-40 Mha); however, we took a conservative approach in the mapping exercise, and there are notable areas in Canada, Northern Russia, South America and Africa where saltmarshes are known to occur that require additional spatial data. Nevertheless, the most extensive saltmarshes worldwide are found outside the tropics, notably including the low-lying, ice-free coasts, bays and estuaries of the North Atlantic, which are well represented in our global polygon dataset. Therefore, despite the gaps, we believe that, while incomplete, our global polygon data cover many of the important areas in Europe, the USA and Australia.

257 citations


Journal ArticleDOI
TL;DR: This paper investigates energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI, and proposes an iterative algorithm for user scheduling and power allocation to maximize the system energy efficiency.
Abstract: Non-orthogonal multiple access (NOMA) exploits successive interference cancellation technique at the receivers to improve the spectral efficiency. By using this technique, multiple users can be multiplexed on the same subchannel to achieve high sum rate. Most previous research works on NOMA systems assume perfect channel state information (CSI). However, in this paper, we investigate energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI. The energy efficient resource scheduling problem is formulated as a non-convex optimization problem with the constraints of outage probability limit, the maximum power of the system, the minimum user data rate, and the maximum number of multiplexed users sharing the same subchannel. Different from previous works, the maximum number of multiplexed users can be greater than two, and the imperfect CSI is first studied for resource allocation in NOMA. To efficiently solve this problem, the probabilistic mixed problem is first transformed into a non-probabilistic problem. An iterative algorithm for user scheduling and power allocation is proposed to maximize the system energy efficiency. The optimal user scheduling based on exhaustive search serves as a system performance benchmark, but it has high computational complexity. To balance the system performance and the computational complexity, a new suboptimal user scheduling scheme is proposed to schedule users on different subchannels. Based on the user scheduling scheme, the optimal power allocation expression is derived by the Lagrange approach. By transforming the fractional-form problem into an equivalent subtractive-form optimization problem, an iterative power allocation algorithm is proposed to maximize the system energy efficiency. Simulation results demonstrate that the proposed user scheduling algorithm closely attains the optimal performance.
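
The fractional-to-subtractive transformation mentioned at the end is in the spirit of a Dinkelbach-style iteration. Below is a minimal sketch on a toy single-link problem (illustrative only, not the paper's multi-user algorithm): the ratio rate(p)/power(p) is maximized by repeatedly solving the subtractive problem max_p rate(p) - lambda * power(p) and updating lambda to the achieved ratio.

```python
# Hedged sketch: Dinkelbach-style iteration for energy efficiency (toy single link).
import numpy as np

g, noise, p_circuit, p_max = 2.0, 1.0, 0.5, 4.0
rate = lambda p: np.log2(1.0 + g * p / noise)   # achievable rate
power = lambda p: p + p_circuit                  # transmit plus circuit power

lam, grid = 0.0, np.linspace(1e-3, p_max, 2000)
for _ in range(20):
    # Subtractive problem max_p rate(p) - lam * power(p), solved here by grid search.
    p_star = grid[np.argmax(rate(grid) - lam * power(grid))]
    new_lam = rate(p_star) / power(p_star)       # updated energy-efficiency estimate
    if abs(new_lam - lam) < 1e-6:
        break
    lam = new_lam
print(f"power ~ {p_star:.3f}, energy efficiency ~ {lam:.3f}")
```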

250 citations


Proceedings ArticleDOI
Zhiyuan Xu1, Yanzhi Wang1, Jian Tang1, Jing Wang1, Mustafa Cenk Gursoy1 
21 May 2017
TL;DR: A novel DRL-based framework for power-efficient resource allocation in cloud RANs is presented, which can achieve significant power savings while meeting user demands, and it can well handle highly dynamic cases.
Abstract: Cloud Radio Access Networks (RANs) have become a key enabling technique for the next generation (5G) wireless communications, which can meet requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs to be further improved in order to reach the objective of minimizing power consumption and meeting demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) on solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely-used baselines via simulation. The simulation results show it can achieve significant power savings while meeting user demands, and it can well handle highly dynamic cases.

242 citations


Proceedings ArticleDOI
Ning Liu1, Zhe Li1, Jielong Xu1, Zhiyuan Xu1, Sheng Lin1, Qinru Qiu1, Jian Tang1, Yanzhi Wang1 
05 Jun 2017
TL;DR: The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem, and the proposed framework can achieve the best trade-off between latency and power/energy consumption in a server cluster.
Abstract: Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in the cloud computing system. However, a complete cloud resource allocation framework exhibits high dimensions in state and action spaces, which prohibit the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while maintaining performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system. Moreover, a novel solution framework is necessary to address the even higher dimensions in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed hierarchical framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state space, is adopted to solve the global tier problem. Furthermore, an autoencoder and a novel weight sharing structure are adopted to handle the high-dimensional state space and accelerate the convergence speed. On the other hand, the local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner. Experiment results using actual Google cluster traces show that our proposed hierarchical framework significantly reduces power consumption and energy usage compared with the baseline while incurring no severe latency degradation. Meanwhile, the proposed framework achieves the best trade-off between latency and power/energy consumption in a server cluster.
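
The two-tier structure can be sketched at a very high level, with placeholders standing in for the DRL agent and the LSTM predictor (this is not the authors' implementation): a global tier scores servers for an incoming VM, and a local tier decides whether to put an idle server to sleep based on a predicted idle interval.

```python
# Hedged sketch of a two-tier structure: a best-fit rule stands in for the global DRL
# agent and a moving average stands in for the LSTM workload predictor.

def global_tier_assign(vm_demand, servers):
    """Pick the feasible server whose free capacity best fits the VM (placeholder)."""
    feasible = [s for s in servers if s["free"] >= vm_demand]
    return min(feasible, key=lambda s: s["free"] - vm_demand) if feasible else None

def local_tier_sleep_decision(recent_idle_gaps_s, wakeup_cost_s=30.0):
    """Sleep if the predicted idle interval outweighs the wake-up cost (placeholder
    for the model-free RL power manager)."""
    predicted_idle = sum(recent_idle_gaps_s) / len(recent_idle_gaps_s)
    return predicted_idle > wakeup_cost_s

servers = [{"name": "s1", "free": 8}, {"name": "s2", "free": 3}]
print(global_tier_assign(vm_demand=2, servers=servers)["name"])  # best fit: s2
print(local_tier_sleep_decision([40.0, 55.0, 25.0]))             # True
```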

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper focuses on the design of three key network slicing building blocks responsible for traffic analysis and prediction per network slice, admission control decisions for network slice requests, and adaptive correction of the forecasted load based on measured deviations.
Abstract: The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slice requests depending on their Service Level Agreements (SLA). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize the network utilization while meeting network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in terms of system utilization, as well as a trade-off between conservative forecasting configurations and more aggressive ones (higher gains, but greater SLA risk).
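
A minimal sketch of the three building blocks interacting (the naive forecast, the correction rule, and all numbers are assumptions for illustration): predict each slice's next-window load, apply a correction derived from measured deviations, and admit a new slice request only if the corrected aggregate forecast plus the requested capacity fits the infrastructure.

```python
# Hedged sketch: forecast-based slice admission with adaptive correction.

def forecast_next_load(history):
    """Naive forecast: mean of recent samples (placeholder for the paper's predictor)."""
    return sum(history) / len(history)

def admit_slice(request_capacity, slice_histories, measured_deviation, total_capacity):
    """Admit if the corrected aggregate forecast plus the request fits the capacity."""
    forecast = sum(forecast_next_load(h) for h in slice_histories.values())
    corrected = forecast + measured_deviation   # adaptive correction term
    return corrected + request_capacity <= total_capacity

histories = {"slice_a": [30, 35, 32], "slice_b": [20, 18, 22]}
print(admit_slice(request_capacity=25, slice_histories=histories,
                  measured_deviation=5.0, total_capacity=100))  # True
```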

Journal ArticleDOI
TL;DR: An overview of the ways in which acoustic telemetry can be used to inform issues central to the ecology, conservation, and management of exploited and/or imperiled fish species is provided.
Abstract: This paper reviews the use of acoustic telemetry as a tool for addressing issues in fisheries management, and serves as the lead to the special Feature Issue of Ecological Applications titled Acoustic Telemetry and Fisheries Management. Specifically, we provide an overview of the ways in which acoustic telemetry can be used to inform issues central to the ecology, conservation, and management of exploited and/or imperiled fish species. Despite great strides in this area in recent years, there are comparatively few examples where data have been applied directly to influence fisheries management and policy. We review the literature on this issue, identify the strengths and weaknesses of work done to date, and highlight knowledge gaps and difficulties in applying empirical fish telemetry studies to fisheries policy and practice. We then highlight the key areas of management and policy addressed, as well as the challenges that needed to be overcome to do this. We conclude with a set of recommendations about how researchers can, in consultation with stock assessment scientists and managers, formulate testable scientific questions to address and design future studies to generate data that can be used in a meaningful way by fisheries management and conservation practitioners. We also urge the involvement of relevant stakeholders (managers, fishers, conservation societies, etc.) early on in the process (i.e., in the co-creation of research projects), so that all priority questions and issues can be addressed effectively.

Journal ArticleDOI
TL;DR: This paper makes three contributions to the Special Issue's theme of enhancing organizational resource management, establishing an archetype business process for big data initiatives and identifying drawbacks of resource-based theory (RBT) and its underpinning assumptions in the context of big data.

Journal ArticleDOI
TL;DR: In this article, a power-efficient resource allocation for multicarrier non-orthogonal multiple access (NOMA) systems is studied, which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power.
Abstract: In this paper, we study power-efficient resource allocation for multicarrier non-orthogonal multiple access systems. The resource allocation algorithm design is formulated as a non-convex optimization problem which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power. The proposed framework takes into account the imperfection of channel state information at the transmitter and quality of service requirements of users. To facilitate the design of the optimal SIC decoding policy on each subcarrier, we define a channel-to-noise ratio outage threshold. Subsequently, the considered non-convex optimization problem is recast as a generalized linear multiplicative programming problem, for which a globally optimal solution is obtained via employing the branch-and-bound approach. The optimal resource allocation policy serves as a system performance benchmark due to its high computational complexity. To strike a balance between system performance and computational complexity, we propose a suboptimal iterative resource allocation algorithm based on difference of convex programming. Simulation results demonstrate that the suboptimal scheme achieves a close-to-optimal performance. Also, both proposed schemes provide significant transmit power savings compared with conventional orthogonal multiple access schemes.

Journal ArticleDOI
TL;DR: A regional cooperative fog-computing-based intelligent vehicular network (CFC-IoV) architecture for dealing with big IoV data in the smart city is proposed, including mobility control, multi-source data acquisition, distributed computation and storage, and multi-path data transmission.
Abstract: As vehicle applications, mobile devices, and the Internet of Things grow rapidly, developing an efficient architecture to deal with the big data in the Internet of Vehicles (IoV) has become an important concern for the future smart city. To overcome the inherent defect of centralized data processing in cloud computing, fog computing has been proposed by offloading computation tasks to local fog servers (LFSs). By considering factors like latency, mobility, localization, and scalability, this article proposes a regional cooperative fog-computing-based intelligent vehicular network (CFC-IoV) architecture for dealing with big IoV data in the smart city. Possible services for IoV applications are discussed, including mobility control, multi-source data acquisition, distributed computation and storage, and multi-path data transmission. A hierarchical model with intra-fog and inter-fog resource management is presented, and energy efficiency and packet dropping rates of LFSs in CFC-IoV are optimized.

Journal ArticleDOI
TL;DR: This paper proposes a resource allocation strategy for fog computing based on priced timed Petri nets (PTPNs), by which the user can choose the satisfying resources autonomously from a group of preallocated resources.
Abstract: Fog computing, also called “clouds at the edge,” is an emerging paradigm that allocates services near the devices to improve the quality of service (QoS). The explosive prevalence of the Internet of Things, big data, and fog computing in the context of cloud computing makes it extremely challenging to explore both cloud and fog resource scheduling strategies so as to improve the efficiency of resource utilization, satisfy users' QoS requirements, and maximize the profit of both resource providers and users. This paper proposes a resource allocation strategy for fog computing based on priced timed Petri nets (PTPNs), by which the user can choose the satisfying resources autonomously from a group of preallocated resources. Our strategy comprehensively considers the price cost and time cost to complete a task, as well as the credibility evaluation of both users and fog resources. We construct the PTPN models of tasks in fog computing in accordance with the features of fog resources. An algorithm that predicts task completion time is presented. A method of computing the credibility evaluation of fog resources is also proposed. In particular, we give the dynamic allocation algorithm of fog resources. Simulation results demonstrate that our proposed algorithms can achieve a higher efficiency than static allocation strategies in terms of task completion time and price.

Journal ArticleDOI
TL;DR: A distributed reputation management system (DREAMS) is proposed, wherein VEC servers are adopted to execute local reputation management tasks for vehicles, and the effectiveness of the reputation-based resource allocation algorithm is demonstrated.
Abstract: Vehicular edge computing (VEC) has recently been introduced to extend computing capacity to the vehicular network edge. With the advent of VEC, service providers directly host services in close proximity to mobile vehicles for great improvements. As a result, a new networking paradigm, vehicular edge networks, has emerged along with the development of VEC. However, it is necessary to address security issues to facilitate VEC well. In this paper, we focus on reputation management to ensure security protection and improve network efficiency in the implementation of VEC. A distributed reputation management system (DREAMS) is proposed, wherein VEC servers are adopted to execute local reputation management tasks for vehicles. This system has remarkable features for improving overall performance: 1) distributed reputation maintenance; 2) trusted reputation manifestation; 3) accurate reputation update; and 4) available reputation usage. In particular, we utilize multi-weighted subjective logic for accurate reputation updates in DREAMS. To enrich reputation usage in DREAMS, service providers optimize resource allocation in computation offloading by considering the reputation of vehicles. Numerical results indicate that DREAMS has great advantages in optimizing misbehavior detection and improving the recognition rate of misbehaving vehicles. Meanwhile, we demonstrate the effectiveness of our reputation-based resource allocation algorithm.
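
The reputation update can be illustrated with standard single-weight subjective logic, as a hedged simplification of the multi-weighted variant used in DREAMS: weighted positive and negative evidence counts are mapped to belief, disbelief, and uncertainty, and the expected reputation is b + a*u.

```python
# Hedged sketch: subjective-logic opinion from evidence counts (a simplification of the
# multi-weighted update described for DREAMS).

def opinion_from_evidence(r, s, base_rate=0.5, W=2.0):
    """r: (weighted) positive evidence, s: (weighted) negative evidence."""
    b = r / (r + s + W)              # belief
    d = s / (r + s + W)              # disbelief
    u = W / (r + s + W)              # uncertainty
    expected_reputation = b + base_rate * u
    return b, d, u, expected_reputation

# A vehicle with 8 positive and 1 negative interaction reports.
print(opinion_from_evidence(r=8, s=1))   # (~0.73, ~0.09, ~0.18, ~0.82)
```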

Journal ArticleDOI
TL;DR: This work makes a novel attempt to identify the need for DDoS mitigation solutions involving multi-level information flow and effective resource management during an attack, and concludes that there is a strong requirement for solutions designed with utility computing models in mind.

Proceedings ArticleDOI
03 Jul 2017
TL;DR: This work first characterizes a class of ‘learnable algorithms’ and then designs DNNs to approximate some algorithms of interest in wireless communications, demonstrating the superior ability of DNNs for approximating two considerably complex algorithms that are designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.
Abstract: For decades, optimization has played a central role in addressing wireless resource management problems such as power control and beamformer design. However, these algorithms often require a considerable number of iterations for convergence, which poses challenges for real-time processing. In this work, we propose a new learning-based approach for wireless resource management. The key idea is to treat the input and output of a resource allocation algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately and effectively by a DNN of moderate size, then such DNN can be used for resource allocation in almost real time, since passing the input through a DNN to get the output only requires a small number of simple operations. In this work, we first characterize a class of ‘learnable algorithms’ and then design DNNs to approximate some algorithms of interest in wireless communications. We use extensive numerical simulations to demonstrate the superior ability of DNNs for approximating two considerably complex algorithms that are designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.
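
A compact sketch of the idea (a toy stand-in, not the paper's network or training setup): generate input-output pairs from a slow reference power-allocation routine offline, train a small fully connected network to imitate the mapping, and then use a single forward pass for near-real-time allocation. The "reference algorithm" below is a trivial placeholder rather than, say, an actual WMMSE implementation.

```python
# Hedged sketch: training a small DNN to imitate a placeholder resource-allocation
# algorithm, in the spirit of "learning to optimize". Not the paper's architecture.
import torch
import torch.nn as nn

def reference_algorithm(h):
    # Placeholder "slow" mapping from channel gains to power shares.
    return torch.softmax(h, dim=-1)

torch.manual_seed(0)
X = torch.rand(1024, 4)              # channel realizations for 4 links
Y = reference_algorithm(X)           # labels produced offline by the algorithm

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4), nn.Softmax(dim=-1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                 # supervised imitation training
    opt.zero_grad()
    loss = loss_fn(net(X), Y)
    loss.backward()
    opt.step()

# Fast inference: one forward pass replaces many iterations of the reference algorithm.
print(net(torch.rand(1, 4)))
```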

Journal ArticleDOI
16 Mar 2017
TL;DR: This paper is focused on the resource allocation problem in distribution systems ahead of a coming hurricane, and a heuristic method is proposed, which obtains the allocation plan by solving a mixed-integer linear program.
Abstract: Proactive preparedness to cope with extreme weather events is significantly helpful in reducing the restoration cost and enhancing the resilience of distribution systems. This paper is focused on the resource allocation problem in distribution systems ahead of a coming hurricane. Generation resources such as diesel oil and batteries are considered for allocation, which can be used to serve outage critical load in the post-hurricane restoration. Electric buses are also considered as a kind of resource. Considering the uncertainties of system faults, the allocation problem is formulated into a mixed-integer stochastic nonlinear program. A heuristic method is then proposed, which obtains the allocation plan by solving a mixed-integer linear program. Numerical simulations are performed on the IEEE 123-node feeder system under several scenarios to demonstrate the effectiveness of the proposed method. The impacts of resources transportation cost, initial distribution of electric buses, and hurricane severity on the allocation plan are discussed.

Journal ArticleDOI
TL;DR: This article analyzes the RAN slicing problem in a multi-cell network in relation to the RRM functionalities that can be used as a support for splitting the radio resources among the RAN slices.
Abstract: Network slicing is a fundamental capability for future 5G networks to properly support current and envisioned future application scenarios. Network slicing facilitates a cost-effective deployment and operation of multiple logical networks over a common physical network infrastructure such that each network is customized to best serve the needs of specific applications (e.g., mobile broadband, Internet of Things applications) and/or communications service providers (e.g., special purpose service providers for different sectors such as public safety, utilities, smart city, and automobiles). Slicing a RAN becomes particularly challenging due to the inherently shared nature of the radio channel and the potential influence that any transmitter may have on any receiver. In this respect, this article analyzes the RAN slicing problem in a multi-cell network in relation to the RRM functionalities that can be used as a support for splitting the radio resources among the RAN slices. Four different RAN slicing approaches are presented and compared from different perspectives, such as the granularity in the assignment of radio resources and the degrees of isolation and customization.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an inspiring panorama of the initiatives that have been developed throughout the world for sustainable natural resource management and improve societal development and provided case studies of regions in China and other regions.

Journal ArticleDOI
TL;DR: This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, and cost of execution of the workflows, and utilize idle cloud resources effectively.
Abstract: Multi-tenancy is one of the key features of cloud computing, which provides scalability and economic benefits to the end-users and service providers by sharing the same cloud platform and its underlying infrastructure with the isolation of shared network and compute resources. However, resource management in the context of multi-tenant cloud computing is becoming one of the most complex tasks due to the inherent heterogeneity and resource isolation. This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, and cost of execution of the workflows, and utilize idle cloud resources effectively. The proposed algorithm is compared with state-of-the-art algorithms, i.e., First Come First Served (FCFS), EASY Backfilling, and Minimum Completion Time (MCT) scheduling policies, to evaluate the performance. Further, a proof-of-concept experiment with real-world scientific workflow applications is performed to demonstrate the scalability of the CWSA, which verifies the effectiveness of the proposed solution. The simulation results show that the proposed scheduling policy improves the workflow performance and outperforms the aforementioned alternative scheduling policies under typical deployment scenarios.
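
For context on one of the baselines, here is a minimal sketch of Minimum Completion Time (MCT) style assignment on identical machines (illustrative only; CWSA itself is more involved and workflow-aware): each task, taken in arrival order, goes to the resource where it would finish earliest.

```python
# Hedged sketch: an MCT-style baseline (each task to the resource that finishes it
# earliest), shown only to illustrate one of the comparison policies.

def mct_schedule(task_runtimes, n_resources):
    finish_times = [0.0] * n_resources
    assignment = []
    for runtime in task_runtimes:            # tasks in arrival order
        r = min(range(n_resources), key=lambda i: finish_times[i] + runtime)
        finish_times[r] += runtime
        assignment.append(r)
    return assignment, max(finish_times)     # placement and resulting makespan

print(mct_schedule([5, 3, 8, 2, 4], n_resources=2))  # ([0, 1, 1, 0, 0], 11.0)
```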

Journal ArticleDOI
TL;DR: This article provides a comprehensive review of the state-of-the-art contributions from the perspective of software-defined networking and machine-to-machine integration, and the overall design of the proposed software-defined machine-to-machine (SD-M2M) framework is presented.
Abstract: The successful realization of smart energy management relies on ubiquitous and reliable information exchange among millions of sensors and actuators deployed in the field with little or no human intervention. This motivates us to propose a unified communication framework for smart energy management by exploring the integration of software-defined networking with machine-to-machine communication. In this article, first we provide a comprehensive review of the state-of-the-art contributions from the perspective of software defined networking and machineto- machine integration. Second, the overall design of the proposed software-defined machine-to-machine (SD-M2M) framework is presented, with an emphasis on its technical contributions to cost reduction, fine granularity resource allocation, and end-to-end quality of service guarantee. Then a case study is conducted for an electric vehicle energy management system to validate the proposed SD-M2M framework. Finally, we identify several open issues and present key research opportunities.

Journal ArticleDOI
TL;DR: In this article, a conceptual schema for arraying property-rights regimes that distinguishes among diverse bundles of rights is presented. However, the conceptual framework has been challenged by the fact that many more social actors are involved in resource management than the local communities at the focus of the original analysis.

Posted Content
TL;DR: In this paper, the authors present a survey of the state of the art on stream processing engines and mechanisms for exploiting resource elasticity features of cloud computing in stream processing and discuss solutions proposed in the literature to address them.
Abstract: Under several emerging application scenarios, such as in smart cities, operational monitoring of large infrastructure, wearable assistance, and the Internet of Things, continuous data streams must be processed under very short delays. Several solutions, including multiple software engines, have been developed for processing unbounded data streams in a scalable and efficient manner. More recently, architectures have been proposed to use edge computing for data stream processing. This paper surveys the state of the art in stream processing engines and mechanisms for exploiting resource elasticity features of cloud computing in stream processing. Resource elasticity allows an application or service to scale out/in according to fluctuating demands. Although such features have been extensively investigated for enterprise applications, stream processing poses challenges on achieving elastic systems that can make efficient resource management decisions based on current load. Elasticity becomes even more challenging in highly distributed environments comprising edge and cloud computing resources. This work examines some of these challenges and discusses solutions proposed in the literature to address them.
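
A hedged sketch of the basic elasticity decision such engines face (a simplistic threshold policy of my own, not any particular engine's controller): scale an operator out when the measured load per replica exceeds a high watermark and in when it drops below a low watermark.

```python
# Hedged sketch: threshold-based elasticity for a stream operator (illustrative policy).

def rescale(current_replicas, tuples_per_sec, capacity_per_replica,
            high=0.8, low=0.3, max_replicas=32):
    utilization = tuples_per_sec / (current_replicas * capacity_per_replica)
    if utilization > high and current_replicas < max_replicas:
        return current_replicas + 1          # scale out
    if utilization < low and current_replicas > 1:
        return current_replicas - 1          # scale in
    return current_replicas

print(rescale(current_replicas=4, tuples_per_sec=3500, capacity_per_replica=1000))  # 5
```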

Proceedings ArticleDOI
Lu Chengzhi1, Kejiang Ye1, Guoyao Xu1, Cheng-Zhong Xu1, Tongxin Bai1 
01 Dec 2017
TL;DR: This paper performs a deep analysis of a newly released trace dataset from Alibaba in September 2017, consisting of detailed statistics of 11,089 online service jobs and 12,951 batch jobs co-located on 1,300 machines over 12 hours, revealing several important insights about different types of imbalance in the Alibaba cloud.
Abstract: To improve resource efficiency and design intelligent schedulers for clouds, it is necessary to understand the workload characteristics and machine utilization in large-scale cloud data centers. In this paper, we perform a deep analysis of a newly released trace dataset from Alibaba in September 2017, consisting of detailed statistics of 11,089 online service jobs and 12,951 batch jobs co-located on 1,300 machines over 12 hours. To the best of our knowledge, this is one of the first works to analyze the Alibaba public trace. Our analysis reveals several important insights about different types of imbalance in the Alibaba cloud. Such imbalances exacerbate the complexity and challenge of cloud resource management, and might incur severe waste of resources and low cluster utilization. 1) Spatial imbalance: heterogeneous resource utilization across machines and workloads. 2) Temporal imbalance: greatly time-varying resource usage per workload and machine. 3) Imbalanced proportions of multi-dimensional resource (CPU and memory) utilization per workload. 4) Imbalanced resource demands and runtime statistics (duration and task number) between online service and offline batch jobs. We argue that accommodating such imbalances during resource allocation is critical to improving cluster efficiency, and will motivate the emergence of new resource managers and schedulers.
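
The spatial and temporal imbalance notions can be quantified with simple dispersion statistics. Below is a hedged numpy sketch on synthetic utilization data standing in for the trace files: the coefficient of variation across machines at a single time slot captures spatial imbalance, and across time for a single machine captures temporal imbalance.

```python
# Hedged sketch: coefficients of variation as imbalance measures, computed on synthetic
# data standing in for the Alibaba trace.
import numpy as np

rng = np.random.default_rng(1)
cpu_util = rng.uniform(0.1, 0.9, size=(1300, 144))   # machines x 5-minute intervals

spatial_cv = cpu_util[:, 0].std() / cpu_util[:, 0].mean()    # across machines, one slot
temporal_cv = cpu_util[0, :].std() / cpu_util[0, :].mean()   # across time, one machine
print(f"spatial CV ~ {spatial_cv:.2f}, temporal CV ~ {temporal_cv:.2f}")
```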

Journal ArticleDOI
TL;DR: A novel cost-oriented optimization model is proposed for a cloud-based ICT infrastructure to allocate cloud computing resources in a flexible and cost-efficient way, and is compared with a mature simulated-annealing-based algorithm.
Abstract: With the rapid increase of monitoring devices and controllable facilities on the demand side of electricity networks, more solid information and communication technology (ICT) resources are required to support the development of demand side management (DSM). Different from traditional computation in power systems, which customizes ICT resources for mapping applications separately, DSM especially asks for scalability and economic efficiency, because there are more and more stakeholders participating in the computation process. This paper proposes a novel cost-oriented optimization model for a cloud-based ICT infrastructure to allocate cloud computing resources in a flexible and cost-efficient way. Uncertain factors, including imprecise computation load prediction and unavailability of computing instances, can also be considered in the proposed model. A modified priority list algorithm is specially developed in order to efficiently solve the proposed optimization model and is compared with a mature simulated-annealing-based algorithm. Comprehensive numerical studies are carried out to demonstrate the effectiveness of the proposed cost-oriented model in reducing the operation cost of the cloud platform in DSM.

Journal ArticleDOI
TL;DR: This paper advances scale mismatch analysis by explicitly considering collaborations among local and regional organizations doing estuary watershed restoration and how these collaborations align with environmental patterns, using social–ecological network analysis (SENA), which considers relationships among and between social and ecological units.
Abstract: Resource management boundaries seldom align with environmental systems, which can lead to social and ecological problems. Mapping and analyzing how resource management organizations in different areas collaborate can provide vital information to help overcome such misalignment. Few quantitative approaches exist, however, to analyze social collaborations alongside environmental patterns, especially among local and regional organizations (i.e., in multilevel governance settings). This paper develops and applies such an approach using social-ecological network analysis (SENA), which considers relationships among and between social and ecological units. The framework and methods are shown using an estuary restoration case from Puget Sound, United States. Collaboration patterns and quality are analyzed among local and regional organizations working in hydrologically connected areas. These patterns are correlated with restoration practitioners' assessments of the productivity of their collaborations to inform network theories for natural resource governance. The SENA is also combined with existing ecological data to jointly consider social and ecological restoration concerns. Results show potentially problematic areas in nearshore environments, where collaboration networks measured by density (percentage of possible network connections) and productivity are weakest. Many areas also have high centralization (a few nodes hold the network together), making network cohesion dependent on key organizations. Although centralization and productivity are inversely related, no clear relationship between density and productivity is observed. This research can help practitioners to identify where governance capacity needs strengthening and jointly consider social and ecological concerns. It advances SENA by developing a multilevel approach to assess social-ecological (or social-environmental) misalignments, also known as scale mismatches.
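
The two network measures named above, density and centralization, can be computed directly. Here is a small networkx sketch on a toy collaboration graph (illustrative, not the Puget Sound data): density is the share of possible ties that are present, and degree centralization follows Freeman's formula.

```python
# Hedged sketch: collaboration-network density and degree centralization with networkx
# (toy graph; not the study's dataset).
import networkx as nx

G = nx.Graph()
G.add_edges_from([("org_a", "org_b"), ("org_a", "org_c"),
                  ("org_a", "org_d"), ("org_b", "org_c")])

density = nx.density(G)                      # share of possible ties that are present

degrees = [d for _, d in G.degree()]
n, d_max = G.number_of_nodes(), max(degrees)
centralization = sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))  # Freeman's formula

print(f"density = {density:.2f}, degree centralization = {centralization:.2f}")
```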