
Showing papers on "Shared resource published in 2016"


Journal ArticleDOI
TL;DR: This paper transforms the V2X requirements into constraints that are computable using only slowly varying channel state information, formulates an optimization problem that accounts for the requirements of both vehicular and cellular users, and proposes a heuristic algorithm called Cluster-based Resource block sharing and pOWer allocatioN (CROWN).
Abstract: Deploying direct device-to-device (D2D) links is a promising technology for vehicle-to-X (V2X) applications. However, intracell interference, along with stringent latency and reliability requirements, poses challenging issues. In this paper, we study the radio resource management problem for D2D-based safety-critical V2X communications. We first transform the V2X requirements into constraints that are computable using only slowly varying channel state information. Second, we formulate an optimization problem, taking into account the requirements of both vehicular users (V-UEs) and cellular users (C-UEs), where resource sharing can take place not only between a V-UE and a C-UE but also among different V-UEs. The NP-hardness of the problem is rigorously proved. Moreover, a heuristic algorithm, called Cluster-based Resource block sharing and pOWer allocatioN (CROWN), is proposed to solve the problem. Finally, simulation results indicate promising performance of the CROWN scheme.

178 citations


Journal ArticleDOI
TL;DR: A hybrid method is developed that takes full advantage of both traffic offloading and resource sharing, in which cellular base stations offload traffic to WiFi networks while simultaneously occupying a certain number of time slots on unlicensed bands.
Abstract: Traffic offloading and resource sharing are two common methods for delivering cellular data traffic over unlicensed bands. In this paper, we first develop a hybrid method that takes full advantage of both traffic offloading and resource sharing, where cellular base stations (BSs) offload traffic to WiFi networks and simultaneously occupy a certain number of time slots on unlicensed bands. Then, we analytically compare the cellular throughput of the three methods, with a guarantee on WiFi per-user throughput, in the single-BS scenario. We find that traffic offloading achieves better performance than resource sharing when the number of existing WiFi users is below a threshold, and that the hybrid method achieves the same performance as resource sharing when the number of existing WiFi users is large enough. In the multi-BS scenario, where the coverage of small cells and WiFi access points mutually overlaps, we consider maximizing the minimum average per-user throughput of each small cell and derive a closed-form expression for the throughput upper bound of each method. Practical traffic offloading and resource sharing algorithms are also developed for the three methods. Numerical results validate our theoretical analysis and demonstrate the effectiveness of the proposed algorithms.

125 citations


Journal ArticleDOI
TL;DR: Results indicate that the proposed resource-sharing scheme with the geodistributed cloudlets can improve resource utilization and reduce system power consumption and with the integration of a software-defined network architecture, a vehicular network can easily reach a globally optimal solution.
Abstract: Vehicular networks are expected to accommodate a large number of data-heavy mobile devices and multiapplication services, yet they face a significant challenge in dealing with the ever-increasing demand of mobile traffic. In this paper, we present a new paradigm of fifth-generation (5G)-enabled vehicular networks to improve network capacity and system computing capability. We extend the original cloud radio access network (C-RAN) to integrate local cloud services and provide a low-cost, scalable, self-organizing, and effective solution; the new C-RAN is named enhanced C-RAN (EC-RAN). Cloudlets in EC-RAN are geographically distributed for local services. Furthermore, device-to-device (D2D) communication and heterogeneous networks are essential technologies in 5G systems that can greatly improve spectrum efficiency and support large-scale live video streaming in short-distance communications. We exploit a matrix game-theoretic approach to manage and allocate cloudlet resources, where a Nash equilibrium solution can be obtained by a Karush–Kuhn–Tucker (KKT) nonlinear complementarity approach. Illustrative results indicate that the proposed resource-sharing scheme with geodistributed cloudlets can improve resource utilization and reduce system power consumption. Moreover, with the integration of a software-defined network architecture, a vehicular network can easily reach a globally optimal solution.

123 citations


Journal ArticleDOI
TL;DR: Social ties in human-formed social networks are exploited to enhance D2D resource sharing, and a social-community-aware D2D resource allocation framework is proposed, in which cellular users share their channels with D2D communications in the same community, formed by a group of people with close social ties.
Abstract: Device-to-device (D2D) communication has been proposed as a promising technology for future cellular communication systems due to its advantages of high spectrum efficiency, low energy consumption, and enhanced system capacity. Resource allocation for D2D communications, which occupy nonorthogonal channels with cellular transmissions, is an important problem in terms of achieving the aforementioned benefits. In this problem, there are two fundamental challenges to be addressed: 1) how to stimulate cellular users to share their resources and 2) how to efficiently allocate channel resources to D2D pairs. In this paper, we exploit social ties in human-formed social networks to enhance D2D resource sharing and further propose a social-community-aware D2D resource allocation framework, where cellular users share their channels with D2D communications in the same community, formed by a group of people with close social ties. After that, we propose a two-step coalition game, where coalition formation is established for communities and an optimal resource allocation problem is formulated for D2D pairs. Extensive simulations on random networks and a real mobility trace verify the effectiveness of the proposed scheme.

99 citations


Journal ArticleDOI
TL;DR: Extensive simulations show that users can benefit from both wireless network virtualization and software-defined D2D communications, and the proposed scheme can achieve considerable performance gains in both system throughput and user utility under practical network settings.
Abstract: Software-defined networking (SDN) and network function virtualization (NFV) are a promising system architecture and control mechanism for future networks. Although some work has been done on wireless SDN and NFV, recent advancements in device-to-device (D2D) communications are largely ignored in this novel framework. In this paper, we study the integration of D2D communication into the framework of SDN and NFV. An inherent challenge in supporting software-defined D2D is the imperfectness of network state information, including channel state information (CSI) and queuing state information (QSI), in virtual wireless networks. To address this challenge, we formulate the resource sharing problem in this framework as a discrete stochastic optimization problem and develop discrete stochastic approximation algorithms to solve it. Such algorithms reduce the computational complexity compared with exhaustive search while achieving satisfactory performance. Both static and time-varying wireless channels are considered. Extensive simulations show that users can benefit from both wireless network virtualization and software-defined D2D communications, and that our proposed scheme achieves considerable performance gains in both system throughput and user utility under practical network settings.
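The discrete stochastic approximation idea above can be illustrated with a minimal random-search sketch (in the spirit of Andradottir's method, not the paper's exact algorithm; the objective and candidate set are assumptions): repeatedly compare the current candidate against a randomly drawn one using noisy evaluations, move to the winner, and report the most-visited candidate as the estimated optimum.

```python
import random

def discrete_sa(candidates, noisy_f, iters=2000, rng=random):
    """Discrete stochastic approximation via random search: each step draws a
    random candidate, compares it with the current point using noisy samples,
    moves to the winner, and tracks visit counts; the most-visited candidate
    is returned as the estimate of the optimum."""
    x = rng.choice(candidates)
    visits = {c: 0 for c in candidates}
    for _ in range(iters):
        y = rng.choice(candidates)
        if noisy_f(y) > noisy_f(x):  # noisy pairwise comparison
            x = y
        visits[x] += 1               # occupation counts drive the estimate
    return max(visits, key=visits.get)
```

The visit-count estimate is what makes the scheme robust to evaluation noise: even if an occasional comparison goes the wrong way, the chain spends most of its time at the true optimum.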

92 citations


Journal ArticleDOI
TL;DR: This study employs game-theoretic approaches to model the problem of minimizing energy consumption as a Stackelberg game between a system monitor and decentralized scheduler agents, and the problem of minimizing average response time as a noncooperative game among the agents as they compete with one another for shared resources.
Abstract: Data centers hosting distributed computing systems consume huge amounts of electrical energy, contributing to high operational costs, whereas the utilization of data centers continues to be very low. Moreover, a data center generally consists of heterogeneous servers with different performance and energy characteristics, and failure to fully consider this heterogeneity leads to both suboptimal energy savings and degraded performance. In this study, we employ game-theoretic approaches to model the problem of minimizing energy consumption as a Stackelberg game. In our model, the system monitor, who plays the role of the leader, maximizes profit by adjusting resource provisioning, whereas scheduler agents, who act as followers, select resources to obtain optimal performance. In addition, we model the problem of minimizing the average response time of tasks as a noncooperative game among decentralized scheduler agents as they compete with one another for the shared resources. Several algorithms are presented to implement the game models. Simulation results demonstrate that the proposed technique has immense potential to improve energy efficiency under dynamic work scenarios without compromising service level agreements.
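The follower-level competition described above can be illustrated with a toy noncooperative load-balancing game (an illustrative sketch, not the paper's model): each agent places one unit of work on a heterogeneous server, pays a cost proportional to that server's load divided by its speed, and unilateral best responses are iterated until no agent wants to move, which is a Nash equilibrium of this congestion game.

```python
def best_response_equilibrium(n_agents, speeds, max_rounds=100):
    """Toy noncooperative load-balancing game on heterogeneous servers:
    agent i's cost on server k is (load on k) / speeds[k].  Iterated
    unilateral best responses converge for this congestion game."""
    assign = [0] * n_agents
    for _ in range(max_rounds):
        moved = False
        for i in range(n_agents):
            # load seen by agent i, excluding its own unit of work
            load = [0] * len(speeds)
            for j, k in enumerate(assign):
                if j != i:
                    load[k] += 1
            # best response: join the server minimizing (load_k + 1) / speed_k
            best = min(range(len(speeds)),
                       key=lambda k: (load[k] + 1) / speeds[k])
            if best != assign[i]:
                assign[i], moved = best, True
        if not moved:          # no agent deviates: Nash equilibrium reached
            return assign
    return assign
```

With three agents and servers of speeds 2 and 1, the equilibrium places two agents on the fast server and one on the slow server, matching the proportional-fairness intuition.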

63 citations


Journal ArticleDOI
TL;DR: This paper closely analyzes two representative classes of applications, namely streaming-like and file-sharing-like, develops optimized solutions to coordinate cellular and D2D communications with the best resource sharing mode, and enables better resource utilization for heterogeneous applications with less possibility of underprovisioning or overprovisioning.
Abstract: Mobile data traffic has been experiencing a phenomenal rise in the past decade. This ever-increasing data traffic puts significant pressure on the infrastructure of state-of-the-art cellular networks. Recently, device-to-device (D2D) communication that smartly explores local wireless resources has been suggested as a complement of great potential, particularly for the popular proximity-based applications with instant data exchange between nearby users. Significant studies have been conducted on coordinating the D2D and the cellular communication paradigms that share the same licensed spectrum, commonly with an objective of maximizing the aggregated data rate. The new generation of cellular networks, however, have long supported heterogeneous networked applications, which have highly diverse quality-of-service (QoS) specifications. In this paper, we jointly consider resource allocation and power control with heterogeneous QoS requirements from the applications. We closely analyze two representative classes of applications, namely streaming-like and file-sharing-like, and develop optimized solutions to coordinate the cellular and D2D communications with the best resource sharing mode. We further extend our solution to accommodate more general application scenarios and larger system scales. Extensive simulations under realistic configurations demonstrate that our solution enables better resource utilization for heterogeneous applications with less possibility of underprovisioning or overprovisioning.

59 citations


Proceedings ArticleDOI
15 Oct 2016
TL;DR: By dynamically allocating resources and carefully oversubscribing them when necessary, Zorua improves or retains the performance of applications that are already highly tuned to best utilize the hardware resources.
Abstract: This paper introduces a new resource virtualization framework, Zorua, that decouples the programmer-specified resource usage of a GPU application from the actual allocation in the on-chip hardware resources. Zorua enables this decoupling by virtualizing each resource transparently to the programmer. The virtualization provided by Zorua builds on two key concepts: dynamic allocation of the on-chip resources and their oversubscription using a swap space in memory. Zorua provides a holistic GPU resource virtualization strategy, designed to (i) adaptively control the extent of oversubscription, and (ii) coordinate the dynamic management of multiple on-chip resources (i.e., registers, scratchpad memory, and thread slots) to maximize the effectiveness of virtualization. Zorua employs a hardware-software codesign comprising the compiler, a runtime system, and hardware-based virtualization support. The runtime system leverages information from the compiler regarding the resource requirements of each program phase to (i) dynamically allocate/deallocate the different resources in the physically available on-chip resources or their swap space, and (ii) manage the tradeoff between higher thread-level parallelism due to virtualization and the latency and capacity overheads of swap space usage. We demonstrate that by providing the illusion of more resources than physically available, via controlled and coordinated virtualization, Zorua offers several important benefits: (i) Programming Ease. Zorua eases the burden on the programmer to provide code that is tuned to efficiently utilize the physically available on-chip resources. (ii) Portability. Zorua alleviates the necessity of re-tuning an application's resource usage when porting the application across GPU generations. (iii) Performance.
By dynamically allocating resources and carefully oversubscribing them when necessary, Zorua improves or retains the performance of applications that are already highly tuned to best utilize the hardware resources. The holistic virtualization provided by Zorua can also enable other uses, including fine-grained resource sharing among multiple kernels and low-latency preemption of GPU programs.

58 citations


Proceedings ArticleDOI
10 Apr 2016
TL;DR: In this paper, the authors investigate and compare various sharing configurations in order to capture the enhanced potential of mmWave communications and deliver a number of key insights, corroborated by detailed simulations, which include an analysis of the effects of the distinctive propagation characteristics of the mmWave channel, along with a rigorous multi-antenna characterization.
Abstract: In this paper, we discuss resource sharing, a key dimension in mmWave network design in which spectrum, access, and/or network infrastructure resources can be shared by multiple operators. It is argued that this sharing paradigm will be essential to fully exploit the tremendous amounts of bandwidth and the large number of antenna degrees of freedom available in these bands, and to provide statistical multiplexing to accommodate the highly variable nature of the traffic. We investigate and compare various sharing configurations in order to capture the enhanced potential of mmWave communications. Our results reflect both the technical and the economic aspects of the various sharing paradigms. We deliver a number of key insights, corroborated by detailed simulations, which include an analysis of the effects of the distinctive propagation characteristics of the mmWave channel, along with a rigorous multi-antenna characterization. Key findings of this study include (i) the strong dependence of the comparative results on channel propagation and antenna characteristics, and therefore the need to accurately model them, and (ii) the desirability of a full spectrum and infrastructure sharing configuration, which may result in increased user rates as well as economic advantages for both service providers.

56 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm can offer near-optimal performance and outperforms comparable algorithms, especially in terms of achievable throughput, even with markedly reduced complexity.
Abstract: Device-to-device (D2D) communication underlaying cellular networks allows closely located user equipments (UEs) to communicate directly by sharing the radio resources assigned to cellular UEs (CUEs). We consider the case of multiple D2D UEs (DUEs) sharing the same channel while each DUE can reuse multiple channels. In this letter, a two-phase resource sharing algorithm is designed in such a way that its computational complexity can be adapted to the network condition. The initial set of candidate channels that can be reused by each DUE is adaptively determined in the first phase. In the second phase, Lagrangian dual decomposition is used to determine the optimal power for DUEs that maximizes the network sum-rate. Simulation results show that the proposed algorithm offers near-optimal performance and outperforms comparable algorithms, especially in achievable throughput, even with markedly reduced complexity.
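The Lagrangian-dual step can be illustrated on a simplified single-budget version of the power allocation problem (a sketch under assumed conditions, not the letter's exact formulation with interference constraints): maximize sum_i log(1 + g_i p_i) subject to a total power budget P. The KKT conditions yield the classic water-filling form, and the dual variable is found by bisection.

```python
def waterfill(gains, P, iters=200):
    """Toy Lagrangian-dual power allocation: maximize sum_i log(1 + g_i*p_i)
    subject to sum_i p_i <= P.  KKT gives p_i = max(0, 1/lam - 1/g_i);
    bisect on the dual variable lam until the budget is exactly used."""
    def total(lam):
        return sum(max(0.0, 1.0 / lam - 1.0 / g) for g in gains)

    lo, hi = 1e-12, max(gains)   # total(lo) is huge, total(hi) == 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total(mid) > P:       # lam too small: allocation over budget
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]
```

As expected from water-filling, stronger channels (larger g_i) receive more power, and channels below the water level receive none.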

55 citations


Journal ArticleDOI
01 Mar 2016
TL;DR: For evaluating and balancing the tradeoff between performance and power consumption, a tradeoff parameter and a pure profit optimization model are developed based on the presented correlation model.
Abstract: Cloud computing is a new emerging technology aimed at large-scale resource sharing and service-oriented computing. To achieve the efficient use of cloud resources for supporting a cloud service, many important factors need to be considered, particularly, reliability, performance, and power consumption of the cloud service. Evaluation of these metrics is essential for further designing rational resource scheduling strategies. However, these metrics are closely related; they do affect one another. The cloud system should consider correlations among the metrics to make more precise evaluation. Most of the existing approaches and models handle these metrics separately, and thus they cannot be used to study the correlations. This paper presents a new hierarchical correlation model for analyzing and evaluating these correlated metrics, which encompasses Markov models, queuing theory, and a Bayesian approach. Various distinctive characteristics of the cloud system are investigated and captured in the model, such as multiple virtual machines (VMs) hosted on the same server, common cause failures of co-located VMs caused by server failures, and logical mapping mechanisms for multicore CPUs. Moreover, for evaluating and balancing the tradeoff between performance and power consumption, a tradeoff parameter and a pure profit optimization model are developed based on the presented correlation model. Numerical examples are provided.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: This paper proves that the proposed resource-oriented partitioned scheduling using PCP combined with a reasonable allocation algorithm can achieve a non-trivial speedup factor guarantee and is highly effective in terms of task sets deemed schedulable.
Abstract: When concurrent real-time tasks have to access shared resources, to prevent race conditions, the synchronization and resource access must ensure mutual exclusion, e.g., by using semaphores; that is, no two concurrent accesses to one shared resource are in their critical sections at the same time. For uniprocessor systems, the priority ceiling protocol (PCP) has been widely accepted and supported in real-time operating systems. However, it is still arguable whether there exists a preferable approach for resource sharing in multiprocessor systems. In this paper, we show that the proposed resource-oriented partitioned scheduling using PCP, combined with a reasonable allocation algorithm, can achieve a non-trivial speedup factor guarantee. Specifically, we prove that our task mapping and resource allocation algorithm has a speedup factor of 11 - 6/(m+1) on a platform comprising m processors, where a task may request at most one shared resource and the number of requests on any resource by any single job is at most one. Our empirical investigations show that the proposed algorithm is highly effective in terms of task sets deemed schedulable.
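The core PCP mechanism mentioned above can be sketched in a few lines (a uniprocessor-style sketch with illustrative names): each resource's priority ceiling is the highest priority of any task that may lock it, and a job may acquire a free resource only if its priority exceeds the ceilings of all resources currently locked by other jobs.

```python
def ceilings(usage):
    """usage: {resource: set of priorities of tasks that may lock it}.
    A resource's priority ceiling is the highest such priority
    (larger number = higher priority here)."""
    return {r: max(prios) for r, prios in usage.items()}

def may_lock(job_prio, locked_by_others, ceil):
    """PCP admission test: a job may acquire a free resource only if its
    priority is strictly higher than the ceiling of every resource
    currently locked by other jobs."""
    return all(job_prio > ceil[r] for r in locked_by_others)
```

This ceiling rule is what bounds blocking to a single critical section per job and prevents deadlock on a uniprocessor; the paper's contribution is how to carry such guarantees over to partitioned multiprocessor scheduling.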

Journal ArticleDOI
TL;DR: A multi-timescale resource sharing mechanism, which consists of a global resource allocation process and multiple local resource allocation processes that are performed at different time scales, achieves efficient resource sharing and isolation among service providers.
Abstract: Cloud radio access network (C-RAN) as a promising and cost-efficient cellular architecture has been proposed to meet the increasing demand of wireless data traffic. The main concept of C-RAN is to decouple the baseband unit (BBU) and the remote radio head (RRH), and place the BBUs in a data center for centralized control and processing. In this paper, we study the resource sharing problem in a fronthaul constrained C-RAN, where multiple service providers lease radio resources from a network operator to serve their subscribers. To provide isolation among different service providers, we introduce a threshold-based policy to control the interference among RRHs, and define a new metric to provide minimum resource guarantee for service providers. By leveraging a mobility prediction method, the user locations are predicted for traffic demand estimation and interference control. We propose a multi-timescale resource sharing mechanism, which consists of a global resource allocation process and multiple local resource allocation processes that are performed at different time scales. Simulation results show that the proposed mechanism achieves efficient resource sharing and isolation among service providers.

Journal ArticleDOI
TL;DR: A formal access model is designed to analyze the translation of an authorization policy into an equivalent encryption policy, and the effect of role hierarchy structure in the authorization process is investigated.
Abstract: Data outsourcing is a major component of cloud computing because data owners are able to distribute resources to external services for sharing with users and organizations. A crucial problem for owners is how to ensure that sensitive information is accessed only by legitimate users through the trusted services. We address the problem with access control methods that enforce selective access to outsourced data without involving the owner in authorization. The basic idea is to combine cryptography with authorizations: data owners assign keys to roles, which enforce access via encryption. A formal access model is designed to analyze the translation of an authorization policy into an equivalent encryption policy. The paper also investigates the effect of role hierarchy structure in the authorization process. The role-based access management methods are implemented with XACML using the WSO2 Identity Server. Comparisons with other related work are presented, and future work is outlined.
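The policy-to-encryption translation can be sketched with a toy key-assignment scheme (illustrative only, not the paper's construction; the hierarchy format is an assumption): derive each role's key from a master secret, and let a user holding a senior role also derive the keys of all junior roles reachable in the hierarchy, so that authorization is enforced purely by which decryption keys a user can obtain.

```python
import hashlib
import hmac

def role_keys(master, hierarchy, role):
    """Toy key assignment mirroring policy-to-encryption translation:
    each role's key is HMAC(master, role-name), and a user in `role`
    also derives keys for every junior role reachable in `hierarchy`
    (a dict: role -> list of directly junior roles)."""
    def key(r):
        return hmac.new(master, r.encode(), hashlib.sha256).hexdigest()

    keys, stack = {}, [role]
    while stack:
        r = stack.pop()
        if r not in keys:
            keys[r] = key(r)                # derive this role's key
            stack.extend(hierarchy.get(r, []))  # walk to junior roles
    return keys
```

Data encrypted under a junior role's key is then readable by any senior role, which is exactly the role-hierarchy effect the paper analyzes.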

Journal ArticleDOI
TL;DR: ProRed is introduced, a novel prognostic redesign technique that promotes the backup resource sharing at the virtual network level, prior to the embedding phase, and achieves lower-cost mapping solutions and greatly enhances the achievable backup sharing, boosting the overall network's admissibility.
Abstract: In a virtualized infrastructure where multiple virtual networks (or tenants) are running atop the same physical network (e.g., a data center network), a single facility node (e.g., a server) failure can bring down multiple virtual machines, disconnecting their corresponding services and leading to millions of dollars in penalty cost. To overcome losses, tenants or virtual networks can be augmented with a dedicated set of backup nodes and links provisioned with enough backup resources to assume any single facility node failure. This approach is commonly referred to as Survivable Virtual Network (SVN) design. The achievable reliability guarantee of the resultant SVN could come at the expense of lowering the substrate network utilization efficiency, and subsequently its admissibility, since the provisioned backup resources are reserved and remain idle until failures occur. Backup-sharing can replace the dedicated survivability scheme to circumvent the inconvenience of idle resources and reduce the footprints of backup resources. Indeed the problem of SVN design with backup-sharing has recurred multiple times in the literature. In most of the existing work, designing an SVN is bounded to a fixed number of backup nodes; further backup-sharing is only explored and optimized during the embedding phase. This renders the existing redesign techniques agnostic to the backup resource sharing in the substrate network, and highly dependent on the efficiency of the adopted mapping approach. In this paper, we diverge from this dogmatic approach, and introduce ProRed, a novel prognostic redesign technique that promotes the backup resource sharing at the virtual network level, prior to the embedding phase. Our numerical results prove that this redesign technique achieves lower-cost mapping solutions and greatly enhances the achievable backup sharing, boosting the overall network's admissibility.

Journal ArticleDOI
TL;DR: In this paper, the role of social capital dimensions in resource sharing within R&D cooperation projects funded by the 7th Framework Programme (FP7) was examined. Data were collected in a survey of 553 FP7 project participants and analyzed using two different social network analysis (SNA) methodologies: the logistic regression quadratic assignment procedure and exponential random graph models.

Posted ContentDOI
TL;DR: The paper discusses the possibility of transforming the library into a 'Universal Resource Centre' and the consequences of such a transformation for information sharing throughout the world, along with further changes in the model of costless higher education and extended opportunities for new knowledge creation.
Abstract: The concept of the library in educational institutions is changing as its major constituents, physical books and hard copies of journals and newspapers, give way to a new e-format emerging through advances in computer science, information science, and e-storage technology. Physical copies of books, journals, and newspapers are dwindling, while their electronic formats need no storage space and a single copy of such a resource can be shared by any number of users, so the name 'library' is no longer valid. Hence libraries are now renamed Resource Centres, with online facilities to provide resource sharing services to their registered users. Future libraries, so-called 'Resource Centres', do not need large reading rooms, large storage areas for old volumes of books and journals, or even an independent library building. Individual institutions also do not need independent libraries. There could be one Resource Centre for a country, or even only one for the entire world, through which everybody can connect via ICT to upload and download audio, text, and video files, so that equality of access to these resources can be maintained irrespective of gender, region, religion, economic background, and the country of origin of the users. The paper discusses the possibility of such a transformation of the library into a 'Universal Resource Centre' and its consequences for information sharing throughout the world, further changes in the model of costless higher education, and extended opportunity for new knowledge creation. We also discuss how such transformed libraries, as Universal Resource Centres, may provide automated customized service for individuals ubiquitously by incorporating a smart library model.

Patent
26 Jul 2016
TL;DR: In this paper, a method for controlling access to a shared resource for a plurality of collaborative users includes securely providing, on a storage and device entity, the shared resource, which is created by a resource owner entity.
Abstract: A method for controlling access to a shared resource for a plurality of collaborative users includes securely providing, on a storage and device entity, the shared resource. The shared resource is created by a resource owner entity. The method further includes specifying access control rules for the shared resource, translating the access control rules into a smart contract, including the smart contract into a blockchain, and if a second user requests access to the shared resource, performing access decisions for the shared resource by evaluating the smart contract with regard to the access control rules.
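The translation of access-control rules into a smart contract can be sketched as compiling the rules into a deterministic predicate that any blockchain node could evaluate against an access request (a minimal illustration; the rule format is an assumption, and real smart contracts would run on-chain rather than as a Python closure):

```python
def make_contract(rules):
    """Translate owner-specified access-control rules into a 'smart
    contract', modeled here as a deterministic pure function that every
    node can evaluate identically against an access request."""
    def contract(request):
        # grant access iff some rule matches the requesting user
        # and permits the requested action
        return any(rule["user"] == request["user"]
                   and request["action"] in rule["actions"]
                   for rule in rules)
    return contract
```

Determinism is the key property: because every node evaluating the contract on the same request reaches the same decision, the blockchain can record access decisions without trusting any single evaluator.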

Journal ArticleDOI
TL;DR: Two new local schedulability tests are presented to verify the schedulability of real-time applications running on reservation servers under fixed-priority and EDF local schedulers, together with an extension that reduces the blocking time of the server when accessing global resources shared among components.
Abstract: Sharing resources in hierarchical real-time systems implemented with reservation servers requires the adoption of special budget management protocols that preserve the bandwidth allocated to a specific component. In addition, blocking times must be accurately estimated to guarantee both the global feasibility of all the servers and the local schedulability of applications running on each component. This paper presents two new local schedulability tests to verify the schedulability of real-time applications running on reservation servers under fixed-priority and EDF local schedulers. Reservation servers are implemented with the BROE algorithm. A simple extension to the SRP protocol is also proposed to reduce the blocking time of the server when accessing global resources shared among components. The performance of the new schedulability tests is compared with other solutions proposed in the literature, showing the effectiveness of the proposed improvements. Finally, an implementation of the main protocols on a lightweight RTOS is described, highlighting the main practical issues that have been encountered.

Patent
11 May 2016
TL;DR: In this article, a micro service architecture is used for the college teaching cloud platform, which decouples the teaching cloud into micro services of different functions, enabling each link of teaching to be called in micro service mode, the response is quick, and the efficiency is high.
Abstract: The invention discloses a college teaching cloud platform based on micro services. A campus network and a private cloud are combined through a virtual private network to build the cloud platform, and a micro service architecture is used to build its services. The private cloud is set up using Eucalyptus, an open-source project developed by the Department of Computer Science at the University of California. All client requests first pass through an API gateway, which routes them to the appropriate micro services; generally, the API gateway calls several micro services and merges the results to process a request. According to the invention, the micro service architecture decouples the teaching cloud platform into micro services with different functions, enabling each teaching activity to be invoked as a micro service with quick response and high efficiency. The private cloud and the campus network are connected through the virtual private network to form the college teaching cloud platform, providing better resource sharing for teachers and students with good elasticity and expansibility.

Journal ArticleDOI
TL;DR: Presto is proposed, an efficient online heuristic VN embedding algorithm based on an artificial-intelligence resource abstraction model named Blocking Island; it operates with quite low computational complexity and greatly reduces the search space, far outperforming other candidates.

Journal ArticleDOI
TL;DR: Parking spaces are resources that can be pooled together and shared, particularly when there exist complementary daytime and nighttime users, as discussed by the authors; given a quality-of-service requirement, how many spaces should be set aside as contingency during the day for nighttime users?
Abstract: Parking spaces are resources that can be pooled together and shared, particularly when there exist complementary daytime and nighttime users. We provide solutions to two design questions. First, given a quality of service requirement, how many spaces should be set aside as contingency during the day for nighttime users? Next, how can we replace the first-come-first-served access method by one that aims for optimal efficiency while keeping user preferences private?
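The first design question can be made concrete under a simple demand model (an assumption for illustration, not the paper's model): if nighttime demand is Binomial(n, p) and the QoS requirement caps the probability that demand exceeds the reserved spaces, the answer is a small search over the binomial tail:

```python
# Sketch of the contingency-sizing question under an assumed model:
# nighttime demand ~ Binomial(n, p), and the QoS requirement bounds the
# probability that demand exceeds the reserved contingency spaces.

from math import comb

def min_contingency(n, p, eps):
    """Smallest k such that P(Binomial(n, p) > k) <= eps."""
    def tail(k):  # P(demand > k)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1, n + 1))
    for k in range(n + 1):
        if tail(k) <= eps:
            return k
    return n

# 20 potential nighttime users, each arriving with probability 0.5,
# overflow probability capped at 1%: 15 spaces suffice.
spaces = min_contingency(20, 0.5, 0.01)
```

The second question (privacy-preserving allocation) needs mechanism design rather than a tail bound, so it is not captured by this sketch.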

Proceedings ArticleDOI
05 Jun 2016
TL;DR: This paper provides a pseudo-polynomial-time schedulability test and response-time analysis for constrained-deadline sporadic task systems, and proposes a task partitioning algorithm that achieves a speedup factor of 7, compared to the optimal schedule.
Abstract: The emergence of multicore and manycore platforms poses a big challenge for the design of real-time embedded systems, especially for timing analysis. We observe in this paper that response-time analysis for multicore platforms with shared resources can be symmetrically approached from two perspectives: a core-centric and a shared-resource-centric perspective. The common "core-centric" perspective is that a task executes on a core until it suspends the execution due to shared resource accesses. The potentially less intuitive "shared-resource-centric" perspective is that a task performs requests on shared resources until it suspends itself to resume computation on its respective core. Based on the above observation, we provide a pseudo-polynomial-time schedulability test and response-time analysis for constrained-deadline sporadic task systems. In addition, we propose a task partitioning algorithm that achieves a speedup factor of 7, compared to the optimal schedule. This constitutes the first result in this research line with a speedup factor guarantee. The experimental evaluation demonstrates that our approach can yield high acceptance ratios if the tasks have only a few resource access segments.
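The paper's factor-7 partitioning algorithm is considerably more involved; as a baseline illustration of what "task partitioning" means here, a plain first-fit-decreasing partitioner by utilization (an assumption, not the paper's algorithm) looks like this:

```python
# Baseline task partitioning: first-fit decreasing by utilization.
# Each core accepts tasks until its total utilization would exceed
# `capacity`; a new core is opened only when no existing core fits.

def first_fit_decreasing(utilizations, capacity=1.0):
    cores = []        # current utilization load on each core
    assignment = {}   # task index -> core index
    for task, u in sorted(enumerate(utilizations), key=lambda x: -x[1]):
        for core, load in enumerate(cores):
            if load + u <= capacity + 1e-9:
                cores[core] += u
                assignment[task] = core
                break
        else:                       # no existing core fits: open a new one
            cores.append(u)
            assignment[task] = len(cores) - 1
    return cores, assignment

loads, where = first_fit_decreasing([0.3, 0.6, 0.5, 0.2, 0.4])
```

Five tasks with total utilization 2.0 pack onto two cores here; the paper's analysis additionally charges each task for its shared-resource access segments, which plain utilization packing ignores.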

Proceedings ArticleDOI
30 Nov 2016
TL;DR: CoFence is proposed — a DDoS defense mechanism which facilitates a collaboration framework among NFV-based peer domain networks, allowing domain networks to help each other handle large volumes of DDoS attacks through resource sharing.
Abstract: With the exponential growth of Internet use, the impact of cyber attacks is growing rapidly. Distributed Denial of Service (DDoS) attacks are the most common and most damaging type of cyber attack, and among them the SYN flood attack is the most common. Existing DDoS defense strategies are encountering obstacles due to their high cost and low flexibility. The emergence of Network Function Virtualization (NFV) technology introduces new opportunities for low-cost and flexible DDoS defense solutions. In this work, we propose CoFence - a DDoS defense mechanism which facilitates a collaboration framework among NFV-based peer domain networks. CoFence allows domain networks to help each other handle large volumes of DDoS attacks through resource sharing. Specifically, we focus on the resource allocation problem in the collaboration framework: through CoFence, a domain network decides the amount of resources to share with each peer based on a reciprocal-based utility function. Our simulation results demonstrate that the designed resource allocation system is effective, incentive compatible, fair, and reciprocal.
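One simple way to realize the reciprocity idea in the abstract (the proportional rule and all numbers below are illustrative assumptions, not CoFence's actual utility function) is to split spare defense capacity among requesting peers in proportion to the help each peer provided in the past, capped by each peer's request:

```python
# Reciprocity-weighted capacity sharing (illustrative sketch): spare
# capacity is divided among requesting peers in proportion to past help
# received from them, never exceeding what each peer asked for; capacity
# freed by capped peers is redistributed among the rest.

def reciprocal_allocation(capacity, requests, past_help):
    """requests / past_help: dicts peer -> amount. Returns peer -> share."""
    alloc = {p: 0.0 for p in requests}
    active = set(requests)
    while active and capacity > 1e-9:
        total_w = sum(past_help[p] for p in active)
        if total_w == 0:
            break
        leftover = 0.0
        for p in list(active):
            share = capacity * past_help[p] / total_w
            give = min(share, requests[p] - alloc[p])  # respect the cap
            alloc[p] += give
            leftover += share - give
            if requests[p] - alloc[p] < 1e-9:
                active.discard(p)   # request fully satisfied
        capacity = leftover         # redistribute what capped peers left
    return alloc

shares = reciprocal_allocation(10.0, {"A": 8.0, "B": 4.0},
                               {"A": 3.0, "B": 1.0})
```

Peer A, which helped three times as much in the past, receives three quarters of the capacity (7.5 vs. 2.5 units), which is the incentive-compatibility intuition behind reciprocal sharing.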

Journal ArticleDOI
TL;DR: Through rigorous analysis, it is shown that each individual cloud can achieve a time-averaged profit arbitrarily close to the offline optimum, and asymptotic optimality in social welfare is also achieved under homogeneous cloud settings.
Abstract: By sharing resources among different cloud providers, the paradigm of federated clouds exploits temporal availability of resources and geographical diversity of operational costs for efficient job service. While interoperability issues across different cloud platforms in a cloud federation have been extensively studied, fundamental questions on cloud economics remain: When and how should a cloud trade resources (e.g., virtual machines) with others, such that its net profit is maximized over the long run, while a close-to-optimal social welfare in the entire federation can also be guaranteed? To answer this question, a number of important, interrelated decisions, including job scheduling, server provisioning, and resource pricing, should be dynamically and jointly made, while the long-term profit optimality is pursued. In this work, we design efficient algorithms for intercloud virtual machine (VM) trading and scheduling in a cloud federation. For VM transactions among clouds, we design a double-auction-based mechanism that is strategy-proof, individual-rational, ex-post budget-balanced, and efficient to execute over time. Closely combined with the auction mechanism is a dynamic VM trading and scheduling algorithm, which carefully decides the true valuations of VMs in the auction, optimally schedules stochastic job arrivals with different service level agreements (SLAs) onto the VMs, and judiciously turns on and off servers based on the current electricity prices. Through rigorous analysis, we show that each individual cloud, by carrying out the dynamic algorithm in the online double auction, can achieve a time-averaged profit arbitrarily close to the offline optimum. Asymptotic optimality in social welfare is also achieved under homogeneous cloud settings. We carry out simulations to verify the effectiveness of our algorithms, and examine the achievable social welfare under heterogeneous cloud settings, as driven by the real-world Google cluster usage traces.
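The combination of strategy-proofness and ex-post budget balance the abstract claims is classically obtained by trade reduction (the McAfee-style textbook mechanism sketched below, not the paper's own auction): find the breakeven index k, sacrifice the k-th efficient trade, and quote both sides the excluded pair's bid and ask:

```python
# Classic trade-reduction double auction (textbook sketch): trading one
# pair fewer than is efficient makes truthful bidding optimal and leaves
# a non-negative budget surplus for the auctioneer.

def trade_reduction(bids, asks):
    """bids: buyer valuations, asks: seller valuations (one unit each).
    Returns (number of trades, buyer price, seller price)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                        # k = number of efficient trades
    if k <= 1:
        return 0, None, None          # no excluded pair to quote prices from
    # trade k-1 pairs; the excluded k-th pair sets both prices
    return k - 1, bids[k - 1], asks[k - 1]

trades, pay, receive = trade_reduction([10, 8, 6, 4], [3, 5, 7, 9])
```

With these valuations one pair trades: the winning buyer pays 8, the winning seller receives 5, and the surplus of 3 guarantees budget balance; the paper layers dynamic VM valuation and scheduling on top of such an auction.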

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper proposes a framework for distributed data fusion that specifies the communication architecture and data transformation functions, and specifies an approach for lateral movement detection that uses host-level process communication graphs to infer network connection causations.
Abstract: Attackers often attempt to move laterally from host to host, infecting them until an overall goal is achieved. One possible defense against this strategy is to detect such coordinated and sequential actions by fusing data from multiple sources. In this paper, we propose a framework for distributed data fusion that specifies the communication architecture and data transformation functions. Then, we use this framework to specify an approach for lateral movement detection that uses host-level process communication graphs to infer network connection causations. The connection causations are then aggregated into system-wide host-communication graphs that expose possible lateral movement in the system. In order to provide a balance between the resource usage and the robustness of the fusion architecture, we propose a multilevel fusion hierarchy that uses different clustering techniques. We evaluate the scalability of the hierarchical fusion scheme in terms of storage overhead, number of message updates sent, fairness of resource sharing among clusters, and quality of local graphs. Finally, we implement a host-level monitor prototype to collect connection causations, and evaluate its overhead. The results show that our approach provides an effective method to detect lateral movement between hosts, and can be implemented with acceptable overhead.
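The aggregation step described above can be sketched as follows (host names, timestamps, and the strictly-increasing-time heuristic are illustrative assumptions, not the paper's exact fusion functions): connection causations are merged into a host-communication graph and time-ordered multi-hop chains, the candidate lateral movements, are enumerated by depth-first search:

```python
# Build a host-communication graph from connection causations
# (src, dst, timestamp) and enumerate time-increasing multi-hop chains,
# which are candidate lateral-movement paths.

def movement_chains(causations, min_hops=2):
    """causations: list of (src, dst, t). Returns time-increasing
    simple paths with at least `min_hops` hops."""
    graph = {}
    for src, dst, t in causations:
        graph.setdefault(src, []).append((dst, t))

    found = []
    def dfs(path, last_t):
        if len(path) - 1 >= min_hops:
            found.append(list(path))
        for dst, t in graph.get(path[-1], []):
            if t > last_t and dst not in path:  # strictly later, no cycles
                path.append(dst)
                dfs(path, t)
                path.pop()

    for src, dst, t in causations:
        dfs([src, dst], t)
    return found

chains = movement_chains([("A", "B", 1), ("B", "C", 2), ("C", "A", 0)])
```

The A -> B -> C chain is flagged because its hops occur in time order, while the late C -> A edge cannot extend it; the paper's hierarchy distributes exactly this kind of graph construction across cluster-level fusion nodes.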

Journal ArticleDOI
TL;DR: Simulation results show that high and stable social welfare can be maintained by the AoC with the proposed zero-determinant algorithm.
Abstract: Cooperation in resource sharing among wireless users and network operators has been widely studied in wireless communication. However, because of limited coordination capability or cheating strategies, each participant of the cooperation may cease its cooperative behavior or duties unilaterally during the resource sharing, resulting in unsatisfying quality of service (QoS) for all other participants. In this paper, we model the resource sharing among participants as an iterated game. Specifically, we first define the participant who is responsible for maintaining the social welfare as an administrator of cooperation (AoC), and the other selfish participants as regular participants of cooperation (PoCs). Then we consider three scenarios: two players applying discrete strategies, two players applying continuous strategies, and multiple players applying continuous strategies. Finally, we investigate the power control problem in each of these scenarios and apply zero-determinant strategies for the AoC to find the maximum social welfare that the AoC can achieve in the presence of PoCs. Simulation results show that high and stable social welfare can be maintained by the AoC with the proposed zero-determinant algorithm.
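The zero-determinant machinery can be made concrete in its original setting, the iterated prisoner's dilemma (this sketch, with payoffs T,R,P,S = 5,3,1,0 and the specific probabilities below, is a textbook Press-Dyson "equalizer" example, not the paper's power-control formulation): the AoC-like player unilaterally pins the co-player's long-run payoff at 2, whatever the co-player does:

```python
# Press-Dyson "equalizer" zero-determinant strategy in the iterated
# prisoner's dilemma (payoffs T,R,P,S = 5,3,1,0, an assumption for this
# sketch). The strategy below fixes the co-player's long-run payoff at 2
# regardless of the co-player's own strategy.

def stationary_payoff(p, q, payoffs, iters=5000):
    """p: our cooperation probs after outcomes (CC, CD, DC, DD); q: the
    co-player's, indexed the same way from *their* perspective.
    Returns the co-player's long-run expected payoff."""
    swap = [0, 2, 1, 3]           # map our state index to theirs
    v = [0.25] * 4                # start from the uniform distribution
    for _ in range(iters):        # power iteration on the 4-state chain
        nxt = [0.0] * 4
        for s in range(4):
            px, py = p[s], q[swap[s]]
            nxt[0] += v[s] * px * py
            nxt[1] += v[s] * px * (1 - py)
            nxt[2] += v[s] * (1 - px) * py
            nxt[3] += v[s] * (1 - px) * (1 - py)
        v = nxt
    return sum(vi * ui for vi, ui in zip(v, payoffs))

equalizer = [0.8, 0.4, 0.4, 0.2]  # ZD choice pinning the co-player at 2
opponent_payoffs = [3, 5, 0, 1]   # their payoff after (CC, CD, DC, DD)
s1 = stationary_payoff(equalizer, [0.9, 0.7, 0.2, 0.1], opponent_payoffs)
s2 = stationary_payoff(equalizer, [0.5, 0.5, 0.5, 0.5], opponent_payoffs)
```

Both co-player strategies end up with exactly payoff 2, which is the unilateral-control property the paper exploits to let the AoC steer the social welfare despite selfish PoCs.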

Journal ArticleDOI
TL;DR: In this article, the authors study four scheduling-policy combinations (SPC) derived from the two schedulers and then evaluate the four SPCs in extensive scenarios, which consider not only four application types, but also three different queue structures for organizing applications.
Abstract: To solve the limitations of Hadoop in scalability, resource sharing, and application support, the open-source community proposed the next generation of Hadoop's compute platform, called Yet Another Resource Negotiator (YARN), by separating resource management functions from the programming model. This separation enables various application types to run on YARN in parallel. To achieve fair resource sharing and high resource utilization, YARN provides the capacity scheduler and the fair scheduler. However, the performance impacts of the two schedulers are not clear when mixed applications run on a YARN cluster. Therefore, in this paper, we study four scheduling-policy combinations (SPCs for short) derived from the two schedulers and then evaluate the four SPCs in extensive scenarios, which consider not only four application types, but also three different queue structures for organizing applications. The experimental results enable YARN managers to comprehend the influences of different SPCs and different queue structures on mixed applications. The results also help them to select a proper SPC and an appropriate queue structure to achieve better application execution performance. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper presents and evaluates an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments, and examines the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium.
Abstract: Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.
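A toy instance of the Adaptive Rate idea (the concrete power and delay functions below are assumptions, not the paper's model): if a node's power grows as mu^2 and its delay behaves like an M/M/1 queue, 1/(mu - lambda), the energy-delay product mu^2/(mu - lambda) is minimized at exactly mu = 2*lambda, which a ternary search over the unimodal cost recovers:

```python
# Adaptive Rate toy model: minimize energy x delay = mu**2 / (mu - lam)
# over the processing rate mu > lam. The analytic optimum is mu = 2*lam;
# the ternary search below finds it numerically.

def optimal_rate(lam, hi=1e3, iters=200):
    cost = lambda mu: mu * mu / (mu - lam)   # energy-delay product
    lo = lam + 1e-9                          # rate must exceed the workload
    for _ in range(iters):                   # ternary search (cost is unimodal)
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

mu_star = optimal_rate(3.0)   # analytic optimum: 2 * 3.0 = 6.0
```

In the paper, the polynomial resource-sharing costs that yield the unique Nash Equilibrium emerge from each node running this kind of rate adaptation against its incoming workload.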

Journal ArticleDOI
TL;DR: Reciprocal Resource Fairness (RRF) is proposed, a novel resource allocation mechanism to enable fair sharing on multiple resource types within a tenant coalition in Infrastructure-as-a-Service clouds.
Abstract: This paper presents F2C , a cooperative resource management system for Infrastructure-as-a-Service (IaaS) clouds. Inspired by group-buying mechanisms in real product and service markets, F2C advocates a group of cloud tenants (called a tenant coalition ) to buy resource capacity in bulk and share the resource pool in the form of virtual machines (VMs). Tenant coalitions lead to vast opportunities for fine-grained resource sharing among multiple tenants. However, resource sharing, especially for multiple resource types, poses several challenging problems in pay-as-you-use cloud environments, such as sharing incentive, free-riding, lying and economic fairness. To address those problems, we propose Reciprocal Resource Fairness (RRF) , a novel resource allocation mechanism to enable fair sharing on multiple resource types within a tenant coalition. RRF is implemented in two complementary and hierarchical mechanisms: inter-tenant resource trading and intra-tenant weight adjustment. RRF satisfies several highly desirable properties to ensure fairness. We implement F2C on the Xen platform. The experimental results show F2C is promising for both cloud providers and tenants. For cloud providers, F2C improves VM density and cloud providers’ revenue by 2.2X compared to current IaaS cloud models. For tenants, F2C improves application performance by 45 percent and guarantees 95 percent economic fairness among multiple tenants.
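RRF is the paper's own mechanism; the standard reference point for multi-resource fair sharing of this kind is Dominant Resource Fairness (DRF). A progressive-filling sketch of DRF (not RRF) reproduces the classic example of 9 CPUs and 18 GB shared by a (1 CPU, 4 GB) user and a (3 CPU, 1 GB) user:

```python
# Dominant Resource Fairness by progressive filling: repeatedly grant one
# task to the user with the smallest dominant share (their largest
# per-resource share) until no user's next task fits.

def drf(capacities, demands):
    """capacities: per-resource totals; demands: user -> per-task demand
    vector. Returns user -> number of tasks granted."""
    used = [0.0] * len(capacities)
    tasks = {u: 0 for u in demands}
    while True:
        def dom_share(u):   # user's dominant share so far
            return max(tasks[u] * d / c
                       for d, c in zip(demands[u], capacities))
        for u in sorted(demands, key=dom_share):
            need = demands[u]
            if all(used[r] + need[r] <= capacities[r]
                   for r in range(len(capacities))):
                for r in range(len(capacities)):
                    used[r] += need[r]
                tasks[u] += 1
                break
        else:
            return tasks    # no user's next task fits
    
granted = drf([9, 18], {"A": [1, 4], "B": [3, 1]})
```

User A ends with 3 tasks and user B with 2, equalizing their dominant shares at 2/3 each; RRF's inter-tenant trading and intra-tenant weight adjustment extend this style of fairness with reciprocity across a coalition.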