
Showing papers on "Shared resource" published in 2014


Journal ArticleDOI
TL;DR: This paper constructs a novel analytical model of energy efficiency for different sharing modes, which takes into account quality-of-service (QoS) requirements and the spectrum utilization of each user, and develops a distributed coalition formation algorithm based on the merge-and-split rule and the Pareto order.
Abstract: Device-to-device (D2D) communications bring significant benefits to mobile multimedia services in local areas. However, these potential advantages hinge on intelligent resource sharing between potential D2D pairs and cellular users. In this paper, we study the problem of energy-efficient uplink resource sharing over mobile D2D multimedia communications underlaying cellular networks with multiple potential D2D pairs and cellular users. We first construct a novel analytical model of energy efficiency for different sharing modes, which takes into account quality-of-service (QoS) requirements and the spectrum utilization of each user. Then, we formulate the energy-efficient resource sharing problem as a nontransferable coalition formation game, with a characteristic function that accounts for the gains in terms of energy efficiency and the costs in terms of mutual interference. Moreover, we develop a distributed coalition formation algorithm based on the merge-and-split rule and the Pareto order. The distributed solution is characterized through novel stability notions and can be adapted to user mobility. From it, we obtain an energy-efficient sharing strategy for joint mode selection, uplink reuse allocation, and power management. Extensive simulation results are provided to demonstrate the effectiveness of our proposed game model and algorithm.

237 citations
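The merge-and-split dynamics described in the abstract can be sketched in a few lines. The payoff function below (each member keeps its own rate, gains a sharing bonus per partner, and pays a quadratic interference cost) is an illustrative stand-in, not the paper's energy-efficiency model:

```python
import itertools

def member_payoffs(coalition):
    # Illustrative per-member payoff: own rate + sharing bonus per partner
    # minus a quadratic mutual-interference cost (made-up coefficients).
    n = len(coalition)
    return {u["id"]: u["rate"] + 0.3 * (n - 1) - 0.1 * (n - 1) ** 2
            for u in coalition}

def pareto_merge(partition):
    """One pass of the merge rule: merge two coalitions only if no member
    is worse off and at least one is strictly better off (Pareto order)."""
    for a, b in itertools.combinations(partition, 2):
        old = {**member_payoffs(a), **member_payoffs(b)}
        new = member_payoffs(a + b)
        if all(new[i] >= old[i] for i in new) and any(new[i] > old[i] for i in new):
            partition.remove(a)
            partition.remove(b)
            partition.append(a + b)
            return True
    return False

users = [{"id": i, "rate": r} for i, r in enumerate([1.0, 1.2, 0.8, 1.5])]
partition = [[u] for u in users]   # start from singleton coalitions
while pareto_merge(partition):     # iterate merges until stable
    pass
print([sorted(u["id"] for u in c) for c in partition])
```

Starting from singletons, coalitions grow only through Pareto-improving merges, which is what makes the final partition stable under the merge rule (the split rule, omitted for brevity, works analogously in reverse).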


Journal ArticleDOI
TL;DR: This paper addresses the uplink resource allocation problem for multiple D2D and cellular users from a game-theoretic point of view and proposes a coalition formation game based scheme, which closely approaches the optimal solution obtained by the centralized exhaustive algorithm and enhances the system sum rate by about 20%-65% without sacrificing resource-sharing fairness.
Abstract: With the emergence of demands for local area services, device-to-device (D2D) communication has been proposed as a vital technology component for the next-generation cellular communication system to improve spectral reuse and enhance system capacity. These benefits depend on efficient interference management and resource allocation. Existing works usually consider these problems under a restricted cellular system consisting of a pair of D2D users and a cellular user. In this paper, we address the uplink resource allocation problem for multiple D2D and cellular users from a game-theoretic point of view. Combining different transmission modes, mutual interference, and the resource sharing policy in a single utility function, we propose a coalition formation game based scheme. By theoretical analysis, we prove that it converges to a Nash-stable equilibrium and further approaches the system-optimal solution at a geometric rate. Through extensive simulations, we demonstrate the effectiveness of our proposed scheme, which closely approaches the optimal solution obtained by the centralized exhaustive algorithm and enhances the system sum rate by about 20%-65%, without sacrificing resource-sharing fairness, compared with several other practical strategies.

185 citations


Proceedings ArticleDOI
13 Dec 2014
TL;DR: This paper demonstrates through a real-system investigation that the fundamental difference between resource sharing behaviors on CMP and SMT architectures calls for a redesign of the way the authors model interference, and proposes SMiTe, a methodology that enables precise performance prediction for SMT co-location on real-system commodity processors.
Abstract: One of the key challenges for improving efficiency in warehouse-scale computers (WSCs) is to improve server utilization while guaranteeing the quality of service (QoS) of latency-sensitive applications. To this end, prior work has proposed techniques to precisely predict performance and QoS interference to identify 'safe' application co-locations. However, such techniques are only applicable to resources shared across cores. Achieving such precise interference prediction on real-system simultaneous multithreading (SMT) architectures has been a significantly challenging open problem due to the complexity introduced by sharing resources within a core. In this paper, we demonstrate through a real-system investigation that the fundamental difference between resource sharing behaviors on CMP and SMT architectures calls for a redesign of the way we model interference. For SMT servers, the interference effects on different shared resources, including private caches, memory ports, and integer and floating-point functional units, do not correlate with each other. This insight suggests the necessity of decoupling interference into multiple resource sharing dimensions. In this work, we propose SMiTe, a methodology that enables precise performance prediction for SMT co-location on real-system commodity processors. With a set of Rulers, which are carefully designed software stressors that apply pressure to a multidimensional space of shared resources, we quantify application sensitivity and contentiousness in a decoupled manner. We then establish a regression model to combine the sensitivity and contentiousness in different dimensions to predict performance interference. Using this methodology, we are able to precisely predict the performance interference in SMT co-location with an average error of 2.80% on SPEC CPU2006 and 1.79% on CloudSuite.
Our evaluation shows that SMiTe allows us to improve the utilization of WSCs by up to 42.57% while enforcing an application's QoS requirements.

141 citations
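The decoupled regression idea can be illustrated with a toy linear model: a per-dimension sensitivity (how much an app slows down under pressure on that resource) multiplied by the co-runner's contentiousness on the same dimension, summed with trained weights. The dimension names, profile numbers, and weights below are made-up placeholders, not SMiTe's trained values:

```python
# Hypothetical per-dimension profiles measured with "Ruler"-style stressors.
DIMS = ["l1_cache", "l2_cache", "mem_ports", "int_units", "fp_units"]

def predict_slowdown(sensitivity, contentiousness, weights, intercept=1.0):
    """Linear interference model: combine decoupled per-dimension
    sensitivity x contentiousness terms into one slowdown estimate."""
    return intercept + sum(
        weights[d] * sensitivity[d] * contentiousness[d] for d in DIMS
    )

# Illustrative (made-up) numbers, not measurements from the paper.
sens = {"l1_cache": 0.30, "l2_cache": 0.20, "mem_ports": 0.40,
        "int_units": 0.10, "fp_units": 0.05}
cont = {"l1_cache": 0.50, "l2_cache": 0.60, "mem_ports": 0.20,
        "int_units": 0.80, "fp_units": 0.10}
w = {d: 1.0 for d in DIMS}  # weights would come from regression training

print(f"predicted slowdown: {predict_slowdown(sens, cont, w):.3f}x")
```

The key point the paper makes is that a single scalar pressure score cannot capture SMT interference; keeping the dimensions separate until the final weighted sum is what the regression buys.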


Journal ArticleDOI
TL;DR: Although cloud computing is based on a 50-year-old business model, evidence indicates that cloud computing still needs to expand and overcome present limitations that prevent the full use of its potential.
Abstract: Cloud computing is an emerging technology that has introduced a new paradigm by rendering a rational computational model possible. It has changed the dynamics of IT consumption by means of a model that provides on-demand services over the Internet. Unlike traditional hosting services, cloud computing services are paid for per usage and may expand or shrink based on demand. Such services are, in general, fully managed by cloud providers and require from users nothing but a personal computer and Internet access. In recent years, this model has attracted the attention of researchers, investors and practitioners, many of whom have proposed a number of applications, structures and fundamentals of cloud computing, resulting in various definitions, requirements and models. Despite the interest and advances in the field, issues such as security and privacy, service level agreements, resource sharing, and billing have opened up new questions about the real gains of the model. Although cloud computing is based on a 50-year-old business model, evidence from this study indicates that cloud computing still needs to expand and overcome present limitations that prevent the full use of its potential. In this study, we critically review the state of the art in cloud computing with the aim of identifying advances, gaps and new challenges.

128 citations


Journal ArticleDOI
TL;DR: An opportunistic resource sharing-based mapping framework, ORS, where substrate resources are opportunistically shared among multiple virtual networks, and it is proved that ORS provides a more efficient utilization of substrate resources than two state-of-the-art fixed-resource embedding schemes.
Abstract: Network virtualization has emerged as a promising approach to overcome the ossification of the Internet. A major challenge in network virtualization is the so-called virtual network embedding problem, which deals with the efficient embedding of virtual networks with resource constraints into a shared substrate network. A number of heuristics have been proposed to cope with the NP-hardness of this problem; however, all of the existing proposals reserve fixed resources throughout the entire lifetime of a virtual network. In this paper, we re-examine this problem with the position that time-varying resource requirements of virtual networks should be taken into consideration, and we present an opportunistic resource sharing-based mapping framework, ORS, where substrate resources are opportunistically shared among multiple virtual networks. We formulate the time slot assignment as an optimization problem; then, we prove the decision version of the problem to be NP-hard in the strong sense. Observing the resemblance between our problem and the bin packing problem, we adopt the core idea of first-fit and propose two practical solutions: first-fit by collision probability (CFF) and first-fit by expectation of indicators' sum (EFF). Simulation results show that ORS provides a more efficient utilization of substrate resources than two state-of-the-art fixed-resource embedding schemes.

108 citations
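The first-fit-by-collision-probability (CFF) idea translates into a short bin-packing sketch: place each request in the first time slot whose probability of overflowing the substrate capacity stays below a threshold. The exact overflow computation below assumes independent on/off demands and is only practical for small bins; the numbers are illustrative:

```python
from itertools import product

def overflow_probability(items, capacity):
    """Exact P(total active demand > capacity) for independent items,
    each item = (demand, activity_probability). Enumerates all on/off
    states, so it is only suitable for small bins."""
    prob = 0.0
    for states in product([0, 1], repeat=len(items)):
        p, load = 1.0, 0
        for on, (d, q) in zip(states, items):
            p *= q if on else (1 - q)
            load += d if on else 0
        if load > capacity:
            prob += p
    return prob

def first_fit_by_collision(requests, capacity, threshold):
    """CFF-style sketch: put each request in the first bin whose overflow
    probability stays below the threshold; open a new bin otherwise."""
    bins = []
    for req in requests:
        for b in bins:
            if overflow_probability(b + [req], capacity) <= threshold:
                b.append(req)
                break
        else:
            bins.append([req])
    return bins

# (demand, probability the virtual network is active in this slot)
reqs = [(3, 0.5), (2, 0.6), (2, 0.4), (1, 0.9)]
bins = first_fit_by_collision(reqs, capacity=4, threshold=0.1)
print(len(bins), bins)
```

Compared with fixed reservation, overlapping requests in one bin is allowed as long as the chance of simultaneous demand exceeding capacity is acceptably small, which is exactly the opportunistic-sharing gain the paper targets.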


Journal ArticleDOI
TL;DR: This paper investigates data center networks and provides a general overview and analysis of the literature covering various research areas, including data center network interconnection architectures, network protocols for data center networks, and network resource sharing in multitenant cloud data centers.
Abstract: Large-scale data centers enable the new era of cloud computing and provide the core infrastructure to meet the computing and storage requirements for both enterprise information technology needs and cloud-based services. To support the ever-growing cloud computing needs, the number of servers in today's data centers is increasing exponentially, which in turn leads to enormous challenges in designing an efficient and cost-effective data center network. With data availability and security at stake, the issues with data center networks are more critical than ever. Motivated by these challenges and critical issues, many novel and creative research works have been proposed in recent years. In this paper, we investigate data center networks and provide a general overview and analysis of the literature, covering various research areas, including data center network interconnection architectures, network protocols for data center networks, and network resource sharing in multitenant cloud data centers. We start with an overview of data center networks and, guided by their requirements, navigate the data center network designs. We then present the research literature related to the aforementioned research topics in the subsequent sections. Finally, we draw conclusions.

89 citations


Journal ArticleDOI
04 Aug 2014
TL;DR: This paper provides definitions for Cloud, Jungle and Fog computing, determines their key characteristics, illustrates their architectures and introduces several main use cases.
Abstract: Distributed computing attempts to improve performance on large-scale computing problems through resource sharing. Moreover, rising low-cost computing power, coupled with advances in communications/networking and the advent of big data, now enables new distributed computing paradigms such as Cloud, Jungle and Fog computing. Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity. It is based on the centralization of resources that possess huge processing power and storage capacities. Fog computing, in contrast, pushes the frontier of computing away from centralized nodes to the edge of a network, to enable computing at the source of the data. On the other hand, Jungle computing includes a simultaneous combination of clusters, grids, clouds, and so on, in order to gain maximum potential computing power. To understand these new buzzwords, reviewing these paradigms together can be useful. Therefore, this paper describes the advent of new forms of distributed computing. It provides definitions for Cloud, Jungle and Fog computing, determines their key characteristics, illustrates their architectures and, finally, introduces several main use cases.

84 citations


Journal ArticleDOI
01 Oct 2014
TL;DR: This paper investigates the usage of CPU-cache based side-channels in the cloud and how they compare to traditional side-channel attacks, and designs and implements two new cache-based side- channel mitigation techniques.
Abstract: Cloud computing is a unique technique for outsourcing and aggregating computational hardware needs. By abstracting the underlying machines, cloud computing is able to share resources among multiple mutually distrusting clients. While there are numerous practical benefits to this system, this kind of resource sharing enables new forms of information leakage, such as hardware side-channels. In this paper, we investigate the usage of CPU-cache-based side-channels in the cloud and how they compare to traditional side-channel attacks. We go on to demonstrate that new techniques are necessary to mitigate these sorts of attacks in a cloud environment, and specify the requirements for such solutions. Finally, we design and implement two new cache-based side-channel mitigation techniques, implementing them in a state-of-the-art cloud system, and testing them against traditional cloud technology.

80 citations


Journal ArticleDOI
TL;DR: Simulations and trace-driven experiments on the real-world PlanetLab testbed show that Harmony outperforms existing resource management and reputation management systems in terms of QoS, efficiency and effectiveness.
Abstract: Advancements in cloud computing are leading to a promising future for collaborative cloud computing (CCC), where globally-scattered distributed cloud resources belonging to different organizations or individuals (i.e., entities) are collectively used in a cooperative manner to provide services. Due to the autonomous features of entities in CCC, the issues of resource management and reputation management must be jointly addressed in order to ensure the successful deployment of CCC. However, these two issues have typically been addressed separately in previous research efforts, and simply combining the two systems generates double overhead. Also, previous resource and reputation management methods are not sufficiently efficient or effective. By providing a single reputation value for each node, the methods cannot reflect the reputation of a node in providing individual types of resources. By always selecting the highest-reputed nodes, the methods fail to exploit node reputation in resource selection to fully and fairly utilize resources in the system and to meet users' diverse QoS demands. We propose a CCC platform, called Harmony, which integrates resource management and reputation management in a harmonious manner. Harmony incorporates three key innovations: integrated multi-faceted resource/reputation management, multi-QoS-oriented resource selection, and price-assisted resource/reputation control. The trace data we collected from an online trading platform implies the importance of multi-faceted reputation and the drawbacks of highest-reputed node selection. Simulations and trace-driven experiments on the real-world PlanetLab testbed show that Harmony outperforms existing resource management and reputation management systems in terms of QoS, efficiency and effectiveness.

74 citations


Journal ArticleDOI
TL;DR: The results show that the adopted resource discovery approach can answer multi-attribute and range queries very quickly and can detect logical problems such as soundness, completeness, and consistency.
Abstract: Grid computing is the federation of resources from multiple locations to facilitate resource sharing and problem solving over the Internet. The challenge of finding services or resources in Grid environments has recently been the subject of many papers and research efforts. These works evaluate their approaches only by simulation and experiments. Therefore, it is possible that some parts of the problem's state space are not analyzed and checked well. To overcome this defect, model checking, an automatic technique for the verification of systems, is a suitable solution. In this paper, an adapted type of resource discovery approach addressing multi-attribute and range queries is presented. Unlike other papers in this scope, this paper decouples the resource discovery behavior model into data gathering, discovery, and control behaviors. It also facilitates the mapping process between the three behaviors by means of a formal verification approach based on Binary Decision Diagrams (BDDs). The formal approach extracts the expected properties of the resource discovery approach from the control behavior in the form of CTL and LTL temporal logic formulas and verifies the properties in the data gathering and discovery behaviors comprehensively. Moreover, analysis and evaluation of logical problems such as soundness, completeness, and consistency of the considered resource discovery approach are provided. To implement the behavior models of the resource discovery approach, the ArgoUML tool and the NuSMV model checker are employed. The results show that the adopted resource discovery approach can answer multi-attribute and range queries very quickly and detect logical problems such as soundness, completeness, and consistency.

73 citations


Patent
23 Apr 2014
TL;DR: In this article, the authors proposed a network-based group management and floor control mechanism in which a server may receive a request to occupy a shared IoT resource from a member device in an IoT device group and transmit a message granting the member IoT device permission to occupy the shared IoT resources based on one or more policies.
Abstract: In the network-based group management and floor control mechanism disclosed herein, a server may receive a request to occupy a shared IoT resource from a member device in an IoT device group and transmit a message granting the member IoT device permission to occupy the shared IoT resource based on one or more policies. For example, the granted permission may comprise a floor that blocks other IoT devices from accessing the shared IoT resource while the member IoT device holds the floor. Furthermore, the server may revoke the permission if the member IoT device fails to transmit a keep-alive message before a timeout period expires, a high-priority IoT device pre-empts the floor, and/or based on the policies. Alternatively, the server may make the shared IoT resource available if the member IoT device sends a message that voluntarily releases the floor.
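A minimal sketch of the floor-control logic described above (grant, keep-alive refresh, timeout revocation, priority preemption, and voluntary release) might look as follows; all class and method names, and the concrete timeout policy, are hypothetical illustrations, not taken from the patent:

```python
import time

class FloorController:
    """Server-side floor control for a shared IoT resource: one holder
    at a time, revoked on keep-alive timeout or higher-priority request."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.holder = None          # (device_id, priority) or None
        self.last_keepalive = 0.0

    def _expired(self, now):
        return self.holder is not None and now - self.last_keepalive > self.timeout

    def request(self, device_id, priority, now=None):
        """Grant the floor if it is free, the holder timed out, or the
        requester has strictly higher priority (preemption)."""
        now = time.monotonic() if now is None else now
        if self.holder is None or self._expired(now) or priority > self.holder[1]:
            self.holder = (device_id, priority)
            self.last_keepalive = now
            return True
        return False

    def keepalive(self, device_id, now=None):
        """Holder refreshes its grant before the timeout period expires."""
        now = time.monotonic() if now is None else now
        if self.holder and self.holder[0] == device_id:
            self.last_keepalive = now

    def release(self, device_id):
        """Holder voluntarily releases the floor."""
        if self.holder and self.holder[0] == device_id:
            self.holder = None
```

Passing `now` explicitly makes the timeout logic testable; a real server would use the wall clock and apply per-group policies when deciding grants.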

Journal ArticleDOI
TL;DR: AMCV proposes an ant colony optimization-based community communication strategy that dynamically bridges communities to support fast search for resources and achieves high scalability by making use of a designed community maintenance mechanism to uniformly distribute the maintenance cost of members and resources in the community, according to various member roles.
Abstract: Highly efficient distribution and management of media resources and fast content discovery are key determinants for mobile peer-to-peer video-on-demand solutions, especially in wireless mobile networks. Virtual communities making use of users' common characteristics, such as interest and interaction, to describe the boundary of sharing content and objects are a promising avenue for high-efficiency resource sharing. In this paper, we propose a novel ant-inspired mini-community-based video sharing solution for on-demand streaming services in wireless mobile networks (AMCV). AMCV relies on a newly designed two-layer architecture and on an algorithm inspired by the indirect communication between ants via pheromone trails, which enables them to discover and use shortest paths. The architecture is composed of a mini-community network layer and a community member layer. The ant-inspired algorithm enables finding the common interest of users in video content within large amounts of pseudo-disorderly interactive behavior data. AMCV proposes an ant colony optimization-based community communication strategy that dynamically bridges communities to support fast search for resources. AMCV achieves high scalability by making use of a designed community maintenance mechanism to uniformly distribute the maintenance cost of members and resources in the community, according to various member roles. Simulation-based testing shows how AMCV outperforms another state-of-the-art solution in terms of a wide set of performance metrics.

Journal ArticleDOI
TL;DR: An asymmetric partitioning-based bare-metal approach that achieves near-native performance while supporting a new out-of-operating system mechanism for value-added services and considerably reduces virtualization overhead is presented.
Abstract: Advancements in cloud computing enable the easy deployment of numerous services. However, the analysis of cloud service access platforms from a client perspective shows that maintaining and managing clients remain a challenge for end users. In this paper, we present the design, implementation, and evaluation of an asymmetric virtual machine monitor (AVMM), which is an asymmetric partitioning-based bare-metal approach that achieves near-native performance while supporting a new out-of-operating system mechanism for value-added services. To achieve these goals, AVMM divides underlying platforms into two asymmetric partitions: a user partition and a service partition. The user partition runs a commodity user OS, which is assigned to most of the underlying resources, maintaining end-user experience. The service partition runs a specialized OS, which consumes only the needed resources for its tasks and provides enhanced features to the user OS. AVMM considerably reduces virtualization overhead through two approaches: 1) Peripheral devices, such as graphics equipment, are assigned to be monopolized by a single user OS. 2) Efficient resource management mechanisms are leveraged to alleviate complicated resource sharing in existing virtualization technologies. We implement a prototype that supports Windows and Linux systems. Experimental results show that AVMM is a feasible and efficient approach to client virtualization.

Journal ArticleDOI
TL;DR: The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time.
Abstract: A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field-Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (COordinate Rotation DIgital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to inherent FPGA properties, like parallelism and reconfigurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems as well.

Journal ArticleDOI
TL;DR: A novel adaptive method of resource discovery is proposed from a different point of view to distinguish it from existing work; it automatically switches between centralized and flooding strategies to save energy according to different network environments.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper proposes a resource scheduling approach for the container virtualized cloud environments to reduce response time of customers' jobs and improve providers' resource utilization rate.
Abstract: Cloud computing is a new paradigm to deliver computing resources to customers in a pay-as-you-go model. In this paper, the recently emerged container-based virtualization technology is adopted for building the infrastructure of a cloud data center. Cloud providers are concerned with the resource usage in a multi-type resource sharing environment while cloud customers desire higher quality of services. According to this, we propose a resource scheduling approach for the container virtualized cloud environments to reduce response time of customers' jobs and improve providers' resource utilization rate. The stable matching theory is applied to generate an optimal mapping from containers to physical servers. Simulations are implemented to evaluate our resource scheduling approach. The results show that our approach achieves a better performance for customers and maximal profits for cloud providers by improving the resource utilization.
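The stable-matching step above can be sketched with the classic Gale-Shapley deferred-acceptance algorithm (containers proposing, one container per server for simplicity). The preference lists are illustrative; in practice they would be derived from response-time and utilization estimates:

```python
def stable_match(container_prefs, server_prefs):
    """Gale-Shapley deferred acceptance: containers propose to servers in
    preference order; a server keeps its best proposer seen so far."""
    rank = {s: {c: i for i, c in enumerate(prefs)}
            for s, prefs in server_prefs.items()}
    free = list(container_prefs)          # unmatched containers
    next_choice = {c: 0 for c in container_prefs}
    match = {}                            # server -> container
    while free:
        c = free.pop(0)
        s = container_prefs[c][next_choice[c]]
        next_choice[c] += 1
        if s not in match:
            match[s] = c                  # server was free: accept
        elif rank[s][c] < rank[s][match[s]]:
            free.append(match[s])         # server prefers c: bump holder
            match[s] = c
        else:
            free.append(c)                # rejected: try next preference
    return match

cprefs = {"c1": ["s1", "s2"], "c2": ["s1", "s2"]}
sprefs = {"s1": ["c2", "c1"], "s2": ["c1", "c2"]}
assignment = stable_match(cprefs, sprefs)
print(assignment)
```

The result is stable in the matching-theory sense: no container/server pair would both prefer each other over their assigned partners, which is the optimality notion the paper's scheduler relies on.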

Journal ArticleDOI
TL;DR: In this article, a virtualized GPU resource adaptive scheduling algorithm for cloud games is proposed, which interposes scheduling algorithms in the graphics API of the operating system, and hence the host graphic driver or the guest operating system remains unmodified.
Abstract: As the virtualization technology for GPUs matures, cloud gaming has become an emerging application among cloud services. In addition to the poor default mechanisms of GPU resource sharing, the performance of cloud games is inevitably undermined by various runtime uncertainties such as rendering complex game scenarios. The question of how to handle the runtime uncertainties for GPU resource sharing remains unanswered. To address this challenge, we propose vGASA, a virtualized GPU resource adaptive scheduling algorithm in cloud gaming. vGASA interposes scheduling algorithms in the graphics API of the operating system, and hence the host graphic driver or the guest operating system remains unmodified. To fulfill the service level agreement as well as maximize GPU usage, we propose three adaptive scheduling algorithms featuring feedback control that mitigates the impact of the runtime uncertainties on the system performance. The experimental results demonstrate that vGASA is able to maintain frames per second of various workloads at the desired level with the performance overhead limited to 5-12 percent.
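The feedback-control idea can be sketched as a PI controller that nudges a VM's GPU-time share until the measured frame rate reaches the SLA target; the gains and the linear "plant" (FPS proportional to the granted share) below are illustrative assumptions, not vGASA's actual algorithms:

```python
class FrameRateController:
    """PI controller: adjust a GPU-time share so measured FPS tracks a
    target despite runtime uncertainty. Gains are illustrative."""

    def __init__(self, target_fps, kp=0.002, ki=0.0005):
        self.target = target_fps
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, measured_fps, share):
        """Return the new GPU-time share, clamped to [0.05, 1.0]."""
        error = self.target - measured_fps   # positive => needs more GPU
        self.integral += error
        share += self.kp * error + self.ki * self.integral
        return max(0.05, min(1.0, share))

# Toy plant: FPS proportional to the granted share (100 fps at share 1.0).
ctrl = FrameRateController(target_fps=30)
share = 0.1
for _ in range(200):
    fps = 100.0 * share
    share = ctrl.update(fps, share)
print(f"share={share:.3f} fps={100 * share:.1f}")
```

The integral term is what removes steady-state error when the rendering workload shifts; capping the share is the crude stand-in for leaving GPU headroom to co-located VMs.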

Journal ArticleDOI
TL;DR: By considering users' bandwidth, computing power and energy, the proposed system architecture gives users corresponding counters, which are stored and managed by the server, and enables users' spontaneous resource sharing.

Journal ArticleDOI
TL;DR: This work identifies two aspects of virtual network embedding in software-defined networks: virtual node and link mapping, and controller placement and develops techniques to perform embedding with two goals: balancing the load on the substrate network and minimizing controller-to-switch delays.

Journal ArticleDOI
01 Mar 2014
TL;DR: S/T/A is presented, a modeling language to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way, and the results show how a holistic model-based approach can close the gap between complex manual adaptations and their autonomous execution.
Abstract: Today, software systems are more and more executed in dynamic, virtualized environments. These environments host diverse applications of different parties, sharing the underlying resources. The goal of this resource sharing is to utilize resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system-specific, it is a challenge to abstract from system details, enabling the reuse of adaptation strategies. In this paper, we present S/T/A, a modeling language to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to multiple different realistic contexts (dynamic resource allocation, run-time adaptation planning, etc.). The results show how a holistic model-based approach can close the gap between complex manual adaptations and their autonomous execution.

Proceedings ArticleDOI
27 Aug 2014
TL;DR: The EUROSERVER device will embed multiple silicon "chiplets" on an active silicon interposer, which is pioneering a system architecture approach that allows specialized silicon devices to be built even for low-volume markets where NRE costs are currently prohibitive.
Abstract: EUROSERVER is a collaborative project that aims to dramatically improve data centre energy-efficiency, cost, and software efficiency. It is addressing these important challenges through the coordinated application of several key recent innovations: 64-bit ARM cores, 3D heterogeneous silicon-on-silicon integration, and fully-depleted silicon-on-insulator (FD SOI) process technology, together with new software techniques for efficient resource management, including resource sharing and workload isolation. We are pioneering a system architecture approach that allows specialized silicon devices to be built even for low-volume markets where NRE costs are currently prohibitive. The EUROSERVER device will embed multiple silicon "chiplets" on an active silicon interposer. Its system architecture is being driven by requirements from three use cases: data centres and cloud computing, telecom infrastructures, and high-end embedded systems. We will build two fully integrated full-system prototypes, based on a common micro-server board, and targeting embedded servers and enterprise servers.

Proceedings ArticleDOI
24 Mar 2014
TL;DR: This paper discusses how to combine a MC scheduling strategy with an optimization method for the partitioning of tasks to cores as well as the static mapping of memory blocks, i.e., task data and communication buffers, to the banks of a shared memory architecture.
Abstract: A common trend in real-time embedded systems is to integrate multiple applications on a single platform. Such systems are known as mixed-criticality (MC) systems when the applications are characterized by different criticality levels. Nowadays, multicore platforms are promoted due to cost and performance benefits. However, certification of multicore MC systems is challenging, as concurrently executed applications of different criticalities may block each other when accessing shared platform resources. Most of the existing research on multicore MC scheduling ignores the effects of resource sharing on the response times of applications. Recently, a MC scheduling strategy was proposed which explicitly accounts for these effects. This paper discusses how to combine this policy with an optimization method for the partitioning of tasks to cores as well as the static mapping of memory blocks, i.e., task data and communication buffers, to the banks of a shared memory architecture. Optimization is performed at design time, aiming at minimizing the worst-case response times of tasks and achieving efficient resource utilization. The proposed optimization method is evaluated using an industrial application.

Patent
26 Mar 2014
TL;DR: In this article, the authors proposed a motor vehicle resource sharing system, which comprises a management system that is the data processing and coordination center of the motor vehicle resource sharing system and is coupled in a communication mode to the other modules, including a vehicle file database, a vehicle renting pricing and expense settlement system, a vehicle renter information reading system, a self-service terminal, and an information inquiry and booking system.
Abstract: Disclosed is a motor vehicle resource sharing system. The motor vehicle resource sharing system comprises a management system, which is the data processing and coordination center of the motor vehicle resource sharing system and is coupled in a communication mode to the other modules: a vehicle file database, a vehicle renting pricing and expense settlement system, a vehicle renter information reading system, a self-service terminal, and an information inquiry and booking system. With the motor vehicle resource sharing system, motor vehicle resources in society are placed under unified shared allocation, a mode in which a plurality of users or families share a group of vehicles is achieved, resources are shared, congestion and parking difficulties are effectively relieved, citizens' quality of life is improved, and the carbon emissions of the whole society are effectively reduced.

Proceedings ArticleDOI
16 Nov 2014
TL;DR: Reciprocal Resource Fairness (RRF), a novel resource allocation mechanism enabling fair sharing of multiple resource types among multiple tenants in new-generation cloud environments, is proposed; results show that RRF is promising for both cloud providers and tenants.
Abstract: Resource sharing in virtualized environments has been shown to bring significant benefits for application performance and resource/energy efficiency. However, resource sharing, especially of multiple resource types, poses several severe and challenging problems in pay-as-you-use cloud environments, such as sharing incentives, free-riding, lying, and economic fairness. To address these problems, we propose Reciprocal Resource Fairness (RRF), a novel resource allocation mechanism that enables fair sharing of multiple resource types among multiple tenants in new-generation cloud environments. RRF implements two complementary and hierarchical mechanisms for resource sharing: inter-tenant resource trading and intra-tenant weight adjustment. We show that RRF satisfies several highly desirable properties to ensure fairness. Experimental results show that RRF is promising for both cloud providers and tenants. Compared to existing cloud models, RRF improves virtual machine (VM) density and cloud providers' revenue by 2.2X. For tenants, RRF improves application performance by 45% and guarantees 95% economic fairness among multiple tenants.
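The intra-tenant weight adjustment idea can be sketched minimally: a tenant holds a fixed total weight and redistributes it across resource types in proportion to its current demands. The proportional rule and function names below are our illustration, not RRF's actual mechanism.

```python
# Minimal sketch of intra-tenant weight adjustment: a tenant's fixed total
# weight is split across resource types in proportion to demand. This
# proportional rule is an illustrative assumption, not RRF's mechanism.

def adjust_weights(total_weight, demands):
    """demands: dict of resource -> demanded amount; returns per-resource weights."""
    total = sum(demands.values())
    if total == 0:
        # No demand reported: split the weight evenly.
        n = len(demands)
        return {r: total_weight / n for r in demands}
    return {r: total_weight * d / total for r, d in demands.items()}

# A tenant with weight 100 whose VMs mostly demand CPU:
w = adjust_weights(100, {"cpu": 6, "memory": 3, "io": 1})
```

Inter-tenant resource trading would then operate on top of such per-resource weights, letting a tenant exchange weight in one resource type for weight in another.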

Proceedings ArticleDOI
03 Nov 2014
TL;DR: The 'Merlin' approach to managing the resources of multicore platforms is presented; it satisfies an application's resource requirements efficiently, using low-cost allocations, and improves isolation, measured as increased predictability of application execution.
Abstract: Workload consolidation, whether via virtualization or with lightweight, container-based methods, is critically important for current and future datacenter and cloud computing systems. Yet such consolidation challenges the ability of current systems to meet application resource needs and isolate their resource shares, particularly for high-core-count or 'scaleup' servers. This paper presents the 'Merlin' approach to managing the resources of multicore platforms, which satisfies an application's resource requirements efficiently, using low-cost allocations, and improves isolation, measured as increased predictability of application execution. Merlin (i) creates a virtual platform (VP) as a system-level resource commitment to an application's resource shares, (ii) enforces its isolation, and (iii) operates with low runtime overhead. Further, Merlin's resource (re-)allocation and isolation methods operate by constructing online models that capture the resource 'sensitivities' of the currently running applications along all of their resource dimensions. Elevating isolation to a first-class management principle, these sensitivity- and cost-based allocation and sharing methods lead to efficient shared resource use on scaleup server systems. Experimental evaluations on a large core-count machine demonstrate improved performance with reduced performance variation and increased system throughput and efficiency for a wide range of popular datacenter workloads, compared with the methods used in prior work and with the state-of-the-art Xen hypervisor.
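A sensitivity-driven reallocation step of the kind Merlin's online models enable might look like the following sketch: estimate each application's performance sensitivity to one additional resource unit from recent samples via a finite difference, then grant the next unit to the most sensitive application. All names and the estimator here are assumptions for illustration, not Merlin's implementation.

```python
# Illustrative sketch of sensitivity-based allocation: a finite-difference
# slope over recent (allocation, performance) samples approximates each
# application's resource sensitivity. Names and estimator are assumptions,
# not Merlin's actual model.

def sensitivity(samples):
    """samples: list of (allocation, performance); slope of the last two points."""
    (a0, p0), (a1, p1) = samples[-2], samples[-1]
    if a1 == a0:
        return 0.0
    return (p1 - p0) / (a1 - a0)

def allocate_next_unit(apps):
    """apps: dict of name -> sample list. Return the most sensitive application."""
    return max(apps, key=lambda name: sensitivity(apps[name]))

apps = {
    "web":   [(2, 100), (3, 130)],   # gained 30 perf from the last extra core
    "batch": [(2, 80),  (3, 90)],    # gained 10 perf from the last extra core
}
winner = allocate_next_unit(apps)
```

In a full system this per-dimension estimate would be maintained online for every resource type (cores, memory bandwidth, cache), and allocation cost would be weighed against the estimated gain.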

Patent
23 Jul 2014
TL;DR: In this article, a control processing method and system based on multiple accounts and multiple target devices is proposed.
Abstract: The invention discloses a control processing method and system based on multiple accounts and multiple target devices. The method includes the following steps: an intelligent terminal requests a server to establish an administrator account through the MAC address of the intelligent terminal; the server end automatically generates the administrator account corresponding to the MAC address of the intelligent terminal according to the request, and a plurality of ordinary accounts used by different users to log on for browsing are established through distribution by the administrator account. The intelligent terminal of the administrator account is logged on, the resource-sharing two-dimension codes of the intelligent terminal with resource sharing are scanned, and the two-dimension codes are bound to the intelligent terminal with resource sharing so that resources can be shared. The intelligent terminal bound to the intelligent terminal with resource sharing for the first time is given the authority of a super administrator, so that resource sharing between that intelligent terminal and other intelligent terminals can be controlled. Through the control processing method and system, different programs can be recommended through different accounts, convenience is provided for the users, resources can be shared by different devices, and operation safety is improved.

Patent
08 Dec 2014
TL;DR: In this paper, a High Speed Link System providing network and data transfer capabilities, implemented via standard input/output (I/O) device controllers, protocols, cables and components, comprising a System, Apparatus and Method is claimed; and described in one or more embodiments.
Abstract: A High Speed Link System providing network and data transfer capabilities, implemented via standard input/output (I/O) device controllers, protocols, cables, and components to connect one or more Host computing systems, comprising a System, Apparatus, and Method, is claimed and described in one or more embodiments. An illustrative embodiment of the invention connects two or more Host systems via USB 3.0 ports and cables, establishing the Network, Control, Data Exchange, and Power management required to route and transfer data at high speeds, as well as resource sharing. A Link System established using USB 3.0 operates at the full 4.8 Gbps, eliminating losses inherent in translating to, or encapsulating within, a network protocol such as the Internet Protocol. The Method claimed herein describes how two or more connected Host systems detect one another and establish separate communication and data exchange bridges, wherein control sequences from the Hosts' applications direct the operation of the Apparatus.

Proceedings ArticleDOI
Wang Yuda, Renyu Yang, Tianyu Wo, Wenbo Jiang, Chunming Hu
01 Dec 2014
TL;DR: This paper proposes a system that combines long-running VM services with typical batch workloads such as MapReduce, improving holistic cluster utilization through a dynamic resource adjustment mechanism for VMs without violating batch workload executions.
Abstract: Virtualization is one of the most fascinating techniques because it can facilitate infrastructure management and provide isolated execution for running workloads. Despite the benefits gained from virtualization and resource sharing, improved resource utilization is still far from settled due to dynamic resource requirements and the widely used over-provisioning strategy for guaranteed QoS. Additionally, with the emerging demands of big data analytics, how to effectively manage hybrid workloads, such as traditional batch tasks and long-running virtual machine (VM) services, needs to be dealt with. In this paper, we propose a system that combines long-running VM services with typical batch workloads such as MapReduce. The objective is to improve holistic cluster utilization through a dynamic resource adjustment mechanism for VMs without violating other batch workload executions. Furthermore, VM migration is utilized to ensure high availability and avoid potential performance degradation. The experimental results reveal that the dynamically allocated memory is close to the real usage with only a 10% estimation margin, and the performance impact on both VMs and MapReduce jobs is within 1%. Additionally, an increase in resource utilization of up to 50% can be achieved. We believe these findings are a step in the right direction toward solving workload consolidation issues in hybrid computing environments.
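The reported 10% estimation margin suggests a simple sizing rule: set a VM's next memory allocation to its observed usage plus a safety margin, clamped to configured bounds. The margin value and the clamping bounds below are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch of a dynamic memory adjustment rule: observed usage plus a safety
# margin (the paper reports roughly a 10% estimation margin), clamped to a
# floor and ceiling. The 10% figure and the bounds here are assumptions.

def target_allocation(observed_mb, margin=0.10, floor_mb=512, ceiling_mb=16384):
    """Return the next memory allocation (MB) for a VM given observed usage."""
    target = observed_mb * (1 + margin)
    # Clamp so the VM never drops below a safe floor or exceeds the host budget.
    return max(floor_mb, min(ceiling_mb, target))
```

Such a rule would be re-evaluated periodically (e.g. via balloon driver adjustments), freeing over-provisioned memory for co-located batch tasks while keeping headroom for the VM's own spikes.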

DOI
14 Apr 2014
TL;DR: This thesis focuses on one of the challenges posed by multi-tenancy, namely performance, and presents and evaluates methods to analyze and optimize the performance of multi-tenant software at both the hardware and software levels.
Abstract: Multi-tenant software systems are Software-as-a-Service systems in which customers (or tenants) share the same resources. The key characteristics of multi-tenancy are hardware resource sharing, a high degree of configurability, and a shared application and database instance. We can deduce from these characteristics that they lead to challenges compared to traditional software schemes. To better understand these challenges, we have devised a reengineering pattern for transforming an existing single-tenant application into a multi-tenant one. We have conducted a case study in which we transform a single-tenant research prototype into a multi-tenant version. This case study showed that, in a layered application, the transformation could be done in fewer than 100 lines of code. With a better understanding of the challenges posed by multi-tenancy, we focus on one of them in this thesis, namely performance. Because tenants share resources in multi-tenant applications, it is necessary to optimize these applications on two levels: (1) the hardware level and (2) the software level. In this thesis, we present and evaluate methods to analyze and optimize the performance of multi-tenant software on those two levels.

Journal ArticleDOI
TL;DR: A solution to the problem of resource integration and optimal scheduling in cloud manufacturing is presented from the perspective of global optimization, taking into account sharing and correlation among virtual resources.
Abstract: To deal with the problem of resource integration and optimal scheduling in cloud manufacturing, and based on an analysis of the existing literature, a multitask-oriented virtual resource integration and optimal scheduling problem is presented from the perspective of global optimization, taking into account sharing and correlation among virtual resources. Correlation models of virtual resources within a task and among tasks are established. According to the correlation model and the characteristics of resource sharing, a formulation employing a resource time-sharing scheduling strategy is put forward; the formulation is then simplified so that the problem can be solved more easily. A genetic algorithm based on real-number matrix encoding is proposed, with crossover and mutation operation rules designed for the real-number matrix. Meanwhile, an evaluation function with a punishment mechanism and a selection strategy with a pressure factor are adopted so as to approach the optimal solution more quickly. The experimental results show that the proposed model and method are feasible and effective for a large number of tasks, in situations of both sufficient and limited resources.
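The GA ingredients named in the abstract (a real-number matrix encoding, crossover and mutation rules for the matrix, and a punishment-based evaluation function) can be sketched as follows; the operators, penalty weight, and objective are simplified illustrations rather than the paper's exact formulation.

```python
# Toy sketch of a GA over a real-valued task-by-resource allocation matrix:
# row-exchange crossover, clamped Gaussian mutation, and a penalized
# ("punishment") evaluation function. Simplified illustration only.
import random

TASKS, RESOURCES = 3, 4

def random_chromosome():
    """A chromosome is a TASKS x RESOURCES matrix of allocation fractions."""
    return [[random.random() for _ in range(RESOURCES)] for _ in range(TASKS)]

def fitness(chrom, cost, capacity):
    """Lower is better: total cost plus a punishment for overloaded resources."""
    total = sum(chrom[t][r] * cost[t][r]
                for t in range(TASKS) for r in range(RESOURCES))
    penalty = 0.0
    for r in range(RESOURCES):
        load = sum(chrom[t][r] for t in range(TASKS))
        penalty += max(0.0, load - capacity[r])  # punishment mechanism
    return total + 10.0 * penalty

def crossover(a, b):
    """Exchange whole task rows between two parent matrices."""
    point = random.randrange(1, TASKS)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, rate=0.1):
    """Perturb entries with Gaussian noise, clamped to [0, 1]."""
    for t in range(TASKS):
        for r in range(RESOURCES):
            if random.random() < rate:
                chrom[t][r] = min(1.0, max(0.0, chrom[t][r] + random.gauss(0, 0.1)))
    return chrom
```

A pressure-factor selection strategy, as mentioned in the abstract, would sit on top of `fitness`, biasing parent selection increasingly toward low-cost chromosomes as generations progress.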