
Showing papers on "Shared resource" published in 2006


Journal ArticleDOI
TL;DR: This paper proposes a novel distributed service discovery protocol based on the concepts of peer-to-peer caching of service advertisements and group-based intelligent forwarding of service requests for pervasive environments.
Abstract: The paper proposes a novel distributed service discovery protocol for pervasive environments. The protocol is based on the concepts of peer-to-peer caching of service advertisements and group-based intelligent forwarding of service requests. It does not require a service to be registered with a registry or lookup server. Services are described using the Web Ontology Language (OWL). We exploit the semantic class/subClass hierarchy of OWL to describe service groups and use this semantic information to selectively forward service requests. OWL-based service description also enables increased flexibility in service matching. We present simulation results that show that our protocol achieves increased efficiency in discovering services (compared to traditional broadcast-based mechanisms) by efficiently utilizing bandwidth via controlled forwarding of service requests.

273 citations
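
The group-based forwarding idea in the abstract above can be illustrated with a small sketch. This is a minimal illustration under assumed names, not the paper's OWL-based protocol: the toy class hierarchy, the peer advertisement cache, and the function names are all hypothetical, and a real implementation would reason over OWL ontologies rather than a parent map.

```python
# Sketch of group-based selective forwarding (illustrative only; the class
# hierarchy, peer advertisements, and function names are assumptions, not
# the paper's OWL-based protocol).

SUBCLASS_OF = {                      # child -> parent in a toy service ontology
    "ColorPrinter": "Printer",
    "LaserPrinter": "Printer",
    "Printer": "Hardware",
    "Scanner": "Hardware",
}

def is_subclass(cls, ancestor):
    """True if cls equals ancestor or is (transitively) a subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def forward_targets(requested_class, neighbor_adverts):
    """Select only neighbors whose cached advertisements can satisfy the
    requested service class (same class or a subclass of it)."""
    return [peer for peer, classes in neighbor_adverts.items()
            if any(is_subclass(c, requested_class) for c in classes)]

adverts = {"peerA": {"ColorPrinter"}, "peerB": {"Scanner"}, "peerC": {"LaserPrinter"}}
print(forward_targets("Printer", adverts))   # ['peerA', 'peerC']; peerB is skipped
```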


Journal ArticleDOI
TL;DR: It is found that some intuitive 'fairness' notions may unexpectedly lead to 'unfair' allocations, which do not provide the right incentives for peers, and implementations of P2P systems may want to compromise the degree of 'fairness' in favor of maintaining system robustness and reducing overheads.

197 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to provide service differentiation in a P2P network based on the amount of services each node has provided to the network community, and to present a generalized incentive mechanism for nodes having heterogeneous utility functions.
Abstract: Conventional peer-to-peer (P2P) networks do not provide service differentiation and incentive for users. Therefore, users can easily obtain information without themselves contributing any information or service to a P2P community. This leads to the well known free-riding problem. Consequently, most of the information requests are directed towards a small number of P2P nodes which are willing to share information or provide service, causing the "tragedy of the commons." The aim of this paper is to provide service differentiation in a P2P network based on the amount of services each node has provided to the network community. Since the differentiation is based on nodes' prior contributions, the nodes are encouraged to share information/services with each other. We first introduce a resource distribution mechanism for all the information sharing nodes. The mechanism is distributed in nature, has linear time complexity, and guarantees Pareto-optimal resource allocation. Second, we model the whole resource request/distribution process as a competition game between the competing nodes. We show that this game has a Nash equilibrium. To realize the game, we propose a protocol in which the competing nodes can interact with the information providing node to reach Nash equilibrium efficiently and dynamically. We also present a generalized incentive mechanism for nodes having heterogeneous utility functions. Convergence analysis of the competition game is carried out. Examples are used to illustrate that the incentive protocol provides service differentiation and can induce productive resource sharing by rational network nodes. Lastly, the incentive protocol is adaptive to node arrival and departure events, and to different forms of network congestion.

190 citations
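
One simplified reading of contribution-based service differentiation is sketched below: an information-providing node splits its capacity among competing requesters in proportion to each requester's prior contribution, in linear time. This is an assumption-laden illustration, not the paper's game-theoretic mechanism or its Nash-equilibrium protocol.

```python
# Simplified sketch of contribution-proportional resource distribution.
# Illustrative only: capacity is split in proportion to each requesting
# node's recorded prior contribution; the paper's actual mechanism is a
# competition game whose equilibrium is reached via an interaction protocol.

def allocate(capacity, contributions):
    """Split `capacity` among requesters proportionally to contribution.

    contributions: dict node -> prior contribution score (>= 0).
    Returns dict node -> allocated share. Runs in linear time.
    """
    total = sum(contributions.values())
    if total == 0:                                # nobody has contributed:
        share = capacity / len(contributions)     # fall back to an equal split
        return {node: share for node in contributions}
    return {node: capacity * c / total for node, c in contributions.items()}

print(allocate(100.0, {"n1": 30, "n2": 10, "n3": 0}))
# {'n1': 75.0, 'n2': 25.0, 'n3': 0.0}; free riders receive nothing
```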


Proceedings ArticleDOI
05 Dec 2006
TL;DR: Detailed response time analysis is provided enabling the schedulability of application servers and tasks to be determined for systems with local and global resource access.
Abstract: This paper focuses on resource sharing in hierarchical fixed priority pre-emptive systems where a number of separate applications, each with its own server, reside on a single processor. It defines the Hierarchical Stack Resource Policy, an appropriate global resource access policy that bounds priority inversion and also limits interference due to overruns during resource access. The paper provides detailed response time analysis enabling the schedulability of application servers and tasks to be determined for systems with local and global resource access. This analysis is applicable to real-world systems where server-based applications need mutually exclusive access to shared resources such as communications buffers, peripheral devices, operating system calls and data structures shared with interrupt handlers.

170 citations
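
For readers unfamiliar with response-time analysis, the sketch below shows the classical single-level fixed-priority recurrence with a blocking term. It is textbook background only, under the usual assumptions (sporadic tasks, known worst-case execution times); the paper's Hierarchical Stack Resource Policy analysis adds server and overrun terms that are not reproduced here.

```python
import math

# Classical single-level response-time recurrence with blocking:
#   R_i = C_i + B_i + sum over higher-priority j of ceil(R_i / T_j) * C_j
# Standard background only; NOT the hierarchical (server-based) analysis
# developed in the paper.

def response_time(C_i, B_i, higher_prio, deadline):
    """higher_prio: list of (C_j, T_j) for higher-priority tasks.
    Returns the fixed-point response time, or None if it exceeds the deadline."""
    R = C_i + B_i
    while R <= deadline:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in higher_prio)
        R_next = C_i + B_i + interference
        if R_next == R:
            return R          # fixed point reached
        R = R_next
    return None               # unschedulable within the deadline

# Task under analysis: C=4, blocking B=2; higher-priority tasks (C, T):
print(response_time(4, 2, [(1, 5), (2, 10)], deadline=20))   # 10
```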


Patent
21 Feb 2006
TL;DR: In this paper, the authors address deficiencies of the art in respect to dynamic computing resource allocation, and provide a method, system and computer program product for dynamic resource allocation for disparate application performance requirements.
Abstract: Embodiments of the invention address deficiencies of the art in respect to dynamic computing resource allocation, and provide a method, system and computer program product for dynamic resource allocation for disparate application performance requirements. In one embodiment of the invention, a resource allocation data processing system can include a shared resource pool including resources and a resource configurator coupled to the shared resource pool. The system further can include a service processor coupled to the resource configurator, wherein the service processor can include an application programming interface (API) exposing methods for commanding the resource configurator to configure the resources in the shared resource pool.

131 citations


Proceedings ArticleDOI
15 Jan 2006
TL;DR: This paper proposes a new P2P streaming framework called DagStream, which organizes peers into a directed acyclic graph (DAG) in which each node maintains at least k parents, giving provable network connectivity (and hence failure resilience), and enables peers to quickly achieve locality awareness in a distributed fashion, ensuring efficient network resource usage.
Abstract: Live peer to peer (P2P) media streaming faces many challenges such as peer unreliability and bandwidth heterogeneity. To effectively address these challenges, general "mesh" based P2P streaming architectures have recently been adopted. Mesh-based systems allow peers to aggregate bandwidth from multiple neighbors, and dynamically adapt to changing network conditions and neighbor failures. However, a drawback of mesh-based overlays is that it is difficult to guarantee network connectivity in a distributed fashion, especially when network locality needs to be optimized. This paper introduces a new P2P streaming framework called DagStream, which (1) organizes peers into a directed acyclic graph (DAG) where each node maintains at least k parents, thus has provable network connectivity (and hence failure resilience), and (2) enables peers to quickly achieve locality awareness in a distributed fashion, thus ensures efficient network resource usage. Our experiment results in both simulation and wide area environment show that with our DagStream protocol, peers can quickly self-organize into a locality aware DAG. Further, by selecting additional parents as needed, peers can achieve good streaming quality commensurate with their downlink bandwidth.

89 citations
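
The "at least k parents, prefer nearby ones" rule can be sketched in a few lines. The latency metric, data structures, and names below are assumptions for illustration; the actual DagStream protocol additionally enforces DAG acyclicity and bandwidth constraints.

```python
# Toy sketch of DagStream-style parent maintenance: keep at least k parents,
# preferring low-latency (locality-aware) candidates. Illustrative only.

K = 3  # minimum number of parents each peer maintains

def maintain_parents(parents, candidates, latency, k=K):
    """Return an updated parent set of size >= k (if enough candidates exist),
    topped up greedily with the lowest-latency candidates not already used."""
    parents = set(parents)
    pool = sorted((c for c in candidates if c not in parents),
                  key=lambda c: latency[c])
    for c in pool:
        if len(parents) >= k:
            break
        parents.add(c)
    return parents

latency = {"p1": 20, "p2": 5, "p3": 80, "p4": 12}
print(maintain_parents({"p1"}, ["p2", "p3", "p4"], latency))
# parents become {'p1', 'p2', 'p4'} (printed order may vary)
```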


Dissertation
01 Jan 2006
TL;DR: This dissertation is to develop solutions for efficient query processing over sliding windows by focusing on the semantics of persistent queries, lower window maintenance costs, as well as novel query processing, query optimization, and concurrency control strategies.
Abstract: Database management systems (DBMSs) have been used successfully in traditional business applications that require persistent data storage and an efficient querying mechanism. Typically, it is assumed that the data are static, unless explicitly modified or deleted by a user or application. Database queries are executed when issued and their answers reflect the current state of the data. However, emerging applications, such as sensor networks, real-time Internet traffic analysis, and on-line financial trading, require support for processing of unbounded data streams. The fundamental assumption of a data stream management system (DSMS) is that new data are generated continually, making it infeasible to store a stream in its entirety. At best, a sliding window of recently arrived data may be maintained, meaning that old data must be removed as time goes on. Furthermore, as the contents of the sliding windows evolve over time, it makes sense for users to ask a query once and receive updated answers over time. This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time, therefore there exist opportunities for resource sharing among similar queries. The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that the knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, as well as novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times, therefore synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.

88 citations
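
As a tiny illustration of the insertion/expiration pattern the dissertation builds on, the sketch below maintains a time-based sliding-window sum incrementally, "undoing" the contribution of expired tuples instead of recomputing from scratch. It is a generic example with assumed names, not the dissertation's algorithms.

```python
from collections import deque

# Minimal time-based sliding-window aggregate: new tuples are added and
# expired tuples are "undone" incrementally, rather than recomputing the
# whole window. Generic illustration only.

class SlidingWindowSum:
    def __init__(self, window_size):
        self.window_size = window_size      # window length in time units
        self.items = deque()                # (timestamp, value), oldest first
        self.total = 0.0

    def insert(self, timestamp, value):
        self.items.append((timestamp, value))
        self.total += value
        self._expire(timestamp)

    def _expire(self, now):
        while self.items and self.items[0][0] <= now - self.window_size:
            _, old = self.items.popleft()
            self.total -= old               # undo the expired contribution

w = SlidingWindowSum(window_size=10)
for t, v in [(1, 5), (4, 3), (12, 2)]:      # tuple at t=1 expires when t=12 arrives
    w.insert(t, v)
print(w.total)                               # 5.0 (3 + 2)
```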


Journal ArticleDOI
TL;DR: This is the first work that addresses how to handle dynamic flash crowds in a structured P2P network and shows that the SWOP network can efficiently deliver popular, possibly dynamic, objects to all the requesting nodes.

80 citations


Proceedings ArticleDOI
05 Dec 2006
TL;DR: The technique of reducing the duration of blocking by the replication of selected resources is explored: an algorithm is presented which determines the minimum amount of resource replication necessary to achieve specified blocking times.
Abstract: Resource sharing in priority-based systems can give rise to priority-inversion and blocking, wherein a job's execution is delayed because a lower-priority job holds some resource that is needed for execution. The Stack Resource Policy (SRP) can be used to reduce such blocking in EDF-scheduled systems. An efficient implementation of an algorithm is presented for determining whether systems scheduled in this manner are feasible. Some interesting properties of such systems are derived. The technique of reducing the duration of blocking by the replication of selected resources is explored: an algorithm is presented which determines the minimum amount of resource replication necessary to achieve specified blocking times.

78 citations
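
A well-known density-based sufficient feasibility test for EDF with SRP blocking can be sketched as follows. This is the standard Baker-style condition, shown under the assumption that relative deadlines do not exceed periods; it is background material and not necessarily the exact algorithm the paper implements.

```python
# Sketch of a standard density-based sufficient feasibility test for EDF
# with SRP blocking. Assumes each task's relative deadline D_i <= its period,
# so density is C_i / D_i. Background only, not the paper's exact algorithm.

def edf_srp_feasible(tasks):
    """tasks: list of (C, D, B) = (WCET, relative deadline, blocking time).

    With tasks ordered by nondecreasing deadline, every prefix must satisfy
        B_k / D_k + sum_{i<=k} C_i / D_i <= 1.
    """
    tasks = sorted(tasks, key=lambda t: t[1])      # nondecreasing deadlines
    density_sum = 0.0
    for C, D, B in tasks:
        density_sum += C / D
        if density_sum + B / D > 1.0:
            return False
    return True

print(edf_srp_feasible([(1, 4, 1), (2, 8, 2), (3, 20, 0)]))   # True
print(edf_srp_feasible([(3, 4, 2), (2, 8, 0)]))               # False
```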


Proceedings ArticleDOI
07 Jun 2006
TL;DR: A usage control (UCON) based authorization framework for collaborative applications using subject and object attributes, along with system attributes as conditions, to support context-based authorizations in ad-hoc collaborations is proposed.
Abstract: Collaborative systems such as Grids provide efficient and scalable access to distributed computing capabilities and enable seamless resource sharing between users and platforms. This heterogeneous distribution of resources and the various modes of collaborations that exist between users, virtual organizations, and resource providers require scalable, flexible, and fine-grained access control to protect both individual and shared computing resources. In this paper we propose a usage control (UCON) based authorization framework for collaborative applications. In our framework, usage control policies are defined using subject and object attributes, along with system attributes as conditions. General attributes include not only persistent attributes such as role and group memberships, but also mutable usage attributes of subjects and objects. Conditions in UCON can be used to support context-based authorizations in ad-hoc collaborations. As a proof-of-concept we implement a prototype system based on our proposed architecture and conduct experimental studies to demonstrate the feasibility and performance of our approach.

77 citations
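
A minimal sketch of an attribute-based usage-control decision is shown below. The attribute names, policy structure, and condition predicate are hypothetical; a real UCON implementation would also update mutable attributes and re-evaluate ongoing conditions during usage, which this sketch does not do.

```python
# Minimal sketch of a UCON-style authorization decision. Attribute names,
# the policy structure, and the condition predicate are hypothetical.

def authorize(subject_attrs, object_attrs, conditions, policy):
    """Grant access only if subject, object, and system-condition predicates
    in the policy all evaluate to True."""
    return (all(pred(subject_attrs) for pred in policy["subject"]) and
            all(pred(object_attrs) for pred in policy["object"]) and
            all(pred(conditions) for pred in policy["conditions"]))

policy = {
    "subject":    [lambda s: "researcher" in s["roles"],
                   lambda s: s["usage_count"] < 10],        # mutable attribute
    "object":     [lambda o: o["classification"] == "shared"],
    "conditions": [lambda c: c["within_collaboration_hours"]],
}

print(authorize({"roles": {"researcher"}, "usage_count": 3},
                {"classification": "shared"},
                {"within_collaboration_hours": True},
                policy))   # True
```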


Dissertation
01 Jul 2006
TL;DR: This thesis introduces a heuristic for the selection of resources based on a solution to the Set Covering Problem (SCP), pairs this mapping heuristic with the well-known MinMin scheduling algorithm, and conducts performance evaluation through extensive simulations.
Abstract: The next generation of scientific experiments and studies are being carried out by large collaborations of researchers distributed around the world engaged in analysis of huge collections of data generated by scientific instruments. Grid computing has emerged as an enabler for such collaborations as it aids communities in sharing resources to achieve common objectives. Data Grids provide services for accessing, replicating and managing data collections in these collaborations. Applications used in such Grids are distributed data-intensive, that is, they access and process distributed datasets to generate results. These applications need to transparently and efficiently access distributed data and computational resources. This thesis investigates properties of data-intensive computing environments and presents a software framework and algorithms for mapping distributed data-oriented applications to Grid resources. The thesis discusses the key concepts behind Data Grids and compares them with other data sharing and distribution mechanisms such as content delivery networks, peer-to-peer networks and distributed databases. This thesis provides comprehensive taxonomies that cover various aspects of Data Grid architecture, data transportation, data replication and resource allocation and scheduling. The taxonomies are mapped to various Data Grid systems not only to validate the taxonomy but also to better understand their goals and methodology. The thesis concentrates on one of the areas delineated in the taxonomy – scheduling distributed data-intensive applications on Grid resources. To this end, it presents the design and implementation of a Grid resource broker that mediates access to distributed computational and data resources running diverse middleware. The broker is able to discover remote data repositories, interface with various middleware services and select suitable resources in order to meet the application requirements. The use of the broker is illustrated by a case study of scheduling a data-intensive high energy physics analysis application on an Australia-wide Grid. The broker provides the framework to realise scheduling strategies with differing objectives. One of the key aspects of any scheduling strategy is the mapping of jobs to the appropriate resources to meet the objectives. This thesis presents heuristics for mapping jobs with data dependencies in an environment with heterogeneous Grid resources and multiple data replicas. These heuristics are then compared with performance evaluation metrics obtained through extensive simulations.

Patent
Robert L. Beck1
11 Oct 2006
TL;DR: The extensible file sharing described in this article allows users, administrators, or other third party developers to expand or enhance a file sharing service or application to provide virtually any desired functionality, such as resource management, security management, management of user experience, and the like.
Abstract: A file sharing service facilitates file sharing between a client and a host over a network. An extensible architecture provides an interface by which the file sharing service can be expanded to include additional functionality. This additional functionality may include resource management, security management, management of user experience, and the like. For example, users or administrators of the host or another computing device on the network may wish to oversee the file sharing service as a whole and/or individual file sharing transactions. The extensible file sharing described herein allows users, administrators, or other third party developers to expand or enhance a file sharing service or application to provide virtually any desired functionality.

Journal ArticleDOI
TL;DR: A better understanding of peer relationships in resource sharing is given, which may help a better design of future P2P networks and open the path to the study of transport processes on top of real P2P topologies.
Abstract: This paper presents a statistical analysis of the structure of peer-to-peer (P2P) social networks that captures social associations of distributed peers in resource sharing. Peer social networks appear to be mainly composed of pure resource providers that guarantee high resource availability and reliability of P2P systems. The major peers that both provide and request resources are only a small fraction. The connectivity between peers, including undirected, directed (out and in) and weighted connections, is scale-free and the social networks of all peers and major peers are small world networks. The analysis also confirms that peer social networks show in general disassortative correlations, except that active providers are connected between each other and by active requesters. The study presented in this paper gives a better understanding of peer relationships in resource sharing, which may help a better design of future P2P networks and open the path to the study of transport processes on top of real P2P topologies.

Proceedings ArticleDOI
05 May 2006
TL;DR: The feasibility of using sensor networks as a shared resource is discussed, and the research in addressing the various technical challenges that arise in enabling such sensor portability and rapid deployment is described.
Abstract: Wireless sensor networks (WSNs) are a relatively new and rapidly developing technology; they have a wide range of applications including environmental monitoring, agriculture, and public health. Shared technology is a common usage model for technology adoption in developing countries. WSNs have great potential to be utilized as a shared resource due to their on-board processing and ad-hoc networking capabilities, however their deployment as a shared resource requires that the technical community first address several challenges. The main challenges include enabling sensor portability: (1) the frequent movement of sensors within and between deployments, and rapidly deployable systems; (2) systems that are quick and simple to deploy. We first discuss the feasibility of using sensor networks as a shared resource, and then describe our research in addressing the various technical challenges that arise in enabling such sensor portability and rapid deployment. We also outline our experiences in developing and deploying water quality monitoring wireless sensor networks in Bangladesh and California

Book ChapterDOI
TL;DR: This chapter presents the implementation of a resource broker for UNICORE as this functionality was not supported in the previous version of Gridbus and a comparison of these systems on the basis of the architecture, implementation model and several other features is included.
Abstract: Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middlewares provide users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed, most of which are results of academic research projects, all over the world. This chapter will focus on four of these middlewares—UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE as this functionality was not supported in it. A comparison of these systems on the basis of the architecture, implementation model and several other features is included.

Patent
Brian Petersen1
15 Dec 2006
TL;DR: In this paper, the total offered traffic load for a shared resource within a network switching system may include a sum of offered traffic loads from one or more active virtual output queues (VOQs).
Abstract: According to an example embodiment, a total offered traffic load for a shared resource within a network switching system may be determined, the total offered traffic load may include, for example, a sum of offered traffic loads from one or more active virtual output queues (VOQs) of the network switching system. A capacity of the shared resource within the network switching system may be determined. A transmission rate from one or more of the active VOQs over the shared resource may be adjusted such that the total traffic load from the active VOQs does not exceed the capacity of the shared resource.
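
The rate-adjustment idea can be sketched as a simple proportional scale-down when the summed offered load exceeds the shared resource's capacity. This shows one possible policy under assumed names; the patent does not prescribe this particular adjustment rule.

```python
# Sketch of one possible rate-adjustment policy: if the total offered load
# from active VOQs exceeds the shared resource's capacity, scale every
# queue's granted rate down proportionally. Illustrative only.

def adjust_rates(offered_loads, capacity):
    """offered_loads: dict VOQ id -> offered load. Returns granted rates whose
    sum never exceeds capacity."""
    total = sum(offered_loads.values())
    if total <= capacity:
        return dict(offered_loads)                 # no adjustment needed
    scale = capacity / total
    return {voq: load * scale for voq, load in offered_loads.items()}

print(adjust_rates({"voq0": 6.0, "voq1": 3.0, "voq2": 3.0}, capacity=8.0))
# {'voq0': 4.0, 'voq1': 2.0, 'voq2': 2.0}; total held at the 8.0 capacity
```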

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed CAC scheme is able to simultaneously provide satisfactory QoS to both voice and data users and maintain a relatively high resource utilization in a dynamic traffic load environment.
Abstract: Resource allocation and call admission control (CAC) are key management functions in future cellular networks, in order to provide multimedia applications to mobiles users with quality of service (QoS) guarantees and efficient resource utilization. In this paper, we propose and analyze a priority based resource sharing scheme for voice/data integrated cellular networks. The unique features of the proposed scheme are that 1) the maximum resource utilization can be achieved, since all the leftover capacity after serving the high priority voice traffic can be utilized by the data traffic; 2) a Markovian model for the proposed scheme is established, which takes account of the complex interaction of voice and data traffic sharing the total resources; 3) optimal CAC parameters for both voice and data calls are determined, from the perspective of minimizing resource requirement and maximizing new call admission rate, respectively; 4) load adaption and bandwidth allocation adjustment policies are proposed for adaptive CAC to cope with traffic load variations in a wireless mobile environment. Numerical results demonstrate that the proposed CAC scheme is able to simultaneously provide satisfactory QoS to both voice and data users and maintain a relatively high resource utilization in a dynamic traffic load environment. The recent measurement-based modeling shows that the Internet data file size follows a lognormal distribution, instead of the exponential distribution used in our analysis. We use computer simulations to demonstrate that the impact of the lognormal distribution can be compensated for by conservatively applying the Markovian analysis results.

Patent
28 Jul 2006
TL;DR: In this article, a shared control channel information element (501) is sent to the group of mobile stations and provides a bitmap having fields for group ordering, resource allocations, failure handling resources, and an ordering pattern.
Abstract: A base station (103) assigns a set of mobile stations (101) to a group wherein the group will share a set of radio resources (710). A shared control channel information element (501) is sent to the group of mobile stations (101) and provides a bitmap having fields for group ordering (511), resource allocations (530), failure handling resources (540), and an ordering pattern (513). If a mobile station fails to decode the shared control channel information element (501) it will access the failure handling resources in order to receive data. The failure handling channel may be persistent in some embodiments, or may be released after the mobile station is once again able to decode the shared control channel information element (501) and thereby share in the shared resource pool allocated to its mobile station group.

Proceedings ArticleDOI
11 Nov 2006
TL;DR: A prototype grid hosting system is presented, in which a set of independent globus grids share a network of cluster sites and each grid instance runs a coordinator that leases and configures cluster resources for its grid on demand.
Abstract: Grid computing environments need secure resource control and predictable service quality in order to be sustainable. We propose a grid hosting model in which independent, self-contained grid deployments run within isolated containers on shared resource provider sites. Sites and hosted grids interact via an underlying resource control plane to manage a dynamic binding of computational resources to containers. We present a prototype grid hosting system, in which a set of independent Globus grids share a network of cluster sites. Each grid instance runs a coordinator that leases and configures cluster resources for its grid on demand. Experiments demonstrate adaptive provisioning of cluster resources and contrast job-level and container-level resource management in the context of two grid application managers.

Patent
23 Oct 2006
TL;DR: In this paper, authentication information can be used to architect a trustworthy system to divulging sensitive client data (such as user/computer passwords) to a host system when a client establishes a relationship having a degree of trust with the host.
Abstract: Sharing resources on a network includes, for example, a domain controller hierarchy scheme, which is used in some implementations to organize and share both secure and non-secure resources in an efficient manner. Authentication information can be used to architect a trustworthy system for divulging sensitive client data (such as user/computer passwords) to a host system. The sensitive client data can be released to the host system when a client establishes a relationship having a degree of trust with the host.

Proceedings ArticleDOI
04 Apr 2006
TL;DR: A hierarchical I/O scheduling algorithm is presented that manages performance resources on an underlying storage device, combines deadline scheduling with proportional-style resource sharing in a novel way, and achieves throughput nearly equal to that of an unmanaged device.
Abstract: Large-scale storage systems often hold data for multiple applications and users. A problem in such systems is isolating applications and users from each other to prevent their workloads from interacting in unexpected ways. Another is ensuring that each application receives an appropriate level of performance. As part of the solution to these problems, we have designed a hierarchical I/O scheduling algorithm to manage performance resources on an underlying storage device. Our algorithm uses a simple allocation abstraction: an application or user has a corresponding pool of throughput, and manages throughput within its pool by opening sessions. The algorithm ensures that each pool and session receives at least a reserve rate of throughput and caps usage at a limit rate, using hierarchical token buckets and EDF I/O scheduling. Once it has fulfilled the reserves of all active sessions and pools, it shares unused throughput fairly among active sessions and pools such that they tend to receive the same amount. It thus combines deadline scheduling with proportional-style resource sharing in a novel way. We assume that the device performs its own low-level head scheduling, rather than modeling the device in detail. Our implementation shows the correctness of our algorithm, imposes little overhead on the system, and achieves throughput nearly equal to that of an unmanaged device.
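
A compact sketch of the per-session reserve/limit idea using token buckets follows. It is a simplification with assumed names: the paper's algorithm additionally arranges such buckets hierarchically (pools containing sessions) and dispatches reserved I/Os using EDF deadlines derived from the reserve rates, which this sketch omits.

```python
import time

# Simplified sketch of per-session reserve/limit control with token buckets.
# A session's I/O is "reserved" (guaranteed) while its reserve bucket has
# tokens, is "spare" work while only its limit bucket has tokens, and is held
# back once the limit bucket is empty. Illustrative only.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def take(self, n=1.0):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

class Session:
    def __init__(self, reserve_iops, limit_iops):
        self.reserve = TokenBucket(reserve_iops, burst=reserve_iops)
        self.limit = TokenBucket(limit_iops, burst=limit_iops)

    def classify(self):
        """'reserved' I/O is always dispatched; 'spare' I/O only when the
        device has leftover throughput; None means the limit cap is hit."""
        if not self.limit.take():
            return None
        return "reserved" if self.reserve.take() else "spare"

s = Session(reserve_iops=100, limit_iops=400)
print(s.classify())    # 'reserved' while reserve tokens remain
```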

Patent
Gerrit Huizenga1
07 Aug 2006
TL;DR: A technique for managing application workloads in a data processing system includes establishing a reserve resource allocation of a data processing resource for use by a data processing application that needs to utilize the resource.
Abstract: A technique for managing application workloads in a data processing system includes establishing a reserve resource allocation of a data processing resource for use by a data processing application having a need to utilize the resource, sharing a portion of the resource that is not part of the reserve resource allocation or otherwise utilized by the application with other applications, allowing the application to consume the reserve resource allocation when the application desires to increase its utilization of the resource, and replenishing the reserve resource allocation from the sharable portion of the resource as the application consumes the reserve resource allocation. Advantageously, the reserve resource allocation can be effectively managed so that the data processing resource is available to the application in order to reduce application latency without depriving other applications of the ability to use the resource.

Patent
20 Jul 2006
TL;DR: In this article, a scheduler parameter for a maximum allocation of capacity of a shared resource to a consumer for a scheduling interval is defined; when the consumer completely utilizes its allocated capacity during a given scheduling interval, the scheduler immediately increases the allocation to the predefined maximum for the next interval.
Abstract: A method comprises defining a scheduler parameter for a maximum allocation of capacity of a shared resource to a consumer for a scheduling interval. Utilization of an allocated capacity of the shared resource by the consumer during a given scheduling interval is measured, and when the allocated capacity of the shared resource is completely utilized by the consumer during the given scheduling interval, the scheduler increases the allocated capacity of the shared resource to the defined maximum allocation for the consumer for a next scheduling interval. Thus, rather than gradually increasing the allocation of capacity over many intervals, the scheduler immediately increases the allocation to a predefined maximum amount in response to an allocated amount of capacity being completely utilized during a scheduling interval.
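
The described behavior can be expressed in a few lines. The names below are hypothetical; the sketch only shows the "jump straight to the maximum" policy contrasted with a gradual ramp, not the patent's full scheduler.

```python
# Sketch of the described scheduler behavior (hypothetical names): when a
# consumer fully uses its current allocation in an interval, its allocation
# for the next interval jumps straight to the configured maximum instead of
# being increased gradually.

def next_allocation(current_alloc, used, max_alloc):
    """Return the allocation for the next scheduling interval."""
    if used >= current_alloc:          # allocation completely utilized
        return max_alloc               # jump immediately to the maximum
    return current_alloc               # otherwise keep the current allocation

print(next_allocation(current_alloc=10, used=10, max_alloc=50))   # 50
print(next_allocation(current_alloc=10, used=4,  max_alloc=50))   # 10
```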

Patent
13 Oct 2006
TL;DR: In this paper, a real-time integrated information sharing and telecommunications collaboration system is disclosed, which includes at least one central server to create, store, display, edit, distribute, share, control and archive voice, data, video and images with a plurality of simultaneous wireless and wireline remote display devices.
Abstract: A real-time integrated information sharing and telecommunications collaboration system is disclosed. The system includes at least one central server to create, store, display, edit, distribute, share, control and archive voice, data, video and images with a plurality of simultaneous wireless and wireline remote display devices. The system includes at least one central server that monitors, controls and protects voice, data, video and image communications to, from and between display devices through encrypted token based security identifiers. The sharing of information and communication data packets between the display devices is contingent upon permissions assigned to individual human or machine end users. All data and communications, including the encrypted token based security identifiers, may be stored simultaneously or individually within the central server, display device, or a third-party remote storage device, whereby each or all may reside behind additional security systems and firewalls at a plurality of locations. All voice, data, video and images are seamlessly integrated through one or a combination of communications paths including, but not limited to, the Public Switched Telephone Network, World Wide Web, Internet, Wireless Wide Area Network (WWAN), Wide Area Network (WAN), Local Area Network (LAN), satellite, land mobile radio, WiFi, Worldwide Interoperability for Microwave Access (WiMAX), broadband over powerlines and other wireline and wireless networks.

Journal ArticleDOI
Anees Shaikh1, Sambit Sahu1, Marcel-Catalin Rosu1, Michael John Shea1, Debanjan Saha1 
TL;DR: A prototype implementation of a service platform for online games that offers integration using open standards and off-the-shelf software and embraces virtualization and simplification to enable sharing resources across games.
Abstract: A shared infrastructure, based on emerging on demand computing models, that supports multiple games offers an attractive option for large-scale multiplayer online game providers who want to avoid the risk of investing in dedicated resources. In this paper, we describe a prototype implementation of a service platform for online games. The platform design follows the on demand computing paradigm. It offers integration using open standards and off-the-shelf software and embraces virtualization and simplification to enable sharing resources across games. We describe our experience with identifying appropriate performance metrics for provisioning game servers and with implementing reusable platform components that provide useful functionality for a variety of games.

Patent
26 Jul 2006
TL;DR: In this paper, the authors propose a method for determining an amount to which resource usage by a driver domain on behalf of a given virtual machine (VM) is to be limited.
Abstract: A method comprises determining an amount to which resource usage by a driver domain on behalf of a given virtual machine (VM) is to be limited. The method further comprises controlling the resource usage by the driver domain on behalf of the given VM so as not to exceed the determined amount. A system comprises a VM, a driver domain, and a shared resource that is usable by the VM and by the driver domain. The system further comprises a controller for limiting usage of the shared resource by the driver domain on behalf of the VM to prevent the usage of the shared resource by the driver domain on behalf of the VM from exceeding a determined amount.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a method for determining how much to charge users of a communication network when they share bandwidth, which can be employed either when a network owner wishes to sell bandwidth for a specified period of time to a number of different users, or when users cooperate to build a network to be shared among themselves.
Abstract: We propose a method for determining how much to charge users of a communication network when they share bandwidth. Our approach can be employed either when a network owner wishes to sell bandwidth for a specified period of time to a number of different users, or when users cooperate to build a network to be shared among themselves. Our proposed contract and balancing mechanism can mediate between rapidly fluctuating prices and the longer time scales over which bandwidth contracts may be traded. An advantage of the process is that it avoids perverse incentives for a capacity provider to increase congestion.

Proceedings ArticleDOI
25 Sep 2006
TL;DR: A market-based SLA coordination mechanism, based on the well-known contract net protocol, is presented; it allows resource owners a finer degree of control over resource allocation and lets superschedulers bid for SLA contracts in the contract net, with a focus on completing a job within a user-specified deadline.
Abstract: The Service Level Agreement (SLA) based grid superscheduling approach promotes coordinated resource sharing. Superscheduling is facilitated between administratively and topologically distributed grid sites via grid schedulers such as Resource brokers and workflow engines. In this work, we present a market-based SLA coordination mechanism, based on a well known contract net protocol. The key advantages of our approach are that it allows: (i) resource owners to have a finer degree of control over the resource allocation, which is not possible with traditional mechanisms; and (ii) superschedulers to bid for SLA contracts in the contract net, with a focus on completing a job within a user-specified deadline. In this work, we use simulation to show the effectiveness of our proposed approach.

Proceedings ArticleDOI
07 Jun 2006
TL;DR: This paper proposes a role-based access management framework to enable secure resource sharing, especially focusing on the digital information sharing in the heterogeneous scientific collaboration environments, and designs and implements a prototype system based on a peer-to-peer information sharing toolkit developed by Lawrence Berkeley National Laboratory.
Abstract: Under scientific collaborations, resource sharing tends to be highly dynamic and often ad hoc. The dynamic characteristics and sharing patterns of ad-hoc collaborative sharing impose a need for comprehensive and flexible approaches to reflect and cope with the unique access control requirements associated with the ad-hoc collaboration. In this paper, we propose a role-based access management framework to enable secure resource sharing, especially focusing on digital information sharing in heterogeneous scientific collaboration environments. Our framework incorporates a role-based approach to address distributed access control, delegation and dissemination control involved in resource sharing within such environments. A set of XACML-based policy schemas is proposed to specify policies on our framework. To demonstrate the feasibility of our framework, we design and implement a proof-of-concept prototype system called ShareEnabler, which is based on a peer-to-peer information sharing toolkit developed by Lawrence Berkeley National Laboratory.

Journal ArticleDOI
01 Feb 2006
TL;DR: This paper describes the underlying analytical and algorithmic substrates, based on randomization, for job distribution, replication, monitoring, aggregation, and oblivious resource sharing and communication between participating hosts, and supports its claims of robustness and scalability analytically with high probabilistic guarantees.
Abstract: Motivated by the needs and success of projects such as SETI@home and genome@home, we propose an architecture for a sustainable large-scale peer-to-peer environment for distributed cycle sharing among Internet hosts. Such networks are characterized by highly dynamic state due to high arrival and departure rates. This makes it difficult to build and maintain structured networks and to use state-based resource allocation techniques. We build our system to work in an environment similar to current file-sharing networks such as Gnutella and Freenet. In doing so, we are able to leverage vast network resources while providing resilience to random failures, low network overhead, and an open architecture for resource brokering. This paper describes the underlying analytical and algorithmic substrates based on randomization for job distribution, replication, monitoring, aggregation and oblivious resource sharing and communication between participating hosts. We support our claims of robustness and scalability analytically with high probabilistic guarantees. Our algorithms do not introduce any state dependencies, and hence are resilient to dynamic node arrivals, departures, and failures. We support all analytical claims with a detailed simulation-based evaluation of our distributed framework.