
Showing papers on "Shared resource published in 2010"


Proceedings ArticleDOI
14 Mar 2010
TL;DR: The results show that even though the data center network is lightly utilized, virtualization can still cause significant throughput instability and abnormal delay variations.
Abstract: Cloud computing services allow users to lease computing resources from large scale data centers operated by service providers. Using cloud services, users can deploy a wide variety of applications dynamically and on-demand. Most cloud service providers use machine virtualization to provide flexible and cost-effective resource sharing. However, few studies have investigated the impact of machine virtualization in the cloud on networking performance. In this paper, we present a measurement study to characterize the impact of virtualization on the networking performance of the Amazon Elastic Compute Cloud (EC2) data center. We measure the processor sharing, packet delay, TCP/UDP throughput and packet loss among Amazon EC2 virtual machines. Our results show that even though the data center network is lightly utilized, virtualization can still cause significant throughput instability and abnormal delay variations. We discuss the implications of our findings on several classes of applications.

720 citations


Journal ArticleDOI
13 Mar 2010
TL;DR: This study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling, and finds a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware.
Abstract: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resources can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables us to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but also contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.
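The classification-and-scheduling idea above can be sketched in a few lines. The following toy scheduler is an illustration, not the paper's algorithm; the contention metric and all numbers are hypothetical. It uses a proxy such as the last-level-cache miss rate and greedily spreads the most memory-intensive threads across separate cache domains:

```python
# Illustrative sketch (not the paper's exact algorithm): sort threads by a
# contention proxy (e.g. LLC miss rate, hypothetical units) and place the
# most intensive thread on the currently least-loaded cache domain.

def assign_threads(miss_rates, num_domains):
    """miss_rates: {thread_id: misses_per_1k_instructions}.
    Returns {domain_index: [thread_ids]}, balancing aggregate intensity."""
    domains = {d: [] for d in range(num_domains)}
    load = {d: 0.0 for d in range(num_domains)}
    for tid in sorted(miss_rates, key=miss_rates.get, reverse=True):
        d = min(load, key=load.get)   # least-loaded domain so far
        domains[d].append(tid)
        load[d] += miss_rates[tid]
    return domains

threads = {"A": 40.0, "B": 35.0, "C": 5.0, "D": 2.0}
print(assign_threads(threads, 2))  # {0: ['A', 'D'], 1: ['B', 'C']}
```

The key property is that the two most contention-heavy threads ("A" and "B") never share a cache domain.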

532 citations


Proceedings ArticleDOI
18 Apr 2010
TL;DR: The evaluation results of the proposed mode selection procedure show that it enables a much more reliable device-to-device communication with limited interference to the cellular network compared to simpler mode selection procedures.
Abstract: Device-to-Device communication underlaying a cellular network enables local services with limited interference to the cellular network. In this paper we study the optimal selection of possible resource sharing modes with the cellular network in a single cell. Based on the learning from the single-cell studies we propose a mode selection procedure for a multi-cell environment. Our evaluation of the proposed procedure shows that it enables much more reliable device-to-device communication with limited interference to the cellular network, compared to simpler mode selection procedures. A well-performing and practical mode selection is critical to enable the adoption of underlay device-to-device communication in cellular networks.
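As a rough illustration of mode selection (not the paper's procedure; the mode names, SINR values, and the Shannon-rate criterion are assumptions made for this sketch), one can compare candidate sharing modes by their estimated sum rate and pick the best:

```python
# Toy mode selection: choose, per device pair, the resource-sharing mode with
# the highest estimated sum rate (Shannon capacity from linear-scale SINR).
# Modes and numbers are illustrative only.
from math import log2

def rate(sinr):
    return log2(1 + sinr)

def select_mode(sinrs):
    """sinrs: {mode: list of per-link SINRs (linear scale)}."""
    return max(sinrs, key=lambda m: sum(rate(s) for s in sinrs[m]))

candidates = {
    "cellular":      [8.0, 8.0],   # both links relayed via the base station
    "d2d_reuse":     [30.0, 5.0],  # direct link plus an interfered cellular link
    "d2d_dedicated": [30.0],       # direct link on dedicated resources
}
print(select_mode(candidates))  # d2d_reuse
```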

476 citations


Journal ArticleDOI
13 Mar 2010
TL;DR: The technique, Fairness via Source Throttling (FST), estimates the unfairness in the entire shared memory system and throttles down cores causing unfairness, thereby eliminating the need for and complexity of developing fairness mechanisms for each individual resource.
Abstract: Cores in a chip-multiprocessor (CMP) system share multiple hardware resources in the memory subsystem. If resource sharing is unfair, some applications can be delayed significantly while others are unfairly prioritized. Previous research proposed separate fairness mechanisms in each individual resource. Such resource-based fairness mechanisms implemented independently in each resource can make contradictory decisions, leading to low fairness and loss of performance. Therefore, a coordinated mechanism that provides fairness in the entire shared memory system is desirable. This paper proposes a new approach that provides fairness in the entire shared memory system, thereby eliminating the need for and complexity of developing fairness mechanisms for each individual resource. Our technique, Fairness via Source Throttling (FST), estimates the unfairness in the entire shared memory system. If the estimated unfairness is above a threshold set by system software, FST throttles down cores causing unfairness by limiting the number of requests they can inject into the system and the frequency at which they do. As such, our source-based fairness control ensures fairness decisions are made in tandem in the entire memory system. FST also enforces thread priorities/weights, and enables system software to enforce different fairness objectives and fairness-performance tradeoffs in the memory system. Our evaluations show that FST provides the best system fairness and performance compared to four systems with no fairness control and with state-of-the-art fairness mechanisms implemented in both shared caches and memory controllers.
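The FST control loop can be caricatured as follows. This sketch is illustrative only: the real mechanism estimates per-core slowdowns with hardware support, whereas here they are supplied directly, and the threshold, throttling step, and budget values are invented:

```python
# Hedged sketch in the spirit of FST: if system unfairness (max/min estimated
# slowdown) exceeds a software-set threshold, cut the request-injection budget
# of the least-slowed core (the likely aggressor).

def throttle(slowdowns, budgets, threshold=1.4, step=0.8):
    """slowdowns: {core: estimated_shared_time / estimated_alone_time}.
    budgets:   {core: max outstanding memory requests} (hypothetical knob)."""
    unfairness = max(slowdowns.values()) / min(slowdowns.values())
    if unfairness > threshold:
        aggressor = min(slowdowns, key=slowdowns.get)
        budgets[aggressor] = max(1, int(budgets[aggressor] * step))
    return unfairness, budgets

u, b = throttle({"core0": 2.1, "core1": 1.1}, {"core0": 32, "core1": 32})
print(u, b)  # unfairness ~1.91 > 1.4, so core1's budget drops from 32 to 25
```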

321 citations


Proceedings ArticleDOI
05 Jul 2010
TL;DR: This paper outlines the vision of, and experiences with, creating a Social Storage Cloud, looking specifically at possible market mechanisms that could be used to create a dynamic Cloud infrastructure in a Social network environment.
Abstract: With the increasingly ubiquitous nature of Social networks and Cloud computing, users are starting to explore new ways to interact with, and exploit these developing paradigms. Social networks are used to reflect real world relationships that allow users to share information and form connections between one another, essentially creating dynamic Virtual Organizations. We propose leveraging the pre-established trust formed through friend relationships within a Social network to form a dynamic“Social Cloud”, enabling friends to share resources within the context of a Social network. We believe that combining trust relationships with suitable incentive mechanisms (through financial payments or bartering) could provide much more sustainable resource sharing mechanisms. This paper outlines our vision of, and experiences with, creating a Social Storage Cloud, looking specifically at possible market mechanisms that could be used to create a dynamic Cloud infrastructure in a Social network environment.

213 citations


Proceedings ArticleDOI
12 Apr 2010
TL;DR: The worst-case completion (response) time of real-time tasks is analyzed for the case where time division multiple access (TDMA) policies are applied for resource arbitration, for a given TDMA arbiter.
Abstract: Modern computing systems have adopted multicore architectures and multiprocessor systems on chip (MPSoCs) for accommodating the increasing demand on computation power. However, performance boosting is constrained by shared resources, such as buses, main memory, DMA, etc. This paper analyzes the worst-case completion (response) time for real-time tasks when time division multiple access (TDMA) policies are applied for resource arbitration. Real-time tasks execute periodically on a processing element and are constituted by sequential superblocks. A superblock is characterized by its accesses to a shared resource and its computation time. We explore three models of accessing shared resources: (1) dedicated access model, in which accesses happen only at the beginning and the end of a superblock, (2) general access model, in which accesses could happen anytime during the execution of a superblock, and (3) hybrid access model, which combines the dedicated and general access models. We present a framework to analyze the worst-case completion time of real-time tasks (superblocks) under these three access models, for a given TDMA arbiter. We compare the timing analysis of the three proposed models for a real-world application.
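For the dedicated access model, a simplified version of the worst-case bound can be written down directly: in the worst case every access arrives just after the core's TDMA slot has closed and must wait almost a full arbitration cycle. The function below is a sketch under that assumption (each access fits in one slot and is served one per slot), not the paper's full analysis:

```python
# Simplified worst-case response-time bound for one superblock under TDMA.
# All times in the same unit (e.g. bus cycles); numbers are illustrative.

def wcrt_superblock(exec_time, num_accesses, access_time, slot_len, cycle_len):
    """Worst case: each access waits (cycle_len - slot_len) for its next slot,
    then takes access_time. Assumes access_time <= slot_len."""
    assert access_time <= slot_len
    per_access = (cycle_len - slot_len) + access_time
    return exec_time + num_accesses * per_access

print(wcrt_superblock(exec_time=100, num_accesses=4,
                      access_time=2, slot_len=5, cycle_len=20))  # 168
```

With 4 accesses, each paying up to 15 cycles of arbitration delay plus 2 cycles of service, the bound is 100 + 4 × 17 = 168.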

112 citations


Patent
18 Oct 2010
TL;DR: In this paper, a system and method are presented for establishing inter-Cloud resource sharing agreements and policies, so that dynamic expansion/contraction of Cloud resource requests can be addressed seamlessly without physical build-out of the primary Cloud infrastructure, and so that the need for additional resources, or the offer to provide them, can be brokered through an established marketplace.
Abstract: The present invention provides a system and method for establishing inter-Cloud resource sharing agreements and policies such that dynamic expansion/contraction of Cloud resource requests can be seamlessly addressed without requiring physical build-out of the primary Cloud infrastructure and advertising the need for additional resources or the offer to provide additional resources can be brokered through an established marketplace. The financial transaction will support a symbiotic bilateral fair-share method that better aligns with an alternating supplier/consumer business model. Using this system and method will decrease the amount of time needed to respond to a given Cloud service request while advantaging a resource sharing model amongst established Cloud providers.

102 citations


Patent
22 Jun 2010
TL;DR: A system and method for pairing computing devices using an authentication protocol that allows an initiating computing device to gain access to a secure, encrypted network of a target computing device is described in this article.
Abstract: A system and method are disclosed for pairing computing devices using an authentication protocol that allows an initiating computing device to gain access to a secure, encrypted network of a target computing device.

96 citations


Proceedings ArticleDOI
17 Aug 2010
TL;DR: A novel pattern-driven application consolidation (PAC) system is presented that achieves efficient resource sharing in virtualized cloud computing infrastructures, using signal processing techniques to dynamically discover significant patterns, called signatures, of different applications and hosts.
Abstract: To reduce cloud system resource cost, application consolidation is a must. In this paper, we present a novel pattern driven application consolidation (PAC) system to achieve efficient resource sharing in virtualized cloud computing infrastructures. PAC employs signal processing techniques to dynamically discover significant patterns called signatures of different applications and hosts. PAC then performs dynamic application consolidation based on the extracted signatures. We have implemented a prototype of the PAC system on top of the Xen virtual machine platform and tested it on the NCSU Virtual Computing Lab. We have tested our system using RUBiS benchmarks, Hadoop data processing systems, and IBM System S stream processing system. Our experiments show that 1) PAC can efficiently discover repeating resource usage patterns in the tested applications; 2) Signatures can reduce resource prediction errors by 50-90% compared to traditional coarse-grained schemes; 3) PAC can improve application performance by up to 50% when running a large number of applications on a shared cluster.
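The signature-extraction step can be illustrated with a toy example. PAC uses signal-processing techniques; the sketch below substitutes a plain autocorrelation scan (an assumption for this illustration, not PAC's actual method) to recover the dominant period of a synthetic resource-usage trace:

```python
# Toy signature extraction: find the dominant repeating period of a
# resource-usage trace via autocorrelation (stdlib only; illustrative).

def dominant_period(trace, min_lag=2):
    n = len(trace)
    mean = sum(trace) / n
    dev = [x - mean for x in trace]

    def autocorr(lag):
        # Unnormalized autocorrelation of the mean-removed trace at this lag.
        return sum(dev[i] * dev[i + lag] for i in range(n - lag))

    return max(range(min_lag, n // 2), key=autocorr)

# Synthetic CPU-usage trace (percent) repeating every 8 samples.
trace = [10, 12, 30, 80, 75, 25, 12, 10] * 6
print(dominant_period(trace))  # 8
```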

95 citations


Proceedings ArticleDOI
05 Jul 2010
TL;DR: A measurement-based analysis is presented of the performance impact of co-locating applications in a virtualized cloud, in terms of throughput and resource sharing effectiveness, including the impact of idle instances on applications running concurrently on the same physical host.
Abstract: Virtualization is a key technology for cloud based data centers to implement the vision of infrastructure as a service (IaaS) and to promote effective server consolidation and application consolidation. However, current implementations of virtual machine monitors do not provide sufficient performance isolation to guarantee the effectiveness of resource sharing, especially when the applications running on multiple virtual machines of the same physical machine are competing for computing and communication resources. In this paper, we present our performance measurement study of network I/O applications in virtualized cloud. We focus our measurement based analysis on performance impact of co-locating applications in a virtualized cloud in terms of throughput and resource sharing effectiveness, including the impact of idle instances on applications that are running concurrently on the same physical host. Our results show that by strategically co-locating network I/O applications, performance improvement for cloud consumers can be as high as 34%, and the cloud providers can achieve over 40% performance gain.

90 citations


Proceedings ArticleDOI
08 Mar 2010
TL;DR: A method is proposed that captures the request distances of multiple shared resource accesses by single tasks, and also by multiple tasks that are dynamically scheduled on the same processor, and is extended to address cache misses that do not occur at predefined times but surface dynamically during the execution of the tasks.
Abstract: Predicting timing behavior is key to reliable real-time system design and verification, but becomes increasingly difficult for current multiprocessor systems on chip. The integration of formerly separate functionality into a single multicore system introduces new inter-core timing dependencies, resulting from the common use of the now shared resources. In order to conservatively bound the delay due to the shared resource accesses, upper bounds on the potential amount of conflicting requests from other processors are required. This paper proposes a method that captures the request distances of multiple shared resource accesses by single tasks and also by multiple tasks that are dynamically scheduled on the same processor. Unlike previous work, we acknowledge the fact that on a single processor, tasks will not actually execute in parallel, but in alternation. This consideration leads to a more accurate load model. In a final step, the approach is extended to also address cache misses that do not occur at predefined times but surface dynamically during the execution of the tasks.

Book ChapterDOI
01 Jan 2010
TL;DR: This paper investigates a few major security issues with cloud computing and the existing countermeasures to those security challenges in the world of cloud computing.
Abstract: Cloud Computing is one of the biggest buzzwords in the computer world these days. It allows resource sharing that includes software, platform and infrastructure by means of virtualization. Virtualization is the core technology behind cloud resource sharing. This environment strives to be dynamic, reliable, and customizable with a guaranteed quality of service. Security is as much of an issue in the cloud as it is anywhere else. Different people hold different points of view on cloud computing. Some believe it is unsafe to use the cloud. Cloud vendors go out of their way to ensure security. This paper investigates a few major security issues with cloud computing and the existing countermeasures to those security challenges in the world of cloud computing.

Proceedings ArticleDOI
13 Sep 2010
TL;DR: Based on extensive experiments and measurements, accurate power and performance models for a high performance multi-core server system with virtualization are presented.
Abstract: Virtualization has become a very important technology which has been adopted in many enterprise computing systems and data centers. Virtualization makes resource management and maintenance easier, and can decrease energy consumption through resource consolidation. To develop and employ sophisticated resource management, accurate power and performance models of the hardware resources in a virtualized environment are needed. Based on extensive experiments and measurements, this paper presents accurate power and performance models for a high performance multi-core server system with virtualization.
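A common baseline for such models (used here purely as an illustration; the paper's fitted models are more detailed) is power that is linear in CPU utilization, P = P_idle + k·util, fitted to measurements by ordinary least squares:

```python
# Illustrative power model: fit P = P_idle + k * util by least squares
# (stdlib only). Measurement values below are synthetic, not the paper's.

def fit_linear(utils, powers):
    n = len(utils)
    mu, mp = sum(utils) / n, sum(powers) / n
    k = (sum((u - mu) * (p - mp) for u, p in zip(utils, powers))
         / sum((u - mu) ** 2 for u in utils))
    return mp - k * mu, k  # (P_idle, watts per % utilization)

# Synthetic server measurements: 120 W idle, +0.9 W per % CPU utilization.
utils = [0, 25, 50, 75, 100]
powers = [120.0, 142.5, 165.0, 187.5, 210.0]
p_idle, k = fit_linear(utils, powers)
print(round(p_idle, 1), round(k, 3))  # 120.0 0.9
```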

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the emergence and shaping of shared services in the context of government reforms, and identify important preconditions for shared service emergence, including cost pressure as a motive, the existence of key actors as well as the presence of prior cooperation.
Abstract: Purpose – The paper seeks to investigate the shared services phenomenon in the context of government reforms. It especially aims to address the emergence and shaping of shared services. The paper seeks to develop the notion of shared service centres (SSCs) and shared service networks (SSNs). Design/methodology/approach – An interview‐ and document analysis‐based multiple case study was conducted in Germany. The qualitative analysis covered two shared service projects on the local government level. Findings – Important preconditions for shared service emergence are identified, including cost pressure as a motive, the existence of key actors as well as the existence of prior cooperation. Moreover, the paper provides evidence that the structure of previous cooperation influences whether shared services are organized in a centralised (SSC) or decentralised (SSN) format. Research limitations/implications – The case selection is a possible limitation of the presented study. The selected cases give an insight…

Journal ArticleDOI
01 May 2010
TL;DR: This survey supplements and complements existing surveys by reviewing, comparing, and highlighting existing research initiatives on applying bargaining (negotiation) as a mechanism to Grid resource management.
Abstract: Since Grid computing systems involve large-scale resource sharing, resource management is central to their operations. Whereas there are more Grid resource management systems adopting auction, commodity market, and contract-net (tendering) models, this survey supplements and complements existing surveys by reviewing, comparing, and highlighting existing research initiatives on applying bargaining (negotiation) as a mechanism to Grid resource management. The contributions of this paper are: 1) discussing the motivations for considering bargaining models for Grid resource allocation; 2) discussing essential design considerations such as modeling devaluation of Grid resources, considering market dynamics, relaxing bargaining terms, and co-allocation of resources when building Grid negotiation mechanisms; 3) reviewing the strategies and protocols of state-of-the-art Grid negotiation mechanisms; 4) providing detailed comparisons and analyses on how state-of-the-art Grid negotiation mechanisms address the design considerations mentioned in 3); and 5) suggesting possible new directions.
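A toy alternating-offers round of the kind these bargaining mechanisms build on might look as follows (purely illustrative; the prices, concession rates, and deadline are invented, and real Grid negotiation protocols are far richer):

```python
# Toy alternating-offers bargaining: provider and consumer concede toward
# each other until their offers cross or a round limit (deadline) expires.

def negotiate(ask, bid, ask_floor, bid_ceiling, concession=0.1, rounds=20):
    """Prices per CPU-hour (hypothetical units). Returns agreed price or None."""
    for _ in range(rounds):
        if bid >= ask:                                  # offers crossed: deal
            return round((ask + bid) / 2, 4)
        ask = max(ask_floor, ask * (1 - concession))    # provider concedes down
        bid = min(bid_ceiling, bid * (1 + concession))  # consumer concedes up
    return None                                         # deadline, no agreement

print(negotiate(ask=10.0, bid=6.0, ask_floor=7.0, bid_ceiling=9.0))  # 7.638
```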

Book ChapterDOI
19 Jun 2010
TL;DR: This work takes a topology-aware approach to on-chip QOS and proposes to segregate shared resources into dedicated, QOS-enabled regions of the chip via a combination of topology and operating system support.
Abstract: Power limitations and complexity constraints demand modular designs, such as chip multiprocessors (CMPs) and systems-on-chip (SOCs). Today's CMPs feature up to a hundred discrete cores, with greater levels of integration anticipated in the future. Supporting effective on-chip resource sharing for cloud computing and server consolidation necessitates CMP-level quality-of-service (QOS) for performance isolation, service guarantees, and security. This work takes a topology-aware approach to on-chip QOS. We propose to segregate shared resources into dedicated, QOS-enabled regions of the chip. We then eliminate QOS-related hardware and its associated overheads from the rest of the die via a combination of topology and operating system support. We evaluate several topologies for the QOS-enabled regions, including a new organization called Destination Partitioned Subnets (DPS) which uses a light-weight dedicated network for each destination node. DPS matches or bests other topologies with comparable bisection bandwidth in performance, area- and energy-efficiency, fairness, and preemption resilience.

Journal ArticleDOI
TL;DR: Overall, the results of the study suggest that MeshChord can be successfully utilized for implementing file/resource sharing applications in wireless mesh networks.
Abstract: Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure, and the 1-hop broadcast nature of wireless communication, which give rise to the notions of location awareness and MAC layer cross-layering. Through extensive packet-level simulations, we investigate the separate effects of location awareness and MAC layer cross-layering, and of their combination, on the performance of the P2P application. The combined protocol, MeshChord, reduces message overhead by as much as 40 percent with respect to the basic Chord design, while at the same time improving the information retrieval performance. Notably, differently from the basic Chord design, our proposed MeshChord specialization displays information retrieval performance resilient to the presence of both CBR and TCP background traffic. Overall, the results of our study suggest that MeshChord can be successfully utilized for implementing file/resource sharing applications in wireless mesh networks.
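For reference, the wired Chord baseline that MeshChord specializes can be sketched minimally: keys and node IDs share one m-bit identifier space, and a key is stored at its successor, the first node clockwise from the key's identifier. The identifier size and names below are illustrative:

```python
# Minimal Chord-style ring lookup (the wired baseline, not MeshChord itself).
import hashlib

M = 8  # identifier space of 2**8 ids (tiny, for illustration)

def chord_id(name):
    """Map a node or resource name into the m-bit identifier space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    """First node id >= key_id on the ring, wrapping around if needed."""
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]  # wrap past the top of the identifier space

nodes = [chord_id(f"node{i}") for i in range(5)]
key = chord_id("some-file.mp3")
print(sorted(nodes), key, successor(nodes, key))
```

MeshChord keeps this ring structure but adds location-aware ID assignment and MAC-layer cross-layering on top of it.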

Posted Content
TL;DR: This paper presents a decentralized event-triggered implementation, over sensor/actuator networks, of centralized nonlinear controllers, reducing network traffic and the energy expenditure of battery-powered wireless sensor nodes.
Abstract: In recent years we have witnessed a move of the major industrial automation providers into the wireless domain. While most of these companies already offer wireless products for measurement and monitoring purposes, the ultimate goal is to be able to close feedback loops over wireless networks interconnecting sensors, computation devices, and actuators. In this paper we present a decentralized event-triggered implementation, over sensor/actuator networks, of centralized nonlinear controllers. Event-triggered control has been recently proposed as an alternative to the more traditional periodic execution of control tasks. In a typical event-triggered implementation, the control signals are kept constant until the violation of a condition on the state of the plant triggers the re-computation of the control signals. The possibility of reducing the number of re-computations, and thus of transmissions, while guaranteeing desired levels of performance makes event-triggered control very appealing in the context of sensor/actuator networks. In these systems the communication network is a shared resource and event-triggered implementations of control laws offer a flexible way to reduce network utilization. Moreover reducing the number of times that a feedback control law is executed implies a reduction in transmissions and thus a reduction in energy expenditures of battery powered wireless sensor nodes.
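The event-triggered principle can be shown on a scalar toy plant (an illustration with invented gains and thresholds, not the paper's decentralized scheme): the control signal is recomputed and retransmitted only when the state has drifted more than a threshold from the value used at the last update:

```python
# Event-triggered control on an unstable scalar plant x' = a*x + u,
# stabilized by u = -k * x_held, where x_held is refreshed only on events
# |x - x_held| > eps. Forward-Euler simulation; all constants illustrative.

def simulate(x0=1.0, a=1.0, k=2.0, dt=0.01, steps=500, eps=0.05):
    x, x_held, events = x0, x0, 0
    for _ in range(steps):
        if abs(x - x_held) > eps:   # event: re-sample state, re-send control
            x_held = x
            events += 1
        u = -k * x_held             # control held constant between events
        x += dt * (a * x + u)
    return x, events

x_final, n_events = simulate()
print(x_final, n_events)  # state near 0, with far fewer updates than steps
```

The point of the sketch is the update count: the state is driven toward zero with a few dozen control transmissions instead of one per sampling period.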

Patent
21 Jul 2010
TL;DR: In this article, a digital multimedia information transmission platform, which comprises an acquisition system, a manufacturing system, media resource system, management system, and a release system, is described.
Abstract: The invention discloses a digital multimedia information transmission platform, which comprises an acquisition system, a manufacturing system, a media resource system, a management system and a release system, wherein the acquisition system consists of control equipment, an outside network material receiving server and a foreign signal, studio signal and magnet tape material acquisition and collection subsystem; the manufacturing system consists of a program editing system, a program examination system, a background packing and synthesizing system and a resource manager; the media resource system consists of a media resource cataloging and searching work station, a transcoding server, a database server, a storage management and migration server and a system management working station; the management system consists of a uniform user identification system and a network management system; the release system serving as an external interface module of a multimedia center encrypts finished products in multiple formats and executes related release according to outside service demands; and the platform is an integrated production line of digital media contents and also a digital media resource comprehensive service system platform and can realize overall media resource sharing.

Patent
John Neystadt1, Nir Nice1
26 Feb 2010
TL;DR: In this paper, a particular method receives a resource access identifier associated with a shared computing resource and embeds the access identifier into a link to the shared resource, which is inserted into an information element, and an access control scheme is associated with the information element to generate a protected information element.
Abstract: Methods, systems, and computer-readable media are disclosed for access control. A particular method receives a resource access identifier associated with a shared computing resource and embeds the resource access identifier into a link to the shared resource. The link to the shared resource is inserted into an information element. An access control scheme is associated with the information element to generate a protected information element, and the protected information element is sent to a destination computing device.
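The patent does not prescribe a concrete access-control scheme; one common way such a protected link could be realized (a hypothetical sketch, with invented names, secret, and URL format) is an HMAC-signed URL, so that the embedded access identifier cannot be altered without invalidating the link:

```python
# Hypothetical realization of a protected link: embed the resource access
# identifier in the URL and sign (resource, access id) with a server-side key.
import hmac, hashlib, base64

SECRET = b"server-side-secret"  # held by the service, never put in the link

def _sign(resource_id, access_id):
    msg = f"{resource_id}:{access_id}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

def protected_link(resource_id, access_id):
    sig = _sign(resource_id, access_id)
    return f"https://share.example.com/{resource_id}?aid={access_id}&sig={sig}"

def verify(resource_id, access_id, sig):
    return hmac.compare_digest(sig, _sign(resource_id, access_id))

link = protected_link("doc42", "a1b2c3")
sig = link.split("sig=")[1]
print(verify("doc42", "a1b2c3", sig))    # True
print(verify("doc42", "tampered", sig))  # False
```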

Journal ArticleDOI
TL;DR: A novel approach is proposed for decentralized, cooperative workflow scheduling in a dynamic and distributed Grid resource sharing environment, built on a Distributed Hash Table based d-dimensional logical index space for resource discovery, coordination, and overall system decentralization.

Patent
16 Dec 2010
TL;DR: In this article, a networked solution offering a software-based service via networked architecture having a system landscape can discover a shared resource within the system landscape, for example by accessing a landscape directory comprising information about a plurality of shared resources available in the system.
Abstract: A networked solution offering a software-based service via a networked architecture having a system landscape can discover a shared resource within the system landscape, for example by accessing a landscape directory comprising information about a plurality of shared resources available in the system landscape. The information about the discovered shared resource can include a second networked solution within the system landscape that has previously configured the discovered shared resource. Configuration settings can be retrieved for the discovered shared resource from the second networked solution. Using the retrieved configuration settings, a shared resource-specific communication channel can be determined for the networked solution to access the discovered shared resource in a peer-to-peer manner. A resource type-specific application programming interface can be provided to the software-based service to enable consumption of the discovered shared resource by the software-based service. Related methods, systems, and articles of manufacture are described.

Book ChapterDOI
19 Sep 2010
TL;DR: A new service-oriented networked manufacturing model--cloud manufacturing, which is the combination of cloud computing and SOA, is proposed to support resource sharing and cooperative work between enterprises for global manufacturing.
Abstract: The emergence and rapid rise of cloud computing give manufacturing a new solution and a chance to realize resource sharing and cooperative work between enterprises for global manufacturing. This paper proposes a new service-oriented networked manufacturing model, cloud manufacturing, which is the combination of cloud computing and SOA. A resource sharing method in the cloud manufacturing environment is proposed to support resource sharing and cooperative work between enterprises. The description of manufacturing services and the business-driven method of building cloud manufacturing applications are introduced in detail. Finally, we draw conclusions and put forward future work.

Patent
01 Jul 2010
TL;DR: In this article, a broker server can facilitate more secure single sign-on by providing a single-use ticket to a client device that authenticates with the broker server, so that the client device can use this single use ticket to authenticate with a shared resource.
Abstract: Systems and methods for enhancing security of single sign-on are described. These systems and methods can reduce the amount of sensitive information stored on a client device while still providing single sign-on access to shared resources such as virtual desktops or Terminal Servers. For example, storage of authentication information on client devices can be avoided while still allowing client devices to connect to the shared resources. Instead, such information can be stored at a broker server that brokers connections from client devices to the shared resources. The broker server can facilitate more secure single sign-on by providing a single-use ticket to a client device that authenticates with the broker server. The client device can use this single-use ticket to authenticate with a shared resource.
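The single-use-ticket flow might be sketched as follows (hypothetical class and method names; the actual broker protocol is not specified at this level of detail in the abstract): the broker issues an unguessable ticket after authentication, and the shared resource redeems it exactly once, so no long-lived credential is stored on the client.

```python
# Sketch of a single-use ticket broker for single sign-on.
import secrets

class Broker:
    def __init__(self):
        self._tickets = {}  # ticket -> (user, resource)

    def issue(self, user, resource):
        """Called after the client authenticates with the broker."""
        ticket = secrets.token_urlsafe(32)  # unguessable, one-time value
        self._tickets[ticket] = (user, resource)
        return ticket

    def redeem(self, ticket, resource):
        """Called by the shared resource; pop() makes the ticket single-use."""
        entry = self._tickets.pop(ticket, None)
        return entry is not None and entry[1] == resource

broker = Broker()
t = broker.issue("alice", "desktop-7")
print(broker.redeem(t, "desktop-7"))  # True  (first use succeeds)
print(broker.redeem(t, "desktop-7"))  # False (ticket already consumed)
```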

Journal ArticleDOI
TL;DR: Contention for shared resources on multicore processors remains an unsolved problem in existing systems, despite the large body of research dedicated to this problem in the past, as discussed by the authors.
Abstract: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solution...

Journal ArticleDOI
TL;DR: A strategy and a model based on the use of services for the design of distributed knowledge discovery services are described, along with how Grid frameworks can be developed as a collection of services and used to build distributed data analysis tasks and knowledge discovery processes following the SOA model.
Abstract: Computer science applications are becoming more and more network centric, ubiquitous, knowledge intensive, and computing demanding. This trend will soon result in an ecosystem of pervasive applications and services that professionals and end-users can exploit everywhere. Recently, collections of IT services and applications, such as Web services and Cloud computing services, became available, opening the way to accessing computing services as public utilities, like water, gas and electricity. Key technologies for implementing that perspective are Cloud computing and Web services, the semantic Web and ontologies, pervasive computing, P2P systems, Grid computing, ambient intelligence architectures, data mining and knowledge discovery tools, Web 2.0 facilities, mashup tools, and decentralized programming models. In fact, it is mandatory to develop solutions that integrate some or many of those technologies to provide future knowledge-intensive software utilities. The Grid paradigm can represent a key component of the future Internet, a cyberinfrastructure for efficiently supporting that scenario. Grid and Cloud computing are evolved models of distributed computing and parallel processing technologies. The Grid is a distributed computing infrastructure that enables coordinated resource sharing within dynamic organizations consisting of individuals, institutions, and resources. In the area of Grid computing, a proposed approach in line with the trend outlined above is the Service-Oriented Knowledge Utilities (SOKU) model, which envisions the integrated use of a set of technologies regarded as a solution to the information, knowledge and communication needs of many knowledge-based industrial and business applications. The SOKU approach stems from the necessity of providing knowledge and processing capabilities to everybody, thus supporting the advent of a competitive knowledge-based economy.
Although the SOKU model is not yet implemented, Grids are increasingly equipped with data management tools, semantic technologies, complex workflows, data mining features and other Web intelligence approaches. Similar efforts are currently devoted to developing knowledge and intelligent Clouds. These technologies can help Grids and Clouds become strategic components for supporting pervasive knowledge-intensive applications and utilities. Grids were originally designed for dealing with problems involving large amounts of data and/or compute-intensive applications. Today, however, Grids have enlarged their horizon, as they are going to run business applications supporting consumers and end-users. To face those new challenges, Grid environments must support adaptive knowledge management and data analysis applications by offering resources, services, and decentralized data access mechanisms. In particular, according to the service-oriented architecture (SOA) model, data mining tasks and knowledge discovery processes can be delivered as services in Grid-based infrastructures. Through a service-based approach we can define integrated services for supporting distributed business intelligence tasks in Grids. Those services can address all the aspects that must be considered in data mining and knowledge discovery processes, such as data selection and transport, data analysis, knowledge model representation, and visualization.
We have worked in this direction to provide Grid-based architectures and services for distributed knowledge discovery, such as the Knowledge Grid, the Weka4WS toolkit, and mobile Grid services for data mining. Here we describe a strategy and a model based on the use of services for the design of distributed knowledge discovery services, and discuss how Grid frameworks such as those mentioned above can be developed as a collection of services and how they can be used to develop distributed data analysis tasks and knowledge discovery processes using the SOA model.
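The service-based delivery described above can be illustrated with a minimal, hypothetical sketch: an analysis task is published in a registry and invoked by a client that only knows the service name, not its implementation. The names `ServiceRegistry` and `mean_service` are illustrative and are not part of the Knowledge Grid or Weka4WS APIs.

```python
# Minimal sketch of the SOA idea: a data analysis task delivered as a
# named service that clients discover and invoke.

class ServiceRegistry:
    """Maps service names to callables (stand-in for a Grid service registry)."""
    def __init__(self):
        self._services = {}

    def publish(self, name, service):
        self._services[name] = service

    def lookup(self, name):
        return self._services[name]

def mean_service(data):
    """A trivial 'data analysis' service: the mean of a numeric dataset."""
    return sum(data) / len(data)

registry = ServiceRegistry()
registry.publish("analysis/mean", mean_service)

# The client locates the service by name and invokes it.
service = registry.lookup("analysis/mean")
result = service([2.0, 4.0, 6.0])
print(result)  # 4.0
```

In a real Grid framework the registry lookup and invocation would go through Web service interfaces rather than in-process calls, but the decoupling of client from implementation is the same.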

Patent
07 Jul 2010
TL;DR: In this article, a method for realizing resource sharing among terminals, a resource processing system and the terminals are proposed. The method comprises the following steps: after being connected with a second terminal, the first terminal acquires resource information shared by the second terminal; after receiving an instruction input by a user for operating a specific shared resource on the second terminal, the first terminal creates a virtual resource corresponding to the shared resource and sends an operation instruction input by the user for operating the virtual resource to the second terminal.
Abstract: The invention aims to provide a method for realizing resource sharing among terminals, a resource processing system and the terminals. The method comprises the following steps: after being connected with a second terminal, the first terminal acquires resource information shared by the second terminal; after receiving an instruction input by a user for operating a specific shared resource on the second terminal, the first terminal creates a virtual resource corresponding to the specific shared resource and sends an operation instruction input by the user for operating the virtual resource to the second terminal; after receiving the operation instruction, the second terminal operates the corresponding resource according to the operation instruction and then sends an operation result of the corresponding resource to the first terminal; and after receiving information of the operation result, the first terminal displays the operation result information of the corresponding resource. According to the method and the resource processing system, different terminals can share and use data and functions mutually without the support of third-party software.
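The virtual-resource mechanism above is essentially a proxy pattern: the first terminal holds a lightweight stand-in whose operations are forwarded to the terminal that owns the real resource. The sketch below is illustrative only; the class names, the `size` operation, and the sample resource are assumptions, not from the patent.

```python
# Hypothetical sketch of the virtual-resource idea: operations on the
# proxy (first terminal) are forwarded to the owner (second terminal).

class SecondTerminal:
    """Owns the real shared resources and executes forwarded operations."""
    def __init__(self):
        self.resources = {"photo.jpg": b"0123456789"}  # illustrative shared data

    def shared_resource_info(self):
        return list(self.resources)

    def execute(self, resource_name, operation):
        if operation == "size":
            return len(self.resources[resource_name])
        raise ValueError("unsupported operation")

class VirtualResource:
    """Proxy held by the first terminal; every operation is sent to the owner."""
    def __init__(self, owner, name):
        self._owner, self._name = owner, name

    def operate(self, operation):
        return self._owner.execute(self._name, operation)

owner = SecondTerminal()
names = owner.shared_resource_info()      # step 1: acquire shared resource info
proxy = VirtualResource(owner, names[0])  # step 2: create the virtual resource
print(proxy.operate("size"))              # steps 3-4: forward, get and show result (10)
```

In the patented scheme the forwarding would cross a real terminal-to-terminal connection rather than a method call, but the division of roles is the same.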

01 Jan 2010
TL;DR: The main focus of this dissertation is on developing topology-aware mapping algorithms for parallel applications with regular and irregular communication patterns; it proposes algorithms and techniques for automatic mapping of parallel applications to relieve application developers of this burden.
Abstract: Petascale machines with hundreds of thousands of cores are being built. These machines have varying interconnect topologies and large network diameters. Computation is cheap, and communication on the network is becoming the bottleneck for scaling of parallel applications. Network contention, specifically, is becoming an increasingly important factor affecting overall performance. The broad goal of this dissertation is performance optimization of parallel applications through reduction of network contention. Most parallel applications have a certain communication topology. Mapping of tasks in a parallel application, based on their communication graph, to the physical processors on a machine can potentially lead to performance improvements. Mapping the communication graph of an application onto the interconnect topology of a machine while trying to localize communication is the research problem under consideration. The farther different messages travel on the network, the greater the chance of resource sharing between messages, which can create contention on the networks commonly used today. Evaluative studies in this dissertation show that on IBM Blue Gene and Cray XT machines, message latencies can be severely affected under contention. Realizing this fact, application developers have started paying attention to the mapping of tasks to physical processors to minimize contention. Placement of communicating tasks on nearby physical processors can minimize the distance traveled by messages and reduce the chances of contention. Performance improvements through topology-aware placement for applications such as NAMD and OpenAtom are used to motivate this work. Building on these ideas, the dissertation proposes algorithms and techniques for automatic mapping of parallel applications to relieve the application developers of this burden. The effect of contention on message latencies is studied in depth to guide the design of mapping algorithms.
The hop-bytes metric is proposed for the evaluation of mapping algorithms as a better metric than the previously used maximum dilation metric. The main focus of this dissertation is on developing topology aware mapping algorithms for parallel applications with regular and irregular communication patterns. The automatic mapping framework is a suite of such algorithms with capabilities to choose the best mapping for a problem with a given communication graph. The dissertation also briefly discusses completely distributed mapping techniques which will be imperative for machines of the future.
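The hop-bytes metric mentioned above can be sketched concretely: for each message, multiply the bytes sent by the number of network hops between the sender's and receiver's processors, then sum over all messages. The 2D-mesh topology, the sample placements, and the function names below are illustrative assumptions, not taken from the dissertation.

```python
# Illustrative computation of the hop-bytes metric for two candidate
# mappings of a 4-task ring onto a 2D mesh (no wraparound links).

def manhattan_hops(a, b):
    """Hop distance between two coordinates on a 2D mesh without wraparound."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hop_bytes(messages, placement):
    """messages: (src_task, dst_task, nbytes); placement: task -> mesh coordinate."""
    return sum(nbytes * manhattan_hops(placement[src], placement[dst])
               for src, dst, nbytes in messages)

# Four tasks in a ring, each sending 100 bytes to its successor.
messages = [(0, 1, 100), (1, 2, 100), (2, 3, 100), (3, 0, 100)]
compact = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # neighbors adjacent
spread  = {0: (0, 0), 1: (3, 0), 2: (0, 3), 3: (3, 3)}  # tasks scattered

print(hop_bytes(messages, compact))  # 400: every message travels one hop
print(hop_bytes(messages, spread))   # 1800: messages travel 3 or 6 hops
```

Lower hop-bytes means messages occupy fewer links in aggregate, which is exactly why the metric correlates with reduced contention better than maximum dilation, which only looks at the single worst-stretched edge.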

Proceedings ArticleDOI
13 Jun 2010
TL;DR: The dedicated sequential model is proposed as the model of choice for time critical resource sharing multi-processor/multi-core systems and the relation of schedulability between different models is explored.
Abstract: Multi-processor and multi-core systems are becoming increasingly important in time critical systems. Shared resources, such as shared memory or communication buses, are used to share data and read sensors. We consider real-time tasks constituted by superblocks, which can be executed sequentially or by a time triggered static schedule. Three models to access shared resources are explored: (1) the dedicated access model, in which accesses happen only in dedicated phases, (2) the general access model, in which accesses could happen at any time, and (3) the hybrid access model, combining the dedicated and general access models. For resource access based on a Time Division Multiple Access (TDMA) protocol, we analyze the worst-case completion time for a superblock, derive worst-case response times for tasks and obtain the relation of schedulability between different models. We conclude by proposing the dedicated sequential model as the model of choice for time critical resource sharing multi-processor/multi-core systems.
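The flavor of the TDMA worst-case analysis can be conveyed with a deliberately simplified bound (not the exact analysis from the paper): in the worst case, each shared-resource access arrives just after the core's TDMA slot closes and must stall for the rest of the cycle before being served. All parameter names and the specific bound are assumptions for illustration.

```python
# Simplified, illustrative worst-case completion bound for one superblock
# under TDMA arbitration of the shared resource.

def worst_case_completion(exec_time, n_accesses, access_time, slot_len, cycle_len):
    """Pessimistic upper bound on a superblock's completion time.

    exec_time   -- pure computation time of the superblock
    n_accesses  -- number of shared-resource accesses it issues
    access_time -- time to complete one access once the slot is open
    slot_len    -- length of this core's TDMA slot
    cycle_len   -- length of the full TDMA cycle (all cores' slots)
    """
    per_access_wait = cycle_len - slot_len  # maximal stall: slot just closed
    return exec_time + n_accesses * (per_access_wait + access_time)

# Example: 10 time units of computation, 4 accesses of 1 unit each,
# a 2-unit slot in an 8-unit cycle.
print(worst_case_completion(10, 4, 1, 2, 8))  # 38
```

A bound of this shape makes the appeal of the dedicated access model visible: when accesses are confined to dedicated phases, fewer of them can each suffer the full `cycle_len - slot_len` stall, which tightens the analysis.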

Patent
12 Mar 2010
TL;DR: In this article, a computer-implemented method and computer program product for requesting resources are described, in which the computer transmits a request to a server using at least one preferred uniform resource identifier over a packet network.
Abstract: A computer implemented method and computer program product for requesting resources. The computer receives an assignment of an Internet protocol address. The computer compares a computer context of a client computer with an intranet access criterion to form a comparison result. The computer selects at least one preferred uniform resource identifier based on the comparison result indicating that the intranet is accessible. The computer transmits a request to a server using at least one preferred uniform resource identifier over a packet network.
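The selection step described above amounts to a context check followed by a URI choice. The sketch below is a hypothetical rendering of that step: the domain-suffix criterion, the function name, and all hostnames are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: pick the intranet URI when the client's context
# satisfies the intranet access criterion, otherwise fall back to the
# public (Internet) URI.

def select_uri(client_domain, intranet_domain, intranet_uri, internet_uri):
    """Return the preferred URI based on a simple domain-suffix criterion."""
    if client_domain.endswith(intranet_domain):  # comparison result: intranet reachable
        return intranet_uri
    return internet_uri

print(select_uri("host1.corp.example.com", "corp.example.com",
                 "http://files.corp.example.com/res",
                 "https://files.example.com/res"))
# prints the intranet URI, since the client domain matches
```

A real implementation would likely combine several context signals (assigned IP range, reachability probes, directory configuration) rather than a single suffix match.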