
Showing papers presented at "International Conference on Peer-to-Peer Computing in 2002"


Proceedings ArticleDOI
05 Sep 2002
TL;DR: This work proposes a CAN-based extension of DHT functionality for distributed storage of (key, data) pairs, which complements current solutions such as MDS-2 by adding self-organization, fault-tolerance and the ability to efficiently handle dynamic attributes, such as server processing capacity.
Abstract: Recent peer-to-peer (P2P) systems such as Tapestry, Chord or CAN act primarily as a distributed hash table (DHT). A DHT is a data structure for distributed storage of (key, data) pairs that allows data to be located quickly when a key is given. To facilitate efficient queries on a range of keys, we propose a CAN-based extension of this DHT functionality. The design of our extension suggests several range query strategies; their efficiency is investigated in the paper. A further goal is to enhance the routing aspects of current DHT systems so that frequently changing data can also be handled efficiently. We show that relatively simple approaches are able to reduce the communication overhead in this case. The design of the system is driven by its application as part of the information infrastructure for computational grids. Such grids provide an infrastructure for sharing computing resources; an information infrastructure, which collects resource data and provides search functionality, is an inherent part of them. Our approach complements current solutions such as MDS-2 by adding self-organization, fault-tolerance and the ability to efficiently handle dynamic attributes, such as server processing capacity. We evaluate our system in this context via simulation and show that its design, along with particular query and update strategies, meets the goals of scalability, communication-efficiency and availability.

305 citations
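As context for the abstract above, the basic DHT put/get contract can be sketched in a few lines. The modular hash partitioning below is illustrative and far simpler than the paper's CAN scheme; all names are invented for the sketch:

```python
import hashlib

class TinyDHT:
    """Toy DHT: each of n nodes owns a slice of the hash space."""
    def __init__(self, n_nodes):
        self.nodes = [dict() for _ in range(n_nodes)]

    def _owner(self, key):
        # Hash the key and map it onto one node's slice.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return h % len(self.nodes)

    def put(self, key, data):
        self.nodes[self._owner(key)][key] = data

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

dht = TinyDHT(8)
dht.put("cpu.load", 0.7)
print(dht.get("cpu.load"))  # 0.7
```

Because hashing scatters adjacent keys across unrelated nodes, a range query such as "all keys between a and b" would have to contact every node; that is precisely the gap the paper's range query strategies aim to close.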


Proceedings ArticleDOI
05 Sep 2002
TL;DR: This work proposes a graph topology which allows for very efficient broadcast and search, and provides an efficient topology construction and maintenance algorithm which, crucially for symmetric peer-to-peer networks, requires neither a central server nor super nodes in the network.
Abstract: Semantic Web Services are a promising combination of Semantic Web and Web service technology, aiming at providing means of automatically executing, discovering and composing semantically marked-up Web services. We envision peer-to-peer networks, which allow for carrying out searches in real-time on permanently reconfiguring networks, to be an ideal infrastructure for deploying a network of Semantic Web Service providers. However, P2P networks evolving in an unorganized manner suffer from serious scalability problems, limiting the number of nodes in the network, creating network overload and pushing search times to unacceptable limits. We address these problems by imposing a deterministic shape on P2P networks: we propose a graph topology which allows for very efficient broadcast and search, and we provide an efficient topology construction and maintenance algorithm which, crucially for symmetric peer-to-peer networks, requires neither a central server nor super nodes in the network. We show how our scheme can be made even more efficient by using a globally known ontology to determine the organization of peers in the graph topology, allowing for efficient concept-based search.

190 citations
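The kind of deterministic, broadcast-friendly shape the abstract above argues for can be illustrated with a hypercube, a classic graph with this property. This sketch is illustrative and not necessarily the paper's exact topology:

```python
def hypercube_broadcast(dim, origin=0):
    """Broadcast in a dim-dimensional hypercube: each node forwards
    only along dimensions lower than the one it received the message
    on, so every one of the 2**dim nodes gets the message exactly
    once. Returns (nodes reached, messages sent)."""
    received = {origin: dim}  # node -> highest dimension it may forward on
    messages = 0
    frontier = [origin]
    while frontier:
        nxt = []
        for node in frontier:
            for d in range(received[node]):
                neighbor = node ^ (1 << d)  # flip bit d
                messages += 1
                if neighbor not in received:
                    received[neighbor] = d
                    nxt.append(neighbor)
        frontier = nxt
    return len(received), messages

nodes, msgs = hypercube_broadcast(4)
print(nodes, msgs)  # 16 15
```

Each of the 2^dim nodes receives the message exactly once, so a broadcast costs N - 1 messages with no duplicates and no central coordinator, the kind of guarantee an unstructured Gnutella-style overlay cannot give.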


Proceedings ArticleDOI
05 Sep 2002
TL;DR: The concept of passive distributed indexing (PDI), a general-purpose distributed search service for mobile file sharing applications based on peer-to-peer technology, is presented, and it is shown that, owing to its flexible design, PDI can be employed for several kinds of applications.
Abstract: In this paper, we present the concept of passive distributed indexing (PDI), a general-purpose distributed search service for mobile file sharing applications based on peer-to-peer technology. The service enables resource-effective searching for files distributed across mobile devices based on simple queries. The building blocks of PDI are local broadcast transmission of query and response messages, together with caching of query results at every device participating in PDI. Based on these building blocks, the need for flooding the entire network with query messages can be eliminated for most applications. In extensive simulation studies, we demonstrate the performance of PDI. Because the requirements of a typical mobile file sharing application are not known, or may not even exist at all, we study the performance of PDI for different system environments and application requirements. We show that, owing to its flexible design, PDI can be employed for several kinds of applications.

140 citations
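The caching idea at the heart of PDI can be sketched in a few lines; class and method names below are illustrative, not taken from the paper:

```python
class PDIPeer:
    """Sketch of passive distributed indexing: every peer caches the
    query results it overhears on the local broadcast medium, so
    later queries can be answered locally instead of flooding the
    whole network."""
    def __init__(self, files=None):
        self.files = files or {}  # filename -> content this peer stores
        self.cache = {}           # query -> set of matching filenames

    def answer(self, query):
        # Answer from own files plus anything cached from overheard replies.
        local = {name for name in self.files if query in name}
        return local | self.cache.get(query, set())

    def overhear(self, query, results):
        # Passive indexing: remember results broadcast by other peers.
        self.cache.setdefault(query, set()).update(results)

a = PDIPeer({"song.mp3": b"..."})
b = PDIPeer()
# b overhears a's broadcast reply to the query "song" ...
b.overhear("song", a.answer("song"))
# ... and can now answer the same query without contacting anyone.
print(b.answer("song"))  # {'song.mp3'}
```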


Proceedings ArticleDOI
05 Sep 2002
TL;DR: It appears that P2P actually scales much better than predicted by conventional theory relying on simplifying assumptions, so exponential growth of the messaging load in pure peer-to-peer networks need not be assumed.
Abstract: Recently, peer-to-peer (P2P) configurations have attracted considerable interest in the Internet community. At the same time, P2P has often been criticized for poor scaling behavior. We analyze P2P signaling traffic, both via analytic estimates and via computer simulation. With the help of two probabilistic approaches, we derive an upper as well as a lower bound for the growth of P2P signaling traffic under a pure peer-to-peer protocol such as Gnutella. With the help of a simulation we are able to verify our mathematical derivations. As a result, it appears that P2P actually scales much better than predicted by more conventional theory relying on simplifying assumptions; thus, exponential growth of the messaging load in pure peer-to-peer networks need not be assumed.

36 citations
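The flavor of the bounds discussed above can be reproduced with the standard tree-expansion estimate for TTL-limited flooding; this is a back-of-the-envelope formula, not the paper's exact derivation:

```python
def flood_upper_bound(degree, ttl):
    """Upper bound on messages generated by one Gnutella-style query:
    in a cycle-free overlay of uniform degree `degree`, the originator
    sends `degree` messages and every recipient forwards to its other
    degree - 1 neighbors until the TTL expires."""
    return sum(degree * (degree - 1) ** (hop - 1)
               for hop in range(1, ttl + 1))

# Message count grows geometrically in the TTL with base degree - 1,
# not in the network size N, which hints at why measured scaling is
# better than a naive "exponential in N" reading would suggest.
for ttl in (3, 5, 7):
    print(ttl, flood_upper_bound(4, ttl))  # 52, 484, 4372
```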


Proceedings ArticleDOI
05 Sep 2002
TL;DR: The novel multiring topology, designed to meet the requirements of high-performance group communication in peer-to-peer networks, is presented; it improves data communication by applying several concepts: building overlay networks for each topic, a topology consisting of multiple rings, backup links and dual mode links.
Abstract: Emerging peer-to-peer applications require efficient group communication. However, current techniques for group communication are not optimal for peer-to-peer networks, which do not have group communication methods of their own. This paper presents the novel multiring topology, which is designed to meet the requirements of high-performance group communication in peer-to-peer networks. It improves data communication by applying several concepts: building overlay networks for each topic, a topology consisting of multiple rings, backup links and dual mode links. Experimental results provide evidence of improved performance and scalability of peer group communication.

34 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: The ubiquitous service-oriented network (USON) is a network architecture for providing future networking services in a ubiquitous environment; it rests on three technologies: service composition core technology, state-acquisition technology, and network reflective technology.
Abstract: As network technologies evolve in two areas, peer-to-peer technology and nomadic technology, new types of services provided through a ubiquitous environment will be created. To allow users to enjoy services in a ubiquitous environment, services that are appropriate to users' needs and desires must be created and discovered. In this paper, we describe the ubiquitous service-oriented network (USON), a network architecture for providing future networking services based on ubiquitous technologies. It is based on three technologies: service composition core technology, state-acquisition technology, and network reflective technology. Services provided through the USON will consist of autonomous distributed service elements (SEs). We describe the architecture, implementation, and an example of the USON, and explain how it will provide user services through a ubiquitous environment.

33 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: This paper has two aims: firstly, to report on the use of JXTA in converting a server-centric legacy forum system into a P2P system; secondly, to encourage others to redesign existing client-server systems into P2P applications as a way to better understand and evaluate the costs and benefits of this technology.
Abstract: Decentralized file-sharing systems like Napster and Gnutella have popularized the peer-to-peer approach, which emphasizes the use of distributed resources in a decentralized manner. Peer-to-peer (P2P) systems are a relatively new addition to the large area of distributed systems. Their emphasis on sharing distributed resources, self-organization and use of discovery mechanisms sets them apart from other forms of distributed computing. Avoiding centralized components, together with extensive resource/service sharing, allows P2P systems to outperform other forms of distributed systems with regard to scalability and robustness, due to load distribution and the avoidance of bottlenecks and single points of failure. This paper has two aims: firstly, to report on the use of JXTA in converting a server-centric legacy forum system into a P2P system; secondly, to encourage others to redesign existing client-server systems into P2P applications as a way to better understand and evaluate the costs and benefits of this technology.

33 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: This paper aims to identify the main dependability properties (and related properties) that can play a part within P2P systems, which can be used to help inform the creation of more dependable systems.
Abstract: This paper aims to identify the main dependability properties (and related properties) that can play a part within P2P systems. This, in turn, can be used to help inform the creation of more dependable systems. Given the influence the choice of architecture can have, this paper first provides an overview of the main P2P architectures before going on to identify the different properties. Future work will provide a detailed analysis of the effect the architectures can have on these properties.

30 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: AgentScape is a framework designed to support large-scale multi-agent systems; Pole extends this framework with peer-to-peer computing, facilitating the development and deployment of new agent-based peer-to-peer applications and services.
Abstract: The combination of peer-to-peer networking and agent-based computing seems to be a perfect match. Agents are cooperative and communication oriented, while peer-to-peer networks typically support distributed systems in which all nodes have equal roles and responsibilities. AgentScape is a framework designed to support large-scale multi-agent systems. Pole extends this framework with peer-to-peer computing. This combination facilitates the development and deployment of new agent-based peer-to-peer applications and services.

28 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: Rhubarb organizes nodes in a virtual network, allowing connections across firewalls/NAT, and efficient broadcasting, and the virtual network is scalable due to a hierarchical organization and efficient state management.
Abstract: Rhubarb is a platform for building peer-to-peer (P2P) applications. Rhubarb offers an API similar to Berkeley sockets. Using Rhubarb, P2P applications can be developed that are independent of centralized resources and the DNS system. Rhubarb organizes nodes in a virtual network, allowing connections across firewalls/NAT, and efficient broadcasting. The virtual network is scalable due to a hierarchical organization and efficient state management. Rhubarb is securely protected against outside and inside attacks.

24 citations


Proceedings ArticleDOI
05 Sep 2002
TL;DR: A CDI design approach that addresses issues of request routing, content delivery, and replication, based on P2P techniques is proposed.
Abstract: In this paper we discuss the use of algorithms and techniques, currently deployed in peer-to-peer (P2P) systems, in the design of content distribution internetworks (CDIs). Specifically, we propose a CDI design approach that addresses issues of request routing, content delivery, and replication, based on P2P techniques.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: This paper describes an advanced peer-to-peer (P2P) platform called the Semantic Information Oriented Network (SIONet), a meta-network that delivers user events based on semantic information (meta-data) and searches for specific entities dynamically in the network.
Abstract: This paper describes an advanced peer-to-peer (P2P) platform called the Semantic Information Oriented Network (SIONet). This is a meta-network that delivers user events based on semantic information (meta-data) and searches for specific entities dynamically in the network. SIONet consists of Semantic Information (SI) Switch (SI-SW), which compares the meta-data of each event with the receiver's conditions, SI-Router (SI-R), which routes events between two SI-SWs, Event Place (EP), which is a logical subnet constructed by SI-SW and SI-R, and SI-Gateway (SI-GW), which connects EPs. These elements are self-organizing as needed, making possible a secure and scalable P2P network.

Proceedings ArticleDOI
D. Kato1
05 Sep 2002
TL;DR: This paper proposes GISP (global information sharing protocol), which aims at a world-wide distributed index; GISP is implemented in the Java language on top of JXTA, a set of protocols for a peer-to-peer platform with a Java reference implementation.
Abstract: This paper proposes GISP (global information sharing protocol), which aims at a world-wide distributed index. A distributed index consists of a set of (key, value) pairs shared by many peers. Each peer is responsible for a part of the index based on a hash function. Every peer is basically flat and there is no single point of failure. A distributed index is an essential building block for peer-to-peer systems. The design of GISP is simple, open, and easy to develop. GISP deals with peer heterogeneity and undesirable peers. Each peer advertises its strength so that stronger peers contribute more than weaker peers. Redundancy is important for defending against undesirable peers: peers replicate the (key, value) pairs so that each pair of the index is covered by several peers. There is a project at jxta.org for developing GISP. JXTA is a set of protocols for a peer-to-peer platform and provides a Java reference implementation. In that project, GISP is implemented in the Java language on top of JXTA. By building GISP on top of JXTA, a peer can reach a peer behind a firewall and even a peer on a different network transport. Jnushare is another project at jxta.org, which provides an application of GISP. Using Jnushare, people can share information such as files, messages and bookmarks.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: This work first provides a description of a framework to support learning in the context of a community of interacting peers, and shows how this can be used to support resource sharing in computational grids.
Abstract: Managing resources in large scale distributed systems is an important concern for both peer-2-peer and computational grid systems, and is a complex and time sensitive process. Although existing peer-2-peer systems are divided into those that support computation (CPU) sharing and those that support data sharing, users in a computational grid generally need to share both. Identifying which resources to select is important to guarantee reasonable execution time and cost to a given user or group of users. We first provide a description of a framework to support learning in the context of a community of interacting peers, and show how this can be used to support resource sharing in computational grids.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: This paper describes part of the research being conducted in the Extrovert Gadgets project geared towards applying P2P computing solutions to the context of networked everyday objects.
Abstract: In the new paradigm of computer use, the computer ceases to exist as an integrated multi-task device and disintegrates into a task-oriented collection of networked devices. These devices do not resemble computers, yet they have computational abilities. None of these concepts will be realised without appropriate support from communication technologies, with P2P networking being the primary candidate. This paper describes part of the research being conducted in the Extrovert Gadgets project, geared towards applying P2P computing solutions to the context of networked everyday objects.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: Rajkumar Buyya is one of the creators of system software for PARAM supercomputers developed by the Center for Development of Advanced Computing (CDAC), Bangalore, India and has lectured on advanced technologies such as Parallel, Distributed and Multithreaded Computing, Internet and Java, Cluster Computing, Java and High Performance Computing, and Grid computing in many international conferences and institutions.
Abstract: Rajkumar Buyya is Co-Chair of the IEEE Computer Society Task Force on Cluster Computing (TFCC) and an international speaker in the IEEE Computer Society Chapter Tutorials Program. Currently at the University of Melbourne, Australia, he is leading the research activities of the Grid Computing and Distributed Systems (GRIDS) Laboratory. He has authored three books: Microprocessor x86 Programming, Mastering C++, and Design of PARAS Microkernel. He has edited the books High Performance Cluster Computing (Prentice Hall, USA) and High Performance Mass Storage and Parallel I/O (Wiley/IEEE Press, USA). He is one of the creators of system software for PARAM supercomputers developed by the Center for Development of Advanced Computing (CDAC), Bangalore, India. He has lectured on advanced technologies such as Parallel, Distributed and Multithreaded Computing, Internet and Java, Cluster Computing, Java and High Performance Computing, and Grid Computing at many international conferences and institutions. For further information, please browse http://www.buyya.com.

Proceedings ArticleDOI
Jens Mache1, M. Gilbert1, J. Guchereau1, Jeff Lesh1, F. Ramli1, M. Wilkinson1 
05 Sep 2002
TL;DR: This work designs and tests several new request algorithms that dynamically change the overlay network after failed requests and further reward the fulfillers of successful requests, improving median pathlength by up to a factor of 9.25.
Abstract: In most peer-to-peer systems, edge resources self-organize into overlay networks. At the core of Freenet-style peer-to-peer systems are insert and request algorithms that dynamically change the overlay network and replicate files on demand. We ran simulations to test how effective these algorithms are at improving the performance of subsequent queries. Our results show that for the original Freenet algorithms, performance improved less rapidly with a ratio of 99 requests to 1 insert than with an equal number of requests and inserts. This motivated us to design and test the performance of several new request algorithms. By changing the overlay network after failed requests and by further rewarding the fulfillers of successful requests, our new algorithms improved median pathlength by up to a factor of 9.25.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: This work proposes an agent-based architectural model as a middleware for intelligent P2P electronic software delivery and argues that the proposed model can be used in building a system that meets most, if not all, of the identified criteria.
Abstract: The Internet has given software providers possibilities for electronic software distribution (ESD). At the same time, bandwidth limitations lead to poor performance and scalability of the delivery process. The problem is even harder when delivering large, resource-consuming software packages and media content. We propose an agent-based architectural model as a middleware for intelligent P2P electronic software delivery. We analyze the possibility of applying peer-to-peer technology to the process of software delivery, relying on the experience and lessons learned from related technologies. To this effect, we analyze the available material on this topic and identify the important criteria that are crucial for the wide acceptance of ESD by both software providers and end-users. We argue that the proposed model can be used in building a system that meets most, if not all, of the identified criteria. The work presented opens a number of interesting research issues and investigation opportunities.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: A new hypothetical model of software and service delivery, accompanied by two case studies and comparisons, is proposed; the paper demonstrates the evolving path from HTTP/IP-based networking to OGSA grid computing.
Abstract: Both practitioners and academics in the information technology field are becoming increasingly familiar with the term application service provision (ASP). Predictions for growth in the ASP global market in 2003 range from $5 billion (IDC) to $35 billion (Qwest) (Kluge, 2002). If the predicted growth is realised, ASPs will have a significant impact on IS (information systems) strategies and outsourcing practice, not only for large companies but also for the under-exploited SME (small or medium enterprise) sector. This potential is attracting many companies that aspire to become ASPs, but as some early ASPs began to fail, many companies tried to distance themselves from the term (Campos, 2002). With the emergence of the Open Grid Service Architecture and the Globus Toolkit as incubators for evolving an ASP business model to maintain profitability, this paper proposes a hypothetical Grid Service Provision model. ASPs' evolution paths can be defined by service delivery and infrastructure axes. On the service delivery axis, the paper presents software application delivery moving from a pre-packaged one-to-many model to the interoperable Web service model. On the infrastructure axis, the paper demonstrates the evolving path from HTTP/IP-based networking to OGSA grid computing. This paper proposes a new hypothetical model of software and service delivery, accompanied by two case studies and comparisons.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: The UD MetaProcessor software represents a form of grid computing in the enterprise that is extremely cost-effective and substantially improves the return on investment in existing compute resources.
Abstract: The computing needs of enterprises are typically met by purchasing or using existing dedicated high-performance compute resources. An alternative, proposed in this presentation, is to meet these computing needs by harnessing under-utilized resources across the enterprise. This represents a form of grid computing in the enterprise that is extremely cost-effective and substantially improves the return on investment in existing compute resources. The primary source of under-utilized resources is PCs, which, in the aggregate, represent a dominant source of compute power. The UD MetaProcessor software represents these resources as a single large grid service for administrators, application developers and users. I present some of the challenges with this grid service, together with some of the solutions to these challenges. A number of case studies show the immediate benefits of adopting this technology to satisfy computational needs today. The sheer amount of compute power available also enables feasible solutions to problems that were previously discarded as impractical or impossible to solve.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: An overview of existing peer-to-peer computing models is provided, and existing market models and their usability in a compute-sharing P2P system are investigated.
Abstract: A number of systems exist for harnessing the power of idle workstations and home computers, but these only make that power available for central projects. Making the power of these systems available to home users could be achieved with a peer-to-peer architecture. An unregulated peer computation sharing system has the potential for abuse by free riders. In order to encourage contribution to the system, a free market model would allow users to 'bank' contributed computing power with other peers, for redemption at a later time. This paper has two aims. Firstly, it provides an overview of existing peer-to-peer computing models. Secondly, it investigates existing market models and their usability in a compute-sharing P2P system.

Proceedings ArticleDOI
Sascha Alda1
05 Sep 2002
TL;DR: Work towards the realisation of a component model, as well as an architecture serving as a runtime environment for component-based, distributed peer-to-peer applications, is presented.
Abstract: One great challenge in the field of software engineering is to develop reusable, adaptable and scalable software systems. To address this goal, a multiplicity of approaches has been proposed. We present our work towards the realisation of a component model, as well as an architecture serving as a runtime environment for component-based, distributed peer-to-peer applications. We further explain additional concepts for adaptability in peer-to-peer applications.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: How P2P technology can be applied pervasively to provide a serverless infrastructure that lowers the cost of wide-scale private networking right through from narrow-band networking at 2,400 bps to full broadband communications is reviewed.
Abstract: The degree of user empowerment enabled by present day peer-to-peer applications in the areas of content-brokerage, super-distribution and group collaboration is now bringing IT professionals to an awareness of the many new and useful applications for "P2P" technology today. As a result, P2P technology and applications are beginning to take root throughout the industry as a whole. But these applications have only started to scratch the surface. The pervasive deployment of P2P technology in a truly serverless environment brings new opportunities and exciting challenges capable of completely transforming the way society experiences IT and media communications. This paper reviews how P2P technology can be applied pervasively to provide a serverless infrastructure that lowers the cost of wide-scale private networking right through from narrow-band networking at 2,400 bps to full broadband communications. The paper presents a three-part serverless infrastructure model, covering "application", "connectivity" and "management" and describes techniques for user discovery, auto-population and Quality of Service control applied at the end-user level. It describes a fault-tolerant approach to administrating large groups of users and network resources and discusses security at the transport, data storage and LAN levels. The paper also sets out a telecommunications carrier solution that enables service providers to further exploit their existing network infrastructure and presents a low cost solution for operators faced with high network configuration costs.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: A new P2P network topology is proposed that reduces bandwidth consumption and provides complete searches by employing a tree hierarchy of indexing nodes that facilitates the search functionality.
Abstract: We propose a new P2P network topology that reduces bandwidth consumption and provides complete searches by employing a tree hierarchy of indexing nodes that facilitates the search functionality. Each node consists of a cluster of peers to provide fault tolerance and scalability.

Proceedings ArticleDOI
D. Barkai1
05 Sep 2002
TL;DR: David Barkai is a member of the Distributed Systems Lab of Intel's Corporate Technology Group and has also been a content architect for the Intel Developer Forum conference and a software scientist in the Microcomputer Software Lab.
Abstract: David Barkai is a member of the Distributed Systems Lab of Intel's Corporate Technology Group. He has also been a content architect for the Intel Developer Forum conference and a software scientist in the Microcomputer Software Lab. Before joining Intel in 1996, David worked for 25 years in the field of scientific and engineering supercomputing for Control Data Corporation, Cray Research Inc., Supercomputer Systems Inc., and NASA Ames Research Center.

Proceedings ArticleDOI
Z. Segall1, A. Fortier1, Gerd Kortuem1, Jay Schneider, S. Workman 
05 Sep 2002
TL;DR: The architecture and implementation of Multishelf, a decentralized peer-to-peer infomediator designed and built during a ten-week senior software methodology course using the Proem peer-to-peer platform, are described.
Abstract: We describe the architecture and the implementation of Multishelf, a decentralized peer-to-peer infomediator. Multishelf was designed and built during a ten-week senior software methodology course using the Proem peer-to-peer platform. We discuss the infomediation problem, present our scalable peer-to-peer solution, including the effectiveness of using the Proem peer-to-peer platform, and conclude with plans for future work.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: Some representative peer-to-peer file sharing applications are compared against two sets of features and the obtained classification points out the mutual relationships between the expressive power and the degree of abstraction over low-level issues offered by each application.
Abstract: In this paper some representative peer-to-peer file sharing applications are compared against two sets of features. The first set describes the semantics of the relevant primitive operations over the shared data space. The second set describes the algorithmic and architectural solutions to implement these primitives. The obtained classification points out the mutual relationships between the expressive power and the degree of abstraction over low-level issues offered by each application.

Proceedings ArticleDOI
05 Sep 2002
TL;DR: A new measure, "goodness of overlay networks", is introduced to quantify the quality of an overlay network for a given metric, and NetProber is proposed: a simple, distributed and scalable component that can be combined with any connected overlay network in order to allow the latter to adapt and become "good" within a finite amount of time.
Abstract: The peer-to-peer (P2P) computing paradigm is an emerging paradigm that aims to overcome most of the main limitations of the traditional client/server architecture. In the P2P setting, individual computers communicate directly with each other in order to share information and resources without relying on any kind of centralized server. To achieve this full decentralization, an application-level (or overlay) network is constructed using, for example, TCP connections. In most existing P2P systems, the overlay network is built in a manner that does not guarantee that it is efficient with respect to a given metric (e.g. latency, hop count or bandwidth). Hence, an overlay node can be very far away, in terms of a given metric, from its overlay neighbors. This can result both in inefficient routing in the overlay network and in ineffective use of the underlying IP network. In this paper, we introduce a new measure, "goodness of overlay networks", to quantify the quality of an overlay network for a given metric. We then propose NetProber, a simple, distributed and scalable component that can be combined with any connected overlay network in order to allow the latter to adapt and become "good" within a finite amount of time.