
Showing papers by "Jon Crowcroft published in 2006"


Journal ArticleDOI
11 Aug 2006
TL;DR: The results show that COPE largely increases network throughput, and the gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
Abstract: This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that COPE largely increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
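The operation at the heart of COPE is a packet-level XOR at the relay. The following is a minimal sketch of that idea on the classic three-node exchange; it is a toy illustration, not code from the COPE implementation.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two payloads, padding the shorter one with zero bytes."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob exchange packets through a common relay.
pkt_from_alice = b"hello bob"
pkt_from_bob = b"hi alice"

# The relay broadcasts one coded packet instead of two native ones.
coded = xor_bytes(pkt_from_alice, pkt_from_bob)

# Each endpoint decodes using the packet it transmitted itself.
assert xor_bytes(coded, pkt_from_bob) == pkt_from_alice.ljust(len(coded), b"\x00")
assert xor_bytes(coded, pkt_from_alice) == pkt_from_bob.ljust(len(coded), b"\x00")
```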

890 citations


Proceedings ArticleDOI
23 Apr 2006
TL;DR: A simplified model based on renewal theory is used to study how the parameters of the distribution impact the delay performance of previously proposed forwarding algorithms, in the context of human-carried devices.
Abstract: Studying transfer opportunities between wireless devices carried by humans, we observe that the distribution of the inter-contact time, that is, the time gap separating two contacts of the same pair of devices, exhibits a heavy tail such as that of a power law over a large range of values. This observation is confirmed on six distinct experimental data sets. It is at odds with the exponential decay implied by most mobility models. In this paper, we study how this new characteristic of human mobility impacts a class of previously proposed forwarding algorithms. We use a simplified model based on renewal theory to study how the parameters of the distribution impact the delay performance of these algorithms. We make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices.
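As a rough illustration of the measurement described above, the sketch below (a hypothetical trace format, not the paper's data sets) extracts inter-contact times from a list of contact events and computes their empirical CCDF; a heavy, power-law-like tail appears approximately linear when that CCDF is plotted on log-log axes.

```python
from collections import defaultdict

# Hypothetical contact records: (timestamp, device_a, device_b).
contacts = [(10, "a", "b"), (250, "a", "b"), (4000, "a", "b"),
            (30, "a", "c"), (900, "a", "c"), (20000, "a", "c")]

times_by_pair = defaultdict(list)
for t, u, v in contacts:
    times_by_pair[tuple(sorted((u, v)))].append(t)

# Inter-contact time: gap between successive contacts of the same pair.
gaps = []
for ts in times_by_pair.values():
    ts.sort()
    gaps += [later - earlier for earlier, later in zip(ts, ts[1:])]

# Empirical CCDF P(X > x); plot both axes on a log scale to inspect the tail.
gaps.sort()
n = len(gaps)
ccdf = [(x, (n - i - 1) / n) for i, x in enumerate(gaps)]
print(ccdf)
```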

623 citations


Proceedings Article
18 Jan 2006
TL;DR: This work identifies the general scenario faced by the user of Pocket Switched Networking (PSN), presents a set of architectural principles for PSN, and describes the high-level design of Haggle, an asynchronous, data-centric network architecture which addresses this environment by “raising” the API.
Abstract: Current mobile computing applications are infrastructure-centric, due to the IP-based API that these applications are written around. This causes many frustrations for end users, whose needs might be easily met with local connectivity resources but whose applications do not support this (e.g. emailing someone sitting next to you when there is no wireless access point). We identify the general scenario faced by the user of Pocket Switched Networking (PSN), and discuss why the IP-based status quo does not cope well in this environment. We present a set of architectural principles for PSN, and the high-level design of Haggle, our asynchronous, data-centric network architecture which addresses this environment by “raising” the API so that applications can provide the network with application-layer data units (ADUs) carrying high-level metadata concerning ADU identification, security and delivery to user-named endpoints.
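A minimal sketch of what such an application data unit might carry; the field names below are illustrative assumptions, not Haggle's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ADU:
    content_id: str                 # application-level identification metadata
    destination: str                # a user-named endpoint rather than an IP address
    payload: bytes
    metadata: dict = field(default_factory=dict)  # e.g. content type, priority
    signature: bytes = b""          # placeholder for security metadata

# The network layer, not the application, decides when and over which
# connectivity (local peer, access point, ...) to move this ADU onwards.
msg = ADU(content_id="msg-42", destination="person:alice",
          payload=b"See you at the workshop", metadata={"type": "email"})
```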

279 citations


Proceedings ArticleDOI
04 Dec 2006
TL;DR: This paper shows how using a small label, identifying users according to their affiliation, can bring a large improvement in forwarding performance, in terms of both delivery ratio and cost.
Abstract: It is widely believed that identifying communities in an ad hoc mobile communications system, such as a pocket switched network, can reduce the amount of traffic created when forwarding messages, but no empirical evidence has been available to support this assumption to date. In this paper, we show, through the use of real experimental human mobility data, how using a small label, identifying users according to their affiliation, can bring a large improvement in forwarding performance, in terms of both delivery ratio and cost.
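A minimal sketch of the forwarding rule this suggests (the labels below are illustrative): a carrier hands a copy of a message to an encountered node only when that node shares the destination's affiliation label, rather than flooding every contact.

```python
# Hypothetical affiliation labels attached to each user.
affiliation = {"alice": "systems", "bob": "theory",
               "carol": "systems", "dave": "systems"}

def should_forward(encountered: str, destination: str) -> bool:
    """Relay only through members of the destination's own community."""
    return affiliation.get(encountered) == affiliation.get(destination)

# Carol carries a message for Alice and meets Bob, then Dave.
assert not should_forward("bob", "alice")   # different label: keep the message
assert should_forward("dave", "alice")      # same label: hand over a copy
```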

233 citations


Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper investigates the feasibility of a city-wide content distribution architecture composed of short range wireless access points and looks at how a target group of intermittently and partially connected mobile nodes can improve the diffusion of information within the group by leveraging fixed and mobile nodes that are exterior to the group.
Abstract: This paper investigates the feasibility of a city-wide content distribution architecture composed of short-range wireless access points. We look at how a target group of intermittently and partially connected mobile nodes can improve the diffusion of information within the group by leveraging fixed and mobile nodes that are exterior to the group. The fixed nodes are data sources and the external mobile nodes are data relays; we examine the trade-off between the use of each in order to obtain high satisfaction within the target group, which consists of data sinks. We conducted an experiment in Cambridge, UK, to gather mobility traces that we used for the study of this content distribution architecture. In this scenario, the simple fact that members of the target group collaborate leads to a delivery ratio of 90%. In addition, the use of external mobile nodes to relay the information slightly increases the delivery ratio while significantly decreasing the delay.

202 citations



Proceedings Article
26 May 2006
TL;DR: The response to the Call for Papers has shown that the REALMAN community is steadily increasing both in number and in quality, and that, following the success of the first edition, REALMAN is establishing itself as a premier forum for presenting and discussing measurement studies and experiences based on real ad hoc network test-beds and prototypes.
Abstract: Welcome to the second edition of the Workshop on Multi-hop Ad hoc Networks: from Theory to Reality, REALMAN 2006. This year the workshop is co-located with ACM MobiHoc 2006 and is sponsored by ACM SIGMOBILE. Ad hoc networking technologies have great potential for innovative applications with a large impact on our everyday life. To exploit this potential, simulation modelling and theoretical analyses have to be complemented by real experiences (e.g., measurements on real prototypes), which provide both a direct evaluation of ad hoc networks and, at the same time, precious information for realistically modelling these systems. In the last few years, researchers have increasingly regarded experimental studies as a key approach to understand the very features of multi-hop ad hoc networks, and eventually enable the adoption of this technology in the mass market. This stimulated a new community of researchers combining theoretical research on ad hoc networking with experiences and measurements obtained by implementing ad hoc network prototypes. The aim of REALMAN is to bring together these researchers. The response to the Call for Papers has shown that the REALMAN community is steadily increasing both in number and in quality. In response to the Call for Papers, we received 68 papers, addressing topics related to all fields of multi-hop ad hoc networking. Out of these, the Program Committee selected 12 papers for presentation in the workshop sessions. In addition, 3 papers have been selected for presentation in the poster session. In response to a separate Call for Demos, we received several interesting demo proposals, of which we selected 10 to be demonstrated during the workshop. Finally, the REALMAN Program also includes a Keynote Speech by Prof. Nitin H. Vaidya of the University of Illinois at Urbana-Champaign, and a panel. These figures show that, following the success of the first edition, REALMAN is establishing itself as a premier forum for presenting and discussing measurement studies and experiences based on real ad hoc network test-beds and prototypes.

48 citations


Journal Article
TL;DR: This work proposes Pocket Switched Networking, a communication paradigm which reflects the reality faced by the mobile user, and describes the challenges that this approach entails and provides evidence that it is feasible with today's technology.
Abstract: The Internet is built around the assumption of contemporaneous end-to-end connectivity. This is at odds with what typically happens in mobile networking, where mobile devices move between islands of connectivity, having the opportunity to transmit packets through their wireless interface or simply carrying the data toward a connectivity island. We propose Pocket Switched Networking, a communication paradigm which reflects the reality faced by the mobile user, and which falls under the umbrella of delay-tolerant networking (DTN). We describe the challenges that this approach entails and provide evidence that it is feasible with today's technology.

40 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents a new communications scheme for PSNs, called Osmosis, based on the biological phenomenon of the same name, and shows how this scheme can be applied to file sharing, using epidemic routing to perform file lookup and controlled flooding for file transfer.
Abstract: The increase in the variety and capability of mobile communications devices carried by people today has made it possible to envision a new class of networks called pocket switched networks (PSNs). In a PSN, because the nodes are constantly moving and their communication abilities are limited, the design of networking protocols and applications is challenging. This paper presents a new communications scheme for PSNs, called Osmosis, based on the biological phenomenon of the same name. We show how this scheme can be applied to file sharing. The scheme uses epidemic routing to perform file lookup and controlled flooding for file transfer. The flooding requires very little state information, which is collected during the lookup. This paper analyzes the performance of Osmosis in simulation studies based on real mobility traces.
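The sketch below is an illustrative rendering of the two phases the abstract mentions, not the Osmosis protocol itself: an epidemic lookup records which nodes a query passed through, and the subsequent transfer is flooded only among that recorded set, so little extra state is needed.

```python
def epidemic_lookup(meetings, requester, holders):
    """meetings: time-ordered (node_a, node_b) contacts; returns the path the
    query took to reach some node that holds the file, or None."""
    carriers = {requester: [requester]}          # node -> path the query travelled
    for a, b in meetings:
        for x, y in ((a, b), (b, a)):
            if x in carriers and y not in carriers:
                carriers[y] = carriers[x] + [y]
                if y in holders:
                    return carriers[y]
    return None

def controlled_flood(meetings, path):
    """Flood the file, but only among nodes recorded during the lookup."""
    allowed, have_file = set(path), {path[-1]}
    for a, b in meetings:
        if a in allowed and b in allowed and (a in have_file or b in have_file):
            have_file.update((a, b))
    return path[0] in have_file                  # did the requester get the file?

meetings = [("n1", "n2"), ("n2", "n3"), ("n3", "n2"), ("n2", "n1")]
path = epidemic_lookup(meetings, "n1", holders={"n3"})
print(path, controlled_flood(meetings, path))    # ['n1', 'n2', 'n3'] True
```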

38 citations


23 Feb 2006
TL;DR: The emergence of powerful digital infrastructures, wireless networks and mobile devices has already started to move computing away from the desktop and embed it in the public spaces, architectures, furniture and personal fabric of everyday life.
Abstract: The emergence of powerful digital infrastructures, wireless networks and mobile devices has already started to move computing away from the desktop and embed it in the public spaces, architectures, furniture and personal fabric of everyday life. Handheld and wearable computers, mobile phones, digital cameras, satellite navigation, and a host of similar devices join the Personal Computer as commonplace digital tools. We are increasingly becoming accustomed to using a heterogeneous collection of computing devices to support a growing range of activities. These embryonic forms of ubiquitous computing technology have already had a major impact on the ways that people work, learn, entertain themselves, and interact.

34 citations


Journal ArticleDOI
28 Apr 2006
TL;DR: In the last issue of ACM Computer Communication Review, Christophe Diot, the Editor-in-Chief, kicked off a series of contributions to CCR by members of the technical community on networking papers that they would recommend to others.
Abstract: In the last issue of ACM Computer Communication Review, Christophe Diot, the Editor-in-Chief, kicked off a series of contributions to CCR by members of our technical community on networking papers that they would recommend to others. Of course, we all have our lists of favourites (I have 4 book lists on Amazon!), but this is more than just stamp collecting. Search engines and citation indexes have several problems: it is hard to balance recency of an article with popularity, since a single-dimension index doesn't really tell you whether a paper is seminal or just popular; some material is not available (too old, or not there any more); and the average of everyone's opinion may not be as useful as a subjective view by someone you trust (or distrust). One day, these problems may be solved by contextualizing information that is retrieved and presenting the recommendation network that your retrieval was made by (some search engines like http://beta.previewseek.com/ are starting to do this). Until that day, lists like this are a good substitute, and they are also fun starting points for discussion. My list is explicitly not my "top ten" papers ever. Rather, it represents a sample made at a snapshot. These are papers that came up in recent discussions in PhD supervision, research project work, and in reviewing papers for conferences. For each paper, I've given some indication of the value I got from the paper. In some cases, I also give the context in which I first saw the paper. Here they are, in random order: • "Experience with Grapevine: the growth of a distributed system" [Schroeder 1984]. This has so many ideas in how to actually do things right (compared say to DNS) and includes some things people have forgotten about 10 times (including later work that used both epidemic models and control theory applied to the update traffic). We used to work on Directory Systems at UCL in the 1980s; we also worked on comparing early DNS implementations (Berkeley BIND and Stanford's DRUID). The baseline for all of these, though, was Grapevine. Paul Dourish (now at Irvine) visited us at UCL from Xerox's European PARC around then and told us the stories of epidemic problems that showed up in this paper. • "The Design and Implementation of an Operating System to Support Distributed Multimedia Applications" [Leslie 2000]. It's a shame so …

Proceedings ArticleDOI
25 Oct 2006
TL;DR: A novel variant of the Jacobson-Vo algorithm employing a flexible gap-minimising alignment model suitable for network traffic is introduced, and it is found that the software implementation outperforms the commonly used Smith-Waterman approach.
Abstract: String comparison algorithms, inspired by methods used in bioinformatics, have recently gained popularity in network applications. In this paper we demonstrate the need for careful selection of alignment models if such algorithms are to yield the desired results when applied to network traffic. We introduce a novel variant of the Jacobson-Vo algorithm employing a flexible gap-minimising alignment model suitable for network traffic, and find that our software implementation outperforms the commonly used Smith-Waterman approach by a factor of 33 on average and up to 58.5 in the best case on a wide range of network protocols.
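For reference, this is a minimal score-only implementation of the Smith-Waterman local alignment used as the baseline above; the scoring parameters are illustrative assumptions, not those of the paper's evaluation.

```python
def smith_waterman(a: bytes, b: bytes, match=2, mismatch=-1, gap=-1) -> int:
    """Best local alignment score between two byte strings (linear gap penalty)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,
                          prev[j - 1] + sub,    # align a[i-1] with b[j-1]
                          prev[j] + gap,        # gap in b
                          curr[j - 1] + gap)    # gap in a
            best = max(best, curr[j])
        prev = curr
    return best

# Two hypothetical protocol messages sharing a common prefix score highly.
print(smith_waterman(b"GET /index.html HTTP/1.1", b"GET /admin.php HTTP/1.0"))
```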

01 Jan 2006
TL;DR: A model of loss is presented to determine how the amount of redundancy should be varied with the loss rate, and a preliminary investigation is made of the position of redundant encodings relative to the original encoding.
Abstract: The use of redundant audio encoding has been advocated for lossy networks like the Internet [1, 2] as a way of reducing the impact of loss in audioconferences. We present a model of loss and determine how the amount of redundancy should be varied with the loss rate. In addition, we make loss measurements and carry out a preliminary investigation of the position of redundant encodings relative to the original encoding.
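As a back-of-the-envelope illustration (assuming independent packet loss, which the paper's loss model and measurements refine), the residual loss after adding k redundant copies of each audio frame, and the smallest k meeting a target, can be computed as follows.

```python
def residual_loss(p: float, copies: int) -> float:
    """Probability a frame is lost when the original plus `copies` redundant
    encodings travel in distinct packets, each lost independently with prob p."""
    return p ** (1 + copies)

def copies_needed(p: float, target: float) -> int:
    """Smallest number of redundant copies keeping residual loss below target."""
    k = 0
    while residual_loss(p, k) > target:
        k += 1
    return k

# e.g. at 20% network loss, two redundant copies push residual loss below 1%.
print(copies_needed(0.20, 0.01))   # -> 2
```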

Proceedings ArticleDOI
26 May 2006
TL;DR: This work proposes an alternative overlay network architecture by introducing a set of generic functions in network edges and end hosts that offers a number of advantages for upper layer end-to-end applications, including intrinsic provisioning of resilience and DoS prevention in a dynamic and nomadic environment.
Abstract: With today's penetration in volume and variety of information flowing across the Internet, data and services are experiencing various issues with the TCP/IP infrastructure, most notably availability, reliability and mobility. Therefore, a critical infrastructure is highly desirable, in particular for multimedia streaming applications. So far the proposed approaches have focused on applying application-layer routing and path monitoring for reliability, and on enforcing stateful packet filters in hosts or in the network to protect against Denial of Service (DoS) attacks. Each of them solves its own aspect of the problem, trading scalability for availability and reliability among a relatively small set of nodes, yet there is no single overall solution available which addresses these issues at a large scale. We propose an alternative overlay network architecture by introducing a set of generic functions in network edges and end hosts. We conjecture that the network edge constitutes a major source of DoS, resilience and mobility issues to the network, and propose a new solution to this problem, namely the General Internet Signaling Transport (GIST) Overlay Networking Extension, or GONE. The basic idea of GONE is to create a half-permanent overlay mesh consisting of GONE-enabled edge routers, which employs capability-based DoS prevention and forwards end-to-end user traffic using the GIST messaging associations. GONE's use of GIST on top of SCTP allows multi-homing, multi-streaming and partial reliability, while only a limited overhead for maintaining the messaging association is introduced. In addition, on top of the services provided by GONE overlays, hosts are identified by their unique host identities independent of their topological location, and simply require (de-)multiplexing instead of the traditional connection management and other complex functionality in the transport layer. As a result, this approach offers a number of advantages for upper-layer end-to-end applications, including intrinsic provisioning of resilience and DoS prevention in a dynamic and nomadic environment.


Journal ArticleDOI
10 Jan 2006
TL;DR: The purpose is to draw attention to the potential unintended consequences of decisions being made at the time of writing in the monitoring, command, communications and control of private vehicles on the public highway.
Abstract: Monitoring, command, communications and control of private vehicles on the public highway is now high on the political agenda. This is both because it is becoming feasible, and because it may be desirable. From the economic perspective, more efficient use of road resources may be achievable. From a safety perspective, it would clearly be good to reduce road injury and death statistics below the current "9/11"'s-worth per year in the UK (and other similar-sized European countries). Various prototypes, proposals and projects are being undertaken. There are a number of technologies that interact, as well as numerous legal, political and economic stakeholders. In this note, we pay particular attention to the impact on privacy and safety of different approaches to the overall problem. The purpose is to draw attention to the potential unintended consequences that result from decisions being made, at the time of writing, in this arena.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper considers the steady-state solution of Continuous Time Markov Chains, using a modified form of Multi-Terminal Binary Decision Diagrams to compactly store CTMCs, and presents a parallel method for the CTMC steady-state solution.
Abstract: This paper considers the steady-state solution of Continuous Time Markov Chains (CTMCs). CTMCs are a widely used formalism for the performance analysis of computer and communication systems. A large variety of useful performance measures can be derived from a CTMC via the computation of its steady-state probabilities. However, CTMC models for realistic systems are very large. We address this largeness problem in this paper by considering parallelisation of implicit methods. In particular, we consider a modified form of Multi-Terminal Binary Decision Diagrams (MTBDDs) to compactly store CTMCs and, using the Jacobi iterative method, we present a parallel method for the CTMC steady-state solution. Employing a 24-node processor bank, we analyse our parallel implicit method using experimental results for three widely used CTMC benchmark models, with well over a billion states and sixteen billion transitions.
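A minimal sequential sketch of the kind of iteration involved, using dense NumPy arrays rather than MTBDDs, no parallelism, and a damping factor added so the toy example converges: it solves pi·Q = 0 with the entries of pi summing to one.

```python
import numpy as np

def ctmc_steady_state(Q, omega=0.5, iters=100000, tol=1e-12):
    """Damped Jacobi sweeps for the steady-state vector pi of generator Q."""
    n = Q.shape[0]
    pi = np.full(n, 1.0 / n)
    diag = np.diag(Q)                       # strictly negative diagonal
    off = Q - np.diag(diag)
    for _ in range(iters):
        update = -(pi @ off) / diag         # from pi @ Q = 0, solved entry-wise
        new = (1.0 - omega) * pi + omega * update
        new /= new.sum()                    # renormalise each sweep
        if np.max(np.abs(new - pi)) < tol:
            return new
        pi = new
    return pi

# Two-state example: rate 2 from state 0 to 1, rate 1 back; pi ~ [1/3, 2/3].
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
print(ctmc_steady_state(Q))
```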

Proceedings ArticleDOI
24 Apr 2006
TL;DR: This paper describes a network monitoring scheme, develops two multicast protocols, and analytically estimates the achievable latencies and reliability in terms of controllable protocol parameters, for reliably multicasting messages over a loss-prone network of arbitrary topology such as the Internet.
Abstract: E-business organizations commonly trade services together with quality of service (QoS) guarantees that are often dynamically agreed upon prior to service provisioning. Violating agreed QoS levels incurs penalties and hence service providers agree to QoS requests only after assessing the resource availability. Thus the system should, in addition to providing the services: (i) monitor resource availability, (ii) assess the affordability of a requested QoS level, and (iii) adapt autonomically to QoS perturbations which might undermine any assumptions made during assessment. This paper will focus on building such a system for reliably multicasting messages of arbitrary size over a loss-prone network of arbitrary topology such as the Internet. The QoS metrics of interest will be reliability, latency and relative latency. We meet the objectives (i)-(iii) by describing a network monitoring scheme, developing two multicast protocols, and by analytically estimating the achievable latencies and reliability in terms of controllable protocol parameters. Protocol development involves extending in two distinct ways an existing QoS-adaptive protocol designed for a single packet. Analytical estimation makes use of experimentally justified approximations and their impact is evaluated through simulations. As the protocol extension approaches are complementary in nature, so are the application contexts they are found best suited to; e.g., one is suited to small messages while the other to large messages.
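As a crude illustration of the trade-off such an assessment involves (a generic retransmission model, not the paper's analytical estimates), the probability that every receiver holds a message after a given number of transmission rounds, under independent per-receiver loss, is:

```python
def delivery_probability(p: float, receivers: int, rounds: int) -> float:
    """P(every receiver has the message) after `rounds` independent transmissions,
    each lost at a given receiver with probability p."""
    per_receiver = 1.0 - p ** rounds
    return per_receiver ** receivers

# More rounds buy reliability at the price of latency.
for rounds in range(1, 5):
    print(rounds, round(delivery_probability(0.05, receivers=50, rounds=rounds), 4))
```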


Journal ArticleDOI
10 Oct 2006
TL;DR: This document summarizes the events leading up to the Sigcomm deadline and lists the key dates, keywords, and dates that need to be filled out in order to meet the deadline.
Abstract: Only three hundred sixty four days left until the Sigcomm deadline. Only three hundred sixty three days left until the Sigcomm deadline. Only three hundred sixty two days left until the Sigcomm deadline. Only three hundred sixty one days left until the Sigcomm deadline. Only three hundred sixty days left until the Sigcomm deadline. Only three hundred fifty nine days left until the Sigcomm deadline. Only three hundred fifty eight days left until the Sigcomm deadline. Only three hundred fifty seven days left until the Sigcomm deadline. Only three hundred fifty six days left until the Sigcomm deadline. Only three hundred fifty five days left until the Sigcomm deadline.

Journal ArticleDOI
TL;DR: Departing from reactive MIPv6 standards, this study brings new insights towards seamless handoffs by investigating the influential aspects of non-determinism in the mobility pattern of the MN, and demonstrates quantitatively the performance benefit attained by maximising the MN's service utility through PoA selectivity embedded in the MN's IPv6 handoff decision.
Abstract: A major challenge in building ‘all-IP’ wireless access networks, besides the use of IP as the unifying layer, relates to the transparency of the IP handoff process as the mobile node (MN) transits across heterogeneous wireless network domains in IPv6 mobility management. Transparency in IP handoffs, however, must be effected in two separate contexts: IP addressing and (re-)connection latency. Excessive delays during an IP handoff degrade the seamlessness of IP transmission between the MN and its peers. Motivated by experimental results over heterogeneous wireless networks, we discuss why dynamic establishment of IP context-state can help address these limitations that seem inherent in heterogeneous environments. To this end, we provide an in-depth evaluation of Proactive Mobile IPv6 by means of simulations. Our study contrasts the efficiency of proactive context-state establishment, between candidate points of attachment (PoAs), against reactive MIPv6 standard practices in terms of handoff delay, jitter and associated packet loss. Departing from reactive MIPv6 standards, this study brings new insights towards seamless handoffs by investigating the influential aspects of non-determinism in the mobility pattern of the MN. In addition, it demonstrates quantitatively the performance benefit attained by maximising the MN's service utility through PoA selectivity embedded in the MN's IPv6 handoff decision. Copyright © 2006 John Wiley & Sons, Ltd.


Proceedings ArticleDOI
04 Dec 2006
TL;DR: In network coding, a router in the network mixes information from different flows to potentially increase the network capacity.
Abstract: In network coding, a router in the network mixes information from different flows. In the seminal work by Ahlswede et al. [1], network coding is established as a technique to potentially increase the network capacity.

Proceedings ArticleDOI
21 Feb 2006
TL;DR: The results show that Vigilante can contain fast spreading worms that exploit unknown vulnerabilities without false positives, and can be used to protect software as it exists today in binary form.
Abstract: As we become increasingly dependent on computers connected to the Internet, we must protect them from worm attacks. Worms can gain complete control of millions of hosts in a few minutes, and they can use the infected hosts for malicious activities such as distributed denial of service attacks, relaying spam, corrupting data, and disclosing confidential information. Since worms spread too fast for humans to respond, systems that strive to contain worm epidemics must be completely automatic. We propose Vigilante, a new end-to-end architecture to contain worms automatically that addresses the limitations of network-centric systems. Vigilante relies on collaborative worm detection at end hosts, but does not require hosts to trust each other. In Vigilante, hosts run instrumented software to detect worms. We introduce dynamic dataflow analysis, a broad-coverage detection algorithm, and we show how to integrate other detection mechanisms into the Vigilante architecture. Upon worm detection, hosts generate self-certifying alerts (SCAs), a new type of security alert that can be inexpensively verified by any vulnerable host. SCAs are then broadcast over a resilient overlay network that can propagate alerts with high probability, even when under active attack. Finally, hosts receiving an SCA generate protective filters with dynamic data and control flow analysis of the vulnerable software. Our results show that Vigilante can contain fast spreading worms that exploit unknown vulnerabilities without false positives. Vigilante does not require any changes to hardware, compilers, operating systems or to the source code of vulnerable programs, and therefore can be used to protect software as it exists today in binary form.
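A minimal sketch of the dynamic dataflow idea described above (a toy Python illustration, far removed from Vigilante's binary-level instrumentation): data read from the network is marked tainted, taint propagates through computation, and an alert is raised if tainted data reaches a control-transfer sink.

```python
class Tainted:
    """Wraps a value that originated from the network."""
    def __init__(self, value):
        self.value = value

def combine(a, b):
    """Dataflow rule: a result derived from tainted input is itself tainted."""
    va = a.value if isinstance(a, Tainted) else a
    vb = b.value if isinstance(b, Tainted) else b
    out = va + vb
    return Tainted(out) if isinstance(a, Tainted) or isinstance(b, Tainted) else out

def use_as_jump_target(addr):
    """Sink check: control flow derived from network data signals an exploit."""
    if isinstance(addr, Tainted):
        raise RuntimeError("worm detected: jump target derived from network input")

payload = Tainted(b"\xde\xad\xbe\xef")        # bytes received on a socket
try:
    use_as_jump_target(combine(b"", payload)) # e.g. an overwritten return address
except RuntimeError as alert:
    print(alert)                              # here a self-certifying alert would be generated
```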

Patent
02 Feb 2006
TL;DR: A self-authenticating alarm is used, so that its reliability can be independently verified by the computing system, and the alarm may include information certifying that a given program has a vulnerability.
Abstract: PROBLEM TO BE SOLVED: To detect and warn of the spread of a worm inside a network computing system and/or to reduce the spread of the worm. SOLUTION: This containment system can be provided with a step for making an alarm as a basis for safely sharing knowledge about a detected worm and/or a step for transmitting the alarm. The alarm may include information certifying that a given program has a vulnerability. A self-authenticating alarm is used, so that its reliability can be independently verified by the computing system. COPYRIGHT: (C)2006, JPO&NCIPI