
Showing papers by "Jon Crowcroft published in 2004"


Journal ArticleDOI
01 Jan 2004
TL;DR: A system for automated generation of attack signatures for network intrusion detection systems that successfully created precise traffic signatures that otherwise would have required the skills and time of a security officer to inspect the traffic manually.
Abstract: This paper describes a system for automated generation of attack signatures for network intrusion detection systems. Our system applies pattern-matching techniques and protocol conformance checks on multiple levels in the protocol hierarchy to network traffic captured at a honeypot system. We present results of running the system on an unprotected cable modem connection for 24 hours. The system successfully created precise traffic signatures that otherwise would have required the skills and time of a security officer to inspect the traffic manually.
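The pattern-matching step described in the abstract can be illustrated with a toy signature extractor. This is a sketch under simplifying assumptions (byte-level longest-common-substring only, no protocol conformance checks) and not the paper's actual algorithm; the sample payloads are hypothetical:

```python
from difflib import SequenceMatcher

def common_substring(a: bytes, b: bytes) -> bytes:
    """Longest common substring of two payloads."""
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return a[m.a:m.a + m.size]

def extract_signature(payloads: list) -> bytes:
    """Fold the longest-common-substring over all captured payloads,
    keeping the invariant byte pattern shared by every probe."""
    sig = payloads[0]
    for p in payloads[1:]:
        sig = common_substring(sig, p)
    return sig

# Payloads as might be captured from repeated probes of one exploit
captured = [
    b"GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir HTTP/1.0",
    b"GET /msadc/..%c0%af../winnt/system32/cmd.exe?/c+tftp HTTP/1.0",
]
print(extract_signature(captured))
```

The extracted signature is the invariant exploit substring, which a signature-based NIDS could then match against live traffic.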

708 citations


Journal ArticleDOI
01 Aug 2004
TL;DR: The model incorporates incentives for users to act as transit nodes on multi-hop paths and to be rewarded with their own ability to send traffic and illustrates the way in which network resources are allocated to users according to their geographical position.
Abstract: This paper explores a model for the operation of an ad hoc mobile network. The model incorporates incentives for users to act as transit nodes on multi-hop paths and to be rewarded with their own ability to send traffic. The paper explores consequences of the model by means of fluid-level simulations of a network and illustrates the way in which network resources are allocated to users according to their geographical position.
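The incentive mechanism above can be sketched as a toy credit counter, where relaying earns credit that is later spent to originate traffic. This is an illustrative simplification, not the paper's fluid-level model; the `Node` class, the one-credit-per-relay pricing, and the single initial credit are assumptions:

```python
class Node:
    """Ad hoc node with a credit balance (hypothetical bookkeeping)."""
    def __init__(self, name, credits=1):
        self.name = name
        self.credits = credits

def send(path):
    """Originate a packet along path = [sender, relay..., destination].
    The sender pays one credit per relay; each relay earns one credit.
    Returns False (packet refused) if the sender cannot afford it."""
    sender, relays = path[0], path[1:-1]
    if sender.credits < len(relays):
        return False
    sender.credits -= len(relays)
    for r in relays:
        r.credits += 1
    return True

a, b, c = Node("a"), Node("b"), Node("c")
send([a, b, c])   # a spends its single credit; b, the relay, earns one
print(a.credits, b.credits, c.credits)
```

After one transmission the sender is broke and must relay for others before it can send again, which is exactly the coupling between forwarding effort and sending ability the model exploits.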

301 citations


Proceedings ArticleDOI
14 Mar 2004
TL;DR: An experimental study of inter-network mobility between GPRS Cellular and 802.11b-based WLAN hot-spots is presented, and a number of network-layer handover optimization techniques are proposed that improve performance during vertical handovers.
Abstract: Interworking heterogeneous wireless access technologies is an important step towards building the next-generation, all-IP wireless access infrastructure. We present an experimental study of inter-network mobility between GPRS cellular and 802.11b-based WLAN hot-spots, and analyse in depth its impact on active TCP flows. Our experiments were conducted over a loosely-coupled, Mobile IPv6-based, GPRS-WLAN experimental testbed. Detailed analysis of packet traces from inter-network (vertical) handovers reveals a number of performance bottlenecks. In particular, the disparity in the round-trip time and bandwidth offered by GPRS and WLAN networks, and the presence of deep buffers in GPRS, can aggravate performance during vertical handovers. This paper therefore summarizes practical experiences and challenges of providing transparent mobility in heterogeneous environments. Based on these observations, we propose a number of network-layer handover optimization techniques, e.g. fast router advertisements (RA), RA caching, binding update (BU) simulcasting and layer-3 soft handovers, that improve performance during vertical handovers. The paper concludes with our experiences of migrating TCP connections, thereby also improving application (e.g. FTP, Web) performance in this environment.

163 citations


Proceedings ArticleDOI
03 Sep 2004
TL;DR: It is argued that there may not be a need for incentive systems at all, especially in the early stages of adoption, where excessive complexity can only hurt the deployment of ad hoc networks.
Abstract: Without sufficient nodes cooperating to provide relaying functions, a mobile ad hoc network cannot function properly. Consequently various proposals have been made which provide incentives for individual users of an ad hoc mobile network to cooperate with each other. In this paper we examine this problem and analyse the drawbacks of currently proposed incentive systems. We then argue that there may not be a need for incentive systems at all, especially in the early stages of adoption, where excessive complexity can only hurt the deployment of ad hoc networks. We look at the needs of different customer segments at each stage of the technological adoption cycle and propose that incentive systems should not be used until ad hoc networks enter mainstream markets. Even then, incentive systems should be tailored to the needs of each individual application rather than adopting a generalised approach that may be flawed or too technically demanding to be implemented in reality.

75 citations


DOI
06 Jul 2004
TL;DR: The paper develops an architecture and gives reasons why currently it is practicable to offer guaranteed QoS only to consumers sharing Internet service providers (ISPs) directly with the service provider.
Abstract: The goal of monitoring contractual service level agreements (SLAs) is to measure the performance of a service, to evaluate whether its provider complies with the quality of service (QoS) that the consumer expects. The aim of this paper is to bring to the system designer's attention the fundamental issues that monitoring of contractual SLAs involves: SLA specification, separation of the computation and communication infrastructure of the provider, service points of presence, metric collection approaches, measurement service, and evaluation and violation detection service. The paper develops an architecture and gives reasons why currently it is practicable to offer guaranteed QoS only to consumers sharing Internet service providers (ISPs) directly with the service provider.

68 citations


Proceedings Article
01 Aug 2004
TL;DR: Vigilante, a new host-centric approach for automatic worm containment, is proposed; preliminary results show that it can effectively contain fast-spreading worms that exploit unknown vulnerabilities.
Abstract: Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work has proposed a network-centric approach to automate worm containment: network traffic is analyzed to derive a packet classifier that blocks (or rate-limits) worm propagation. This approach has fundamental limitations because the analysis has no information about the application vulnerabilities exploited by worms. This paper proposes Vigilante, a new host-centric approach for automatic worm containment that addresses these limitations. Vigilante relies on collaborative worm detection at end hosts in the Internet but does not require hosts to trust each other. Hosts detect worms by analysing attempts to infect applications and broadcast self-certifying alerts (SCAs) when they detect a worm. SCAs are automatically generated, machine-verifiable proofs of vulnerability; they can be independently and inexpensively verified by any host. Hosts can use SCAs to generate filters or patches that prevent infection. We present preliminary results showing that Vigilante can effectively contain fast-spreading worms that exploit unknown vulnerabilities.
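The verify-then-filter workflow for self-certifying alerts might be sketched as follows. The `OverflowError`-as-infection model, the dictionary alert format, and the length-based filter are illustrative assumptions, not Vigilante's actual proof format:

```python
BUF_SIZE = 64

def vulnerable_handler(msg: bytes):
    """Stand-in for a vulnerable service: 'overflows' when the payload
    exceeds its fixed buffer (modelled here as an exception)."""
    if len(msg) > BUF_SIZE:
        raise OverflowError("buffer overrun")  # infection point

def verify_sca(sca: dict) -> bool:
    """Replay the alert's sample input against the named handler in a
    sandbox; the alert is valid only if the infection actually occurs,
    so hosts need not trust whoever broadcast it."""
    try:
        sca["handler"](sca["sample_input"])
    except OverflowError:
        return True        # proof checks out
    return False           # bogus alert, discard

def filter_from_sca(sca: dict):
    """Derive a drop filter from a verified alert."""
    n = len(sca["sample_input"])
    return lambda msg: len(msg) >= n   # drop inputs at least as long

sca = {"handler": vulnerable_handler, "sample_input": b"A" * 100}
assert verify_sca(sca)
drop = filter_from_sca(sca)
```

The key property the sketch preserves is that verification is cheap and local: any host can replay the proof itself rather than trusting the alert's sender.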

47 citations


Proceedings ArticleDOI
20 Sep 2004
TL;DR: This paper presents the initial work on a novel paradigm for information security and privacy protection in the ubiquitous world through sets of contextual attributes and mitigate the projected risks through proactive and reactive data format transformations, subsetting and forced migrations while trying to maximize information availability.
Abstract: The vision of Ubiquitous Computing [22] creates a world in which information is omnipresent, migrating seamlessly through the environment to be accessible whenever and wherever needed. Such a vision poses substantial challenges to information security and privacy protection. Unlike in traditional, static execution environments, information in the Ubiquitous world is exposed, throughout its lifetime, to constantly varying security and privacy threats caused by the inherent dynamicity and unpredictability of the new computing environment and its mobility. Existing data protection mechanisms, built for non- or predictably slowly-changing environments, are unable to strike the balance in the information availability vs. security and privacy threat trade-off in the Ubiquitous world, thus hindering the feasibility of the overall vision. In this paper, we present our initial work on a novel paradigm for information security and privacy protection in the ubiquitous world. We model security and privacy threats through sets of contextual attributes and mitigate the projected risks through proactive and reactive data format transformations, subsetting and forced migrations, while trying to maximize information availability. We also try to make the approach flexible, scalable and infrastructure-independent, as required by the very vision of Ubiquitous Computing.

24 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This chapter discusses the working of scalable, self-organizing distributed systems that are often referred to as peer-to-peer (P2P) systems, which push the limits of scalability and robustness, but tend to focus on more homogeneous resources and slower network connections than do contemporary Grids.
Abstract: This chapter discusses the working of scalable, self-organizing distributed systems that are often referred to as peer-to-peer (P2P) systems. P2P systems push the limits of scalability and robustness, but tend to focus on more homogeneous resources and slower network connections than do contemporary Grids. P2P systems are a potential source of resources for Grid applications, and peer-to-peer research can be a source of scalable and robust algorithms that can be applied to Grid services. P2P systems are Internet applications that harness the resources of a large number of autonomous participants. P2P and Grid computing are both concerned with enabling resource sharing within distributed communities. However, different base assumptions have led to distinct requirements and technical directions. P2P systems have focused on resource sharing in environments characterized by potentially millions of users, most with homogeneous desktop systems and low-bandwidth, intermittent connections to the Internet. P2P computing has had a dramatic effect on mainstream computing, even blurring the distinctions among computer science, engineering, and politics. An unfortunate side effect is that due consideration often has not been given to the classic research in distributed systems.

22 citations


Proceedings ArticleDOI
25 Aug 2004
TL;DR: This short paper proposes Highways to create clusters of nodes using a novel "location-aware" method, based on a scalable and distributed network coordinate system that helps to build overlay routing tables to achieve better proximity accuracy, thus, providing a mechanism to boost performance in application overlay routing.
Abstract: The "location-aware" construction of overlay networks requires the identification of nodes that are efficient with respect to network delay and available bandwidth. In this short paper, we propose Highways, which creates clusters of nodes using a novel "location-aware" method based on a scalable and distributed network coordinate system. This helps to build overlay routing tables with better proximity accuracy, thus providing a mechanism to boost performance in application overlay routing.
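A minimal, centralized stand-in for the kind of network coordinate system this abstract builds on might look like the following. The spring-relaxation embedding and greedy proximity clustering are illustrative assumptions (closer in spirit to systems like Vivaldi), not the Highways algorithm itself:

```python
import random

def embed(delay, dim=2, iters=2000, step=0.01):
    """Spring-relaxation embedding of a pairwise delay matrix into
    Euclidean coordinates: repeatedly pick a node pair and nudge one
    endpoint so the embedded distance approaches the measured delay."""
    n = len(delay)
    random.seed(1)
    pos = [[random.random() for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        i, j = random.randrange(n), random.randrange(n)
        if i == j:
            continue
        d = [pos[i][k] - pos[j][k] for k in range(dim)]
        dist = max(sum(x * x for x in d) ** 0.5, 1e-9)
        err = dist - delay[i][j]          # positive means too far apart
        for k in range(dim):
            pos[i][k] -= step * err * d[k] / dist
    return pos

def clusters(pos, radius):
    """Greedy proximity clustering on the embedded coordinates."""
    groups = []
    for i, p in enumerate(pos):
        for g in groups:
            q = pos[g[0]]
            if sum((p[k] - q[k]) ** 2 for k in range(len(p))) ** 0.5 < radius:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two well-separated sites: nodes 0-1 are close, 2-3 are close
delay = [[0, 1, 10, 10],
         [1, 0, 10, 10],
         [10, 10, 0, 1],
         [10, 10, 1, 0]]
pos = embed(delay)
print(clusters(pos, radius=4))
```

Once nodes carry coordinates, cluster membership and nearest-neighbour choices can be computed locally without fresh delay probes, which is the proximity-accuracy benefit the abstract refers to.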

22 citations


Journal ArticleDOI
TL;DR: It is shown how AIMD can be parametrized to scale user allocations according to a given set of weights; the effects of different parameter choices on the performance and oscillatory behaviour of the system are also analysed.
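The weighted-AIMD idea can be sketched with a synchronous toy simulation: each flow's additive increase is scaled by its weight, and all flows back off together when aggregate demand exceeds capacity. The parameter names (`alpha`, `beta`, `weights`) and the synchronized-loss assumption are simplifications, not the paper's exact model:

```python
def weighted_aimd(weights, capacity=100.0, alpha=1.0, beta=0.5, steps=10000):
    """Synchronous weighted AIMD: flow i additively increases its window
    by alpha * weights[i] per round and all flows multiplicatively
    decrease by beta whenever aggregate demand exceeds capacity.
    Returns the time-averaged window of each flow."""
    w = [1.0 for _ in weights]
    totals = [0.0 for _ in weights]
    for _ in range(steps):
        for i, phi in enumerate(weights):
            w[i] += alpha * phi
        if sum(w) > capacity:           # shared congestion event
            w = [beta * x for x in w]
        for i, x in enumerate(w):
            totals[i] += x
    return [t / steps for t in totals]

avg = weighted_aimd([1.0, 2.0])
print(avg[1] / avg[0])
```

With a common decrease factor, the quantity `w2 - 2*w1` is preserved by the additive step and shrunk by every decrease, so the long-run allocation ratio converges to the ratio of the weights.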

10 citations


Proceedings ArticleDOI
03 Nov 2004
TL;DR: This work proposes randomly distributing a small number of allwireless dual radio virtual sinks throughout the sensor field that are capable of offering the existing low-power sensor network enhanced congestion avoidance support when persistent congestion is detected.
Abstract: Wireless sensor networks are emerging technologies that offer low-cost, distributed monitoring solutions for a wide variety of applications and systems. One application driving the development of sensor networks is the reporting of conditions within a region of interest where the environment can change abruptly due to a sudden event, such as enemy and target movements on the battlefield, biochemical attacks, fires, etc. Our work focuses on sensor systems that need to efficiently deliver information during and immediately following an event that triggers such an abrupt change. Congestion control and load balancing are critical issues in such sensor networks, where the sensor field can move instantaneously from almost zero load to overload conditions. It is during these impulse or overload periods that the events in transit are of most importance and most likely to be lost due to congestion. Existing congestion control algorithms [1] [2] [3] are limited under these conditions because they rely on rate control or packet drop mechanisms at source or intermediate sensor nodes that can significantly impact the application's fidelity, as measured at one or more physical sinks. We propose randomly distributing a small number of all-wireless, dual-radio virtual sinks throughout the sensor field that are capable of offering the existing low-power sensor network enhanced congestion avoidance support when persistent congestion is detected. In essence, virtual sinks operate as safety valves in the sensor field, selectively siphoning off high-load traffic in order to maintain the fidelity of the application signal at the physical sink and to alleviate the funneling effect, as discussed in Section 2.1. We call these specialized nodes virtual sinks to distinguish them from physical sinks, which typically have a wireline connection.
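The siphoning decision might be sketched as a per-node routing rule: forward on the low-power path by default, but divert to a virtual sink's second radio once the local queue signals persistent congestion. The `SensorNode` class and the queue-length threshold are illustrative assumptions, not the paper's protocol:

```python
CONGESTION_THRESHOLD = 8   # assumed queue depth signalling congestion

class SensorNode:
    def __init__(self, name, virtual_sink_in_range=False):
        self.name = name
        self.queue = []
        self.virtual_sink = virtual_sink_in_range

    def enqueue(self, pkt):
        self.queue.append(pkt)

    def route_next(self):
        """Pop one packet; siphon it to the virtual sink's second radio
        when the node is congested and a virtual sink is in range,
        otherwise keep it on the low-power path to the physical sink."""
        pkt = self.queue.pop(0)
        if self.virtual_sink and len(self.queue) >= CONGESTION_THRESHOLD:
            return ("virtual", pkt)
        return ("physical", pkt)

node = SensorNode("n1", virtual_sink_in_range=True)
for i in range(20):        # impulse: an event burst overloads the node
    node.enqueue(i)
print(node.route_next())
```

During the burst the node siphons traffic to the virtual sink, then reverts to the physical-sink path once its backlog drains below the threshold, which mirrors the "safety valve" behaviour described above.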

Journal ArticleDOI
01 Jul 2004
TL;DR: The paper explores some of the trust questions that arise in this problem space and conjectures that the very structure of a peer organisation may have some hidden benefits for trust re-enforcement that have not been previously explored.
Abstract: This paper explores the extension of a model for the operation of an ad hoc mobile network to more general providerless networks, such as peer-to-peer systems. The model incorporates incentives for users to act as transit nodes on multi-hop paths and to be rewarded with their own ability to send traffic. The paper explores some of the trust questions that arise in this problem space and conjectures that the very structure of a peer organisation may have some hidden benefits for trust re-enforcement, that have not been previously explored (to our knowledge).

Book ChapterDOI
TL;DR: BarterRoam is proposed: a novel mobile and wireless roaming settlement model and clearance methodology, based on the concept of sharing and bartering excess capacities for usage in Visiting Wireless Internet Service Provider (WISP) coverage areas with the Home WISP, enabling multiple service providers to trade their excess capacities via barter trade.
Abstract: This paper describes BarterRoam: a novel mobile and wireless roaming settlement model and clearance methodology, based on the concept of sharing and bartering excess capacities for usage in Visiting Wireless Internet Service Provider (WISP) coverage areas with the Home WISP. The methodology is not limited to WISPs; it is applicable to virtual WISPs and any value-added services in mobile and wireless access environments. In Broadband Public Wireless Local Area Network (PWLAN) environments, every WISP provides its own coverage at various locations, or hotspots. The most desirable option for helping WISPs reduce the cost of providing wider coverage is for the Home and Visiting WISPs to collaborate on seamless customer access via bilateral or multilateral agreements and proxy RADIUS authentication [1]. This is termed a roaming agreement. Due to the large number of WISPs desiring to enter the market, bilateral and multilateral roaming agreements become complex and unmanageable. The traditional settlement model is usually based on the customer's usage plus a margin. In the broadband PWLAN environment, most WISPs and customers prefer flat-rated services so that they can budget expenses accordingly. The current settlement model cannot handle the preferred flat-rated settlement. Hence, a novel flat-rated settlement model and clearance methodology for wireless network environments is proposed, to enable multiple service providers to trade their excess capacities and to minimize cash outflow among the service providers via barter trade. We are unaware of comparable work in this area.
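The barter clearance step can be illustrated as bilateral netting of reciprocal usage: capacity consumed in each direction is offset, and only the residual would ever settle in cash. The data layout and provider names are hypothetical, and a real clearance house would also handle multilateral offsets:

```python
def bilateral_net(usage):
    """Offset reciprocal usage between each pair of providers (the
    barter step); return the residual balances that would still settle
    in cash. usage[a][b] = value of b's capacity consumed by a's
    roaming users."""
    providers = sorted(usage)
    residual = {}
    for i, a in enumerate(providers):
        for b in providers[i + 1:]:
            owed_by_a = usage[a].get(b, 0)
            owed_by_b = usage[b].get(a, 0)
            net = owed_by_a - owed_by_b   # barter cancels the overlap
            if net > 0:
                residual[(a, b)] = net    # a pays b the difference
            elif net < 0:
                residual[(b, a)] = -net
    return residual

usage = {
    "wisp_a": {"wisp_b": 120, "wisp_c": 30},
    "wisp_b": {"wisp_a": 100},
    "wisp_c": {"wisp_a": 30},
}
print(bilateral_net(usage))
```

In this example the wisp_a/wisp_c usage cancels completely in barter, so cash outflow shrinks to a single residual payment, which is the minimization the settlement model aims at.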

01 Jan 2004
TL;DR: Traditional computer applications expect a static execution environment, but this assumption is no longer realistic in the Ubiquitous world scenario, where the environment around a piece of information, contained on a device or within a communications channel, is frequently changing.
Abstract: Traditional computer applications expect a static execution environment. Such environments imply non- or slowly-evolving information security and privacy threat models. This assumption is no longer realistic in the Ubiquitous world scenario, where the environment around a piece of information, contained on a device or within a communications channel, is frequently changing. Thus, information in the Ubiquitous world is exposed to varying threat models throughout its lifetime. Users expect a high degree of information availability anytime and anywhere, as needed, leading to serious security and privacy risks and access control problems.