Author

Andrew Twigg

Other affiliations: University of Oxford
Bio: Andrew Twigg is an academic researcher from University of Cambridge. The author has contributed to research in topics: Distributed algorithm & Trust management (information system). The author has an h-index of 13, co-authored 30 publications receiving 1173 citations. Previous affiliations of Andrew Twigg include University of Oxford.

Papers
Journal ArticleDOI
TL;DR: The SECURE project investigates the design of security mechanisms for pervasive computing based on trust, and addresses how entities in unfamiliar pervasive computing environments can overcome initial suspicion to provide secure collaboration.
Abstract: The SECURE project investigates the design of security mechanisms for pervasive computing based on trust. It addresses how entities in unfamiliar pervasive computing environments can overcome initial suspicion to provide secure collaboration.

381 citations

Proceedings ArticleDOI
02 Jun 2008
TL;DR: It is proved that the so-called 'random peer, latest useful chunk' mechanism can achieve dissemination at an optimal rate and within an optimal delay, up to an additive constant term, suggesting that epidemic live streaming algorithms can achieve near-unbeatable rates and delays.
Abstract: Several peer-to-peer systems for live streaming have been recently deployed (e.g. CoolStreaming, PPLive, SopCast). These all rely on distributed, epidemic-style dissemination mechanisms. Despite their popularity, the fundamental performance trade-offs of such mechanisms are still poorly understood. In this paper we propose several results that contribute to the understanding of such trade-offs. Specifically, we prove that the so-called 'random peer, latest useful chunk' mechanism can achieve dissemination at an optimal rate and within an optimal delay, up to an additive constant term. This qualitative result suggests that epidemic live streaming algorithms can achieve near-unbeatable rates and delays. Using mean-field approximations, we also derive recursive formulas for the diffusion function of two schemes referred to as 'latest blind chunk, random peer' and 'latest blind chunk, random useful peer'. Finally, we provide simulation results that validate the above theoretical results and allow us to compare the performance of various practically interesting diffusion schemes in terms of delay, rate, and control overhead. In particular, we identify several peer/chunk selection algorithms that achieve near-optimal performance trade-offs. Moreover, we show that the control overhead needed to implement these algorithms may be reduced by restricting the neighborhood of each peer without substantial performance degradation.
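
To make these chunk/peer selection policies concrete, the following is a minimal sketch of one push step under a 'random peer, latest useful chunk' rule, assuming chunk ids are increasing integers so that 'latest' simply means the largest id; it illustrates the flavour of the mechanism, not the paper's exact protocol or its analysis.

```python
import random

def push_latest_useful(have, neighbors, holdings):
    """One push step of a 'random peer, latest useful chunk' policy.

    have      -- set of chunk ids held by the sending peer
    neighbors -- list of neighbor peer ids
    holdings  -- dict mapping peer id -> set of chunk ids that peer holds
    Returns (peer, chunk) describing the push, or None if the chosen
    peer already has everything we hold.
    """
    peer = random.choice(neighbors)        # "random peer"
    useful = have - holdings[peer]         # chunks the target is still missing
    if not useful:
        return None
    chunk = max(useful)                    # "latest useful": highest chunk id
    holdings[peer].add(chunk)              # deliver it
    return peer, chunk

# Tiny usage example with made-up peers and chunk ids.
holdings = {"p1": {1, 2}, "p2": {1}}
print(push_latest_useful({1, 2, 3}, ["p1", "p2"], holdings))
```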

239 citations

Proceedings ArticleDOI
01 May 2007
TL;DR: The first proof that whenever demand λ + ε is feasible for ε > 0, a simple local-control algorithm is stable under demand λ; a famous theorem of Edmonds follows as a corollary.
Abstract: We consider the problem of broadcasting a live stream of data in an unstructured network. The broadcasting problem has been studied extensively for edge-capacitated networks. We give the first proof that whenever demand λ + ε is feasible for ε > 0, a simple local-control algorithm is stable under demand λ, and we obtain a famous theorem of Edmonds as a corollary. We then study the node-capacitated case and show a similar optimality result for the complete graph. We study through simulation the delay that users must wait in order to play back a video stream with a small number of skipped packets, and discuss the suitability of our algorithms for live video streaming.
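
The Edmonds-type bound referred to above says, in the form relevant here, that the best achievable broadcast rate from a source s in an edge-capacitated network is the minimum over all other nodes t of the s-to-t min-cut. The sketch below computes that bound on a made-up graph, using networkx as an assumed dependency; the paper's contribution is showing that a simple local-control algorithm attains essentially this rate without any such centralised computation.

```python
# Sketch of the broadcast-capacity bound behind Edmonds' theorem:
# the best achievable broadcast rate from source s equals the minimum,
# over every other node t, of the s -> t max-flow (min-cut).
# The graph and capacities below are made up for illustration.
import networkx as nx

G = nx.DiGraph()
for u, v, c in [("s", "a", 2.0), ("s", "b", 1.0), ("a", "b", 1.0),
                ("a", "t", 1.0), ("b", "t", 2.0)]:
    G.add_edge(u, v, capacity=c)

broadcast_rate = min(nx.maximum_flow_value(G, "s", t)
                     for t in G.nodes if t != "s")
print(broadcast_rate)  # minimum over receivers of the s -> t min-cut
```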

192 citations

Book ChapterDOI
22 Feb 2007
TL;DR: Graphs of tree width k are given a routing scheme using routing tables of size O(k² log² n), and m-clique width is introduced, generalizing clique width, to show that graphs of m-clique width k also have a routing scheme using tables of size O(k² log² n).
Abstract: We study labelling schemes for X-constrained path problems. Given a graph (V,E) and X ⊆ V, a path is X-constrained if all intermediate vertices avoid X. We study the problem of assigning labels J(x) to vertices so that given {J(x) : x ∈ X} for any X ⊆ V, we can route on the shortest X-constrained path between x, y ∈ X. This problem is motivated by Internet routing, where the presence of routing policies means that shortest-path routing is not appropriate. For graphs of tree width k, we give a routing scheme using routing tables of size O(k² log² n). We introduce m-clique width, generalizing clique width, to show that graphs of m-clique width k also have a routing scheme using size O(k² log² n) tables.
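
The object being routed on is easy to state in code: a shortest s-t path whose intermediate vertices avoid X, while the endpoints themselves may lie in X. The sketch below computes such a path centrally with Dijkstra's algorithm; it only illustrates the definition, the paper's actual contribution being a compact labelling scheme that answers these queries from the labels of X alone.

```python
import heapq

def shortest_x_constrained_path(adj, X, s, t):
    """Shortest s-t path whose intermediate vertices avoid X.

    adj is a dict: vertex -> dict of neighbour -> edge weight. The endpoints
    s and t may themselves lie in X; every other vertex on the path must not.
    Centralised reference computation only -- it does not reflect the
    labelling scheme itself.
    """
    banned = set(X) - {s, t}              # may not appear as intermediates
    dist, prev = {s: 0.0}, {}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist[u]:
            continue                       # stale heap entry
        for v, w in adj.get(u, {}).items():
            if v in banned:
                continue                   # would be a forbidden intermediate
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if t not in dist:
        return None                        # no X-constrained path exists
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]
```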

54 citations

Proceedings Article
Andrew Twigg, Andrew Byde, Grzegorz Miłoś, Tim Moreton, John Wilkes, Tom Wilkie
14 Jun 2011
TL;DR: This work describes the 'stratified B-tree', which is the first versioned dictionary offering fast updates and an optimal tradeoff between space, query and update costs.
Abstract: External-memory versioned dictionaries are fundamental to file systems, databases and many other algorithms. The ubiquitous data structure is the copy-on-write (CoW) B-tree. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. We describe the 'stratified B-tree', which is the first versioned dictionary offering fast updates and an optimal tradeoff between space, query and update costs.
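
For readers unfamiliar with the setting, the sketch below shows the versioned-dictionary interface that structures such as the CoW B-tree and the stratified B-tree implement: any version can be updated, queried, and cloned into a new version. It is a toy in-memory illustration of the interface only, and says nothing about the external-memory space/query/update trade-offs the paper analyses.

```python
class ToyVersionedDict:
    """Toy in-memory versioned dictionary: illustrative interface only."""

    def __init__(self):
        self._versions = {0: {}}       # version id -> key/value mapping
        self._next = 1

    def update(self, version, key, value):
        self._versions[version][key] = value

    def query(self, version, key):
        return self._versions[version].get(key)

    def clone(self, version):
        """Create a new version derived from an existing one (copy-on-write
        in spirit; here a plain dictionary copy for clarity)."""
        new = self._next
        self._next += 1
        self._versions[new] = dict(self._versions[version])
        return new

# Usage: snapshot a version, then diverge from it.
vd = ToyVersionedDict()
vd.update(0, "k", "v0")
snap = vd.clone(0)                     # snapshot of version 0
vd.update(0, "k", "v1")
print(vd.query(snap, "k"), vd.query(0, "k"))   # -> v0 v1
```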

53 citations


Cited by
Journal ArticleDOI
01 Mar 2007
TL;DR: Trust and reputation systems represent a significant trend in decision support for Internet-mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and to use the aggregated ratings about a given party to derive a trust or reputation score.
Abstract: Trust and reputation systems represent a significant trend in decision support for Internet-mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyse the current trends and developments in this area, and to propose a research agenda for trust and reputation systems.
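
The aggregation step at the heart of such systems can be sketched in a few lines: collect the ratings known about a party and combine them into a score, typically discounting older ratings. The function below is one simple illustrative rule (geometric discounting of ratings in [0, 1]), not the formula of any particular system covered by the survey.

```python
def reputation_score(ratings, decay=0.9):
    """Combine the ratings known about one party into a single score.

    ratings: list of (age, value) pairs, where value is in [0, 1] and age
    counts how many rounds ago the rating was given. Older ratings are
    geometrically discounted by `decay`. Returns a neutral 0.5 when there
    are no ratings at all.
    """
    num = sum(decay ** age * value for age, value in ratings)
    den = sum(decay ** age for age, _ in ratings)
    return num / den if den else 0.5

# Two recent positive ratings outweigh one old negative one.
print(reputation_score([(0, 1.0), (1, 1.0), (5, 0.0)]))
```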

3,493 citations

Book
02 Jan 1991

1,377 citations

Journal ArticleDOI
TL;DR: The current state of autonomic communications research is surveyed and significant emerging trends and techniques are identified.
Abstract: Autonomic communications seek to improve the ability of networks and services to cope with unpredicted change, including changes in topology, load, task, the physical and logical characteristics of the networks that can be accessed, and so forth. Broad-ranging autonomic solutions require designers to account for a range of end-to-end issues affecting programming models, network and contextual modeling and reasoning, decentralised algorithms, and trust acquisition and maintenance; these are issues whose solutions may draw on approaches and results from a surprisingly broad range of disciplines. We survey the current state of autonomic communications research and identify significant emerging trends and techniques.

690 citations

Proceedings ArticleDOI
03 Nov 2013
TL;DR: X-Stream is novel in using an edge-centric rather than a vertex-centric implementation of this model and in streaming completely unordered edge lists rather than performing random access; it competes favorably with existing systems for graph processing.
Abstract: X-Stream is a system for processing both in-memory and out-of-core graphs on a single shared-memory machine. While retaining the scatter-gather programming model with state stored in the vertices, X-Stream is novel in (i) using an edge-centric rather than a vertex-centric implementation of this model, and (ii) streaming completely unordered edge lists rather than performing random access. This design is motivated by the fact that sequential bandwidth for all storage media (main memory, SSD, and magnetic disk) is substantially larger than random access bandwidth. We demonstrate that a large number of graph algorithms can be expressed using the edge-centric scatter-gather model. The resulting implementations scale well in terms of number of cores, in terms of number of I/O devices, and across different storage media. X-Stream competes favorably with existing systems for graph processing. Besides sequential access, we identify the fact that X-Stream does not need to sort edge lists during preprocessing as one of the main contributors to better performance.
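
The edge-centric scatter-gather loop can be sketched compactly: scatter streams over the (unordered) edge list and emits updates keyed by destination vertex, and gather applies those updates to vertex state. The sketch below runs a label-propagation connected-components computation in this style; it illustrates the programming model only, not X-Stream's streaming partitions or out-of-core I/O machinery.

```python
def edge_centric_label_propagation(num_vertices, edges, max_iters=100):
    """Edge-centric scatter-gather sketch: connected-component labels
    computed by streaming the unordered edge list each iteration."""
    labels = list(range(num_vertices))          # vertex state
    for _ in range(max_iters):
        # Scatter: stream every edge, emit candidate updates per endpoint.
        updates = {}
        for u, v in edges:
            if labels[u] < labels[v]:
                updates[v] = min(updates.get(v, labels[v]), labels[u])
            elif labels[v] < labels[u]:
                updates[u] = min(updates.get(u, labels[u]), labels[v])
        if not updates:
            break                               # converged
        # Gather: apply the updates to vertex state.
        for vertex, lab in updates.items():
            labels[vertex] = lab
    return labels

print(edge_centric_label_propagation(5, [(0, 1), (1, 2), (3, 4)]))
# -> [0, 0, 0, 3, 3]
```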

640 citations

01 Jan 2004
TL;DR: This paper proposes a fully distributed reputation system that can cope with false disseminated information, and that enables redemption and prevents the sudden exploitation of good reputation built over time by introducing re-evaluation and reputation fading.
Abstract: Reputation systems can be tricked by the spread of false reputation ratings, be it false accusations or false praise. Simple solutions such as exclusively relying on one’s own direct observations have drawbacks, as they do not make use of all the information available. We propose a fully distributed reputation system that can cope with false disseminated information. In our approach, everyone maintains a reputation rating and a trust rating about everyone else that they care about. From time to time, first-hand reputation information is exchanged with others; using a modified Bayesian approach we designed and present in this paper, only second-hand reputation information that is not incompatible with the current reputation rating is accepted. Thus, reputation ratings are slightly modified by accepted information. Trust ratings are updated based on the compatibility of second-hand reputation information with prior reputation ratings. Data is entirely distributed: someone’s reputation and trust are the collection of ratings maintained by others. We enable redemption and prevent the sudden exploitation of good reputation built over time by introducing re-evaluation and reputation fading.
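
The ingredients described above (first-hand observations, a compatibility test for second-hand reports, and fading of old evidence) can be illustrated with a Beta-style rating record. The class below is a hypothetical sketch in that spirit; the paper's modified Bayesian scheme and its separate trust ratings differ in their details.

```python
class BetaReputation:
    """Illustrative Beta-style reputation record with evidence fading."""

    def __init__(self, fading=0.95, deviation_threshold=0.3):
        self.alpha, self.beta = 1.0, 1.0          # uniform prior
        self.fading = fading                      # how fast old evidence fades
        self.threshold = deviation_threshold      # compatibility test bound

    def expected(self):
        return self.alpha / (self.alpha + self.beta)

    def observe(self, positive):
        """First-hand observation; older evidence fades geometrically."""
        self.alpha *= self.fading
        self.beta *= self.fading
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def merge_report(self, reported_expectation, weight=0.5):
        """Accept a second-hand report only if it is compatible with the
        current rating; otherwise ignore it."""
        if abs(reported_expectation - self.expected()) > self.threshold:
            return False                          # incompatible -> rejected
        self.alpha += weight * reported_expectation
        self.beta += weight * (1.0 - reported_expectation)
        return True

r = BetaReputation()
for outcome in (True, True, False, True):
    r.observe(outcome)
print(round(r.expected(), 2), r.merge_report(0.9))
```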

555 citations