scispace - formally typeset

Showing papers by "Anja Feldmann published in 2004"


Proceedings ArticleDOI
30 Aug 2004
TL;DR: This paper presents a methodology for identifying the autonomous system (or systems) responsible when a routing change is observed and propagated by BGP, and finds that it can pinpoint the origin to either a single AS or a session between two ASes in most cases.
Abstract: This paper presents a methodology for identifying the autonomous system (or systems) responsible when a routing change is observed and propagated by BGP. The origin of such a routing instability is deduced by examining and correlating BGP updates for many prefixes gathered at many observation points. Although interpreting BGP updates can be perplexing, we find that we can pinpoint the origin to either a single AS or a session between two ASes in most cases. We verify our methodology in two phases. First, we perform simulations on an AS topology derived from actual BGP updates using routing policies that are compatible with inferred peering/customer/provider relationships. In these simulations, in which network and router behavior are "ideal", we inject inter-AS link failures and demonstrate that our methodology can effectively identify most origins of instability. We then develop several heuristics to cope with the limitations of the actual BGP update propagation process and monitoring infrastructure, and apply our methodology and evaluation techniques to actual BGP updates gathered at hundreds of observation points. This approach of relying on data from BGP simulations as well as from measurements enables us to evaluate the inference quality achieved by our approach under ideal situations and how it is correlated with the actual quality and the number of observation points.
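The core correlation idea — comparing the old and new AS paths of updates for many prefixes at many observation points, and voting for the AS link most often implicated — can be illustrated with a minimal sketch. The function names and the simple edge-voting heuristic below are illustrative only, not the paper's actual algorithm:

```python
from collections import Counter

def suspect_edges(old_path, new_path):
    """Candidate AS links that could explain a routing change:
    the edges present in one AS path but not the other."""
    old_edges = set(zip(old_path, old_path[1:]))
    new_edges = set(zip(new_path, new_path[1:]))
    return old_edges ^ new_edges  # symmetric difference

def pinpoint_origin(observations):
    """Correlate (old_path, new_path) pairs gathered at many
    observation points and return the most-implicated AS link."""
    votes = Counter()
    for old_path, new_path in observations:
        for edge in suspect_edges(old_path, new_path):
            votes[edge] += 1
    return votes.most_common(1)[0][0] if votes else None
```

With two vantage points that both reroute around a failed link between AS 2 and AS 5, the edge (2, 5) appears in every symmetric difference and wins the vote, mirroring how correlation across observers narrows the origin to a single AS or session.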

266 citations


Proceedings ArticleDOI
25 Oct 2004
TL;DR: This paper identifies and explores key factors with respect to resource management and efficient packet processing, highlighting their impact using a set of real-world traces to gauge the trade-offs of tuning a NIDS.
Abstract: In large-scale environments, network intrusion detection systems (NIDSs) face extreme challenges with respect to traffic volume, traffic diversity, and resource management. While crucial for acceptance and operational deployment, the research literature mainly omits such practical difficulties. In this paper, we offer an evaluation based on extensive operational experience. More specifically, we identify and explore key factors with respect to resource management and efficient packet processing and highlight their impact using a set of real-world traces. On the one hand, these insights help us gauge the trade-offs of tuning a NIDS. On the other hand, they motivate us to explore several novel ways of reducing resource requirements. These enable us to improve the state management considerably as well as balance the processing load dynamically. Overall this enables us to operate a NIDS successfully in our high-volume network environments.
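One concrete way to bound NIDS state, in the spirit of the improved state management the abstract mentions, is a connection table that evicts the least-recently-active flows once a memory budget is exceeded. This is a generic sketch under assumed names (`ConnTable`, `max_conns`), not the paper's mechanism:

```python
from collections import OrderedDict

class ConnTable:
    """Bounded connection-state table: when the budget (max_conns)
    is exceeded, evict the least-recently-active flows."""
    def __init__(self, max_conns):
        self.max_conns = max_conns
        self.conns = OrderedDict()  # flow_id -> per-flow state

    def update(self, flow_id, state):
        if flow_id in self.conns:
            self.conns.move_to_end(flow_id)  # mark as recently active
        self.conns[flow_id] = state
        while len(self.conns) > self.max_conns:
            self.conns.popitem(last=False)   # drop the stalest flow
```

The design choice here is LRU-style eviction: long-idle connections are the cheapest to forget, since active flows will re-establish state on their next packet.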

181 citations


Proceedings ArticleDOI
25 Oct 2004
TL;DR: This paper introduces a methodology for estimating interdomain Web traffic flows between all clients worldwide and the servers belonging to over one thousand content providers, using the server logs from a large Content Delivery Network to identify client downloads of content provider Web pages.
Abstract: This paper introduces a methodology for estimating interdomain Web traffic flows between all clients worldwide and the servers belonging to over one thousand content providers. The idea is to use the server logs from a large Content Delivery Network (CDN) to identify client downloads of content provider (i.e., publisher) Web pages. For each of these Web pages, a client typically downloads some objects from the content provider, some from the CDN, and perhaps some from third parties such as banner advertisement agencies. The sizes and sources of the non-CDN downloads associated with each CDN download are estimated separately by examining Web accesses in packet traces collected at several universities. The methodology produces a (time-varying) interdomain HTTP traffic demand matrix pairing several hundred thousand blocks of client IP addresses with over ten thousand individual Web servers. When combined with geographical databases and routing tables, the matrix can be used to provide (partial) answers to questions such as "How do Web access patterns vary by country?", "Which autonomous systems host the most Web content?", and "How stable are Web traffic flows over time?".
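The demand-matrix construction can be sketched as a simple aggregation: sum the CDN bytes per (client block, publisher server) pair, inflated by a factor accounting for the associated non-CDN downloads. The factor value and the record layout below are assumptions for illustration, not figures from the paper:

```python
from collections import defaultdict

# Hypothetical inflation factor: non-CDN bytes fetched per CDN byte,
# standing in for the estimate derived from university packet traces.
NON_CDN_FACTOR = 1.0

def demand_matrix(cdn_log, factor=NON_CDN_FACTOR):
    """Aggregate CDN log records (client_block, server, cdn_bytes)
    into an interdomain HTTP traffic demand matrix keyed by
    (client_block, publisher_server)."""
    matrix = defaultdict(float)
    for client_block, server, cdn_bytes in cdn_log:
        matrix[(client_block, server)] += cdn_bytes * (1 + factor)
    return dict(matrix)
```

Keying on (client IP block, server) is what makes the result a traffic demand matrix: rows can later be mapped to countries or autonomous systems via geographical databases and routing tables.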

56 citations


Book ChapterDOI
19 Apr 2004
TL;DR: This paper presents a methodology for studying the relationship between BGP pass-through times and a number of operationally important variables; the results suggest that while pass-through delays under normal conditions are rather small, under certain conditions they can be a major contributing factor to slow convergence.
Abstract: Fast routing convergence is a key requirement for services that rely on stringent QoS. Yet experience has shown that the standard inter-domain routing protocol, BGP4, at times takes more than one hour to converge. Previous work has focused on exploring whether this stems from protocol interactions, timers, etc. In comparison, only marginal attention has been paid to quantifying the impact of individual router delays on the overall delay. Salient factors, such as CPU load and the number of BGP peers, may help explain unusually high delays and, as a consequence, BGP convergence times. This paper presents a methodology for studying the relationship between BGP pass-through times and a number of operationally important variables, along with some initial results. Our results suggest that while pass-through delays under normal conditions are rather small, under certain conditions they can be a major contributing factor to slow convergence.
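Measuring a pass-through time reduces to pairing each update observed on a router's ingress side with the same update on its egress side and taking the timestamp difference. A minimal sketch, with assumed record shapes of (timestamp, update_id):

```python
def pass_through_delays(ingress, egress):
    """Match updates seen entering and leaving a router and compute
    per-update pass-through delays (egress time minus ingress time)."""
    egress_times = {uid: t for t, uid in egress}
    return [egress_times[uid] - t
            for t, uid in ingress
            if uid in egress_times]
```

The resulting per-update delays can then be correlated against variables such as CPU load or the number of BGP peers active at the time of each update.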

50 citations


01 Jan 2004
TL;DR: This paper asks to what extent the neighborhood selection process of a P2P protocol such as Gnutella respects the underlying Internet topology.

Abstract: In this paper we ask to what extent the neighborhood selection process of a P2P protocol such as Gnutella respects the underlying Internet topology.

26 citations


Proceedings ArticleDOI
25 Oct 2004
TL;DR: This work identifies a set of packet trace manipulation operations that enable a trace to be generated bottom-up, presents a framework within which these operations can be realized, and shows an example configuration for the authors' prototype.
Abstract: Evaluating network components such as network intrusion detection systems, firewalls, routers, or switches suffers from the lack of available network traffic traces that on the one hand are appropriate for a specific test environment but on the other hand have the same characteristics as actual traffic. Instead of just capturing traffic and replaying the trace, we identify a set of packet trace manipulation operations that enable us to generate a trace bottom-up: our trace primitives can be traces from different environments or artificially generated ones; our basic operations include merging of two traces, moving a flow across time, duplicating a flow, and stretching a flow's time-scale. After discussing the potential as well as the dangers of each operation with respect to analysis at different protocol layers, we present a framework within which these operations can be realized and show an example configuration for our prototype.
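The basic operations named in the abstract — merging traces, moving a flow across time, stretching a flow's time-scale — can be sketched over a toy packet representation of (timestamp, flow_id, length) tuples. This is a simplified model for illustration, not the authors' framework:

```python
def merge(trace_a, trace_b):
    """Merge two traces, keeping packets ordered by timestamp."""
    return sorted(trace_a + trace_b, key=lambda pkt: pkt[0])

def shift_flow(trace, flow_id, dt):
    """Move one flow across time by dt seconds."""
    return sorted(((t + dt if f == flow_id else t), f, n)
                  for t, f, n in trace)

def stretch_flow(trace, flow_id, factor):
    """Stretch a flow's time-scale relative to its first packet,
    leaving all other flows untouched."""
    t0 = min(t for t, f, _ in trace if f == flow_id)
    return sorted(((t0 + (t - t0) * factor if f == flow_id else t), f, n)
                  for t, f, n in trace)
```

Even this toy version shows the dangers the paper discusses: stretching a TCP flow's time-scale, for example, changes its apparent round-trip times and can break analyses at the transport layer.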

14 citations


01 Jan 2004
TL;DR: A sensitivity analysis of convergence times and the number of exchanged updates with respect to the settings of BGP parameters is performed, and the influence of the Minimum Route Advertisement Interval (MRAI) timer is investigated.
Abstract: The Border Gateway Protocol (BGP) is the quasi-standard for routing between autonomous systems in the Internet. Instabilities in the topology, such as a failing link, can lead to considerable delays in convergence. It is therefore necessary to gain a better understanding of the global dynamics and underlying mechanisms of BGP. In this work we perform a sensitivity analysis of convergence times and the number of exchanged updates with respect to the settings of BGP parameters. In particular, the influence of the Minimum Route Advertisement Interval (MRAI) timer is investigated. Further experiments shed light on the propagation of updates following the failure of a link. Scalability questions, such as how many autonomous systems are affected by the instability and how far update messages spread from the broken link, are also examined. All experiments are conducted using the SSFNet network simulator.
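The MRAI timer whose influence the paper studies rate-limits advertisements to a peer: pending updates are batched, and later updates for the same prefix replace earlier ones while the timer runs. A simplified per-peer model (class and method names are illustrative, not from SSFNet):

```python
class MRAISender:
    """Simplified per-peer MRAI timer: send at most one batch of
    route advertisements every `mrai` seconds; updates queued in
    between are coalesced per prefix."""
    def __init__(self, mrai=30.0):
        self.mrai = mrai
        self.last_sent = -float("inf")
        self.pending = {}  # prefix -> latest route

    def enqueue(self, prefix, route):
        self.pending[prefix] = route  # newer update replaces older

    def tick(self, now):
        """Return the batch to send at time `now`, or None if the
        MRAI timer has not yet expired."""
        if self.pending and now - self.last_sent >= self.mrai:
            batch, self.pending = self.pending, {}
            self.last_sent = now
            return batch
        return None
```

The coalescing behavior is exactly what makes the MRAI setting a trade-off: a longer interval suppresses more transient updates but delays the propagation of the final, stable route, which is why it affects both convergence time and update counts.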