
Showing papers by "Jelena Mirkovic" published in 2006


Proceedings ArticleDOI
11 Dec 2006
TL;DR: This work proposes to harvest the strengths of existing defenses by organizing them into a collaborative overlay, called DefCOM, and augmenting them with communication and collaboration functionalities; such an overlay can naturally lead to an Internet-wide response to the DDoS threat.
Abstract: Increasing use of the Internet for critical services makes flooding distributed denial-of-service (DDoS) a top security threat. The distributed nature of DDoS suggests that a distributed mechanism is necessary for a successful defense. Three main DDoS defense functionalities -- attack detection, rate limiting and traffic differentiation -- are most effective when performed at the victim end, core and source end, respectively. Many existing systems are successful in one aspect of defense, but none offers a comprehensive solution and none has seen a wide deployment. We propose to harvest the strengths of existing defenses by organizing them into a collaborative overlay, called DefCOM, and augmenting them with communication and collaboration functionalities. Nodes collaborate during the attack to spread alerts and protect legitimate traffic, while rate limiting the attack. DefCOM can accommodate existing defenses, provide a synergistic response to attacks and naturally lead to an Internet-wide response to the DDoS threat.
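To make the collaboration concrete, the sketch below shows one way a victim-end node could flood an alert to peers that then install rate limits for the attacked prefix; the node roles, message fields, and API names (Alert, NodeRole, OverlayNode) are illustrative assumptions, not DefCOM's actual protocol.

```python
# Minimal sketch of a collaborative-overlay defense node (hypothetical API;
# Alert, NodeRole, and OverlayNode are illustrative names, not DefCOM's).
from dataclasses import dataclass
from enum import Enum


class NodeRole(Enum):
    ALERT_GENERATOR = "victim-end detection"
    RATE_LIMITER = "core rate limiting"
    CLASSIFIER = "source-end traffic differentiation"


@dataclass
class Alert:
    victim_prefix: str       # destination under attack, e.g. "198.51.100.0/24"
    suggested_rate_bps: int  # rate the victim network can absorb


class OverlayNode:
    def __init__(self, role: NodeRole, peers: list["OverlayNode"]):
        self.role = role
        self.peers = peers
        self.active_limits: dict[str, int] = {}

    def raise_alert(self, alert: Alert) -> None:
        """Victim-end node detects an attack and floods the alert to its peers."""
        for peer in self.peers:
            peer.on_alert(alert)

    def on_alert(self, alert: Alert) -> None:
        """Core/source nodes install a rate limit for the attacked prefix;
        traffic vouched for as legitimate would bypass this limit."""
        if self.role in (NodeRole.RATE_LIMITER, NodeRole.CLASSIFIER):
            self.active_limits[alert.victim_prefix] = alert.suggested_rate_bps
```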

99 citations


01 Jan 2006
TL;DR: The proposed clustering approach is applied to a data set from the NLANR PMA Internet traffic archive with more than 60,000 active hosts and successfully identifies clusters with significant and interpretable features.
Abstract: Identifying groups of Internet hosts with a similar behavior is very useful for many applications of Internet security control, such as DDoS defense, worm and virus detection, detection of botnets, etc. There are two major difficulties in modeling host behavior correctly and efficiently: the huge number of overall entities, and the dynamics of each individual. In this paper, we present and formulate the Internet host profiling problem, using the header data from public packet traces to select relevant features of frequently-seen hosts for profile creation, and using hierarchical clustering techniques on the profiles to build a dendrogram containing all the hosts. The well-known agglomerative algorithm is used to discover and combine similarly-behaved hosts into clusters, and domain knowledge is used to analyze and evaluate clustering results. We show the results of applying the proposed clustering approach to a data set from the NLANR PMA Internet traffic archive with more than 60,000 active hosts. On this dataset, our approach successfully identifies clusters with significant and interpretable features. We next use the created host profiles to detect anomalous behavior during the Slammer worm spread. The experimental results show that our profiling and clustering approach can successfully detect the Slammer outbreak and identify the majority of infected hosts.
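As an illustration of the clustering step, the sketch below applies agglomerative clustering (via SciPy) to a few invented host profiles; the feature set and the cut threshold are assumptions for demonstration, not the features or parameters used in the paper.

```python
# Illustrative sketch of hierarchical (agglomerative) clustering of host
# profiles; features and thresholds are assumed, not taken from the paper.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# One row per frequently-seen host, e.g. [pkts/s, distinct dst ports,
# distinct peers, fraction of TCP SYNs] derived from packet headers.
profiles = np.array([
    [120.0,   3,  14, 0.02],   # busy server-like host
    [115.0,   2,  12, 0.01],
    [  0.5, 900, 850, 0.98],   # scanner-like host: many ports/peers, SYN-heavy
    [  0.6, 880, 840, 0.97],
])

# Build the dendrogram with average linkage on Euclidean distance,
# then cut it at a distance threshold to obtain behavior clusters.
Z = linkage(profiles, method="average", metric="euclidean")
labels = fcluster(Z, t=50.0, criterion="distance")
print(labels)   # hosts with similar behavior share a cluster label
```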

55 citations


Proceedings ArticleDOI
23 Oct 2006
TL;DR: A benchmark suite defining the elements necessary to recreate DDoS attack scenarios in a testbed setting, a set of performance metrics that express a defense system's effectiveness, cost, and security, and a specification of a testing methodology that provides guidelines on using benchmarks and summarizing and interpreting performance measures are described.
Abstract: There is a critical need for a common evaluation methodology for distributed denial-of-service (DDoS) defenses, to enable their independent evaluation and comparison. We describe our work on developing this methodology, which consists of: (i) a benchmark suite defining the elements necessary to recreate DDoS attack scenarios in a testbed setting, (ii) a set of performance metrics that express a defense system's effectiveness, cost, and security, and (iii) a specification of a testing methodology that provides guidelines on using benchmarks and summarizing and interpreting performance measures. We identify three basic elements of a test scenario: (i) the attack, (ii) the legitimate traffic, and (iii) the network topology including services and resources. The attack dimension defines the attack type and features, while the legitimate traffic dimension defines the mix of the background traffic that interacts with the attack and may experience a denial-of-service effect. The topology/resource dimension describes the limitations of the victim network that the attack targets or interacts with. It captures the physical topology, and the diversity and locations of important network services. We apply two approaches to develop relevant and comprehensive test scenarios for our benchmark suite: (1) we use a set of automated tools to harvest typical attack, legitimate traffic, and topology samples from the Internet, and (2) we study the effect that select features of the attack, legitimate traffic and topology/resources have on the attack impact and the defense effectiveness, and use this knowledge to automatically generate a comprehensive testing strategy for a given defense.
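Below is a hedged sketch of how a single test scenario might be written down along the three dimensions above (attack, legitimate traffic, topology/resources); the field names and values are illustrative, not the benchmark suite's actual schema.

```python
# Hypothetical scenario specification along the three benchmark dimensions;
# all field names and numbers are assumptions made for illustration.
from dataclasses import dataclass, field


@dataclass
class Attack:
    kind: str            # e.g. "UDP flood", "TCP SYN flood"
    rate_pps: int        # aggregate attack rate in packets per second
    num_sources: int
    spoofing: bool


@dataclass
class LegitimateTraffic:
    mix: dict            # e.g. {"web": 0.7, "dns": 0.3}
    clients: int


@dataclass
class TopologyResources:
    bottleneck_bw_mbps: int
    victim_services: list = field(default_factory=list)


@dataclass
class Scenario:
    attack: Attack
    traffic: LegitimateTraffic
    topology: TopologyResources


scenario = Scenario(
    Attack("UDP flood", rate_pps=200_000, num_sources=500, spoofing=True),
    LegitimateTraffic({"web": 0.7, "dns": 0.3}, clients=100),
    TopologyResources(bottleneck_bw_mbps=100, victim_services=["http", "dns"]),
)
```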

53 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: A series of DoS impact metrics that are derived from traffic traces gathered at the source and the destination networks are proposed, and shown to capture the DoS impact more precisely than partial measures used in the past.
Abstract: Denial-of-service (DoS) attacks significantly degrade service quality experienced by legitimate users, by introducing large delays, excessive losses, and service interruptions. The main goal of DoS defenses is to neutralize this effect, and to quickly and fully restore quality of various services to levels acceptable by the users. To objectively evaluate a variety of proposed defenses we must be able to precisely measure damage created by an attack, i.e., the denial of service itself, in controlled testbed experiments. Current evaluation methodologies measure DoS damage superficially and partially by measuring a single traffic parameter, such as duration, loss or throughput, and showing divergence during the attack from the baseline case. These measures do not consider quality-of-service requirements of different applications and how they map into specific thresholds for various traffic parameters. They thus fail to measure the service quality experienced by the end users. We propose a series of DoS impact metrics that are derived from traffic traces gathered at the source and the destination networks. We segment a trace into higher-level user tasks, called transactions, that require a certain service quality to satisfy users' expectations. Each transaction is classified into one of several proposed application categories, and we define quality-of-service (QoS) requirements for each category via thresholds imposed on several traffic parameters. We measure DoS impact as the percentage of transactions that have not met their QoS requirements and aggregate this measure into several metrics that expose the level of service denial. We evaluate the proposed metrics on a series of experiments with a wide range of background traffic, and our results show that our metrics capture the DoS impact more precisely than partial measures used in the past.
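The sketch below illustrates the transaction-based measurement idea: classify each transaction, check it against per-category QoS thresholds, and report the percentage that failed. The categories, threshold values, and transaction records are invented for illustration and are not the paper's values.

```python
# Sketch of transaction-based DoS impact measurement; all thresholds and
# sample transactions below are assumptions, not the paper's numbers.
from dataclasses import dataclass

# Example QoS thresholds per application category (assumed values).
QOS_THRESHOLDS = {
    "web":  {"max_delay_s": 4.0,  "max_loss": 0.05},
    "voip": {"max_delay_s": 0.15, "max_loss": 0.03},
}


@dataclass
class Transaction:
    category: str
    delay_s: float
    loss: float

    def satisfied(self) -> bool:
        t = QOS_THRESHOLDS[self.category]
        return self.delay_s <= t["max_delay_s"] and self.loss <= t["max_loss"]


def percent_failed(transactions: list[Transaction]) -> float:
    """DoS impact: percentage of transactions that missed their QoS targets."""
    failed = sum(1 for tx in transactions if not tx.satisfied())
    return 100.0 * failed / len(transactions)


trace = [
    Transaction("web",  delay_s=1.2, loss=0.00),
    Transaction("web",  delay_s=9.8, loss=0.20),   # degraded during the attack
    Transaction("voip", delay_s=0.4, loss=0.10),   # degraded during the attack
]
print(f"{percent_failed(trace):.1f}% of transactions denied service")
```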

40 citations


Proceedings ArticleDOI
11 Oct 2006
TL;DR: PAWS is a distributed worm spread simulator that builds a realistic Internet model, including the AS-level topology, limited link bandwidths, and legitimate traffic patterns, and can be easily extended to simulate other Internet-scale events.
Abstract: Internet-scale security incidents are becoming increasingly common, and researchers need tools to replicate and study them in a controlled setting. Current network simulators, mathematical event models and testbed emulation cannot faithfully replicate events at such a large scale. They either omit or simplify the relevant features of the Internet environment to meet the scale challenge, thus compromising fidelity. We present a distributed worm spread simulator, called PAWS, that builds a realistic Internet model, including the AS-level topology, the limited link bandwidths, and the legitimate traffic patterns. PAWS can support a diversity of Internet participants at any desired granularity, because it simulates each vulnerable host individually. Faithful replication of the Internet environment, its diversity, and its interaction with the simulated event all lead to a high-fidelity simulation that can be used to study event dynamics and evaluate possible defenses. While PAWS is customized for worm spread simulation, it is a modular, large-scale simulator with a realistic Internet model that can be easily extended to simulate other Internet-scale events.
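The toy simulation below conveys only the per-host idea (each vulnerable host is an individual entity that can be infected through random scanning); it deliberately omits the AS-level topology, link bandwidths, and legitimate traffic that PAWS models, and all constants are assumptions.

```python
# Toy per-host worm spread over a flat address space with random scanning;
# not PAWS itself, just an illustration of simulating each host individually.
import random

ADDRESS_SPACE = 1_000_000   # size of the scanned address space (assumed)
VULNERABLE = 50_000         # number of vulnerable hosts (assumed)
SCANS_PER_STEP = 10         # scans each infected host sends per time step

random.seed(1)
vulnerable = set(random.sample(range(ADDRESS_SPACE), VULNERABLE))
infected = {next(iter(vulnerable))}          # single initial infection

for step in range(30):
    new_infections = set()
    for _ in infected:
        for _ in range(SCANS_PER_STEP):
            target = random.randrange(ADDRESS_SPACE)
            if target in vulnerable and target not in infected:
                new_infections.add(target)
    infected |= new_infections
    print(f"step {step:2d}: {len(infected)} infected hosts")
```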

23 citations


01 Jan 2006
TL;DR: This work presents the design and evaluation of the Clouseau system, which together with route-based filtering (RBF) acts as an effective and practical defense against IP spoofing and shows that RBF brings instant benefit to the deploying network.
Abstract: We present the design and evaluation of the Clouseau system, which together with route-based filtering (RBF) acts as an effective and practical defense against IP spoofing. RBF’s performance critically depends on the completeness and the accuracy of the information used for spoofed packet detection. Clouseau autonomously harvests this information and updates it promptly upon a route change. RBF information is inferred by filters applying randomized drops to TCP data traffic that arrives from suspicious or previously unknown sources, and observing subsequent retransmissions. No communication is required with packet sources or other RBF routers, which makes Clouseau (and RBF) suitable for partial deployment. We show through experiments with a Clouseau prototype that the operation cost is reasonable and that legitimate TCP connections do not experience large delays because of randomized drops. The inference process is resilient to subversion by an attacker who is familiar with Clouseau. We motivate our work by showing that RBF brings instant benefit to the deploying network, and that it can drastically reduce the amount of spoofed traffic in the Internet if deployed at as few as 50 chosen autonomous systems.
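The sketch below illustrates the randomized-drop inference idea: occasionally drop a TCP data packet from an unverified source and treat an observed retransmission on the same interface as evidence that the source is genuine, recording the source-to-interface mapping for RBF. The names, probability, and data structures are assumptions, not Clouseau's actual design.

```python
# Hedged sketch of randomized-drop inference: spoofed senders never see the
# drop, so they do not retransmit; a retransmission on the same interface
# suggests a real source. All parameters here are illustrative assumptions.
import random

DROP_PROBABILITY = 0.01     # assumed small drop rate for unverified sources
pending = {}                # (src, seq) -> interface on which the drop was made
verified_interface = {}     # src -> interface recorded for RBF filtering


def on_tcp_data_packet(src: str, seq: int, interface: int) -> bool:
    """Return True to forward the packet, False to drop it for inference."""
    if src in verified_interface:
        return True                                   # source already verified
    if (src, seq) in pending:
        if pending.pop((src, seq)) == interface:      # retransmission observed
            verified_interface[src] = interface       # remember src -> interface
        return True
    if random.random() < DROP_PROBABILITY:
        pending[(src, seq)] = interface               # drop once, await retransmit
        return False
    return True
```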

13 citations