Author

Colleen Shannon

Bio: Colleen Shannon is an academic researcher from the University of California, San Diego. The author has contributed to research in the topics of the Internet and computer worms. The author has an h-index of 12 and has co-authored 13 publications receiving 4,543 citations.

Papers
Journal ArticleDOI
01 Jul 2003
TL;DR: The Slammer worm spread so quickly that human response was ineffective; the authors examine why it was so effective and what new challenges this new breed of worm poses.
Abstract: The Slammer worm spread so quickly that human response was ineffective. In January 2003, it packed a benign payload, but its disruptive capacity was surprising. Why was it so effective, and what new challenges does this new breed of worm pose?

1,070 citations

Proceedings ArticleDOI
06 Nov 2002
TL;DR: The experience of the Code-Red worm demonstrates that widespread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms.
Abstract: On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet. In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, that infection activity exhibited time-of-day effects, and that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also qualified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the Code-Red worm demonstrates that widespread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms.

906 citations
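
To make the growth numbers in the abstract above concrete, here is a minimal sketch (not the paper's code; all parameters are illustrative assumptions, not measured values) of the random-constant-spread, i.e. logistic, model commonly used to describe random-scanning worms such as Code-Red. It shows how a "new infections per minute" rate rises to a mid-epidemic peak of the same order as the rate reported above.

# Logistic (random-constant-spread) sketch of a random-scanning worm.
# Assumed, illustrative parameters: not measurements from the paper.
N = 360_000                    # vulnerable population, roughly Code-Red's scale
SCAN_RATE = 10.0               # probes per second per infected host (assumed)
K = SCAN_RATE * N / 2**32      # per-second rate of contacting new vulnerable hosts

def simulate(hours=24, dt=1.0):
    """Integrate da/dt = K * a * (1 - a), where a is the infected fraction."""
    a, series = 1.0 / N, []
    for step in range(int(hours * 3600 / dt)):
        a += K * a * (1.0 - a) * dt
        series.append((step * dt, a * N))
    return series

if __name__ == "__main__":
    prev = 0.0
    for t, infected in simulate():
        if t % 3600 == 0:      # hourly report: total infected and average new hosts/minute
            print(f"t={t/3600:4.1f} h  infected={infected:9.0f}  new/min={(infected - prev)/60:8.1f}")
            prev = infected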

Proceedings ArticleDOI
09 Jul 2003
TL;DR: The design space of worm containment systems is described using three key parameters - reaction time, containment strategy and deployment scenario - and the lower bounds that any such system must exceed to be useful today are demonstrated.
Abstract: It has been clear since 1988 that self-propagating code can quickly spread across a network by exploiting homogeneous security vulnerabilities. However, the last few years have seen a dramatic increase in the frequency and virulence of such "worm" outbreaks. For example, the Code-Red worm epidemics of 2001 infected hundreds of thousands of Internet hosts in a very short period - incurring enormous operational expense to track down, contain, and repair each infected machine. In response to this threat, there is considerable effort focused on developing technical means for detecting and containing worm infections before they can cause such damage. This paper does not propose a particular technology to address this problem, but instead focuses on a more basic question: How well will any such approach contain a worm epidemic on the Internet? We describe the design space of worm containment systems using three key parameters - reaction time, containment strategy and deployment scenario. Using a combination of analytic modeling and simulation, we describe how each of these design factors impacts the dynamics of a worm epidemic and, conversely, the minimum engineering requirements necessary to contain the spread of a given worm. While our analysis cannot provide definitive guidance for engineering defenses against all future threats, we demonstrate the lower bounds that any such system must exceed to be useful today. Unfortunately, our results suggest that there are significant technological and administrative gaps to be bridged before an effective defense can be provided in today's Internet.

759 citations
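
As a rough illustration of the design space described above, the sketch below (my own toy model, not the paper's analysis; every number is an assumption chosen only to make the trend visible) extends a logistic spread model with a reaction time, after which a containment system filters a given fraction of worm traffic, and reports how many hosts end up infected within a day.

# Toy containment model: before `reaction_s` seconds the worm spreads unhindered;
# afterwards a fraction `block` of its probes are filtered. All parameters assumed.
def final_infected(reaction_s, block, K=1e-3, N=500_000, horizon_s=24 * 3600, dt=1.0):
    a, t = 1.0 / N, 0.0                       # a = fraction of vulnerable hosts infected
    while t < horizon_s:
        k = K * (1.0 - block) if t >= reaction_s else K
        a = min(1.0, a + k * a * (1.0 - a) * dt)
        t += dt
    return a * N

if __name__ == "__main__":
    for reaction_s in (60, 600, 3600, 4 * 3600):   # how quickly defenses react
        for block in (0.90, 0.99):                 # how much worm traffic they filter
            print(f"reaction={reaction_s:6d}s  block={block:.2f}  "
                  f"infected after 24h ~ {final_infected(reaction_s, block):10.0f}")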

Journal ArticleDOI
TL;DR: In this paper, the authors present a new technique, called backscatter analysis, that provides a conservative estimate of worldwide denial-of-service activity, and quantitatively assess the number, duration and focus of attacks, and qualitatively characterize their behavior.
Abstract: In this article, we seek to address a simple question: “How prevalent are denial-of-service attacks in the Internet?” Our motivation is to quantitatively understand the nature of the current threat as well as to enable longer-term analyses of trends and recurring patterns of attacks. We present a new technique, called “backscatter analysis,” that provides a conservative estimate of worldwide denial-of-service activity. We use this approach on 22 traces (each covering a week or more) gathered over three years from 2001 through 2004. Across this corpus we quantitatively assess the number, duration, and focus of attacks, and qualitatively characterize their behavior. In total, we observed over 68,000 attacks directed at over 34,000 distinct victim IP addresses---ranging from well-known e-commerce companies such as Amazon and Hotmail to small foreign ISPs and dial-up connections. We believe our technique is the first to provide quantitative estimates of Internet-wide denial-of-service activity and that this article describes the most comprehensive public measurements of such activity to date.

735 citations
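
The sketch below illustrates the backscatter idea from the abstract above, under assumptions of mine (a hypothetical record format and a /8 telescope, not CAIDA's actual trace layout): victims of spoofed-source floods send replies to addresses chosen uniformly at random, so a monitor covering n of the 2^32 addresses sees roughly n/2^32 of them, and reply rates observed at the monitor can be scaled up accordingly.

from collections import defaultdict

# Assumed /8 telescope, i.e. 2**24 monitored addresses out of 2**32.
TELESCOPE_SIZE = 2**24
SCALE = 2**32 / TELESCOPE_SIZE          # scale observed packet rates up by this factor

def group_backscatter(records):
    """records: iterable of (timestamp_s, victim_ip, reply_kind) for unsolicited
    replies seen at the telescope. Groups them by victim and extrapolates each
    victim's attack rate from the observed backscatter rate."""
    per_victim = defaultdict(list)
    for ts, victim, _kind in records:
        per_victim[victim].append(ts)
    report = {}
    for victim, stamps in per_victim.items():
        duration = (max(stamps) - min(stamps)) or 1.0
        observed_rate = len(stamps) / duration
        report[victim] = (len(stamps), duration, observed_rate * SCALE)
    return report

# Tiny synthetic trace: 300 TCP RSTs from one "victim" spread over a minute.
sample = [(i * 0.2, "203.0.113.7", "TCP-RST") for i in range(300)]
for victim, (seen, dur, est) in group_backscatter(sample).items():
    print(f"{victim}: {seen} replies over {dur:.0f}s -> estimated attack rate ~{est:,.0f} pkt/s")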

Journal ArticleDOI
01 Jul 2004
TL;DR: A global view of the Witty worm's spread, with particular attention to its features.
Abstract: On Friday, 19 March 2004, at approximately 8:45 pm Pacific Standard Time (PST), an Internet worm began to spread, targeting a buffer overflow vulnerability in several Internet Security Systems (ISS) products, including its RealSecure Network, RealSecure Server Sensor, RealSecure Desktop, and BlackICE. The worm took advantage of a security flaw in these firewall applications that eEye Digital Security discovered earlier in March. Once the Witty worm - so called because its payload contained the phrase, "(?,?)insert witty message here (?,?)" - infects a computer, it deletes a randomly chosen section of the hard drive, which, over time, renders the machine unusable. We share a global view of the worm's spread, with particular attention to its features.

404 citations


Cited by
Proceedings Article
01 Jan 2005
TL;DR: TaintCheck performs dynamic taint analysis via binary rewriting at run time; it reliably detects most types of exploits and produced no false positives for any of the many different programs that were tested.
Abstract: Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [26, 43]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we describe how TaintCheck could improve automatic signature generation in

1,557 citations
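
As a toy illustration of the dynamic taint analysis idea summarized above (not TaintCheck's actual engine, which rewrites x86 binaries at run time; the class and function names here are my own), the sketch below marks bytes from an untrusted input as tainted, propagates taint through arithmetic, and raises an alert when a tainted value is used as a control-flow target.

class TaintedValue:
    """A value plus a taint bit; taint propagates through arithmetic."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted
    def __add__(self, other):
        o_val = other.value if isinstance(other, TaintedValue) else other
        o_taint = other.tainted if isinstance(other, TaintedValue) else False
        return TaintedValue(self.value + o_val, self.tainted or o_taint)

def read_from_network(payload: bytes):
    # Data from an untrusted source (the network) is tainted at its point of entry.
    return [TaintedValue(b, tainted=True) for b in payload]

def indirect_jump(target: TaintedValue):
    # Policy check: a control-flow target derived from tainted data signals an overwrite attack.
    if target.tainted:
        raise RuntimeError(f"ALERT: tainted value 0x{target.value:x} used as jump target")
    print(f"jumping to 0x{target.value:x}")

base = TaintedValue(0x400000)                # clean, program-defined address
attacker = read_from_network(b"\x10\x20")    # attacker-controlled bytes
indirect_jump(base)                          # fine: untainted target
try:
    indirect_jump(base + attacker[0])        # attacker-influenced target -> alert
except RuntimeError as alert:
    print(alert)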

Proceedings Article
13 Aug 2001
TL;DR: This paper presents a new technique, called “backscatter analysis,” that provides a conservative estimate of worldwide denial-of-service activity; the authors believe it is the first to provide quantitative estimates of Internet-wide denial-of-service activity.
Abstract: In this paper, we seek to answer a simple question: "How prevalent are denial-of-service attacks in the Internet today?". Our motivation is to understand quantitatively the nature of the current threat as well as to enable longer-term analyses of trends and recurring patterns of attacks. We present a new technique, called "backscatter analysis", that provides an estimate of worldwide denial-of-service activity. We use this approach on three week-long datasets to assess the number, duration and focus of attacks, and to characterize their behavior. During this period, we observe more than 12,000 attacks against more than 5,000 distinct targets, ranging from well known e-commerce companies such as Amazon and Hotmail to small foreign ISPs and dial-up connections. We believe that our work is the only publicly available data quantifying denial-of-service activity in the Internet.

1,444 citations
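
Restated in my own notation (a hedged paraphrase of the sampling argument behind backscatter analysis, not a quotation from the paper): if an attack of m spoofed packets draws source addresses uniformly at random and each packet elicits one reply from the victim, then a telescope monitoring n of the 2^32 addresses expects to observe

\[
  \mathbb{E}[\text{replies observed}] = \frac{n\,m}{2^{32}},
  \qquad\text{so}\qquad
  R \;\ge\; R' \cdot \frac{2^{32}}{n},
\]

where R' is the backscatter rate seen at the telescope and R is the victim's actual attack rate. The bound is conservative because not every attack packet elicits a reply.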

Proceedings Article
05 Aug 2002
TL;DR: This work develops and evaluates several new, highly virulent possible techniques: hit-list scanning (which creates a Warhol worm), permutation scanning (which enables self-coordinating scanning), and the use of Internet-sized hit-lists (which creates a flash worm).
Abstract: The ability of attackers to rapidly gain control of vast numbers of Internet hosts poses an immense risk to the overall security of the Internet. Once subverted, these hosts can not only be used to launch massive denial of service floods, but also to steal or corrupt great quantities of sensitive information, and confuse and disrupt use of the network in more subtle ways. We present an analysis of the magnitude of the threat. We begin with a mathematical model derived from empirical data of the spread of Code Red I in July 2001. We discuss techniques subsequently employed for achieving greater virulence by Code Red II and Nimda. In this context, we develop and evaluate several new, highly virulent possible techniques: hit-list scanning (which creates a Warhol worm), permutation scanning (which enables self-coordinating scanning), and use of Internet-sized hit-lists (which creates a flash worm).

1,255 citations
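
The sketch below is my own toy illustration (not the paper's code; the constants are assumptions) of the permutation-scanning idea from the abstract above: every instance steps through the same pseudorandom permutation of the 32-bit address space from its own random starting point, so scan sequences rarely overlap even though the instances never communicate.

import random

# An affine map x -> (A*x + C) mod 2**32 permutes the address space when A is odd.
A, C, SPACE = 2_654_435_761, 0x9E3779B9, 2**32

def next_address(x):
    return (A * x + C) % SPACE

def scan_sequence(start, count):
    """The first `count` addresses an instance starting at `start` would probe."""
    out, x = [], start
    for _ in range(count):
        x = next_address(x)
        out.append(x)
    return out

if __name__ == "__main__":
    a = set(scan_sequence(random.randrange(SPACE), 100_000))
    b = set(scan_sequence(random.randrange(SPACE), 100_000))
    print(f"addresses probed by both instances: {len(a & b)}")
    # In the full scheme, an instance that probes an already-infected host jumps to a
    # fresh random start, keeping coverage spread out as the infected population grows.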

Proceedings Article
16 Aug 2017
TL;DR: The authors argue that Mirai may represent a sea change in the evolutionary development of botnets: the simplicity through which devices were infected and its precipitous growth demonstrate that novice malicious techniques can compromise enough low-end devices to threaten even some of the best-defended targets.
Abstract: The Mirai botnet, composed primarily of embedded and IoT devices, took the Internet by storm in late 2016 when it overwhelmed several high-profile targets with massive distributed denial-of-service (DDoS) attacks. In this paper, we provide a seven-month retrospective analysis of Mirai's growth to a peak of 600k infections and a history of its DDoS victims. By combining a variety of measurement perspectives, we analyze how the botnet emerged, what classes of devices were affected, and how Mirai variants evolved and competed for vulnerable hosts. Our measurements serve as a lens into the fragile ecosystem of IoT devices. We argue that Mirai may represent a sea change in the evolutionary development of botnets--the simplicity through which devices were infected and its precipitous growth demonstrate that novice malicious techniques can compromise enough low-end devices to threaten even some of the best-defended targets. To address this risk, we recommend technical and nontechnical interventions, as well as propose future research directions.

1,236 citations

Journal ArticleDOI
01 Jul 2003
TL;DR: The Slammer worm spread so quickly that human response was ineffective; the authors examine why it was so effective and what new challenges this new breed of worm poses.
Abstract: The Slammer worm spread so quickly that human response was ineffective. In January 2003, it packed a benign payload, but its disruptive capacity was surprising. Why was it so effective, and what new challenges does this new breed of worm pose?

1,070 citations