Internet Quarantine:
Requirements for Containing Self-Propagating Code
David Moore, Colleen Shannon, Geoffrey M. Voelker, Stefan Savage
University of California, San Diego
Abstract: It has been clear since 1988 that self-propagating
code can quickly spread across a network by exploiting homoge-
neous security vulnerabilities. However, the last few years have
seen a dramatic increase in the frequency and virulence of such
“worm” outbreaks. For example, the Code-Red worm epidemics
of 2001 infected hundreds of thousands of Internet hosts in a very
short period, incurring enormous operational expense to track
down, contain, and repair each infected machine. In response
to this threat, there is considerable effort focused on developing
technical means for detecting and containing worm infections
before they can cause such damage.
This paper does not propose a particular technology to address
this problem, but instead focuses on a more basic question: How
well will any such approach contain a worm epidemic on the
Internet? We describe the design space of worm containment
systems using three key parameters: reaction time, contain-
ment strategy and deployment scenario. Using a combination of
analytic modeling and simulation, we describe how each of these
design factors impacts the dynamics of a worm epidemic and,
conversely, the minimum engineering requirements necessary to
contain the spread of a given worm. While our analysis cannot
provide definitive guidance for engineering defenses against all
future threats, we demonstrate the lower bounds that any such
system must exceed to be useful today. Unfortunately, our results
suggest that there are significant technological and administrative
gaps to be bridged before an effective defense can be provided
in today’s Internet.
I. INTRODUCTION
On July 19th, 2001, a self-propagating program, or worm,
was released into the Internet. The worm, dubbed “Code-Red
v2”, probed random Internet hosts for a documented vulnera-
bility in the popular Microsoft IIS Web server. As susceptible
hosts were infected with the worm, they too attempted to
subvert other hosts, dramatically increasing the incidence of
the infection. Over fourteen hours, the worm infected almost
360,000 hosts, reaching an incidence of 2,000 hosts per minute
before peaking [1]. The direct costs of recovering from this
epidemic (including subsequent strains of Code-Red) have
been estimated in excess of $2.6 billion [2]. While Code-
Red was neither the first nor the last widespread computer
epidemic, it exemplifies the vulnerabilities present in today’s
Internet environment. A relatively homogeneous software base
coupled with high-bandwidth connectivity provides an ideal
climate for self-propagating attacks.
Unfortunately, as demonstrated by the Code-Red episode,
we do not currently have an effective defense against such
threats. While research in this field is nascent, traditional
epidemiology suggests that the most important factors deter-
mining the spread of an infectious pathogen are the vulner-
ability of the population, the length of the infectious period
and the rate of infection. These translate into three potential
interventions to mitigate the threat of worms: prevention,
treatment, and containment. This paper focuses exclusively on
the last approach, but we briefly discuss each to justify that
decision.
A. Prevention
Prevention technologies are those that reduce the size of
the vulnerable population, thereby limiting the spread of
a worm outbreak. In the Internet context, the vulnerability
of the population is a function of the software engineering
practices that produce security vulnerabilities as well as the
socio-economic conditions that ensure the homogeneity of the
software base. For example, a single vulnerability in a popular
software system can translate into millions of vulnerable hosts.
While there is an important research agenda, initiated in [3]–
[6], to increase the security and heterogeneity of software
systems on the Internet, we believe that widespread software
vulnerabilities will persist for the foreseeable future. There-
fore, pro-active prevention measures alone are unlikely to be
sufficient to counter the worm threat.
B. Treatment
Treatment technologies, as exemplified by the disinfection
tools found in commercial virus detectors [7] and the system
update features in popular operating systems [8], are an impor-
tant part of any long-term strategy against Internet pathogens.
By deploying such measures on hosts in response to a worm
outbreak, it is possible to reduce the vulnerable population
(by eliminating the vulnerability exploited by the worm) and
reduce the rate of infection (by removing the worm itself
from infected hosts). However, for practical reasons, these
solutions are unlikely to provide short-term relief during an
acute outbreak. The time required to design, develop and test
a security update is limited by human time scales (usually measured in days), far too slow to have significant impact on
an actively spreading Internet worm. Worse, if the installation
of such updates is not automated, the response time can be sub-
stantially longer. For example, during the Code-Red epidemic
it took sixteen days for most hosts to eliminate the underlying
vulnerability and thousands had not patched their systems
six weeks later [1]. Finally, creating a central authority for
developing, distributing, and automatically installing security
updates across hundreds of thousands of organizations will

require a level of trust and coordination that does not currently
exist [9].
C. Containment
Finally, containment technologies, as exemplified by fire-
walls, content filters, and automated routing blacklists, can
be used to block infectious communication between infected
and uninfected hosts. In principle, this approach can quickly
reduce, or even stop, the spread of infection, thereby miti-
gating the overall threat and providing additional time for
more heavy-weight treatment measures to be developed and
deployed. During the Code-Red epidemic, ad-hoc containment
mechanisms were the primary means of protecting individual
networks (e.g., by blocking inbound access to TCP port 80,
or content filtering based on Code-Red specific signatures), or
isolating infected hosts (e.g., by blocking the host’s outbound
access to TCP port 80). These solutions were implemented
manually using existing routers, firewalls, and proxy servers.
While these limited quarantines did not halt the spread of
the worm, they provided limited protection to portions of the
Internet.
There are strong reasons to believe that containment is
the most viable of these strategies. First, there is hope that
containment can be completely automated, since detecting and characterizing a worm (required before any filtering or blocking can be deployed) is far easier than understanding the
worm itself or the vulnerability being exploited, let alone creat-
ing software to patch the problem. Second, since containment
can potentially be deployed in the network, it is possible to
implement a solution without requiring universal deployment
on every Internet host.
In this paper, we investigate the use of widespread con-
tainment mechanisms as an approach for mitigating network-
borne epidemics. However, rather than proposing particular
technologies to detect or contain network worms, we have
focused on a more basic question: How effectively can any
containment approach counter a worm epidemic on the In-
ternet? We consider containment systems in terms of three
abstract properties: the time to detect and react, the strategy
used for identifying and containing the pathogen, and the
breadth and topological placement of the system’s deployment.
Using a vulnerable host population inferred from the Code-
Red epidemic and an empirical Internet topology data set, we
use simulation to analyze how such a worm would spread
under various defenses ranging from the existing Internet to
an Internet using idealized containment technology.
From our simulation experiments, we conclude that it will
be very challenging to build containment systems that prevent
widespread infection from worm epidemics. In particular, we
find that for such systems to be successful against realistic
worms they must react automatically in a matter of minutes
and must interdict nearly all Internet paths. Moreover, future
worms increase these requirements dramatically, and for most
realistic deployment scenarios there are aggressive worms that
cannot be effectively contained [9].
The remainder of this paper is organized as follows. Sec-
tion II discusses the background of worm epidemics, and
Section III develops our basic model and methodology for
simulating worm growth and worm containment systems.
Section IV evaluates this model in an idealized, universal
deployment scenario, while Section V extends this to realistic
deployment scenarios. Finally, we conclude in Section VI.
II. BACKGROUND
The term “worm” was first coined in 1982 by Shoch and
Hupp of Xerox PARC [10]. Inspired by the “tapeworm”
program described in John Brunner's 1975 novel, "The Shockwave Rider", Shoch and Hupp used the term to describe a
collection of benign programs that propagated through a local
area network performing system maintenance functions on
each workstation they encountered. The security implications
of self-replicating code were not explored by researchers
until 1984, when Fred Cohen described the initial academic
experiments with computer viruses in his 1984 paper "Computer Viruses - Theory and Experiments" [11]. However, the
Internet worm of 1988 was the first well-known replicating
program that self-propagated across a network by exploiting
security vulnerabilities in host software. This program, which
infected several thousand hosts and disrupted Internet-wide
communication due to its high growth rate, is the modern
archetype for contemporary Internet worms [12], [13].
There have been few studies of computer worms since 1988,
perhaps because there have been few outbreaks until recently.
However, in response to Code-Red several quantitative studies
of its growth have been developed. Staniford-Chen et al.
provide an analytic model of Code-Red’s growth matched
to empirical observations [9]. Moore and Shannon have also
published an empirical analysis of Code-Red’s growth, repair,
and geography based on observed probes [1] to a dedicated
class A network (similar to that described in [14]). Song et
al. reproduced parts of this study and further distinguished
between different worms simultaneously active [15].
Code-Red has also inspired several countermeasure tech-
nologies. One such project, La Brea, attempts to slow the
growth of TCP-based worms by intercepting probes to un-
allocated addresses and artificially placing such connections
in a persistent state [16]. In such a state, the thread that was
used to initiate the probe will be blocked (Code-Red and other
worms are typically multi-threaded) and therefore the worm’s
rate of infection will decrease. However, it is unclear how
effective this approach is even under idealized circumstances,
and it is unfortunately easily circumvented by modifying a
worm to operate asynchronously. A more compelling approach
for slowing the spread of a worm is the per-host “throttling”
described by Williamson [17]. Under this scheme, each host
restricts the rate at which connections to “new” hosts may be
issued. If universally deployed, this approach can reduce the
spreading rate of a worm by up to an order of magnitude,
while not unduly impacting most legitimate communications; however, the overall exponential growth pattern of the worm
will remain unchanged. To contain the spread of a worm, Toth

et al. propose a system for automatically detecting infected
hosts within an enterprise network and using firewall filters
to prevent them from spreading further (by blocking access
to affected ports) [18]. While this strategy by itself is inef-
fective at containing an epidemic, the constituent technologies
could be used in other general containment solutions. Finally,
a network technology that was utilized to help block the
spread of Code-Red was Cisco’s Network Based Application
Recognition (NBAR) feature [19]. NBAR allows a router
to block particular TCP sessions based on the presence of
individual strings in the TCP stream. By filtering on the
stream’s contents rather than just the header, NBAR allowed
sites to block inbound worm probes while still providing
public access to their Web servers. Similar functionality is
increasingly available in modern switch and router designs
and could form the basis of a future containment system.
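Of the countermeasures above, Williamson's per-host throttling is simple enough to sketch concretely. The following Python fragment is a hypothetical illustration only, not Williamson's implementation; the working-set size, the one-new-host-per-second rate, and the choice to refuse (rather than queue) excess connections are assumptions made for brevity:

from collections import deque

class NewHostThrottle:
    # Hypothetical sketch of per-host connection throttling: destinations in a
    # small working set pass immediately, while connections to "new" hosts are
    # permitted only at a fixed rate. A worm probing many fresh addresses is
    # slowed roughly to that rate; normal traffic, which tends to revisit a
    # small set of hosts, is largely unaffected.
    def __init__(self, working_set_size=5, new_hosts_per_second=1.0):
        self.recent = deque(maxlen=working_set_size)
        self.interval = 1.0 / new_hosts_per_second
        self.next_new_allowed = 0.0   # simulated time at which the next new host is permitted

    def allow(self, dst, now):
        if dst in self.recent:
            return True               # recently contacted host: no delay
        if now >= self.next_new_allowed:
            self.next_new_allowed = now + self.interval
            self.recent.append(dst)
            return True
        return False                  # excess new-host connections are refused in this sketch

As the discussion above notes, such a throttle lowers the worm's effective probe rate (and hence its contact rate) but leaves the exponential shape of the epidemic unchanged.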
Several researchers have also examined alternative worm
spreading approaches. While Code-Red used a uniform ran-
dom probe strategy, Code-Red-II was designed to prefer hosts
in the same address prefix. A far more virulent approach
was proposed by Nicholas Weaver, who described “Warhol
Worms” that explicitly choose a set of foundation hosts to
infect (based on earlier reconnaissance) and partition this set
among replicas to infect the population more quickly [20].
Expanding on this study, Staniford et al. describe “Flash
Worms” that contain a complete list of hosts to infect [9].
Rough analytic estimates for these worms suggest that they
could exceed the degree of infection of Code-Red in a matter
of a few minutes or less (compared to over a dozen hours for
Code-Red). Finally, Staniford et al. also describe “Surreptitious
Worms” that hide in existing communications patterns to avoid
detection. While the behavior of these worms is consistent with
our analysis, the orders of magnitude increase in incidence of
Warhol and Flash worms and the stealthy nature of Surrep-
titious worms may invalidate most practical approaches for
detecting and responding to a new outbreak.
The work that is perhaps the closest to our own comes from
the epidemiological analysis of computer viruses. Kephart and
White provide perhaps the most complete analysis of computer
virus spread based on random graph topologies. They show
that limited defenses are effective as long as the infection
rate does not exceed a critical threshold [21]. More recently,
Wang et al. have analyzed the impact of immunization on the
spread of computer viruses [22] using a similar model. Our
work is distinct from these in several dimensions. First, we use
real empirical data about host susceptibility, network topology
and administrative structure to describe how worms spread
on the real Internet. Second, worms are qualitatively different
from viruses because they don’t require human intermediation
to spread. As a consequence, worms typically produce an
infection rate many orders of magnitude faster than tradi-
tional viruses, while any treatment mechanisms are applied at
roughly the same rate. This observation invalidates most of the threshold assumptions in previous work oriented towards
computer viruses. Finally, this same high-speed growth leads
us to focus on containment-based approaches that are not
explored in the traditional computer virus literature.

TABLE I
COMPONENTS OF THE SI MODEL

  N       size of the total vulnerable population
  S(t)    susceptibles at time t
  I(t)    infectives at time t
  β       contact rate
  s(t)    susceptibles S(t) / population N at time t
  i(t)    infectives I(t) / population N at time t

III. BASIC MODEL
A. Modeling Worms
While computer worms represent a relatively new threat, the
mathematical foundations governing the spread of infectious
disease are well understood and are easily adapted to this task.
In particular, worms are well described using the classic SI
epidemic model that describes the growth of an infectious
pathogen spread through homogeneous random contacts be-
tween Susceptible and Infected individuals.
This model, described in considerably more detail in [23],
dictates that the number of new infections (or incidence)
caused by a pathogen is determined by the product of the
number of infected individuals (infectives), the fraction of unin-
fected individuals (susceptibles) and an average contact rate.
More formally, using the terms defined in Table I, we say the
SI model is defined by:
\[
\frac{dI}{dt} = \beta \frac{IS}{N}, \qquad \frac{dS}{dt} = -\beta \frac{IS}{N}
\]
Dividing both equations by N and noting that i + s = 1, this can be rewritten as:
\[
\frac{di}{dt} = \beta i (1 - i)
\]
Solving this equation, for some constant of integration T, describes the proportion of infected individuals at time t:
\[
i(t) = \frac{e^{\beta (t-T)}}{1 + e^{\beta (t-T)}}
\]
This function has the characteristic that, for small values
of t, the incidence grows exponentially until a majority of
individuals are infected. At this point the incidence slows
exponentially, reaching zero as all individuals are infected.
This result is well known in the public health community
and has been thoroughly applied to digital pathogens as far
back as 1991 [21]. To apply this result to Internet worms, the
variables simply take on specific meanings. The population, N,
describes the pool of Internet hosts vulnerable to the exploit
used by the worm. The susceptibles, S(t), are hosts that are
vulnerable but not yet exploited, and the infectives, I(t), are computers actively spreading the worm. Finally, the contact rate, β, can be expressed as a function of the worm's probe rate r
and the targeting algorithm used to select new host addresses
for infection.

[Figure omitted: the plot shows % of the vulnerable population infected versus time (hours), with curves for the 5th percentile, average, and 95th percentile.]
Fig. 1. The simulated propagation of Code-Red-like worms showing the relationship between the average fraction of the vulnerable population infected and the 5th and 95th percentiles.
In this paper, we assume that an infected host chooses
targets randomly, like Code-Red v2, from the 32-bit IPv4
address space. Consequently, β = r · N / 2^32, since a given probe will reach a vulnerable host with probability N / 2^32. Note that, for a fixed β, N and r are inversely proportional: the spread of a worm in a population of aN vulnerable hosts probing at rate r is the same as the spread in a population of N hosts probing at rate ar.
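As a rough numerical check, the contact rate and the closed-form solution above can be evaluated for Code-Red-like parameters (N = 360,000 vulnerable hosts and r = 10 probes per second, the values used in Section IV). The short Python sketch below is illustrative only; the choice of T, which fixes when half the population is infected, is an arbitrary assumption:

import math

N = 360_000                    # vulnerable population (Code-Red v2 estimate)
r = 10.0                       # probes per second per infected host
beta = r * N / 2**32           # contact rate: roughly 8.4e-4 per second, 3.0 per hour

def fraction_infected(t_hours, T_hours=0.0):
    # Closed-form SI solution i(t) = e^{beta (t-T)} / (1 + e^{beta (t-T)}),
    # with beta converted to per-hour units. T is the constant of integration:
    # the (assumed) time at which half the vulnerable population is infected.
    x = math.exp(beta * 3600.0 * (t_hours - T_hours))
    return x / (1.0 + x)

print(f"beta = {beta:.2e} per second = {beta * 3600:.2f} per hour")
for t in (-4, -2, 0, 2, 4):
    print(f"t = {t:+d} h relative to T: i(t) = {fraction_infected(t):.3f}")

With these parameters the model sweeps from under 1% to over 99% of the vulnerable population in roughly four hours around T, qualitatively consistent with Figure 1.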
There are two important caveats to this model, both arising
from the use of a single scalar variable β to represent the
transmission of the worm between infective and susceptible.
First, it does not directly account for preferential targeting
algorithms. Several recent worms, such as Code-Red II [1],
[24], [25] and Nimda [26], [27], preferentially select targets
from address ranges closer to the infected host (in the same
/24 or /16 network). Similarly, it is difficult to constructively
estimate β for the intentional targeting algorithms described
by Staniford et al. [9]. However, in both cases these worms
produce results that can be simply modeled by a direct scaling
of β.
A second limitation is that β expresses the average contact
rate per unit time and does not capture the impact of early
variability in the targets selected. Consequently, while the
results estimate the growth of the average worm, a particular
epidemic may grow significantly more quickly by making a
few lucky targeting decisions early on. A worm propagates by
probing hosts at rate r and, on each probe, targets a susceptible host with probability N/2^32 on average. However, for a given
worm outbreak, the worm might probe susceptible hosts with
higher probability merely by chance as it starts to spread.
If the worm manages to infect more susceptible hosts early
on than average, then it will spread at a higher rate than the
average rate. As a result, the worm will infect the susceptible
population more quickly than average. There even exists the
possibility, albeit with very low probability, that a worm could
always target a susceptible host on each probe as it spreads,
in which case the worm would spread at a maximum contact
rate of β = r.
The effects of variability in worm propagation can be significant, and a straightforward average-case analysis can obscure them. For example, Figure 1 plots the results of
100 simulations of the propagation of a Code-Red-like worm.
The graph shows the percentage of susceptible hosts that a
worm infects as a function of the time the worm is allowed
to propagate. We plot three different summaries of the 100
simulations: the average case, and the 5th and 95th percentiles.
From the graph we see that, after four hours of propagation, the
worm infects 55% of susceptible hosts on average. In contrast, if we desire 95% confidence, then we can only say that, in 95
out of 100 worm outbreaks, up to 80% of susceptible hosts
are infected, significantly more than the average case.
While no containment system can prevent all possible per-
mutations of a worm’s propagation, we believe that designing
for the average case is inadvisable since such a system will
fail with regularity. For this reason, the remainder of this
paper relies exclusively on simulation results that use the
95th percentile of population infected as determined from a
minimum of 100 simulations.
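One way to obtain such percentile estimates is a simple stochastic, discrete-time simulation, sketched below in Python. This is a minimal illustration rather than the simulator used for Figure 1: it assumes a one-second time step, uniform random targeting, a single initially infected host, and draws the number of new infections in each step from a binomial distribution (ignoring the small chance that two probes hit the same susceptible host within a step):

import numpy as np

N = 360_000                # vulnerable hosts (Code-Red-like)
PROBE_RATE = 10            # probes per second per infected host
ADDR_SPACE = 2 ** 32       # IPv4 address space
SEEDS = 1                  # initially infected hosts (assumed)

def run_trial(hours, rng):
    # Discrete-time stochastic SI spread; returns the fraction infected.
    infected = SEEDS
    for _ in range(hours * 3600):
        susceptible = N - infected
        if susceptible == 0:
            break
        # Each of the infected*PROBE_RATE probes sent this second independently
        # hits a susceptible host with probability susceptible / 2^32.
        new = rng.binomial(infected * PROBE_RATE, susceptible / ADDR_SPACE)
        infected = min(N, infected + new)
    return infected / N

rng = np.random.default_rng(0)
results = np.sort([run_trial(hours=4, rng=rng) for _ in range(100)])
print("5th percentile :", results[4])
print("average        :", results.mean())
print("95th percentile:", results[94])

The spread between the 5th and 95th percentiles reflects exactly the early luck described above: trials that happen to find susceptible hosts quickly in the first minutes pull well ahead of the average.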
B. Modeling Containment Systems
To understand how various containment techniques influ-
ence the spread of self-propagating code, we simulate three
factors that determine the ultimate prevalence of the worm:
reaction time, containment strategy, and deployment scenario.
Reaction time. We define the reaction time of a containment
system to include the time necessary for detection of malicious
activity, propagation of the information to all hosts partici-
pating in the system, and the time required to activate any
containment strategy once this information has been received.
Containment strategy. The containment strategy refers to the
particular technology used to isolate the worm from suscepti-
ble hosts. We focus on two key strategies: address blacklisting
and content filtering. The former approach, similar to that used
by some anti-spam systems, requires a list of IP addresses
that have been identified as being infected. Packets arriving
from one of these addresses are dropped when received by
a member of the containment system. This strategy has the
advantage that it can be implemented with today’s filtering
technology, does not require the worm to be identified and
has a predictable effect on traffic from a given host. However,
it must be updated continuously to reflect newly infected hosts, and if the detection technology produces false positives, then
this approach can unintentionally deny service to uninfected
nodes.
The second approach requires a database of content signa-
tures known to represent particular worms. Packets containing
one of these signatures are similarly dropped when a con-
tainment system member receives one. This approach requires
additional technology to characterize worm outbreaks and
automatically create appropriate content signatures. However,
it has the key advantage that a single update is sufficient
to describe any number of instances of a particular worm
implementation. This approach also includes the possibility
for unintended denial-of-service, although it is unlikely for

well-chosen signatures, and depends on the assumption that the worm itself is not polymorphic. (A polymorphic worm is one whose payload is transformed regularly, so that no single signature identifies it. In the limit, such a worm could require a unique signature per infected host, and content filtering would behave equivalently to address blacklisting.)

[Figure omitted: two panels plot % infected at 24 hours (95th percentile) against reaction time, in minutes for panel (a) and in hours for panel (b).]
Fig. 2. Propagation of the Code-Red worm as a function of reaction time using the (a) address blacklisting and (b) content filtering strategies.
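The operational difference between the two strategies can be captured in a few lines. The sketch below is purely schematic (the class names, the byte-string signature, and the example addresses are illustrative assumptions, not an interface from the paper): a blacklist must receive one update per newly infected source, whereas a single content-signature update blocks the worm regardless of which host sends it:

class AddressBlacklist:
    # Drop traffic originating from hosts already identified as infected.
    def __init__(self):
        self.blocked_sources = set()

    def update(self, infected_ip):        # one update per newly infected host
        self.blocked_sources.add(infected_ip)

    def allows(self, src_ip, payload):
        return src_ip not in self.blocked_sources


class ContentFilter:
    # Drop traffic whose payload matches a known worm signature.
    def __init__(self):
        self.signatures = set()

    def update(self, signature):          # a single update covers every instance
        self.signatures.add(signature)

    def allows(self, src_ip, payload):
        return not any(sig in payload for sig in self.signatures)


blacklist, content = AddressBlacklist(), ContentFilter()
blacklist.update("192.0.2.1")             # the one source we have been told about
content.update(b"GET /default.ida?")      # illustrative Code-Red-style signature
probe = ("198.51.100.7", b"GET /default.ida?NNNN... overflow payload")
print(blacklist.allows(*probe))  # True  -- an unreported source slips through
print(content.allows(*probe))    # False -- the signature match blocks it

This also makes the footnote above concrete: a polymorphic worm that changed its payload per infection would force the content filter back into a one-update-per-host regime, that is, into behaving like the blacklist.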
Deployment scenario. In an ideal world, every node in the
network would be a participating member of the containment
system. However, for practical reasons this is unlikely. Instead,
containment systems may be deployed at the edge of corporate
networks, like firewalls, or perhaps may be implemented by
Internet Service Providers (ISPs) at the access points and
exchange points in their network. Moreover, it would be
unreasonable to expect that even these deployments would be
universal. Consequently, we examine a range of different de-
ployment scenarios, ranging from small numbers of customer
edge networks to large numbers of highly connected ISPs.
Finally, while some combinations of parameters are suffi-
cient to stop the spread of a worm indefinitely, others simply
slow its growth. To capture the impact of this latter effect,
we must limit our analysis to some finite time period. In this
paper, we evaluate the success of each containment system
design based on the outcome occurring after 24 hours. While
this value is somewhat arbitrary, we believe it represents a
fair lower bound on the time for highly motivated specialists
to develop and deploy treatment protocols for eliminating the
worm from infected systems. Clearly, experimental evidence
collected during the Code-Red epidemic indicates that human
system administrators are not able to routinely intervene in less than a 24-hour period [1].
IV. IDEALIZED DEPLOYMENT
In this section we explore the interaction of worm incidence
and containment parameters in an idealized baseline setting
in which the containment system is universally deployed and
information about worm infections is distributed everywhere
simultaneously. In this “best case” scenario, every non-infected
host implements the chosen containment strategy immediately
upon being notified of an infection. This simplified setting
allows us to establish the true lower bounds on containment and to better understand the fundamental tradeoffs. However, we
revisit and remove the universal deployment assumption in
Section V.
A. Simulation Parameters
For this baseline analysis, we chose worm parameters based
on the Code-Red v2 spread described in [1]: the simulator
manages 360,000 total vulnerable hosts out of a total population of 2^32, and the probe rate defaults to 10 probes per second. We
assume that any probe from an infected host to a susceptible
host produces an infection immediately. A probe to a non-
vulnerable host or a host that is already infected has no effect.
In simulating the containment system we model reaction
time as follows: The first “seed” hosts are infected at time 0
and begin to probe randomly. If a host is infected at time
t, we assume that all susceptible hosts are notified of this
fact at time t + R, where R specifies the reaction time of
the system. When using address blacklisting, this notification
simply consists of the IP address of the infected host. Probes
from the infected hosts will be ignored from that time forward.
Similarly, in content filtering systems this notification simply
includes the signature of the worm, and all worm probes from
any host are ignored afterward. Our goals are to determine
the reaction times necessary to minimize worm propagation,
to compare the effectiveness between containment strategies,
and to understand the relationship between reaction time and
the aggressiveness of worm propagation.
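The reaction-time model just described reduces to a simple blocking rule, restated below as a hypothetical Python predicate (the function and parameter names are ours, not the simulator's). Under address blacklisting a probe is dropped once R has elapsed since its particular source was infected, while under content filtering a probe is dropped once R has elapsed since the first infection, because a single signature notification covers every subsequent source:

def probe_blocked(strategy, probe_time, source_infection_time,
                  first_infection_time, reaction_time_R):
    # Idealized, universally deployed containment (the assumptions of this section).
    if strategy == "blacklist":
        # The source's address reaches all participants R after that host is infected.
        return probe_time >= source_infection_time + reaction_time_R
    if strategy == "content":
        # The worm's signature reaches all participants R after the first infection.
        return probe_time >= first_infection_time + reaction_time_R
    raise ValueError(f"unknown strategy: {strategy}")

# Example with R = 20 minutes: a host infected at t = 600 s can still spread
# under blacklisting until t = 1800 s, even though content filtering would
# already block its worm probes at t = 1200 s.
R = 20 * 60
print(probe_blocked("blacklist", 1500, source_infection_time=600,
                    first_infection_time=0, reaction_time_R=R))   # False
print(probe_blocked("content",   1500, source_infection_time=600,
                    first_infection_time=0, reaction_time_R=R))   # True

The gap between the two conditions grows with the number of infected sources, which is why the text treats the single-update property of content filtering as its key advantage.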
B. Code-Red Case Study
As a first step, we examine the effectiveness of this idealized
containment system on a Code-Red-style worm. While future
worms are likely to be more severe, we argue that any
containment system must at least mitigate a worm of this
magnitude. We start with two basic questions: How short a reaction time is required to contain such a worm, and how do the two containment strategies compare in effectiveness?

References

H. W. Hethcote, "The Mathematics of Infectious Diseases," SIAM Review, Dec. 2000.
G. C. Necula, "Proof-Carrying Code," Proceedings of POPL, 1997.
C. Cowan et al., "StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks," Proceedings of the USENIX Security Symposium, 1998.
D. Moore, G. M. Voelker, and S. Savage, "Inferring Internet Denial-of-Service Activity," Proceedings of the USENIX Security Symposium, 2001.
S. Staniford, V. Paxson, and N. Weaver, "How to Own the Internet in Your Spare Time," Proceedings of the USENIX Security Symposium, 2002.