Author

Cliff C. Zou

Bio: Cliff C. Zou is an academic researcher at the University of Central Florida. His research covers topics including vehicular ad hoc networks and the Internet. He has an h-index of 31 and has co-authored 108 publications receiving 5,534 citations. Previous affiliations of Cliff C. Zou include the University of Massachusetts Amherst.


Papers
Proceedings ArticleDOI
18 Nov 2002
TL;DR: This paper provides a careful analysis of Code Red propagation by accounting for two factors: the dynamic countermeasures taken by ISPs and users, and the slowed-down worm infection rate caused by the congestion and router problems that Code Red's rampant propagation created.
Abstract: The Code Red worm incident of July 2001 has stimulated activities to model and analyze Internet worm propagation. In this paper we provide a careful analysis of Code Red propagation by accounting for two factors: one is the dynamic countermeasures taken by ISPs and users; the other is the slowed-down worm infection rate, because Code Red's rampant propagation caused congestion and trouble for some routers. Based on the classical Kermack-McKendrick epidemic model, we derive a general Internet worm model called the two-factor worm model. Simulations and numerical solutions of the two-factor worm model match the observed data of the Code Red worm better than previous models do. This model leads to a better understanding and prediction of the scale and speed of Internet worm spreading.
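
As a rough illustration of the dynamics this kind of model captures, the sketch below numerically integrates a simplified two-factor-style system with dynamic host removal and a congestion-damped infection rate; the parameter values and functional forms are illustrative assumptions, not the paper's fitted Code Red numbers.

```python
# Minimal sketch of a two-factor-style worm model (illustrative parameters,
# not the paper's fitted Code Red values).
# S: susceptible, I: infectious, R: removed from the infectious state,
# Q: susceptible hosts removed before infection (e.g., patched).
N = 360_000        # assumed vulnerable population size
beta0 = 8e-7       # assumed initial pairwise infection rate (per hour)
gamma = 0.05       # assumed removal rate of infectious hosts (cleanup/patching)
mu = 6e-8          # assumed removal rate of susceptible hosts (patching)
eta = 3            # assumed exponent damping the rate as traffic congests links
dt = 0.01          # time step in hours

S, I, R, Q = N - 10.0, 10.0, 0.0, 0.0
peak = I
for _ in range(int(72 / dt)):             # simulate three days
    beta = beta0 * (1.0 - I / N) ** eta   # infection rate drops with congestion
    J = I + R                             # hosts that have ever been infected
    new_inf = beta * S * I * dt           # newly infected this step
    dQ = mu * S * J * dt                  # susceptible hosts removed (patched)
    dR = gamma * I * dt                   # infectious hosts removed (cleaned)
    S, I, R, Q = S - new_inf - dQ, I + new_inf - dR, R + dR, Q + dQ
    peak = max(peak, I)

print(f"peak simultaneously infectious hosts: {peak:,.0f} of {N:,}")
```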

829 citations

Proceedings Article
01 Jan 2006
TL;DR: A diurnal propagation model is created that uses diurnal shaping functions to capture regional variations in online vulnerable populations, lets one compare propagation rates for different botnets, and helps prioritize response.
Abstract: Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six-month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occur because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that have a head start. Since response times for malware outbreaks are now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model.
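
As an illustration of the diurnal-shaping idea, the sketch below scales a worm's growth rate by the fraction of each region's vulnerable hosts that are online at a given hour; the sinusoidal shape, the region split, and all constants are assumptions for illustration, not the paper's empirically derived shaping functions.

```python
# Minimal sketch of a diurnal shaping function: the fraction of a region's
# vulnerable hosts that are online, modelled here as a sinusoid with a
# time-zone phase offset. Functional form and constants are illustrative.
import math

def online_fraction(t_hours: float, utc_offset: int,
                    low: float = 0.3, high: float = 0.95) -> float:
    """Fraction of hosts online at global time t for a region at utc_offset."""
    local = (t_hours + utc_offset) % 24.0
    # peak availability mid-afternoon local time, trough overnight
    phase = math.cos(2 * math.pi * (local - 15.0) / 24.0)
    return low + (high - low) * (phase + 1.0) / 2.0

def diurnal_growth(t, infected, rate, regions):
    """Instantaneous growth when only online hosts can scan or be scanned."""
    scale = sum(online_fraction(t, off) * share for off, share in regions)
    return rate * infected * scale

# three regions as (UTC offset, share of vulnerable population) -- assumed split
regions = [(-5, 0.4), (1, 0.35), (8, 0.25)]
print(diurnal_growth(t=3.0, infected=1_000, rate=0.2, regions=regions))
```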

399 citations

Proceedings ArticleDOI
27 Oct 2003
TL;DR: This paper proposes using a Kalman filter to detect a worm's propagation at its early stage in real time; the approach can also effectively predict the overall vulnerable population size and correct the bias in the observed number of infected hosts.
Abstract: After the Code Red incident in 2001 and the SQL Slammer in January 2003, it is clear that a simple self-propagating worm can quickly spread across the Internet, infecting most vulnerable computers before people can take effective countermeasures. The fast-spreading nature of worms calls for a worm monitoring and early warning system. In this paper, we propose effective algorithms for early detection of the presence of a worm and the corresponding monitoring system. Based on an epidemic model and observation data from the monitoring system, and using the idea of "detecting the trend, not the rate" of monitored illegitimate scan traffic, we propose using a Kalman filter to detect a worm's propagation at its early stage in real time. In addition, we can effectively predict the overall vulnerable population size and correct the bias in the observed number of infected hosts. Our simulation experiments for Code Red and SQL Slammer show that, with observation data from a small fraction of IP addresses, we can detect the presence of a worm when it has infected only 1% to 2% of the vulnerable computers on the Internet.
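
As an illustration of the "detect the trend, not the rate" idea, the sketch below uses a scalar Kalman filter to recursively estimate an early-stage exponential growth factor from noisy monitored scan counts; the noise variances, the synthetic data, and the alarm logic are assumptions, not the paper's estimator.

```python
# Minimal sketch: estimate the early exponential growth factor alpha of a worm
# from noisy monitored scan counts with a scalar Kalman filter. The state is
# alpha (assumed roughly constant early on); the observations are per-step
# relative increases. All variances are illustrative assumptions.
import random

def estimate_growth(counts, q=1e-4, r=0.05):
    alpha_hat, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for prev, cur in zip(counts, counts[1:]):
        if prev <= 0:
            continue
        z = cur / prev - 1.0           # observed relative growth this step
        p += q                         # predict: alpha drifts slowly
        k = p / (p + r)                # Kalman gain
        alpha_hat += k * (z - alpha_hat)
        p *= (1.0 - k)
        estimates.append(alpha_hat)
    return estimates

# synthetic monitored counts: exponential growth plus observation noise
random.seed(1)
true_alpha = 0.08
counts = [100.0]
for _ in range(60):
    counts.append(counts[-1] * (1 + true_alpha) * random.uniform(0.9, 1.1))

est = estimate_growth(counts)
# an alarm could fire once the estimate stays positive and stable for a while
print(f"final growth-rate estimate: {est[-1]:.3f} (true {true_alpha})")
```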

369 citations

Proceedings Article
10 Apr 2007
TL;DR: In this paper, an advanced hybrid peer-to-peer botnet is proposed, which provides robust network connectivity, individualized encryption and control traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster.
Abstract: A "botnet" consists of a network of compromised computers controlled by an attacker ("botmaster"). Recently botnets have become the root cause of many Internet attacks. To be well prepared for future attacks, it is not enough to study how to detect and defend against the botnets that have appeared in the past. More importantly, we should study advanced botnet designs that could be developed by botmasters in the near future. In this paper, we present the design of an advanced hybrid peer-to-peer botnet. Compared with current botnets, the proposed botnet is harder to shut down, monitor, and hijack. It provides robust network connectivity, individualized encryption and control traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster. Possible defenses against this advanced botnet are suggested.

336 citations

Proceedings ArticleDOI
27 Oct 2003
TL;DR: The analysis shows that dynamic quarantine can reduce a worm's propagation speed, which can give precious time to fight against a worm before it is too late, and will raise a worm's epidemic threshold, thereby reducing the chance for a worm to spread out.
Abstract: Due to the fast-spreading nature and great damage of Internet worms, it is necessary to implement automatic mitigation, such as dynamic quarantine, on computer networks. Enlightened by the methods used in epidemic disease control in the real world, we present a dynamic quarantine method based on the principle "assume guilty before proven innocent": we quarantine a host whenever its behavior looks suspicious by blocking traffic on its anomaly port, and we release the quarantine after a short time, even if the host has not yet been inspected by security staff. We present mathematical analysis of three worm propagation models under this dynamic quarantine method. The analysis shows that dynamic quarantine can reduce a worm's propagation speed, which can give us precious time to fight against a worm before it is too late. Furthermore, dynamic quarantine raises a worm's epidemic threshold, thus reducing the chance for a worm to spread out. The simulation results verify our analysis and demonstrate the effectiveness of the dynamic quarantine defense.
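
As a rough illustration of why short, preemptive quarantine slows a worm down, the sketch below folds the fraction of time hosts spend quarantined into the effective contact rate of a simple SI model; the rates, the quarantine duration, and the fraction-of-time approximation are illustrative assumptions rather than the paper's analytical results.

```python
# Minimal sketch of dynamic quarantine in a simple SI worm model: hosts that
# look suspicious are quarantined at some rate and released after a fixed
# short time T, which scales down the effective contact rate. All values are
# illustrative assumptions.
N = 100_000        # vulnerable population (assumed)
beta = 2e-6        # pairwise infection rate per hour (assumed)
lam1 = 0.2         # quarantine rate for infectious hosts (assumed)
lam2 = 0.05        # quarantine rate for susceptible hosts, i.e. false alarms
T = 0.5            # quarantine duration in hours (assumed)
dt = 0.01

# approximate fraction of time a host spends quarantined at rate lam, duration T
p_inf = lam1 * T / (1 + lam1 * T)
p_sus = lam2 * T / (1 + lam2 * T)

I, S = 10.0, N - 10.0
for _ in range(int(48 / dt)):
    # only non-quarantined infectious hosts scan, and only non-quarantined
    # susceptible hosts can be hit, so the effective rate shrinks
    new_inf = beta * (1 - p_inf) * I * (1 - p_sus) * S * dt
    I, S = I + new_inf, S - new_inf

print(f"infected after 48h with quarantine: {I:,.0f} of {N:,}")
```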

296 citations


Cited by
01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, focusing on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should make the reader able to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.
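
As a small, concrete example of one component the paper covers, the sketch below computes and verifies a message authentication code using Python's standard library; the key, message, and hash choice are placeholders for illustration.

```python
# Minimal example of a message authentication code (MAC) with the Python
# standard library; key and message are illustrative placeholders.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)                  # shared secret key
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).digest()

# the verifier recomputes the tag and compares in constant time
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print("authentic:", ok)
```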

2,188 citations

Proceedings Article
28 Jul 2008
TL;DR: This paper presents a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence botnet signatures, or C&C server names/addresses).
Abstract: Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence botnet signatures, or C&C server names/addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers/peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross-cluster correlation to identify hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm) and has a very low false positive rate.
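
As an illustration of the cross-cluster correlation step described above, the sketch below flags hosts that fall in the same communication-behavior cluster and also share a malicious-activity cluster; the toy cluster assignments, scoring rule, and thresholds are assumptions for illustration, not BotMiner's actual clustering or scoring.

```python
# Minimal sketch of cross-cluster correlation: flag hosts that appear both in
# a cluster of similar communication behaviour (C-plane) and in a cluster of
# similar malicious activity (A-plane). Clustering is stubbed out with
# precomputed assignments; the scoring rule is an illustrative assumption.
from collections import defaultdict

# host -> communication-pattern cluster id (e.g., from flow-feature clustering)
c_plane = {"10.0.0.5": "c1", "10.0.0.7": "c1", "10.0.0.9": "c2", "10.0.0.11": "c1"}
# host -> set of malicious-activity cluster ids (e.g., scanning, spam)
a_plane = {"10.0.0.5": {"scan"}, "10.0.0.7": {"scan", "spam"}, "10.0.0.3": {"spam"}}

def cross_correlate(c_plane, a_plane):
    """Score hosts that share both a communication and an activity cluster."""
    by_comm = defaultdict(set)             # communication cluster -> its hosts
    for host, cid in c_plane.items():
        by_comm[cid].add(host)
    scores = defaultdict(float)
    for hosts in by_comm.values():
        for h in hosts:
            for other in hosts - {h}:
                shared = a_plane.get(h, set()) & a_plane.get(other, set())
                scores[h] += len(shared)    # reward correlated malicious activity
    return {h: s for h, s in scores.items() if s > 0}

print(cross_correlate(c_plane, a_plane))    # flags 10.0.0.5 and 10.0.0.7
```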

1,204 citations

Proceedings Article
01 Jan 2008
TL;DR: This paper proposes an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses, and shows that BotSniffer can detect real-world botnets with high accuracy and a very low false positive rate.
Abstract: Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols (e.g., IRC and HTTP) in protocol-conforming ways. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.
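
As an illustration of the spatial-temporal correlation intuition, the sketch below scores each contacted server by how often multiple distinct local hosts act within the same short time window after talking to it; the window size, events, and scoring are assumptions for illustration, not BotSniffer's statistical tests.

```python
# Minimal sketch: hosts talking to the same server that act in the same time
# windows are more likely to be bots responding to shared commands. Window
# size, events, and the score are illustrative assumptions.
from collections import defaultdict

WINDOW = 60.0   # seconds per correlation window (assumed)

# (host, server, timestamp) observations of activity following server contact
events = [
    ("10.0.0.5", "evil.example", 12.0), ("10.0.0.7", "evil.example", 14.5),
    ("10.0.0.9", "evil.example", 15.1), ("10.0.0.5", "evil.example", 75.0),
    ("10.0.0.7", "evil.example", 76.2), ("10.0.0.4", "cdn.example", 30.0),
]

def group_sync_score(events, window=WINDOW):
    """Per server: fraction of activity windows shared by 2+ distinct hosts."""
    windows = defaultdict(lambda: defaultdict(set))   # server -> window -> hosts
    for host, server, ts in events:
        windows[server][int(ts // window)].add(host)
    scores = {}
    for server, per_window in windows.items():
        shared = sum(1 for hosts in per_window.values() if len(hosts) >= 2)
        scores[server] = shared / len(per_window)
    return scores

print(group_sync_score(events))   # evil.example scores high, cdn.example low
```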

859 citations

Posted Content
TL;DR: In this paper, the authors give a detailed and structured presentation of the publicly available information on SGX, together with a series of intelligent guesses about some important but undocumented aspects of SGX.
Abstract: Intel's Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and confidentiality guarantees to security-sensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc.) is potentially malicious. This paper analyzes Intel SGX based on the three papers [14, 78, 137] that introduced it, on the Intel Software Developer's Manual [100] (which supersedes the SGX manuals [94, 98]), on an ISCA 2015 tutorial [102], and on two patents [108, 136]. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper's contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX's security properties.

834 citations