Showing papers on "Denial-of-service attack published in 1999"


Book
15 Aug 1999
TL;DR: This book covers the TCP/IP Internet model, back-to-basics DNS theory, and an overview of running Snort and writing Snort rules, aiming to clarify the role of Snort in the security model.
Abstract: (NOTE: Each chapter concludes with a Summary.) I. TCP/IP. 1. IP Concepts. The TCP/IP Internet Model. Packaging (Beyond Paper or Plastic). Addresses. Service Ports. IP Protocols. Domain Name System. Routing: How You Get There from Here. 2. Introduction to TCPdump and TCP. TCPdump. Introduction to TCP. TCP Gone Awry. 3. Fragmentation. Theory of Fragmentation. Malicious Fragmentation. 4. ICMP. ICMP Theory. Mapping Techniques. Normal ICMP Activity. Malicious ICMP Activity. To Block or Not to Block. 5. Stimulus and Response. The Expected. Protocol Benders. Abnormal Stimuli. 6. DNS. Back to Basics: DNS Theory. Using DNS for Reconnaissance. Tainting DNS Responses. II. TRAFFIC ANALYSIS. 7. Packet Dissection Using TCPdump. Why Learn to Do Packet Dissection? Sidestep DNS Queries. Introduction to Packet Dissection Using TCPdump. Where Does the IP Stop and the Embedded Protocol Begin? Other Length Fields. Increasing the Snaplen. Dissecting the Whole Packet. Freeware Tools for Packet Dissection. 8. Examining IP Header Fields. Insertion and Evasion Attacks. IP Header Fields. The More Fragments (MF) Flag. 9. Examining Embedded Protocol Header Fields. TCP. UDP. ICMP. 10. Real-World Analysis. You've Been Hacked! Netbus Scan. How Slow Can You Go? RingZero Worm. 11. Mystery Traffic. The Event in a Nutshell. The Traffic. DDoS or Scan. Fingerprinting Participant Hosts. III. FILTERS/RULES FOR NETWORK MONITORING. 12. Writing TCPdump Filters. The Mechanics of Writing TCPdump Filters. Bit Masking. TCPdump IP Filters. TCPdump UDP Filters. TCPdump TCP Filters. 13. Introduction to Snort and Snort Rules. An Overview of Running Snort. Snort Rules. 14. Snort Rules, Part II. Format of Snort Options. Rule Options. Putting It All Together. IV. INTRUSION INFRASTRUCTURE. 15. Mitnick Attack. Exploiting TCP. Detecting the Mitnick Attack. Network-Based Intrusion-Detection Systems. Host-Based Intrusion-Detection Systems. Preventing the Mitnick Attack. 16. Architectural Issues. Events of Interest. Limits to Observation. Low-Hanging Fruit Paradigm. Human Factors Limit Detects. Severity. Countermeasures. Calculating Severity. Sensor Placement. Outside Firewall. Push/Pull. Analyst Console. Host- or Network-Based Intrusion Detection. 17. Organizational Issues. Organizational Security Model. Defining Risk. Risk. Defining the Threat. Risk Management Is Dollar Driven. How Risky Is a Risk? 18. Automated and Manual Response. Automated Response. Honeypot. Manual Response. 19. Business Case for Intrusion Detection. Part One: Management Issues. Part Two: Threats and Vulnerabilities. Part Three: Tradeoffs and Recommended Solution. Repeat the Executive Summary. 20. Future Directions. Increasing Threat. Defending Against the Threat. Defense in Depth. Emerging Techniques. V. APPENDIXES. Appendix A. Exploits and Scans to Apply Exploits. False Positives. IMAP Exploits. Scans to Apply Exploits. Single Exploit, Portmap. Summary. Appendix B. Denial of Service. Brute-Force Denial-of-Service Traces. Elegant Kills. nmap. Distributed Denial-of-Service Attacks. Summary. Appendix C. Detection of Intelligence Gathering. Network and Host Mapping. NetBIOS-Specific Traces. Stealth Attacks. Measuring Response Time. Worms as Information Gatherers. Summary. Index
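In the spirit of the book's chapters on writing TCPdump filters and bit masking, here is a minimal sketch (my own illustration, not taken from the book) of a bit-masked filter selecting initial SYN segments; the interface name and destination port are assumptions:

```python
# tcp[13] is the TCP flags byte in classic TCPdump filter notation.
# Masking with 0x12 keeps only the ACK (0x10) and SYN (0x02) bits; requiring
# the result to equal 0x02 selects SYNs without ACK, i.e. the first packet
# of each handshake (the packets a SYN flood is made of).
SYN_ONLY_FILTER = "tcp[13] & 0x12 = 0x02 and dst port 80"

# Equivalent tcpdump command line (interface name is an assumption):
cmd = ["tcpdump", "-n", "-i", "eth0", SYN_ONLY_FILTER]
print(" ".join(cmd))
```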

172 citations


Journal ArticleDOI
TL;DR: This article examines the vulnerabilities of the common protocols carried over TCP/IP (including SMTP, Telnet, NTP, Finger, NFS, FTP, WWW, and X Windows) and proposes configuration methods to limit their vulnerability.

163 citations


Proceedings ArticleDOI
22 Feb 1999
TL;DR: The paper describes the Escort architecture and its implementation in Scout, and reports a collection of experiments that measure the costs and benefits of using Escort to protect a web server from denial of service attacks.
Abstract: We describe a two-dimensional architecture for defending against denial of service attacks. In one dimension, the architecture accounts for all resources consumed by each I/O path in the system; this accounting mechanism is implemented as an extension to the path object in the Scout operating system. In the second dimension, the various modules that define each path can be configured in separate protection domains; we implement hardware-enforced protection domains, although other implementations are possible. The resulting system, which we call Escort, is the first example of a system that simultaneously does end-to-end resource accounting (thereby protecting against resource-based denial of service attacks where principals can be identified) and supports multiple protection domains (thereby allowing untrusted modules to be isolated from each other). The paper describes the Escort architecture and its implementation in Scout, and reports a collection of experiments that measure the costs and benefits of using Escort to protect a web server from denial of service attacks.
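A minimal sketch of the accounting dimension, assuming invented names and budgets (this is not the Scout/Escort API): every unit of work done on behalf of an I/O path is charged to that path, so an abusive principal exhausts only its own budget:

```python
from dataclasses import dataclass

@dataclass
class Path:
    """One I/O path, e.g. one client's request flowing through the modules."""
    principal: str
    cpu_ms: float = 0.0
    mem_bytes: int = 0

# Budgets are illustrative assumptions.
CPU_BUDGET_MS = 50.0
MEM_BUDGET_BYTES = 64 * 1024

def charge(path: Path, cpu_ms: float, mem_bytes: int) -> bool:
    """Charge work done in any module to this path; False means evict it."""
    path.cpu_ms += cpu_ms
    path.mem_bytes += mem_bytes
    return path.cpu_ms <= CPU_BUDGET_MS and path.mem_bytes <= MEM_BUDGET_BYTES

# A flooding client is cut off once its own path exceeds its budget,
# while other paths (and the web server as a whole) keep their resources.
p = Path("client-203.0.113.7")
while charge(p, cpu_ms=10.0, mem_bytes=4096):
    pass  # serve another chunk of the request
print(f"path for {p.principal} evicted after {p.cpu_ms:.0f} ms of CPU")
```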

148 citations


Patent
15 Apr 1999
TL;DR: In this patent, an active monitor detects and classifies messages transmitted on a network and includes a routine for classifying TCP packet source addresses as being of an acceptable, unacceptable, or suspect type.
Abstract: An active monitor detects and classifies messages transmitted on a network. In one form, the monitor includes a routine for classifying TCP packet source addresses as being of an acceptable, unacceptable, or suspect type. Suspect source addresses may be further processed in accordance with a state machine having a number of conditionally linked states including a good address state, a new address state, and a bad address state. For this form, the monitor selectively sends signals to targeted destination hosts for addresses in the unacceptable
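A hedged sketch of the classification idea described in the abstract; the state names follow the abstract, while the transition rules and thresholds are my assumptions:

```python
# Illustrative state machine for classifying TCP source addresses.
GOOD, NEW, BAD = "good", "new", "bad"

class AddressClassifier:
    def __init__(self, max_failures: int = 3):
        self.state = {}     # source address -> state
        self.failures = {}  # source address -> failed-handshake count
        self.max_failures = max_failures

    def observe_syn(self, src: str) -> str:
        # A previously unseen source starts in the "new address" state.
        return self.state.setdefault(src, NEW)

    def observe_handshake_completed(self, src: str) -> None:
        # Completing the TCP handshake is evidence the address is real.
        self.state[src] = GOOD
        self.failures[src] = 0

    def observe_handshake_failed(self, src: str) -> None:
        # Repeated half-open connections push a suspect address to "bad".
        n = self.failures.get(src, 0) + 1
        self.failures[src] = n
        if n >= self.max_failures:
            self.state[src] = BAD
```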

138 citations


ReportDOI
01 Jan 1999
TL;DR: This work developed the first objective, repeatable, and realistic measurement of intrusion detection system performance; results from the 1998 DARPA evaluation are highlighted and preliminary plans for the 1999 evaluation are presented.
Abstract: Intrusion detection systems monitor the use of computers and the network over which they communicate, searching for unauthorized use, anomalous behavior, and attempts to deny users, machines or portions of the network access to services. Potential users of such systems need information that is rarely found in marketing literature, including how well a given system finds intruders and how much work is required to use and maintain that system in a fully functioning network with significant daily traffic. Researchers and developers can specify which prototypical attacks can be found by their systems, but without access to the normal traffic generated by day-to-day work, they cannot describe how well their systems detect real attacks while passing background traffic and avoiding false alarms. This information is critical: every declared intrusion requires time to review, regardless of whether it is a correct detection for which a real intrusion occurred, or whether it is merely a false alarm. To meet the needs of researchers, developers and ultimately system administrators, we have developed the first objective, repeatable, and realistic measurement of intrusion detection system performance. Network traffic on an Air Force base was measured, characterized and subsequently simulated on an isolated network on which a few computers were used to simulate thousands of different Unix systems and hundreds of different users during periods of normal network traffic. Simulated attackers mapped the network, issued denial of service attacks, illegally gained access to systems, and obtained super-user privileges. Attack types ranged from old, well-known attacks to new, stealthy attacks. Seven weeks of training data and two weeks of testing data were generated, filling more than 30 CD-ROMs. Methods and results from the 1998 DARPA intrusion detection evaluation will be highlighted, and preliminary plans for the 1999 evaluation will be presented.
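As an illustration of the tradeoff the evaluation measures (my own sketch, not the DARPA scoring code), detection rate and false alarms per day can be computed from labeled alerts:

```python
def score(alerts: set, true_attacks: set, days: float):
    """alerts and true_attacks are sets of attack identifiers."""
    detected = alerts & true_attacks
    false_alarms = alerts - true_attacks  # each one costs analyst review time
    detection_rate = len(detected) / len(true_attacks) if true_attacks else 0.0
    return detection_rate, len(false_alarms) / days

# Assumed example: 8 of 10 attacks found, 14 spurious alerts over 14 days.
rate, fa = score(set(range(8)) | {f"spurious-{i}" for i in range(14)},
                 set(range(10)), days=14)
print(f"detection rate {rate:.0%}, {fa:.1f} false alarms/day")
```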

77 citations


Journal ArticleDOI
01 Jan 1999
TL;DR: The present work discusses several threats and attacks related to TCP/IP, extends the discussion to T/TCP, and describes how the attacks are carried out and the related means of prevention, detection, and defense.
Abstract: The Internet put the rest of the world at the reach of our computers. In the same way it also made our computers reachable by the rest of the world. Good news and bad news! Over the last decade, the Internet has been subject to widespread security attacks. Besides the classical terms, new ones had to be found in order to designate a large collection of threats: worms, break-ins, hackers, crackers, hijacking, phrackers, spoofing, man-in-the-middle, password-sniffing, denial-of-service, and so on. Since the Internet was born of academic efforts to share information, it never strove for high security measures; in fact, in some of its components, security was consciously traded for ease of sharing. Although the advent of electronic commerce has pushed for "real security" in the Internet, there are still a large number of users (including computer scientists) who are very vulnerable to attacks, mostly because they are not aware of the nature (and ease) of the attacks and still believe that a "good" password is all they need to be concerned about. Aiming for a better understanding of the subject, we wrote a first paper [1] in which we discussed several threats and attacks related to TCP/IP. The present work is an extension of the first one, and its main goal is to include T/TCP in the discussion. Additionally, in an effort to make this paper more comprehensive, we included some sections from the former. Besides the description of each attack (the what), we also discuss the way they are carried out (the how) and, when possible, the related means of prevention, detection and/or defense.

56 citations


Proceedings Article
01 Jan 1999
TL;DR: A real-time anomaly detection method for TCP SYN-flooding attacks, based on the intensities of SYN segments measured in real time on a network monitoring machine; it can allow ISPs to determine their correct requirements to cope with this particular attack and provide more secure services to their clients.
Abstract: In this paper we propose a real-time anomaly detection method for detecting TCP SYN-flooding attacks. The method is based on the intensities of SYN segments measured in real time on a network monitoring machine. In the currently available solutions we note several important flaws, such as the possibility of denying access to legitimate clients and/or causing service degradation at the potential target machines; we therefore aim to minimize such unwanted effects by acting only when it is necessary to do so: during an attack. In order to force attackers into a detectable region (and hence avoid false negatives) and to determine the actual level of threat we are facing, we also profit from a series of host-based measures such as tuning the TCP backlog queue lengths of our servers. Experience showed that complete avoidance of false positives is not possible with this method; however, a significant decrease can reasonably be expected. Nevertheless, this requires an acceptable model for the legitimate use of services. We first explain why the Poisson model would fail in modeling TCP connection arrivals for our purpose and show that analyzing daily maximum arrival rates can be suitable for minimizing false positive probabilities. This method can allow ISPs to determine their correct requirements to cope with this particular attack and provide more secure services to their clients.
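A minimal sketch of the detection idea, assuming a sliding one-second window and a threshold derived from the measured daily maximum legitimate SYN rate (the safety factor is an assumption):

```python
from collections import deque
import time

class SynFloodDetector:
    def __init__(self, daily_max_rate: float, safety_factor: float = 1.5,
                 window_s: float = 1.0):
        # The threshold sits above the highest legitimate rate seen in a
        # day, forcing attackers into a detectable region (the paper's aim).
        self.threshold = daily_max_rate * safety_factor  # SYNs per second
        self.window_s = window_s
        self.arrivals = deque()

    def on_syn(self, now=None) -> bool:
        """Record one observed SYN segment; True means the current
        intensity is anomalous (a likely SYN flood)."""
        now = time.monotonic() if now is None else now
        self.arrivals.append(now)
        # Drop arrivals that fell out of the measurement window.
        while self.arrivals and now - self.arrivals[0] > self.window_s:
            self.arrivals.popleft()
        return len(self.arrivals) / self.window_s > self.threshold

# Assumed daily maximum of 200 legitimate SYN/s -> alarm above 300 SYN/s.
detector = SynFloodDetector(daily_max_rate=200.0)
```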

27 citations


Book ChapterDOI
09 Nov 1999
TL;DR: Two key agreement protocols that are resistant to a denial-of-service attack are constructed from a key agreement protocol in [9] that is provably secure against passive and active attacks.
Abstract: In this manuscript, two key agreement protocols which are resistant to a denial-of-service attack are constructed from a key agreement protocol in [9] that is provably secure against passive and active attacks. The denial-of-service attack considered is the resource-exhaustion attack on a responder: a malicious initiator executes a key agreement protocol simultaneously as many times as possible to exhaust the responder's resources and to disturb executions of the protocol between honest initiators and the responder. The resources are storage and CPU. The proposed protocols are the first that are resistant to both the storage-exhaustion attack and the CPU-exhaustion attack. The techniques used in the construction are stateless connection, weak key confirmation, and enforcement of heavy computation. The stateless connection is effective in enhancing resistance to the storage-exhaustion attack, while the weak key confirmation and the enforcement of heavy computation are effective in enhancing resistance to the CPU-exhaustion attack.
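A hedged sketch of the stateless-connection technique named in the abstract: the responder stores nothing per request, returning an authenticated cookie the initiator must echo back (the message format and names are assumptions, not the paper's protocol):

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)  # responder's secret; rotated periodically

def make_cookie(initiator_id: bytes, initiator_nonce: bytes) -> bytes:
    """First response: all session state lives inside the MAC'd cookie,
    so a flood of first messages exhausts no responder storage."""
    epoch = int(time.time() // 60).to_bytes(8, "big")
    msg = initiator_id + initiator_nonce + epoch
    return epoch + hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

def check_cookie(initiator_id: bytes, initiator_nonce: bytes,
                 cookie: bytes) -> bool:
    """Second round: recompute the MAC instead of looking up stored state."""
    epoch, mac = cookie[:8], cookie[8:]
    msg = initiator_id + initiator_nonce + epoch
    return hmac.compare_digest(
        mac, hmac.new(SERVER_KEY, msg, hashlib.sha256).digest())

nonce = os.urandom(16)
cookie = make_cookie(b"initiator-A", nonce)
assert check_cookie(b"initiator-A", nonce, cookie)
```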

18 citations


Book ChapterDOI
TL;DR: It is shown that when detection is narrowed to one specific type of attack in a large network, for example denial of service, viruses, worms, or privacy attacks, much more prior knowledge about the attack can be built into the system.
Abstract: Intrusion detection in a large network must rely on many distributed agents instead of one large monolithic module. Agents should have some kind of artificial intelligence in order to cope successfully with different intrusion problems. In this paper, we suggest a Bayesian alarm network to work as an independent network intrusion detection agent. We have shown that when detection is narrowed to one specific type of attack in a large network, for example denial of service, virus, worm, or privacy attacks, we can build much more prior knowledge about the attack into the system. Different nodes of the network can develop their own model of the Bayesian alarm network, and agents can communicate among themselves and with a common security database. Networks should be organized hierarchically, so that at the higher level of the hierarchy the Bayesian alarm network, thanks to interconnections with lower-level networks and data, acts as a distributed intrusion detection system.
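A tiny, self-contained illustration of the Bayesian-alarm idea (the paper's actual network structure and probabilities are not given here, so all numbers below are assumptions):

```python
def posterior_attack(prior: float, likelihoods: dict, evidence: dict) -> float:
    """Bayes' rule with naive independence between evidence sources.
    likelihoods[e] = (P(e | attack), P(e | no attack))."""
    p_attack, p_clean = prior, 1.0 - prior
    for name, observed in evidence.items():
        p_e_attack, p_e_clean = likelihoods[name]
        if observed:
            p_attack *= p_e_attack
            p_clean *= p_e_clean
        else:
            p_attack *= 1.0 - p_e_attack
            p_clean *= 1.0 - p_e_clean
    return p_attack / (p_attack + p_clean)

# Assumed numbers: SYN spikes strongly suggest DoS; ICMP bursts weakly so.
alarm = posterior_attack(
    prior=0.01,
    likelihoods={"syn_spike": (0.9, 0.05), "icmp_burst": (0.6, 0.1)},
    evidence={"syn_spike": True, "icmp_burst": False})
print(f"P(DoS attack | evidence) = {alarm:.2f}")
```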

16 citations


Proceedings ArticleDOI
18 Feb 1999
TL;DR: The paper proposes a DoS-resistant version of three-pass ISAKMP/Oakley's Phase 1 where DoS attacks impose expensive computation on the attackers themselves.
Abstract: Key-agreement protocols will play an important role as an entrance to secure communication over the Internet. Specifically, ISAKMP (Internet Security Association and Key Management Protocol)/Oakley key-agreement is currently a leading approach for communication between two parties. The basic idea of ISAKMP/Oakley is an authenticated Diffie-Hellman (DH) key-agreement protocol. This authentication owes a lot to public key primitives whose implementation includes modular exponentiation. Since modular exponentiation is computationally expensive, attackers are motivated to abuse it for Denial-of-Service (DoS) attacks. In search of resistance against DoS attacks, the paper first describes the basic idea of a protection mechanism for authenticated DH key-agreement protocols against DoS attacks. The paper then proposes a DoS-resistant version of three-pass ISAKMP/Oakley's Phase 1 where DoS attacks impose expensive computation on the attackers themselves. The DoS resistance is evaluated in terms of: (1) the computational cost caused by bogus requests and (2) a server-blocking probability.
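A sketch of the cost-imposing idea under the assumption of a hash puzzle: the initiator must perform roughly 2^20 hash computations before the responder does any expensive modular exponentiation (this illustrates the principle, not the paper's exact ISAKMP/Oakley message format):

```python
import hashlib
import itertools
import os

DIFFICULTY_BITS = 20  # assumed cost knob: ~2**20 hashes for the initiator

def solve(challenge: bytes) -> int:
    """Initiator's side: burn CPU until the hash has enough leading zeros."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for n in itertools.count():
        digest = hashlib.sha256(challenge + n.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return n

def verify(challenge: bytes, n: int) -> bool:
    """Responder's side: one cheap hash check before committing to any
    expensive Diffie-Hellman modular exponentiation."""
    digest = hashlib.sha256(challenge + n.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

challenge = os.urandom(16)  # fresh per request, so work cannot be reused
assert verify(challenge, solve(challenge))
```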

12 citations


Journal ArticleDOI
TL;DR: The sole objective of a DoS attack is to prevent the normal operation of a digital system in the manner required by its customers and intended by its designers.
Abstract: SCOPE AND DEFINITIONS. The provision of any service requires the utilisation of resources. In a digital context these resources might be processor cycles, memory capacity, disk space or communications bandwidth. A Denial of Service (DoS) attack implies either the removal of those resources by some external event or their pre-emption by a competing process; this should be understood to include rerouting or replacing a service. The sole objective of a DoS attack is thus to prevent the normal operation of a digital system in the manner required by its customers and intended by its designers. As such, DoS attacks on the mission-critical or business-critical infrastructure systems of financial, commercial or other enterprises offer the potential for sabotage, blackmail or extortion operations.

ReportDOI
21 Dec 1999
TL;DR: Trinoo and Tribe Flood Network are new, distributed forms of Denial of Service (DoS) attack designed to bring down a computer or network by overloading it with a large amount of network traffic using TCP, UDP, or ICMP.
Abstract: Trinoo and Tribe Flood Network (TFN) are new forms of Denial of Service (DoS) attack. DoS attacks are designed to bring down a computer or network by overloading it with a large amount of network traffic using TCP, UDP, or ICMP. In the past, these attacks came from a single location and were easy to detect. Trinoo and TFN are distributed-system intruder tools: they launch DoS attacks from multiple computer systems at a target system simultaneously. This makes the assault hard to detect and almost impossible to track back to the original attacker. Because these attacks can be launched from hundreds of computers under the command of a single attacker, they are far more dangerous than any DoS attack launched from a single location. These distributed tools have only been seen on Solaris and Linux machines, but there is no reason why they could not be modified for other UNIX machines. The target system can also be of any type, because the attack is based on the TCP/IP architecture, not a flaw in any particular operating system (OS). CIAC considers the risks presented by these DoS tools to be high.
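A small illustration (mine, not from the CIAC report) of why distribution makes these attacks hard to detect: each agent stays under any per-source rate threshold while the aggregate still overwhelms the victim:

```python
from collections import Counter

PER_SOURCE_LIMIT = 100   # pkt/s a naive per-source filter tolerates (assumed)
VICTIM_CAPACITY = 10_000  # pkt/s the target can absorb (assumed)

def analyze(flows: Counter) -> None:
    """flows maps source address -> packets/s sent to the victim."""
    flagged = [s for s, r in flows.items() if r > PER_SOURCE_LIMIT]
    total = sum(flows.values())
    print(f"{len(flagged)} sources flagged, aggregate {total} pkt/s "
          f"({'overload' if total > VICTIM_CAPACITY else 'ok'})")

# 500 compromised hosts at 50 pkt/s each: no single source is flagged,
# yet the victim sees 25,000 pkt/s.
analyze(Counter({f"10.0.{i // 256}.{i % 256}": 50 for i in range(500)}))
```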

Proceedings Article
01 Jan 1999
TL;DR: In order to prevent large-scale malicious attacks on an ISP network and maintain network services, a method is designed to record key packets classified by sessions, defining certain packets as indications of the session state.
Abstract: In order to prevent large-scale malicious attacks on an ISP network and maintain network services, we have designed a method to record key packets classified by sessions. A session is a service provided above the IP layer: we define a TCP connection, a UDP packet exchange, or an ICMP echo and echo response to be a session. Research on network attack, intrusion, and information collection has shown that most illegal actions leave something special in such sessions. For example, winnuke sends OOB packets to port 139 of a host, and most platform-detection probes use strange packets too. Not only the strange packets themselves, but also the sequence of such packets going through the network, indicate an attack. For example, teardrop transmits packets with an abnormal fragment offset in the second packet, causing some platforms to crash. Flood-based attacks and information collection also create characteristic session patterns: a SYN flood leaves a pile of SYN-SYN/ACK-RST packets in the network, and most scan tools create several kinds of patterns indicating failed connections, including SYN-SYN/ACK-RST, SYN-RST, and SYN-ICMP Unreachable. Based on this observation, we have designed session-state transition analysis. We define certain packets as indications of the session state; the occurrence of such a packet changes the session state, and by comparing the transitions with predefined rules we can detect most DoS attacks. Another approach is to store these session-state transition patterns in a database, so we can calculate the occurrence rates of specific patterns; compared with the average level, an abnormally high rate often indicates a possible attack or information collection. For example, we can collect all of a site's sessions matching the SYN-SYN/ACK-RST pattern to decide whether a scan has happened. The implementation includes four parts: the first collects and unwraps packets passing through the network; the second matches packet signatures to filter only the specified packets; the third clusters such packets into sessions, stores the session-specific signature chain, and checks whether a rule-based match is satisfied; the fourth flushes the session data into a database and checks whether a statistics-based anomaly has happened. This technique has several basic advantages. First, it does not violate privacy, since we inspect only the packet header to know whether a state has changed; inspecting headers only also makes the implementation efficient and fit for a large-scale network. Second, it avoids the headache of setting the threshold of a statistical approach: most scan-detection tools (for example, gabriel) calculate bursts of connections, and new scan techniques such as slow scans and stealthy scans have appeared to avoid such bursts, so setting a proper threshold is much more difficult for a large-scale network. For rule-based analysis, since we use state transitions to detect intrusion, we can predict the occurrence of some attacks at a premature stage. Future work includes content-analysis-based IDS, especially remote buffer-overflow detection; this part of the research is underway.
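A hedged sketch of the session-state transition analysis: cluster packets into sessions, record the state-changing flag sequence, and match it against patterns indicating failed connections. The pattern names follow the abstract, while the session key and matching rules are my assumptions:

```python
from collections import defaultdict

# Patterns from the abstract that indicate failed connections (scans).
SCAN_PATTERNS = {
    ("SYN", "SYN/ACK", "RST"),   # half-open scan
    ("SYN", "RST"),              # closed-port probe
    ("SYN", "ICMP_UNREACH"),     # filtered/unreachable probe
}

class SessionTracker:
    def __init__(self):
        # (src, dst, sport, dport) -> sequence of state-changing packets
        self.sessions = defaultdict(list)

    def on_packet(self, src, dst, sport, dport, flag) -> bool:
        """Append a state-changing packet; True if a scan pattern matched.
        Only header fields are used, preserving the privacy property."""
        key = (src, dst, sport, dport)
        self.sessions[key].append(flag)
        return tuple(self.sessions[key]) in SCAN_PATTERNS

tracker = SessionTracker()
tracker.on_packet("10.0.0.9", "10.0.0.1", 40000, 139, "SYN")
hit = tracker.on_packet("10.0.0.9", "10.0.0.1", 40000, 139, "RST")
print("possible scan" if hit else "ok")
```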