Showing papers on "Vulnerability (computing)" published in 2003


Journal ArticleDOI
TL;DR: A structured view of research on information-flow security is given, particularly focusing on work that uses static program analysis to enforce information-flow policies, and some important open challenges are identified.
Abstract: Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
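
As a hedged illustration of the language-based techniques the survey covers, the sketch below implements a toy static information-flow checker: every variable carries a security level, assignments whose sources exceed the target's level are rejected, and branching on HIGH data raises the level of the surrounding context so implicit flows are caught as well. The mini-language and all names are invented for illustration, not taken from the survey.

```python
# Toy information-flow checker: rejects explicit and implicit HIGH -> LOW
# flows. The mini-language and all names are illustrative, not the survey's.

LOW, HIGH = 0, 1

def check(program, levels):
    """program: list of ('assign', target, sources) or
    ('if', cond_vars, body) tuples. Returns a list of violations."""
    violations = []

    def walk(stmts, pc_level):
        for stmt in stmts:
            if stmt[0] == 'assign':
                _, target, sources = stmt
                src_level = max([levels[v] for v in sources] + [pc_level])
                if src_level > levels[target]:
                    violations.append(f"flow into {target} violates its level")
            elif stmt[0] == 'if':
                _, cond_vars, body = stmt
                branch_level = max(levels[v] for v in cond_vars)
                walk(body, max(pc_level, branch_level))  # implicit flows

    walk(program, LOW)
    return violations

levels = {'secret': HIGH, 'public': LOW, 'tmp': HIGH}
program = [
    ('assign', 'tmp', ['secret']),                    # HIGH -> HIGH: fine
    ('assign', 'public', ['secret']),                 # explicit leak: rejected
    ('if', ['secret'], [('assign', 'public', [])]),   # implicit leak: rejected
]
print(check(program, levels))
```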

2,058 citations


Proceedings ArticleDOI
25 Aug 2003
TL;DR: It is shown that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission time-out mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection.
Abstract: Denial of Service attacks are presenting an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission time-out mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized time-out mechanisms to thwart such low-rate DoS attacks.
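
A minimal sketch of the attack's timing logic under assumed parameters: short bursts at link capacity are spaced near TCP's minimum retransmission timeout (commonly around 1 s), so a recovering flow is repeatedly knocked back into timeout while the attacker's average rate stays low. The throughput model below is a crude illustration, not the paper's analytical model.

```python
# Illustrative model of a low-rate ("shrew") DoS pattern: short bursts at
# link capacity, spaced roughly at TCP's minRTO (~1 s), repeatedly forcing
# flows back into retransmission timeout. All parameters are assumed.

def average_attack_rate(burst_len, period, link_capacity):
    """Mean rate of a square-wave attack: burst_len seconds of full-rate
    traffic every `period` seconds."""
    return link_capacity * burst_len / period

def victim_throughput_fraction(period, min_rto=1.0):
    """Crude model: if bursts recur once per minRTO, the flow spends almost
    the whole period in timeout; if bursts are rarer, the flow recovers
    for (period - min_rto) of every period."""
    if period <= min_rto:
        return 0.0
    return (period - min_rto) / period

link = 10e6  # assumed 10 Mb/s bottleneck
for period in (1.0, 1.5, 3.0):
    rate = average_attack_rate(burst_len=0.15, period=period, link_capacity=link)
    frac = victim_throughput_fraction(period)
    print(f"period={period:.1f}s  attack avg={rate/1e6:.2f} Mb/s  "
          f"victim keeps ~{frac:.0%} of its rate")
```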

441 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: This paper surveys the various types of buffer overflow vulnerabilities and attacks, surveys the defensive measures that mitigate them, including the authors' own StackGuard method, and considers which combinations of techniques can eliminate the problem of buffer overflow vulnerabilities while preserving the functionality and performance of existing systems.
Abstract: Buffer overflows have been the most common form of security vulnerability for the last ten years. Moreover, buffer overflow vulnerabilities dominate the area of remote network penetration vulnerabilities, where an anonymous Internet user seeks to gain partial or total control of a host. If buffer overflow vulnerabilities could be effectively eliminated, a very large portion of the most serious security threats would also be eliminated. In this paper, we survey the various types of buffer overflow vulnerabilities and attacks, and survey the various defensive measures that mitigate buffer overflow vulnerabilities, including our own StackGuard method. We then consider which combinations of techniques can eliminate the problem of buffer overflow vulnerabilities, while preserving the functionality and performance of existing systems.
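
As a hedged, language-neutral sketch of the canary idea behind StackGuard (modeled here in Python rather than as compiler instrumentation): a random word is placed between the buffer and the saved return address and is verified before the function returns, so a linear overflow that reaches the return address must first clobber the canary.

```python
# Conceptual model of a StackGuard-style canary, not the real compiler
# instrumentation: a random word sits between the buffer and the saved
# return address; a linear overflow must trample it to reach the address.
import os

FRAME = {'buf_size': 16, 'canary_size': 8}

def make_frame(return_addr):
    canary = os.urandom(FRAME['canary_size'])
    frame = bytearray(FRAME['buf_size']) + bytearray(canary) + bytearray(return_addr)
    return frame, canary

def unsafe_copy(frame, data):
    frame[:len(data)] = data  # no bounds check, like strcpy()

def check_and_return(frame, canary):
    start = FRAME['buf_size']
    if bytes(frame[start:start + FRAME['canary_size']]) != canary:
        raise RuntimeError("stack smashing detected")
    return bytes(frame[start + FRAME['canary_size']:])

frame, canary = make_frame(return_addr=b'\x00\x40\x12\x34')
unsafe_copy(frame, b'A' * 30)          # overflows buffer, canary, and address
try:
    check_and_return(frame, canary)    # the check fires before the bad return
except RuntimeError as e:
    print(e)
```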

310 citations


Patent
01 Mar 2003
TL;DR: In this paper, a graphical user interface is contained on a computer screen and used for determining the vulnerability posture of a network, where a system design window displays network icons of a network map that are representative of different network elements contained within the network.
Abstract: A graphical user interface is contained on a computer screen and used for determining the vulnerability posture of a network. A system design window displays network icons of a network map that are representative of different network elements contained within the network. The respective network icons are linked together in an arrangement corresponding to how the network elements are interconnected within the network. Once the vulnerability posture of the network has been established, selected portions of the network map turn a different color to indicate a vulnerability established for that portion of the network.

241 citations


Patent
16 Jan 2003
TL;DR: In this paper, a method of identifying a software vulnerability in computer systems in a computer network includes a multiple-level scanning process controlled from a management system connected to the network, which runs a root scanner that applies an interrogation program to remote systems having network addresses in a predefined address range.
Abstract: A method of identifying a software vulnerability in computer systems in a computer network includes a multiple-level scanning process controlled from a management system connected to the network. The management system runs a root scanner which applies an interrogation program to remote systems having network addresses in a predefined address range. When a software vulnerability is detected, the interrogation program causes the respective remote system to scan topologically local systems, the remote system itself applying a second interrogation program to the local systems to detect and mitigate the vulnerability using an associated mitigation payload. Whilst that local scanning process is in progress, the root scanner can be applied to remote systems in other predefined address ranges.
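
A hedged sketch of the two-level flow the patent describes; every function name here is a hypothetical stand-in, not the patent's implementation. The root scanner sweeps a predefined address range, and each vulnerable host it finds is delegated the job of scanning and mitigating its topological neighbors while the root scanner moves on.

```python
# Sketch of the patent's two-level scanning flow; all functions are
# hypothetical stand-ins. A root scanner interrogates an address range;
# vulnerable hosts then scan and remediate their local neighbors while
# the root scanner proceeds to the next range.
from ipaddress import ip_network

def interrogate(host):
    """Stand-in vulnerability check; pretend .10 addresses are vulnerable."""
    return host.packed[-1] == 10

def local_scan_and_mitigate(host):
    print(f"  {host} scanning its local neighbors and applying mitigation")

def root_scan(address_ranges):
    for cidr in address_ranges:
        print(f"root scanner sweeping {cidr}")
        for host in ip_network(cidr).hosts():
            if interrogate(host):
                # Delegate: the vulnerable host runs the second-level scan.
                local_scan_and_mitigate(host)
        # The root scanner moves on while local scans run.

root_scan(['192.0.2.0/28', '198.51.100.0/28'])
```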

195 citations


Book ChapterDOI
08 Sep 2003
TL;DR: This paper is the first to describe a setup to conduct power-analysis attacks on FPGAs, and provides strong evidence that implementations of elliptic curve cryptosystems without specific countermeasures are indeed vulnerable to simple power-analysis attacks.
Abstract: Field Programmable Gate Arrays (FPGAs) are becoming increasingly popular, especially for rapid prototyping. For implementations of cryptographic algorithms, not only the speed and the size of the circuit are important, but also their security against implementation attacks such as side-channel attacks. Power-analysis attacks are typical examples of side-channel attacks that have been demonstrated to be effective against implementations without special countermeasures. The flexibility of FPGAs is an important advantage in real applications but also in lab environments. It is therefore natural to use FPGAs to assess the vulnerability of hardware implementations to power-analysis attacks. To our knowledge, this paper is the first to describe a setup to conduct power-analysis attacks on FPGAs. We discuss the design of our hand-made FPGA board and we provide a first characterization of the power consumption of a Virtex 800 FPGA. Finally, we provide strong evidence that implementations of elliptic curve cryptosystems without specific countermeasures are indeed vulnerable to simple power-analysis attacks.
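
A minimal illustration (assumed, not the paper's measurement setup) of why an unprotected double-and-add scalar multiplication leaks to simple power analysis: point doublings and additions have distinguishable power signatures, so the operation sequence read off a single trace reveals the scalar's bits. Here the "trace" is recorded symbolically.

```python
# Why naive double-and-add is SPA-vulnerable: the operation sequence
# mirrors the key bits. We record the sequence symbolically; on real
# hardware the two operations are distinguishable in a single power trace.

def double_and_add_trace(scalar):
    """Return the operation sequence a left-to-right double-and-add
    scalar multiplication would execute."""
    trace = []
    for bit in bin(scalar)[3:]:       # skip the leading 1
        trace.append('D')             # always double
        if bit == '1':
            trace.append('A')         # add only for 1-bits
    return trace

def recover_scalar(trace):
    """Reconstruct the scalar from the observed operation sequence."""
    bits = '1'
    i = 0
    while i < len(trace):
        if i + 1 < len(trace) and trace[i] == 'D' and trace[i + 1] == 'A':
            bits += '1'
            i += 2
        else:
            bits += '0'
            i += 1
    return int(bits, 2)

k = 0b1011001
trace = double_and_add_trace(k)
print(''.join(trace), '->', bin(recover_scalar(trace)))
assert recover_scalar(trace) == k
```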

184 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: COCA as mentioned in this paper is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet.
Abstract: COCA is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet. Replication is used to achieve availability; proactive recovery with threshold cryptography is used for digitally signing certificates in a way that defends against mobile adversaries which attack, compromise, and control one replica for a limited period of time before moving on to another. Relatively weak assumptions characterize environments in which COCA's protocols will execute correctly. No assumption is made about execution speed and message delivery delays; channels are expected to exhibit only intermittent reliability; and with 3t+1 COCA servers up to t may be faulty or compromised. The result is a system with inherent defenses to certain denial of service attacks because, by their very nature, weak assumptions are difficult for attackers to invalidate. In addition, traditional techniques, including request authorization, resource management based on segregation and scheduling different classes of requests, as well as caching results of expensive cryptographic operations, further reduce COCA's vulnerability to denial of service attacks. Results from experiments in a local area network and the Internet allow a quantitative evaluation of the various means COCA employs to resist denial of service attacks.
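
A small check, a sketch rather than COCA's protocol code, of the quorum arithmetic behind the 3t+1 bound: if every operation gathers responses from 2t+1 of the n = 3t+1 servers, any two such quorums overlap in at least t+1 servers, so the overlap always contains at least one correct server even when t are compromised.

```python
# Quorum arithmetic behind the n = 3t+1 replication bound (a sketch, not
# COCA's code). Two quorums of size 2t+1 drawn from 3t+1 servers overlap
# in at least t+1 servers, so the overlap always contains a correct one.

def min_overlap(n, q):
    """Worst-case intersection of two size-q subsets of n servers."""
    return max(0, 2 * q - n)

for t in range(1, 5):
    n, q = 3 * t + 1, 2 * t + 1
    overlap = min_overlap(n, q)
    assert overlap >= t + 1, "a quorum intersection could be all-faulty"
    print(f"t={t}: n={n}, quorum={q}, worst-case overlap={overlap}")
```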

181 citations


Proceedings ArticleDOI
06 Jan 2003
TL;DR: This paper reports on a recent attempt to focus requirements in this area by examining those currently in use and suggests a categorization of information assurance metrics that may be tailored to an organization's needs.
Abstract: The term "assurance" has been used for decades in trusted system development as an expression of confidence that one has in the strength of mechanisms or countermeasures. One of the unsolved problems of security engineering is the adoption of measures or metrics that can reliably depict the assurance associated with a specific hardware and software system. This paper reports on a recent attempt to focus requirements in this area by examining those currently in use. It then suggests a categorization of information assurance (IA) metrics that may be tailored to an organization's needs. We believe that the provision of security mechanisms in systems is a subset of the systems engineering discipline having a large software-engineering correlation. There is general agreement that no single system metric or any one "perfect" set of IA metrics applies across all systems or audiences. The set most useful for an organization largely depends on its IA goals; its technical, organizational, and operational needs; and the financial, personnel, and technical resources that are available.

149 citations


Patent
30 Oct 2003
TL;DR: The history, status, and key technologies of intrusion detection systems are reviewed, and future trends in the development of IDS technologies are discussed.
Abstract: An intrusion detection system monitors the rate and characteristics of Internet attacks on a computer network and filters attack alerts based upon various rates and frequencies of the attacks. The intrusion detection system monitors attacks on other hosts, determines if the attacks are random or general attacks or attacks directed towards a specific computer network, and generates a corresponding signal. The intrusion detection system also tests a computer network's vulnerability to attacks detected on the other monitored hosts.
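
A hedged sketch of the kind of rate-based alert filtering the patent describes: alerts are counted per signature in a sliding window, directed attacks are escalated immediately, and the rest are escalated only when their frequency crosses a threshold. The window length, threshold, and field names are assumed.

```python
# Sketch of rate-based alert filtering in the spirit of the patent: count
# alerts per signature in a sliding window and escalate only when a
# frequency threshold is crossed. Thresholds and fields are assumed.
from collections import deque

class AlertFilter:
    def __init__(self, window_secs=60.0, threshold=5):
        self.window = window_secs
        self.threshold = threshold
        self.events = {}  # signature -> deque of timestamps

    def observe(self, timestamp, signature, targets_us):
        q = self.events.setdefault(signature, deque())
        q.append(timestamp)
        while q and q[0] < timestamp - self.window:
            q.popleft()
        # Escalate directed attacks immediately; rate-filter the rest.
        return targets_us or len(q) >= self.threshold

f = AlertFilter()
for t in range(10):
    if f.observe(float(t), 'portscan', targets_us=False):
        print(f"t={t}: escalate portscan alert")
```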

141 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: This work describes a possible defense that comes from breaking the assumption of uniformly random path selection and shows that the defense improves anonymity in the static model, but fails in a dynamic model, in which nodes leave and join.
Abstract: We study the threat that passive logging attacks pose to anonymous communications. Previous work analyzed these attacks under limiting assumptions. We first describe a possible defense that comes from breaking the assumption of uniformly random path selection. Our analysis shows that the defense improves anonymity in the static model, where nodes stay in the system, but fails in a dynamic model, in which nodes leave and join. Additionally, we use the dynamic model to show that the intersection attack creates a vulnerability in certain peer-to-peer systems for anonymous communications. We present simulation results that show that attack times are significantly lower in practice than the upper bounds given by previous work. To determine whether users' Web traffic has communication patterns required by the attacks, we collected and analyzed the Web requests of users. We found that, for our study, frequent and repeated communication to the same Web site is common.
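
A minimal sketch of the intersection attack the paper analyzes in the dynamic model, using synthetic data rather than the study's traces: an observer who knows which nodes were online during each session in which a target communication recurred intersects those membership sets, shrinking the candidate set toward the initiator.

```python
# Sketch of the intersection attack on a dynamic anonymity system:
# intersect the sets of online nodes across the sessions in which a
# target communication recurred. Synthetic data, not the paper's traces.

sessions_online = [
    {'alice', 'bob', 'carol', 'dave', 'erin'},
    {'alice', 'carol', 'dave', 'frank'},
    {'alice', 'dave', 'grace'},
    {'alice', 'dave', 'bob'},
    {'alice', 'heidi'},
]

candidates = set.intersection(*sessions_online)
print("possible initiators after intersection:", candidates)
# Repeated communication plus node churn narrows this toward the sender.
```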

130 citations


Proceedings Article
01 Jan 2003
TL;DR: This work addresses the question of how much security is required to protect a packaged system, installed in a large number of organizations, from thieves who would exploit a single vulnerability to attack multiple installations.
Abstract: We address the question of how much security is required to protect a packaged system, installed in a large number of organizations, from thieves who would exploit a single vulnerability to attack multiple installations. While our work is motivated by the need to help organizations make decisions about how to defend themselves, we also show how they can better protect themselves by helping to protect each other.

Journal ArticleDOI
01 Sep 2003
TL;DR: The agent-based architecture presented here continuously monitors network vulnerability metrics, providing new ways to measure the impact of faults and attacks.
Abstract: Monitoring and quantifying component behavior is key to making networks reliable and robust. The agent-based architecture presented here continuously monitors network vulnerability metrics, providing new ways to measure the impact of faults and attacks.

Proceedings ArticleDOI
25 May 2003
TL;DR: This paper presents the preliminary results of the use of fuzzy clustering to detect anomalies within low-level kernel data streams and explores how fuzzy data mining and concepts introduced by the semantic Web can operate in synergy to perform distributed intrusion detection.
Abstract: The newly formed Department of Homeland Security has been mandated to reduce America's vulnerability to terrorism. In addition to being charged with physical protection, this newly formed department is also responsible for protecting the nation's critical infrastructure. Protecting computer systems from intrusions is an important aspect of securing the nation's infrastructure. We are exploring how fuzzy data mining and concepts introduced by the semantic Web can operate in synergy to perform distributed intrusion detection. The underlying premise of our intrusion detection model is to describe attacks as instances of an ontology using a semantically rich language, reason over them, and subsequently classify them as instances of an attack of a specific type. However, before an abnormality can be specified as an instance of the ontology, it first needs to be detected. Hence, our intrusion detection model is two-phase: the first phase uses data mining techniques to analyze low-level data streams that capture process, system, and network states and to detect anomalous behavior, and the second phase reasons over instances of anomalous behavior specified according to our ontology. This paper focuses on the initial phase of our model: outlier detection within low-level data streams. Accordingly, we present the preliminary results of the use of fuzzy clustering to detect anomalies within low-level kernel data streams.
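
As a hedged illustration of the first phase, the sketch below runs a tiny fuzzy c-means clustering over synthetic "kernel metric" vectors and flags points whose best membership is low, meaning they fit no cluster well. The data, the 0.9 threshold, and the initial centers are assumptions for the demo, not values from the paper.

```python
# Minimal fuzzy c-means over synthetic "kernel metric" vectors: points
# whose best cluster membership stays low fit no cluster well and are
# flagged as anomalies. Data, threshold, and initial centers are assumed.
import random

random.seed(1)

def fcm(points, centers, m=2.0, iters=50):
    """Plain fuzzy c-means; returns the membership matrix."""
    centers = [list(c) for c in centers]
    u = [[0.0] * len(centers) for _ in points]
    for _ in range(iters):
        for i, p in enumerate(points):
            d = [max(1e-12, sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5)
                 for c in centers]
            for j in range(len(centers)):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        for j in range(len(centers)):
            w = [u[i][j] ** m for i in range(len(points))]
            centers[j] = [sum(wi * p[dim] for wi, p in zip(w, points)) / sum(w)
                          for dim in range(len(points[0]))]
    return u

# Two normal operating modes plus one anomalous observation.
stream = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(40)]
stream += [(random.gauss(20, 0.5), random.gauss(20, 0.5)) for _ in range(40)]
stream.append((40.0, 2.0))

memberships = fcm(stream, centers=[(4.0, 4.0), (21.0, 21.0)])
for point, row in zip(stream, memberships):
    if max(row) < 0.9:                 # fits no cluster well
        print("anomaly candidate:", point, [round(x, 2) for x in row])
```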

Book
01 Jul 2003
TL;DR: The book traces the vulnerability cycle and why good people write bad code, then surveys good and bad practices, with case studies, across architecture, design, implementation, operations, and automation and testing, including risk assessment methodologies.
Abstract: Preface
1. No Straight Thing: The Vulnerability Cycle; What is an Attack?; Why Good People Write Bad Code; A Call to Arms
2. Architecture: What Is Security Architecture?; Principles of Security Architecture; Case Study: The Java Sandbox
3. Design: Why Does Good Design Matter?; Secure Design Steps; Special Design Issues; Bad Practices; Case Studies
4. Implementation: Good Practices; Bad Practices; Case Studies
5. Operations: Security Is Everybody's Problem; Good Practices; Bad Practices; Case Studies
6. Automation and Testing: Why Test?; Good General Practices; Good Practices Through the Lifecycle; Risk Assessment Methodologies; Case Studies
Appendix: Resources
Index

Book ChapterDOI
27 Jan 2003
TL;DR: In this paper, the authors address the question of how much security is required to protect a packaged system, installed in a large number of organizations, from thieves who would exploit a single vulnerability to attack multiple installations.
Abstract: We address the question of how much security is required to protect a packaged system, installed in a large number of organizations, from thieves who would exploit a single vulnerability to attack multiple installations. While our work is motivated by the need to help organizations make decisions about how to defend themselves, we also show how they can better protect themselves by helping to protect each other.

Journal Article
TL;DR: A traffic analysis based vulnerability in SafeWeb, an encrypting web proxy, is presented that allows someone monitoring the traffic of a SafeWeb user to determine if the user is visiting certain websites.
Abstract: I present a traffic-analysis-based vulnerability in SafeWeb, an encrypting web proxy. This vulnerability allows someone monitoring the traffic of a SafeWeb user to determine if the user is visiting certain websites. I also describe a successful implementation of the attack. Finally, I discuss methods for improving the attack and for defending against it.
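
A hedged sketch of the attack's core idea, with fabricated signatures and sizes: even under encryption, the number and sizes of objects fetched for a page form a fingerprint, and an observer can match an observed size sequence against fingerprints of candidate sites.

```python
# Sketch of traffic-analysis fingerprinting of an encrypting proxy: match
# observed (encrypted) transfer sizes against per-site signatures.
# All signatures and observations are fabricated for illustration.

site_signatures = {
    'news-site.example':  [14200, 3100, 3100, 870, 45000],
    'bank.example':       [9100, 2200, 640],
    'webmail.example':    [15800, 5200, 5200, 5200],
}

def match_site(observed, signatures, tolerance=0.05):
    """Score each site by how many signature sizes appear in the observed
    flow within a relative tolerance (allowing for padding and headers)."""
    scores = {}
    for site, sig in signatures.items():
        hits = sum(any(abs(o - s) <= tolerance * s for o in observed) for s in sig)
        scores[site] = hits / len(sig)
    return max(scores, key=scores.get), scores

observed = [14350, 3050, 3120, 880, 44800, 1500]
best, scores = match_site(observed, site_signatures)
print(best, scores)
```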


Book ChapterDOI
Per Oscarson1
01 Jan 2003
TL;DR: The included concepts are information asset, confidentiality, integrity, availability, threat, incident, damage, security mechanism, vulnerability and risk, which are modeled graphically in order to increase the understanding of conceptual fundamentals within the area of information security.
Abstract: This paper deals with some fundamental concepts within the area of information security, both their definitions and their relationships. The included concepts are information asset, confidentiality, integrity, availability, threat, incident, damage, security mechanism, vulnerability and risk. The concepts and their relations are modeled graphically in order to increase the understanding of conceptual fundamentals within the area of information security.

Journal ArticleDOI
TL;DR: The author discusses the ways in which the sharply increased danger of bio-terrorism has made infectious diseases a priority in defence and intelligence circles, and sets out a central principle of global public health security: a strengthened capacity to detect and contain naturally caused outbreaks is the only rational way to defend the world against the threat of a bio-terrorist attack.
Abstract: This paper discusses the ways in which the sharply increased danger of bio-terrorism has made infectious diseases a priority in defence and intelligence circles. Against this background, the author sets out a central principle of global public health security: a strengthened capacity to detect and contain naturally caused outbreaks is the only rational way to defend the world against the threat of a bio-terrorist attack. He then discusses the three trends that underscore this point: vulnerability of all nations to epidemics, the capacity of a disease such as AIDS to undermine government and society, and the way in which the determinants of national security have been re-defined in the post-Cold War era.

Proceedings ArticleDOI
17 Nov 2003
TL;DR: The white cell developed the scenarios and anomalies, established the scoring criteria, refereed the exercise, and determined the winner based on each academy's effectiveness in minimizing the impact to its networks from the red forces' network intelligence gathering, intrusion, attack, and evaluation, as discussed by the authors.
Abstract: This paper describes the effort involved in executing a cyber defense exercise, focusing on the white cell and red forces activities during the 2003 inter-academy cyber defense exercise (CDE). These exercise components were led by the National Security Agency and comprised security professionals from Carnegie Mellon University's CERT, the United States Air Force, and the United States Army. This hands-on experience provided the capstone educational experience for information assurance students at the US service academies. The white cell developed the scenarios and anomalies, established the scoring criteria, refereed the exercise, and determined the winner based on the effectiveness of each academy in minimizing the impact to its networks from the red forces' network intelligence gathering, intrusion, attack, and evaluation. To understand better all that is involved, this paper draws on the authors' three years of experience in directing the activities associated with the planning and execution of the 2003 exercise.

01 Jan 2003
TL;DR: A review of some of the vulnerability risks that actual electric power systems face, showing implementation issues and some of the steps that NERC is leading to ensure secure energy sourcing for the U.S. economy.
Abstract: Security of supply has always been a key factor in the development of the electric industry. Adequacy, quality of supply, stability, reliability, and voltage collapse, along with costs, have always been carefully considered when planning the future of the electric power system. Since 1982, when the world's deregulation process started, the introduction of competition at the generation level brought new challenges, while the proper operation of the electric power system still requires physical coordination between non-cooperative agents. The increasing development of SCADA/EMS systems, the growing number of market participants, and the development of more complex market schemes have relied more and more on information technologies, making the physical system more vulnerable to cyber security risks. Now cyber security risks look bigger than the physical ones. We develop a review of some of the vulnerability risks that actual electric power systems face, showing some of their implementation issues. We also comment on some of the steps that NERC is leading to ensure secure energy sourcing for the U.S. economy.

Journal ArticleDOI
TL;DR: The techniques proposed in this paper are able to evaluate voltage stability status efficiently in both pre-contingency and post-contingency states while considering the effect of active and reactive power limits.

Proceedings ArticleDOI
20 Oct 2003
TL;DR: The paper describes the formal approach and software tool "Attack Simulator" intended for active vulnerability assessment of computer network security policy at the stages of design and deployment of network security systems.
Abstract: The paper describes the formal approach and software tool "Attack Simulator" intended for active vulnerability assessment of computer network security policy at the stages of design and deployment of network security systems. The suggested approach is based on stochastic grammar-based models of attacks and is realized via automatic imitation of remote computer network attacks of different complexity. The paper characterizes the Attack Simulator architecture and the processes of generating malicious actions against a computer network model and against real-life computer networks. The results of experiments that demonstrate the Attack Simulator's efficiency are described in detail.
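
A minimal sketch of stochastic grammar-based attack generation in the spirit the paper describes; the production rules and probabilities below are invented for illustration and are not the Attack Simulator's.

```python
# Sketch of stochastic grammar-based attack generation: each nonterminal
# expands by sampling a weighted production, yielding a concrete action
# sequence. Rules and probabilities are invented, not the tool's.
import random

GRAMMAR = {
    'ATTACK':    [(0.7, ['RECON', 'INTRUSION']), (0.3, ['RECON', 'DOS'])],
    'RECON':     [(0.5, ['ping_sweep', 'port_scan']), (0.5, ['port_scan'])],
    'INTRUSION': [(0.6, ['exploit_service', 'install_backdoor']),
                  (0.4, ['password_guess', 'ESCALATE'])],
    'ESCALATE':  [(1.0, ['local_exploit', 'install_backdoor'])],
    'DOS':       [(1.0, ['syn_flood'])],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:          # terminal: a concrete action
        return [symbol]
    r, acc = rng.random(), 0.0
    for prob, production in GRAMMAR[symbol]:
        acc += prob
        if r <= acc:
            return [a for s in production for a in expand(s, rng)]
    return [a for s in GRAMMAR[symbol][-1][1] for a in expand(s, rng)]

rng = random.Random(7)
for _ in range(3):
    print(' -> '.join(expand('ATTACK', rng)))
```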

Proceedings Article
01 Jan 2003
TL;DR: The paper develops the concept and definition of network vulnerability and techniques to identify specific 'weak spots', the critical infrastructure in a network where failure of some part of the transport infrastructure would have the most serious effects on access to specific locations and on overall system performance.
Abstract: This paper reviews previous research in the field of network reliability, and discusses extensions and adaptations to the reliability concepts that are more appropriate for strategic-level multi-modal transport systems. This leads to the concept and definition of network vulnerability. The paper then discusses the development of techniques to identify specific 'weak spots', critical infrastructure, in a network, where failure of some part of the transport infrastructure would have the most serious effects on access to specific locations and overall system performance. The Australian National Highway System (NHS) network is used as a case study, but the concepts and techniques described in this paper have much wider application.
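
As a hedged illustration of the 'weak spot' idea on a toy network (not the NHS data): remove each link in turn and measure the increase in shortest-path cost between key origin-destination pairs; the links whose removal causes the largest degradation are the critical infrastructure.

```python
# Toy link-criticality analysis in the spirit of the paper: drop each
# link and measure the rise in shortest-path cost between key O-D pairs.
# The network and costs are invented, not the Australian NHS data.
import heapq

def shortest(graph, src, dst):
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float('inf')):
            continue
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float('inf')

edges = [('A', 'B', 4), ('B', 'C', 3), ('A', 'D', 9), ('D', 'C', 2), ('B', 'D', 5)]
od_pairs = [('A', 'C')]

def build(excluded):
    g = {}
    for u, v, w in edges:
        if (u, v, w) != excluded:
            g.setdefault(u, []).append((v, w))
            g.setdefault(v, []).append((u, w))
    return g

base = sum(shortest(build(None), s, t) for s, t in od_pairs)
for e in edges:
    degraded = sum(shortest(build(e), s, t) for s, t in od_pairs)
    print(f"cut {e[0]}-{e[1]}: O-D cost {base} -> {degraded}")
```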

Book ChapterDOI
26 May 2003
TL;DR: The PINPAS tool as mentioned in this paper supports the testing of algorithms for vulnerability to SPA, DPA, etc. at the software level, which allows for the identification of weaknesses in the implementation in an early stage of development.
Abstract: This paper describes the PINPAS tool, a tool for the simulation of power analysis and other side-channel attacks on smartcards. The PINPAS tool supports the testing of algorithms for vulnerability to SPA, DPA, etc. at the software level. Use of the PINPAS tool allows for the identification of weaknesses in an implementation at an early stage of development. A toy algorithm is discussed to illustrate the usage of the tool.
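
A hedged sketch of the kind of software-level simulation such a tool performs, not PINPAS's actual model: each power sample is approximated by the Hamming weight of an intermediate value plus noise, and a difference-of-means DPA statistic ranks key guesses. The toy S-box and all parameters are assumptions.

```python
# Sketch of software-level power-analysis simulation (not PINPAS itself):
# model each sample as the Hamming weight of an intermediate value plus
# noise, then rank key guesses with a difference-of-means DPA statistic.
import random

random.seed(3)
hw = lambda x: bin(x).count('1')
sbox = lambda x: pow(x, 3, 257) % 256   # toy nonlinear S-box, assumed
SECRET_KEY = 0x5A

def trace(plaintext, noise=1.0):
    """Simulated power sample while the device computes sbox(p ^ key)."""
    return hw(sbox(plaintext ^ SECRET_KEY)) + random.gauss(0, noise)

plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [trace(p) for p in plaintexts]

def dpa_score(guess):
    """Split traces on one predicted output bit; the correct key guess
    maximizes the difference between the two group means."""
    ones = [t for p, t in zip(plaintexts, traces) if sbox(p ^ guess) & 1]
    zeros = [t for p, t in zip(plaintexts, traces) if not sbox(p ^ guess) & 1]
    return abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))

best = max(range(256), key=dpa_score)
print(f"recovered {best:#04x}, true key {SECRET_KEY:#04x}")
```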

01 Mar 2003
TL;DR: This survey aims at a better understanding of how computer attacks are performed, including how illicit access is gained, the types of attacks, and the potential damage they can cause.
Abstract: Computer systems and the Internet are becoming pervasive in our everyday life. Being online brings the consequence that such systems are prone to malicious attack. This vulnerability, along with our reliance on these systems, implies that it is important for us to do our best in securing them to ensure their proper functioning. In this paper, we try to tackle the security issues from both technical and human perspectives. From this dual standpoint, we hope to obtain a better understanding of how computer attacks are performed, including how illicit access is gained, the types of attacks, and the potential damage they can cause. We also uncover sociological and psychological traits of the attackers, including their community, taxonomy, motives, and work ethics. This survey paper will not provide a concrete solution for securing computer systems, but it highlights the socio-technical approach that we must take to achieve that goal.

Patent
26 Nov 2003
TL;DR: A computer-assisted system, medium, and method of providing a risk assessment of a target system (220) are described, including receiving at the computer at least one newly encountered hardware, software, or operating system threat.
Abstract: A computer-assisted system, medium and method of providing a risk assessment of a target system (220). The method includes receiving at the computer at least one of a newly encountered hardware, software and/or operating system threat, updating a requirements repository (318) to account for the threat, updating one or more target system test procedures to account for the threat (316), and conducting a risk assessment of the target system (220).

Patent
James P. Goddard1
21 Oct 2003
TL;DR: In this article, a system, method and program product for evaluating a security risk of an application is presented, where a determination is made whether unauthorized access or loss of data maintained or accessed by the application would cause substantial damage.
Abstract: A system, method and program product for evaluating a security risk of an application. A determination is made whether unauthorized access or loss of data maintained or accessed by the application would cause substantial damage. A determination is made whether the application is shared by different customers. A determination is made whether a vulnerability in the application can be exploited by a person or program which has not been authenticated to the application or a system in which the application runs. A numerical value or weight is assigned to each of the foregoing determinations. Each of the numerical values or weights corresponds to a significance of the determination in evaluating said security risk. The numerical values or weights are combined to evaluate the security risk. Other factors can also be considered in evaluating the security risk.
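
A hedged sketch of the weighted-combination idea; the factor names and weights below are invented for illustration, not the patent's values.

```python
# Sketch of the patent's weighted-combination idea; factor names and
# weights are invented for illustration, not taken from the patent.

FACTORS = [
    # (description, weight reflecting the determination's significance)
    ('loss or exposure of data would cause substantial damage', 4),
    ('application is shared by different customers',            3),
    ('vulnerability exploitable without authentication',        5),
    ('application reachable from the public Internet',          2),
]

def risk_score(answers):
    """answers: list of booleans aligned with FACTORS."""
    score = sum(w for (_, w), yes in zip(FACTORS, answers) if yes)
    return score, score / sum(w for _, w in FACTORS)

score, normalized = risk_score([True, True, False, True])
print(f"risk score {score} ({normalized:.0%} of maximum)")
```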

01 Jan 2003
TL;DR: Multiple platform Ethernet Network Interface Card (NIC) device drivers incorrectly handle frame padding, allowing an attacker to view slices of previously transmitted packets or portions of kernel memory, causing information leakage.
Abstract: Multiple platform Ethernet Network Interface Card (NIC) device drivers incorrectly handle frame padding, allowing an attacker to view slices of previously transmitted packets or portions of kernel memory. This vulnerability is the result of incorrect implementations of RFC requirements and poor programming practices, the combination of which results in several variations of this information leakage vulnerability. This bug is explored in its various manifestations through code examples and packet captures. Solutions to this flaw are provided.
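
A minimal sketch of the flaw's mechanism with synthetic buffers, not real driver code: Ethernet frames must be padded to a 60-byte minimum before the frame check sequence, and a driver that takes that padding from a reused transmit buffer instead of zero-filling ships stale memory onto the wire.

```python
# Mechanism sketch of the frame-padding leak (synthetic, not a driver):
# Ethernet frames are padded to a 60-byte minimum; padding drawn from a
# stale transmit buffer leaks fragments of earlier traffic or memory.

ETH_MIN = 60  # minimum frame length before the FCS

def pad_frame_buggy(payload, stale_buffer):
    """Pads with whatever follows in the reused buffer (the bug)."""
    frame = bytearray(stale_buffer[:ETH_MIN])
    frame[:len(payload)] = payload
    return bytes(frame)

def pad_frame_correct(payload):
    """RFC-conformant behavior: pad with zero bytes."""
    return bytes(payload) + b'\x00' * (ETH_MIN - len(payload))

stale = b'earlier packet: user=alice password=hunter2 plus stale kernel bytes'
short_payload = b'\x01\x02\x03\x04'   # a tiny payload well under the minimum

leaky = pad_frame_buggy(short_payload, stale)
print(leaky[len(short_payload):])     # stale bytes visible on the wire
print(pad_frame_correct(short_payload)[len(short_payload):])
```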

Journal ArticleDOI
TL;DR: In this paper, a system protection that sustains system voltage stability and incorporates a simple emergency strategy is outlined, which allows practical and reliable, modest cost schemes that can gain considerable financial benefits in the operation and control of the grid.
Abstract: Interconnected grid operation, control, and security will be revolutionized by a system protection scheme that safeguards the grid's integrity by adapting its responses to the most severe, unforeseen disturbances. The new approach takes advantage of power system resilience to the initial impact of even the most severe disturbances. The disturbance causes changes of system vulnerability parameters that are used by the protection to direct preselected measures to affected locations. A system protection that sustains system voltage stability and incorporates a simple emergency strategy is outlined. The approach allows practical and reliable, modest cost schemes that can gain considerable financial benefits in the operation and control of the grid.