
Showing papers on "Vulnerability (computing) published in 2005"


Journal ArticleDOI
TL;DR: A general method to find the critical components of an infrastructure network, i.e., the nodes and the links fundamental to the perfect functioning of the network, can be used as an improvement analysis to better shape a planned expansion of the network.
Abstract: Infrastructure systems are a key ingredient of modern society. We discuss a general method to find the critical components of an infrastructure network, i.e., the nodes and the links fundamental to the perfect functioning of the network. Such nodes, and not the most connected ones, are the targets to protect from terrorist attacks. The method, used as an improvement analysis, can also help to better shape a planned expansion of the network.

375 citations
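A minimal sketch of the efficiency-drop idea behind ranking critical components: remove each node in turn and measure how much the network's global efficiency falls. The toy graph, the use of networkx, and the choice of global efficiency as the performance measure are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: rank nodes of an infrastructure graph by the relative
# drop in global efficiency (mean inverse shortest-path length) when each
# node is removed. Toy graph; real studies use actual infrastructure data.
import networkx as nx

def criticality(graph):
    """Return {node: relative efficiency drop caused by removing that node}."""
    base = nx.global_efficiency(graph)
    scores = {}
    for node in graph.nodes:
        reduced = graph.copy()
        reduced.remove_node(node)
        scores[node] = (base - nx.global_efficiency(reduced)) / base
    return scores

if __name__ == "__main__":
    g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (1, 3), (4, 5)])
    for node, drop in sorted(criticality(g).items(), key=lambda kv: -kv[1]):
        print(f"node {node}: efficiency drop {drop:.2%}")
```

The most critical node under such a measure need not be the most connected one, which is exactly the point made in the abstract above.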


Journal ArticleDOI
01 Jan 2005
TL;DR: The analysis in this article represents the best-case scenario, consistent with the data and my ability to analyze it, for the usefulness of vulnerability finding.

Abstract: Despite the large amount of effort that goes toward finding and patching security holes, the available data does not show a clear improvement in software quality as a result. This article aims to measure the effect of vulnerability finding. Any attempt to measure this kind of effect is inherently rough, depending as it does on imperfect data and several simplifying assumptions. Because I'm looking for evidence of usefulness, where possible, I bias such assumptions in favor of a positive result - explicitly calling out those assumptions biased in the opposite direction. Thus, the analysis in this article represents the best-case scenario, consistent with the data and my ability to analyze it, for the usefulness of vulnerability finding.

277 citations


Journal ArticleDOI
TL;DR: The HB+ authentication protocol was recently proposed and claimed to be secure against both passive and active attacks, but a linear-time active attack against HB+ is presented here.
Abstract: Much research has focused on providing RFID tags with lightweight cryptographic functionality. The HB+ authentication protocol was recently proposed and claimed to be secure against both passive and active attacks. A linear-time active attack against HB+ is proposed.

274 citations
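The linear-time active attack referred to above works by perturbing the reader's challenges with a constant offset and observing whether authentication still succeeds. Below is a toy sketch of that idea, assuming the standard description of an HB+ round (blinding vector from the tag, challenge from the reader, noisy inner-product response); the key size, noise rate, and acceptance threshold are illustrative parameters, not values from the paper.

```python
# Toy sketch of one HB+ round and of the linear-time active-attack idea:
# the adversary XORs a constant delta into every reader challenge of a
# session; whether the tag is still accepted reveals one bit of the key x.
import random

K = 32          # key length in bits (toy size)
ETA = 0.125     # tag's noise probability
ROUNDS = 60     # rounds per authentication session

def rand_vec():
    return [random.getrandbits(1) for _ in range(K)]

def dot(u, v):
    return sum(a & b for a, b in zip(u, v)) % 2

def tag_response(x, y, a, b):
    noise = 1 if random.random() < ETA else 0
    return dot(a, x) ^ dot(b, y) ^ noise

def session_accepts(x, y, delta):
    """One authentication in which the adversary perturbs every challenge."""
    errors = 0
    for _ in range(ROUNDS):
        b = rand_vec()                                   # tag's blinding vector
        a = rand_vec()                                   # reader's challenge
        a_seen = [ai ^ di for ai, di in zip(a, delta)]   # challenge as perturbed
        z = tag_response(x, y, a_seen, b)
        errors += z != dot(a, x) ^ dot(b, y)             # reader checks true a
    return errors <= ROUNDS * 0.3                        # toy acceptance rule

x, y = rand_vec(), rand_vec()
recovered = []
for i in range(K):                      # one session per key bit: linear time
    delta = [int(j == i) for j in range(K)]
    recovered.append(0 if session_accepts(x, y, delta) else 1)
print("recovered x correctly:", recovered == x)
```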


Proceedings ArticleDOI
Michael McIntosh, Paula Austel
11 Nov 2005
TL;DR: The general vulnerability and several related exploits are described and appropriate countermeasures are proposed, and the guidance necessary to prevent these attacks is provided.
Abstract: Naive use of XML Signature may result in signed documents remaining vulnerable to undetected modification by an adversary. In the typical usage of XML Signature to protect SOAP messages, an adversary may be capable of modifying valid messages in order to gain unauthorized access to protected resources. This paper describes the general vulnerability and several related exploits, and proposes appropriate countermeasures. While the attacks described herein may seem obvious to security experts once they are explained, effective countermeasures require careful security policy specification and correct implementation by signed message providers and consumers. Since these implementers are not always security experts, this paper provides the guidance necessary to prevent these attacks.

192 citations


Patent
19 Dec 2005
TL;DR: In this article, the authors provide a security vulnerability assessment for wireless networks by simulating an attack upon the wireless network, capturing the response from the wireless network, and identifying a vulnerability associated with the wireless network after analyzing the response.
Abstract: Security vulnerability assessment for wireless networks is provided. Systems and methods for security vulnerability assessment simulate an attack upon the wireless network, capture the response from the wireless network, and identify a vulnerability associated with the wireless network after analyzing the response from the wireless network.

183 citations


Proceedings ArticleDOI
10 Oct 2005
TL;DR: A taxonomy of online game cheating is defined with respect to the underlying vulnerability, the consequence, and the cheating principal, which provides a systematic introduction to the characteristics of cheats in online games and how they can arise.
Abstract: Cheating is rampant in current game play on the Internet. However, it is not as well understood as one might expect. In this paper, we summarize the various known methods of cheating, and we define a taxonomy of online game cheating with respect to the underlying vulnerability (what is exploited?), consequence (what type of failure can be achieved?) and the cheating principal (who is cheating?). This taxonomy provides a systematic introduction to the characteristics of cheats in online games and how they can arise. It is intended to be comprehensible and useful not only to security specialists, but also to game developers, operators and players who are less knowledgeable and experienced in security. One of our findings is that although cheating in online games is largely due to various security failures, the four traditional aspects of security -- confidentiality, integrity, availability and authenticity -- are insufficient to explain it. Instead, fairness becomes a vital additional aspect, and its enforcement provides a convincing perspective for understanding the role of security techniques in developing and operating online games.

164 citations


Journal ArticleDOI
TL;DR: It is demonstrated that an active unregulated market-based mechanism for vulnerabilities almost always underperforms a passive CERT-type mechanism, and the analysis is extended to show that a proposed mechanism--federally funded social planner--always performs better than a market-based mechanism.
Abstract: Software vulnerability disclosure has become a critical area of concern for policymakers. Traditionally, a Computer Emergency Response Team (CERT) acts as an infomediary between benign identifiers (who voluntarily report vulnerability information) and software users. After verifying a reported vulnerability, CERT sends out a public advisory so that users can safeguard their systems against potential exploits. Lately, firms such as iDefense have been implementing a new market-based approach for vulnerability information. The market-based infomediary provides monetary rewards to identifiers for each vulnerability reported. The infomediary then shares this information with its client base. Using this information, clients protect themselves against potential attacks that exploit those specific vulnerabilities. The key question addressed in our paper is whether movement toward such a market-based mechanism for vulnerability disclosure leads to a better social outcome. Our analysis demonstrates that an active unregulated market-based mechanism for vulnerabilities almost always underperforms a passive CERT-type mechanism. This counterintuitive result is attributed to the market-based infomediary's incentive to leak the vulnerability information inappropriately. If a profit-maximizing firm is not allowed to (or chooses not to) leak vulnerability information, we find that social welfare improves. Even a regulated market-based mechanism performs better than a CERT-type one, but only under certain conditions. Finally, we extend our analysis and show that a proposed mechanism--federally funded social planner--always performs better than a market-based mechanism.

155 citations


Journal ArticleDOI
09 May 2005
TL;DR: A new concept for bargaining by multiagents to identify the decision options to reduce the system vulnerability is included and the concept of a flexible configuration of the wide-area grid is substantiated with an area-partitioning algorithm.
Abstract: This paper provides a comprehensive state-of-the-art overview on power infrastructure defense systems. A review of the literature on the subjects of critical infrastructures, threats to the power grids, defense system concepts, and the special protection systems is reported. The proposed Strategic Power Infrastructure Defense (SPID) system methodology is a real-time, wide-area, adaptive protection and control system involving the power, communication, and computer infrastructures. The SPID system performs the failure analysis, vulnerability assessment, and adaptive control actions to avoid catastrophic power outages. This paper also includes a new concept for bargaining by multiagents to identify the decision options to reduce the system vulnerability. The concept of a flexible configuration of the wide-area grid is substantiated with an area-partitioning algorithm. A 179-bus system is used to illustrate the area partitioning method that is intended to minimize the total amount of load shedding.

151 citations


Journal ArticleDOI
TL;DR: The authors' simulation of a flood in the centre of Holland reveals the vulnerability of a densely populated delta, which is then visualized in a GIS to create maps of economic hotspots.
Abstract: We simulate a large-scale flood in the province of South-Holland in the economic centre of the Netherlands. In traditional research, damage due to flooding is computed with a unit loss method coupling land use information to depth-damage functions. Normally only direct costs are incorporated as an estimate of damage to infrastructure, property and business disruption. We extend this damage concept with the indirect economic effects on the rest of the regional and national economy on the basis of a bi-regional input-output table. We broaden this damage estimation to the concept of vulnerability. Vulnerability is defined as a function of dependence, redundancy and susceptibility. Susceptibility is the probability and extent of flooding. Dependency is the degree to which an activity relates to other economic activities in the rest of the country. Input-output multipliers form representations of this dependency. Redundancy is the ability of an economic activity to respond to a disaster by deferring, using substitutes or relocating. We measure redundancy as the degree of centrality of an economic activity in a network. The more central an activity is, the fewer possibilities it has to transfer production and the more vulnerable it is to flooding. Vulnerability of economic activities is then visualized in a GIS. Kernel density estimation is applied to generalize point information on inundated firms to sectoral information in space. We apply spatial interpolation techniques for the whole of the province of South-Holland. Combining sectoral data on dependency and redundancy, we are able to create maps of economic hotspots. Our simulation of a flood in the centre of Holland reveals the vulnerability of a densely populated delta.

151 citations
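The abstract defines vulnerability as a function of susceptibility, dependency, and redundancy but does not give the functional form. The sketch below assumes a simple multiplicative combination purely for illustration; the sectors and numbers are invented.

```python
# Toy illustration only: the functional form, sectors, and values below are
# assumptions, not taken from the paper.
sectors = {
    # name: (susceptibility: flood probability x exposure,
    #        dependency: input-output multiplier,
    #        redundancy: ability to defer/substitute/relocate, in (0, 1])
    "chemicals":   (0.30, 2.1, 0.4),
    "agriculture": (0.50, 1.4, 0.7),
    "services":    (0.20, 1.8, 0.9),
}

def vulnerability(susceptibility, dependency, redundancy):
    # Higher susceptibility and dependency raise vulnerability;
    # higher redundancy lowers it (assumed combination, not the paper's).
    return susceptibility * dependency / redundancy

for name, (s, d, r) in sorted(sectors.items(),
                              key=lambda kv: -vulnerability(*kv[1])):
    print(f"{name:12s} vulnerability score: {vulnerability(s, d, r):.2f}")
```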


Book ChapterDOI
12 Sep 2005
TL;DR: In this paper, the authors propose a framework that uses an attack tree to identify malicious activities from authorized insiders and generate an alarm if the user's activities progress sufficiently up along the branches of the attack tree towards the goal of system compromise.
Abstract: A major concern for computer systems security is the threat from malicious insiders who execute perfectly legitimate operations to compromise system security. Unfortunately, most currently available intrusion detection systems (which include anomaly and misuse detection systems) fail to address this problem in a comprehensive manner. In this work we propose a framework that uses an attack tree to identify malicious activities from authorized insiders. We develop algorithms to generate minimal forms of attack tree customized for each user such that it can be used efficiently to monitor the user's activities. If the user's activities progress sufficiently up along the branches of the attack tree towards the goal of system compromise, we generate an alarm. Our system is not intended to replace existing intrusion detection and prevention technology, but rather is intended to complement current and future technology.

135 citations
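A minimal sketch of attack-tree-based monitoring in the spirit of the framework described above: leaves are observable user actions, internal nodes are AND/OR goals, and an alarm fires when the observed actions cover enough of the root goal. The tree, the progress metric, and the threshold are illustrative assumptions; the paper's algorithms for generating minimal per-user trees are not reproduced here.

```python
# Minimal sketch of attack-tree-based insider monitoring. The tree, actions,
# progress metric, and threshold are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "LEAF"                   # "LEAF", "AND", or "OR"
    children: list = field(default_factory=list)

def progress(node, observed):
    """Fraction of this (sub)goal already achieved by the observed actions."""
    if node.kind == "LEAF":
        return 1.0 if node.name in observed else 0.0
    scores = [progress(child, observed) for child in node.children]
    if node.kind == "AND":
        return sum(scores) / len(scores)  # average progress; all children needed
    return max(scores)                    # OR: best branch so far

# Hypothetical goal: exfiltrate data = locate AND export AND send out.
root = Node("exfiltrate data", "AND", [
    Node("query sensitive tables"),
    Node("export large result set"),
    Node("send out", "OR", [Node("upload to external host"),
                            Node("write to removable media")]),
])

ALARM_THRESHOLD = 0.66
observed_actions = {"query sensitive tables", "export large result set"}
score = progress(root, observed_actions)
print(f"progress toward goal: {score:.2f}",
      "ALARM" if score >= ALARM_THRESHOLD else "ok")
```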


Proceedings ArticleDOI
08 Nov 2005
TL;DR: The models for the vulnerability discovery process are examined both analytically and using actual data on vulnerabilities discovered in three widely-used systems.
Abstract: Security vulnerabilities in servers and operating systems are software defects that represent great risks. Both software developers and users are struggling to contain the risk posed by these vulnerabilities. The vulnerabilities are discovered by both developers and external testers throughout the life-span of a software system. A few models for the vulnerability discovery process have recently been published. Such models will allow effective resource allocation for patch development and are also needed for evaluating the risk of vulnerability exploitation. Here we examine these models for the vulnerability discovery process. The models are examined both analytically and using actual data on vulnerabilities discovered in three widely-used systems. The applicability of the proposed models and significance of the parameters involved are discussed. The limitations of the proposed models are examined and major research challenges are identified.
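The models in question describe how the cumulative number of discovered vulnerabilities grows over a system's lifetime; a widely used form is an S-shaped logistic curve (slow just after release, accelerating as the user base grows, then saturating). The sketch below evaluates such a logistic curve; the functional form is the common one in this literature, and the parameter values are arbitrary illustrations rather than fitted values.

```python
# Sketch of an S-shaped (logistic) vulnerability discovery model of the kind
# examined in this line of work. Parameter values are made up, not fitted.
import math

def cumulative_vulnerabilities(t, A, B, C):
    """Logistic discovery curve: Omega(t) = B / (B*C*exp(-A*B*t) + 1)."""
    return B / (B * C * math.exp(-A * B * t) + 1)

A, B, C = 0.002, 120.0, 0.5     # rate constant, saturation level, shape
for month in (0, 12, 24, 36, 48, 60):
    omega = cumulative_vulnerabilities(month, A, B, C)
    print(f"month {month:3d}: ~{omega:5.1f} vulnerabilities discovered")
```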

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A graphical technique is introduced that shows multiple-step attacks by matching rows and columns of the clustered adjacency matrix that allows attack impact/responses to be identified and prioritized according to the number of attack steps to victim machines, and allows attack origins to be determined.
Abstract: We apply adjacency matrix clustering to network attack graphs for attack correlation, prediction, and hypothesizing. We self-multiply the clustered adjacency matrices to show attacker reachability across the network for a given number of attack steps, culminating in transitive closure for attack prediction over all possible numbers of steps. This reachability analysis provides a concise summary of the impact of network configuration changes on the attack graph. Using our framework, we also place intrusion alarms in the context of vulnerability-based attack graphs, so that false alarms become apparent and missed detections can be inferred. We introduce a graphical technique that shows multiple-step attacks by matching rows and columns of the clustered adjacency matrix. This allows attack impact/responses to be identified and prioritized according to the number of attack steps to victim machines, and allows attack origins to be determined. Our techniques have quadratic complexity in the size of the attack graph.
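The core matrix operation described above is straightforward to sketch: treat the attack graph's adjacency matrix as boolean, self-multiply it to get reachability in exactly k exploit steps, and OR the powers together for the transitive closure. The 4-machine attack graph below is hypothetical.

```python
# Sketch of boolean self-multiplication of an attack-graph adjacency matrix.
import numpy as np

# A[i, j] = True means one exploit step takes the attacker from machine i to j.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

def bool_product(m1, m2):
    return (m1.astype(int) @ m2.astype(int)) > 0

def k_step_reach(adj, k):
    reach = adj.copy()
    for _ in range(k - 1):
        reach = bool_product(reach, adj)
    return reach

def transitive_closure(adj):
    closure, power = adj.copy(), adj.copy()
    for _ in range(len(adj) - 1):     # paths longer than n-1 steps repeat nodes
        power = bool_product(power, adj)
        closure |= power
    return closure

print("reachable in exactly 2 steps:\n", k_step_reach(A, 2).astype(int))
print("reachable in any number of steps:\n", transitive_closure(A).astype(int))
```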

Proceedings ArticleDOI
14 Mar 2005
TL;DR: A new aspect system is presented that makes it possible to modularize the replacement of network protocols and to prevent buffer overflows, together with an implementation of the language as an extension of Arachne, a dynamic weaver for C applications.
Abstract: C applications, in particular those using operating system level services, frequently comprise multiple crosscutting concerns: network protocols and security are typical examples of such concerns. While these concerns can partially be addressed during design and implementation of an application, they frequently become an issue at runtime, e.g., to avoid server downtime. A deployed network protocol might not be efficient enough and may thus need to be replaced. Buffer overflows might be discovered that imply critical breaches in the security model of an application. A prefetching strategy may be required to enhance performance. While aspect-oriented programming seems attractive in this context, none of the current aspect systems is expressive and efficient enough to address such concerns. This paper presents a new aspect system to provide a solution to this problem. While efficiency considerations have played an important part in the design of the aspect language, the language allows aspects to be expressed more concisely than previous approaches. In particular, it allows aspect programmers to quantify over sequences of execution points as well as over accesses through variable aliases. We show how the former can be used to modularize the replacement of network protocols and the latter to prevent buffer overflows. We also present an implementation of the language as an extension of Arachne, a dynamic weaver for C applications. Finally, we present performance evaluations supporting that Arachne is fast enough to extend high performance applications, such as the Squid web cache.

Proceedings ArticleDOI
06 Jun 2005
TL;DR: The resilience of p2p file sharing systems against DoS attacks, in which malicious nodes respond to queries with erroneous responses, is studied by means of analytical modeling and simulation, and a new class of p2p-network-targeted attacks is introduced.
Abstract: Peer-to-peer (p2p) file sharing systems are characterized by highly replicated content distributed among nodes with enormous aggregate resources for storage and communication. These properties alone are not sufficient, however, to render p2p networks immune to denial-of-service (DoS) attack. In this paper, we study, by means of analytical modeling and simulation, the resilience of p2p file sharing systems against DoS attacks, in which malicious nodes respond to queries with erroneous responses. We consider the file-targeted attacks in current use in the Internet, and we introduce a new class of p2p-network-targeted attacks. In file-targeted attacks, the attacker puts a large number of corrupted versions of a single file on the network. We demonstrate that the effectiveness of these attacks is highly dependent on the clients' behavior. For the attacks to succeed over the long term, clients must be unwilling to share files, slow in removing corrupted files from their machines, and quick to give up downloading when the system is under attack. In network-targeted attacks, attackers respond to queries for any file with erroneous information. Our results indicate that these attacks are highly scalable: increasing the number of malicious nodes yields a hyperexponential decrease in system goodput, and a moderate number of attackers suffices to cause a near-collapse of the entire system. The key factors inducing this vulnerability are (i) hierarchical topologies with misbehaving "supernodes," (ii) high path-length networks in which attackers have increased opportunity to falsify control information, and (iii) power-law networks in which attackers insert themselves into high-degree points in the graph. Finally, we consider the effects of client counter-strategies such as randomized reply selection, redundant and parallel download, and reputation systems. Some counter-strategies (e.g., randomized reply selection) provide considerable immunity to attack (reducing the scaling from hyperexponential to linear), yet significantly hurt performance in the absence of an attack. Other counter-strategies yield little benefit (or penalty). In particular, reputation systems show little impact unless they operate with near perfection.

Book ChapterDOI
28 Sep 2005
TL;DR: A new side-channel vulnerability of cryptosystem implementations based on BRIP or the square-multiply-always algorithm is pointed out, exploiting a specially chosen input message of order two; further extensions of the proposed attack can lead to more powerful attacks.
Abstract: In this paper, we will point out a new side-channel vulnerability of cryptosystems implementation based on BRIP or square-multiply-always algorithm by exploiting a specially chosen input message of order two. A recently published countermeasure, BRIP, against conventional simple power analysis (SPA) and differential power analysis (DPA) will be shown to be vulnerable to the proposed SPA in this paper. Another well known SPA countermeasure, the square-multiply-always algorithm, will also be shown to be vulnerable to this new attack. Further extensions of the proposed attack are possible and lead to more powerful attacks.
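In RSA-like settings the "input message of order two" is typically m = N - 1, since (N-1)^2 mod N = 1. Feeding it into a square-multiply-always exponentiation collapses every intermediate value to 1 or N-1, so the operand of each squaring depends directly on the previous secret exponent bit; that data dependence is what a power trace can reveal. The sketch below demonstrates only the arithmetic collapse, using a printed log in place of a power trace, with a toy modulus and exponent.

```python
# Sketch of the arithmetic fact exploited by the order-two-input attack.
# A real attack reads the data dependence out of a power trace; here the
# "trace" is the list of values fed into each squaring.
N = 0xC5E7                      # toy odd modulus (not a real RSA modulus)
m = N - 1                        # element of order two: (N-1)^2 % N == 1
d = 0b101101                     # toy secret exponent

assert pow(m, 2, N) == 1

def square_multiply_always(base, exponent, modulus):
    squaring_inputs = []                 # stand-in for the SPA power trace
    r0 = 1
    for bit in bin(exponent)[2:]:
        squaring_inputs.append(r0)       # leak: value about to be squared
        r0 = (r0 * r0) % modulus
        r1 = (r0 * base) % modulus       # multiplication always performed
        if bit == "1":
            r0 = r1
    return r0, squaring_inputs

_, trace = square_multiply_always(m, d, N)
# Each squaring whose input is N-1 (rather than 1) means the previous bit was 1.
recovered = "".join("1" if value == N - 1 else "0" for value in trace[1:])
print("exponent bits (except the last):", bin(d)[2:-1])
print("recovered from trace:           ", recovered)
```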

Journal ArticleDOI
01 Jan 2005
TL;DR: Security risk models have successfully estimated the likelihood of attack for simple security threats; before the risks to computer systems can be forecast, the strength of their security must first be measured.
Abstract: Security risk models have successfully estimated the likelihood of attack for simple security threats such as burglary and auto theft. Before we can forecast the risks to computer systems, we must first learn to measure the strength of their security.

Journal ArticleDOI
TL;DR: The results suggest that the decisions an economic entity makes concerning information security investment depend on vulnerability, and they lend empirical support to prior economic research.

Journal ArticleDOI
TL;DR: In this article, the authors point out that conventional digital signature schemes are vulnerable to an additional sanitizing attack and show how this vulnerability can be eliminated by using a new digitally signed document sanitizing scheme with disclosure condition control.
Abstract: A digital signature does not allow any alteration of the document to which it is attached. Appropriate alteration of some signed documents, however, should be allowed because there are security requirements other than that for the integrity of the document. In the disclosure of official information, for example, sensitive information such as personal information or national secrets is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is demanded by a citizen. If this disclosure is done digitally by using the current digital signature schemes, the citizen cannot verify the disclosed information correctly because the information has been altered to prevent the leakage of sensitive information. That is, with current digital signature schemes, the confidentiality of official information is incompatible with the integrity of that information. This is called the digital document sanitizing problem, and some solutions such as digital document sanitizing schemes and content extraction signatures have been proposed. In this paper, we point out that the conventional digital signature schemes are vulnerable to additional sanitizing attack and show how this vulnerability can be eliminated by using a new digitally signed document sanitizing scheme with disclosure condition control.
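One common way to realize this kind of sanitizable signing (hedged: a generic block-wise construction in the same spirit, not necessarily the paper's exact scheme) is to commit to each document block with a salted hash, sign the list of commitments, and let the sanitizer withhold the salt and content of masked blocks while the rest remains verifiable. In the sketch below an HMAC stands in for a real digital signature, so it illustrates the structure only.

```python
# Minimal sketch of block-wise "commit then sign" sanitizable signing.
# An HMAC stands in for a real signature; illustration of the idea only.
import hashlib, hmac, os

def commit(salt, block):
    return hashlib.sha256(salt + block.encode()).hexdigest()

def sign(signing_key, commitments):
    msg = "|".join(commitments).encode()
    return hmac.new(signing_key, msg, hashlib.sha256).hexdigest()

# --- signer ---
key = os.urandom(32)
blocks = ["public section", "NAME: Alice Example", "public conclusion"]
salts = [os.urandom(16) for _ in blocks]
commitments = [commit(s, b) for s, b in zip(salts, blocks)]
signature = sign(key, commitments)

# --- sanitizer: mask block 1, revealing only its commitment ---
disclosed = [(salts[i], blocks[i]) if i != 1 else None for i in range(len(blocks))]

# --- verifier: recompute commitments for disclosed blocks, reuse the rest ---
check = [commit(*d) if d is not None else commitments[i]
         for i, d in enumerate(disclosed)]
print("verifies:", hmac.compare_digest(sign(key, check), signature))
print("sanitized view:", [b if d else "[REDACTED]"
                          for d, b in zip(disclosed, blocks)])
```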

Journal ArticleDOI
M. Sahinoglu
01 May 2005
TL;DR: The author's design provides a quantitative technique with an updated repository on vulnerabilities, threats, and countermeasures to calculate risk.
Abstract: Several security risk templates employ nonquantitative attributes to express a risk's severity, which is subjective and void of actual figures. The author's design provides a quantitative technique with an updated repository on vulnerabilities, threats, and countermeasures to calculate risk.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A hierarchical multi-level modeling approach to modeling vulnerability using model composition and refinement techniques, a data-centric, quantitative metrics mechanism, and multidimensional assessment capturing both process and product elements in a formalized framework are proposed.
Abstract: Security assessment is largely ad hoc today due to its inherent complexity. The existing methods are typically experimental in nature and highly dependent on the assessor's experience, and the security metrics are usually qualitative. We propose to address the dual problems of experimental analysis and qualitative metrics by developing two complementary approaches for security assessment: (1) analytical modeling, and (2) metrics-based assessment. To avoid experimental evaluation, we put forward a formal model that permits the accurate and scientific analysis of different security attributes and security flaws. To avoid qualitative metrics leading to ambiguous conclusions, we put forward a collection of mathematical formulas from which quantitative metrics can be derived. The vulnerability analysis model responds to the need for a theoretical foundation for modeling information security, and security metrics are the cornerstone of risk analysis and security management. In addition to the security analysis approach, we discuss security testing methods as well. A Relative Complete Coverage (RCC) principle is proposed along with an example of applying the RCC principle. The innovative ideas proposed in this paper include a hierarchical multi-level modeling approach to modeling vulnerability using model composition and refinement techniques, a data-centric, quantitative metrics mechanism, and a multidimensional assessment capturing both process and product elements in a formalized framework.

Proceedings ArticleDOI
TL;DR: The procedure for conducting a thorough assessment of process control networks to evaluate these risks is presented, along with methods to determine and reduce the vulnerability of networked control systems to unintended and malicious intrusions.
Abstract: Many automation and modernization programs are now employing Intranet/Internet technologies in industrial control strategies. The ensuing systems are a mixture of state-of-the-art and legacy installations and create challenges in the implementation and enforcement of security measures. Control system intrusions can cause environmental damage, safety risks, poor quality and lost production. This paper presents methods to determine and reduce the vulnerability of networked control systems to unintended and malicious intrusions. The procedure for conducting a thorough assessment of the process control networks to evaluate these risks is presented. Security issues are identified, as are technical and procedural countermeasures to mitigate these risks. Examples are drawn from past assessments and incidents. Once complete, the assessment results allow the network designer to plan infrastructure expansion with confidence in the security and reliability of the network's operation.

Journal ArticleDOI
01 May 2005
TL;DR: A methodology facilitating the generation of information assurance strategies and implementing measures to assess them is developed, and value focused thinking is used to develop an information assurance analysis framework.
Abstract: The information revolution has provided new and improved capabilities to rapidly disseminate and employ information in decision-making. Enhancing and enabling for today's modern industry, these capabilities are critical to our national infrastructures. These capabilities, however, often rely upon systems interconnected throughout the world, resulting in potentially increased vulnerability to attack and compromise of data by globally dispersed threats. This paper develops a methodology facilitating the generation of information assurance strategies and implementing measures to assess them. Upon reviewing key factors and features of information assurance, value focused thinking is used to develop an information assurance analysis framework.

Book ChapterDOI
01 Jan 2005
TL;DR: A better understanding of the design principles and implementation techniques for building high-speed, reliable, and scalable network intrusion detection systems is provided.
Abstract: The need for building high-speed NIDS that can reliably generate alerts as intrusions occur and have the intrinsic ability to scale as network infrastructure and attack sophistication evolves has been discussed in this chapter. The key design principles are analyzed and it has been argued that network intrusion-detection functions should be carried out by distributed and collaborative NNIDS at the end hosts. It is shown that an NNIDS running on the network interface instead of the host operating system can provide increased protection, reduced vulnerability to circumvention, and much lower overhead. The chapter also describes the experience in implementing a prototype NNIDS, based on Snort, an Intel IXP 1200, and a Xilinx Virtex-1000 FPGA. These experiments help to identify the performance bottlenecks and give insights on how to improve the design. System stress tests show that the embedded NNIDS can handle high-speed traffic without packet drops and achieve the same performance as the Snort software running on a dedicated high-end computer system. Ongoing work includes optimizing the performance of the NNIDS, developing strategies for sustainable operation of the NNIDS under attacks through adaptation and active countermeasures, studying algorithms for distributed and collaborative intrusion detection, and further developing the analytical models for buffer and processor allocation. Also tested were FPGA pattern-matching designs that approach 10 Gbps throughput with the entire Snort ruleset using a Xilinx Virtex2 device. A better understanding of the design principles and implementation techniques for building high-speed, reliable, and scalable network intrusion detection systems has been provided.

Journal ArticleDOI
TL;DR: A number of attacks against the security of keyless-entry systems of vehicles are described, analyses of several attacks are presented, and the vulnerability of the system under different attacks is compared.
Abstract: Remote control of vehicle functions using a handheld electronic device has become a popular feature for vehicles. Such functions include, but are not limited to, locking, unlocking, remote start, window closures, and activation of an alarm. As consumers enjoy the remote access and become more comfortable with the remote functions, original equipment manufacturers (OEMs) have started looking for new features to simplify and reduce the user interface for vehicle access. These new features will provide users with an additional level of comfort without requiring them to touch or press any button on any remote devices to gain access to the vehicle. While this extra level of comfort is a desirable feature, it introduces several security threats against the vehicle's keyless-entry system. This paper describes a number of attacks against the security of keyless-entry systems of vehicles, presents analyses of several attacks, and compares the vulnerability of the system under different attacks. At the end, some suggestions for improved design are proposed.

ReportDOI
01 Oct 2005
TL;DR: The results of this evaluation indicate that historical evidence provides insight into control system related incidents or failures; however, the limited available information provides little support for future risk estimates.
Abstract: The Analysis Function of the US-CERT Control Systems Security Center (CSSC) at the Idaho National Laboratory (INL) has prepared this report to document cyber security incidents for use by the CSSC. The description and analysis of incidents reported herein support three CSSC tasks: establishing a business case; increasing security awareness and private and corporate participation related to enhanced cyber security of control systems; and providing informational material to support model development and prioritize activities for CSSC. The stated mission of CSSC is to reduce vulnerability of critical infrastructure to cyber attack on control systems. As stated in the Incident Management Tool Requirements (August 2005), "Vulnerability reduction is promoted by risk analysis that tracks actual risk, emphasizes high risk, determines risk reduction as a function of countermeasures, tracks increase of risk due to external influence, and measures success of the vulnerability reduction program". Process control and Supervisory Control and Data Acquisition (SCADA) systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. New research indicates this confidence is misplaced--the move to open standards such as Ethernet, Transmission Control Protocol/Internet Protocol, and Web technologies is allowing hackers to take advantage of the control industry's unawareness. Much of the available information about cyber incidents represents a characterization as opposed to an analysis of events. The lack of good analyses reflects an overall weakness in reporting requirements as well as the fact that to date there have been very few serious cyber attacks on control systems. Most companies prefer not to share cyber attack incident data because of potential financial repercussions. Uniform reporting requirements will do much to make this information available to Department of Homeland Security (DHS) and others who require it. This report summarizes the rise in frequency of cyber attacks, describes the perpetrators, and identifies the means of attack. This type of analysis, when used in conjunction with vulnerability analyses, can be used to support a proactive approach to prevent cyber attacks. CSSC will use this document to evolve a standardized approach to incident reporting and analysis. This document will be updated as needed to record additional event analyses and insights regarding incident reporting. This report represents 120 cyber security incidents documented in a number of sources, including: the British Columbia Institute of Technology (BCIT) Industrial Security Incident Database, the 2003 CSI/FBI Computer Crime and Security Survey, the KEMA, Inc., Database, Lawrence Livermore National Laboratory, the Energy Incident Database, the INL Cyber Incident Database, and other open-source data. The National Memorial Institute for the Prevention of Terrorism (MIPT) database was also interrogated but, interestingly, failed to yield any cyber attack incidents. The results of this evaluation indicate that historical evidence provides insight into control system related incidents or failures; however, the limited available information provides little support for future risk estimates. The documented case history shows that activity has increased significantly since 1988.
The majority of incidents come from the Internet by way of opportunistic viruses, Trojans, and worms, but a surprisingly large number are directed acts of sabotage. A substantial number of confirmed, unconfirmed, and potential events that directly or potentially impact control systems worldwide are also identified. Twelve selected cyber incidents are presented at the end of this report as examples of the documented case studies (see Appendix B).

Journal ArticleDOI
TL;DR: In this paper, a novel method of rights protection for categorical data through watermarking is introduced, which is designed to survive important attacks, such as subset selection and random alterations.
Abstract: A novel method of rights protection for categorical data through watermarking is introduced in this paper. New watermark embedding channels are discovered and associated novel watermark encoding algorithms are proposed. While preserving data quality requirements, the introduced solution is designed to survive important attacks, such as subset selection and random alterations. Mark detection is fully "blind" in that it doesn't require the original data, an important characteristic, especially in the case of massive data. Various improvements and alternative encoding methods are proposed and validation experiments on real-life data are performed. Important theoretical bounds including mark vulnerability are analyzed. The method is proved (experimentally and by analysis) to be extremely resilient to both alteration and data loss attacks, for example, tolerating up to 80 percent data loss with a watermark alteration of only 25 percent.
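A simplified sketch of the general embedding idea for categorical data: a keyed hash of each tuple's primary key selects roughly one in GAMMA tuples, and the selected tuples' categorical value is forced to a key-determined element of the attribute's domain; detection re-derives the selection and counts matches. This illustrates the approach generically and is not the paper's exact encoding channel.

```python
# Simplified sketch of keyed mark embedding in a categorical attribute.
# SECRET, GAMMA, and the toy data are illustrative assumptions.
import hashlib

SECRET = b"owner-secret-key"
GAMMA = 10                              # ~1/GAMMA of tuples carry a mark
DOMAIN = ["red", "green", "blue", "yellow"]

def keyed_hash(primary_key):
    return int.from_bytes(
        hashlib.sha256(SECRET + str(primary_key).encode()).digest(), "big")

def embed(rows):
    marked = []
    for pk, category in rows:
        h = keyed_hash(pk)
        if h % GAMMA == 0:              # this tuple carries a mark
            category = DOMAIN[(h // GAMMA) % len(DOMAIN)]
        marked.append((pk, category))
    return marked

def detect(rows):
    hits = total = 0
    for pk, category in rows:
        h = keyed_hash(pk)
        if h % GAMMA == 0:
            total += 1
            hits += category == DOMAIN[(h // GAMMA) % len(DOMAIN)]
    return hits, total

data = [(pk, DOMAIN[pk % 4]) for pk in range(1000)]
suspect = embed(data)
hits, total = detect(suspect)
print(f"mark tuples matching expected value: {hits}/{total}")
```

Because detection only counts matches on the keyed selection, it needs neither the original data nor the full watermark to survive partial data loss, which is the "blind" property emphasized in the abstract.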

Book ChapterDOI
TL;DR: This work investigates if it is possible to predict the number of vulnerabilities that can potentially be identified in a future release of a software system and considers the vulnerability-discovery rate to see if models can be developed to project future trends.
Abstract: Security and reliability are important attributes of complex software systems. It is now common to use quantitative methods for evaluating and managing reliability. In this work we examine the feasibility of quantitatively characterizing some aspects of security. In particular, we investigate if it is possible to predict the number of vulnerabilities that can potentially be identified in a future release of a software system. We use several major operating systems as representatives of complex software systems. The data on vulnerabilities discovered in some of the popular operating systems is analyzed. We examine this data to determine if the density of vulnerabilities in a program is a useful measure. We try to identify what fraction of software defects are security related, i.e., are vulnerabilities. We examine the dynamics of vulnerability discovery hypothesizing that it may lead us to an estimate of the magnitude of the undiscovered vulnerabilities still present in the system. We consider the vulnerability-discovery rate to see if models can be developed to project future trends. Finally, we use the data for both commercial and open-source systems to determine whether the key observations are generally applicable. Our results indicate that the values of vulnerability densities fall within a range of values, just like the commonly used measure of defect density for general defects. Our examination also reveals that vulnerability discovery may be influenced by several factors including sharing of codes between successive versions of a software system.
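The two simple measures discussed above are easy to compute: vulnerability density is the count of known vulnerabilities normalized by code size (commonly per thousand source lines), and the related ratio is vulnerabilities over all known defects. The figures in the sketch below are invented for illustration.

```python
# Sketch of vulnerability density (per KSLOC) and the fraction of all known
# defects that are security-related. The numbers are made up.
systems = {
    # name: (known vulnerabilities, known defects, source lines of code)
    "os-alpha": (150, 10_000, 16_000_000),
    "os-beta":  (88,   6_500,  9_000_000),
}

for name, (vulns, defects, sloc) in systems.items():
    density = vulns / (sloc / 1000)          # vulnerabilities per KSLOC
    ratio = vulns / defects                  # fraction of defects that are vulns
    print(f"{name}: density={density:.4f} per KSLOC, "
          f"{ratio:.1%} of defects are vulnerabilities")
```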

Journal ArticleDOI
TL;DR: An accurate and computable definition of network vulnerability, directly connected with the network's topology, is introduced; its basic properties and its relationship with other parameters of the network are discussed.
Abstract: The study of the security and stability of complex networks plays a central role in reducing the risk and consequences of attacks or dysfunctions of any type. The concept of vulnerability helps to measure the response of complex networks subjected to attacks on vertices and edges, and it allows one to spot the critical components of a network in order to improve its security. We introduce an accurate and computable definition of network vulnerability which is directly connected with its topology, and we analyze its basic properties. We discuss the relationship of the vulnerability with other parameters of the network and we illustrate this with some examples.
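The abstract does not reproduce the definition itself. As a hedged illustration of a vulnerability measure "directly connected with its topology" in the spirit described (an assumption, not necessarily the paper's exact formula), one can take the worst-case relative loss of global efficiency caused by deleting a single vertex:

```latex
% Illustrative only: E(G) is the global efficiency of graph G and V(G) the
% worst-case relative efficiency loss under single-vertex removal.
E(G) = \frac{1}{n(n-1)} \sum_{i \neq j} \frac{1}{d(i,j)},
\qquad
V(G) = \max_{v} \frac{E(G) - E(G \setminus \{v\})}{E(G)}
```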

Proceedings ArticleDOI
30 Aug 2005
TL;DR: In this article, the authors present a taxonomy of attacks based on an adversary's goals, focusing on how to evaluate the vulnerability of sensor network protocols to eavesdropping, with respect to sensor data distributions and network topologies.
Abstract: With respect to security, sensor networks have a number of considerations that separate them from traditional distributed systems. First, sensor devices are typically vulnerable to physical compromise. Second, they have significant power and processing constraints. Third, the most critical security issue is protecting the (statistically derived) aggregate output of the system, even if individual nodes may be compromised. We suggest that these considerations merit a rethinking of traditional security techniques: rather than depending on the resilience of cryptographic techniques, in this paper we develop new techniques to tolerate compromised nodes and to even mislead an adversary. We present our initial work on probabilistically quantifying the security of sensor network protocols, with respect to sensor data distributions and network topologies. Beginning with a taxonomy of attacks based on an adversary's goals, we focus on how to evaluate the vulnerability of sensor network protocols to eavesdropping. Different topologies and aggregation functions provide different probabilistic guarantees about system security, and make different trade-offs in power and accuracy.

Journal ArticleDOI
TL;DR: In this article, the authors summarize the results of ongoing research to investigate economical, unobtrusive, and effective methods to mitigate the risk of terrorist attacks against critical bridges; they outline a recommended plan to reduce these threats through proven risk management techniques, list possible cost-effective security measures, discuss blast effects on bridges, and provide structural design and retrofit guidelines.
Abstract: In the aftermath of the September 11th tragedies, the vulnerability of the United States' transportation infrastructure to terrorist attack has gained national attention. In light of this vulnerability, various governmental agencies are looking into ways to improve the design of structures to better withstand extreme loadings. Until recently, little attention has been given to bridges which are critical to our economy and transportation network. This paper summarizes the results of ongoing research to investigate economical, unobtrusive, and effective methods to mitigate the risk of terrorist attacks against critical bridges. It outlines a recommended plan to reduce these threats through proven risk management techniques, lists possible cost-effective security measures, discusses blast effects on bridges, and provides structural design and retrofit guidelines. It also discusses ongoing research oriented towards the development of a performance-based design methodology. In using proper risk management techniques, transportation managers and bridge engineers can mitigate the risk of terrorist attacks against critical bridges to an acceptable level, while ensuring efficient use of limited resources.