Showing papers on "Vulnerability (computing) published in 2008"


Proceedings ArticleDOI
27 Oct 2008
TL;DR: This paper presents a new variation on CSRF attacks, login CSRF, in which the attacker forges a cross-site request to the login form, logging the victim into the honest web site as the attacker.
Abstract: Cross-Site Request Forgery (CSRF) is a widely exploited web site vulnerability. In this paper, we present a new variation on CSRF attacks, login CSRF, in which the attacker forges a cross-site request to the login form, logging the victim into the honest web site as the attacker. The severity of a login CSRF vulnerability varies by site, but it can be as severe as a cross-site scripting vulnerability. We detail three major CSRF defense techniques and find shortcomings with each technique. Although the HTTP Referer header could provide an effective defense, our experimental observation of 283,945 advertisement impressions indicates that the header is widely blocked at the network layer due to privacy concerns. Our observations do suggest, however, that the header can be used today as a reliable CSRF defense over HTTPS, making it particularly well-suited for defending against login CSRF. For the long term, we propose that browsers implement the Origin header, which provides the security benefits of the Referer header while responding to privacy concerns.

462 citations
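A server-side version of the Origin-header defense described above is easy to sketch. The snippet below is a minimal illustration under our own assumptions (the allowed origin, the function names, and the strict no-header fallback are ours, not the paper's):

```python
from urllib.parse import urlparse

ALLOWED_ORIGIN = "https://example.com"  # hypothetical site being defended

def is_state_changing(method: str) -> bool:
    """POST/PUT/DELETE requests can mutate state and need CSRF checks."""
    return method.upper() not in ("GET", "HEAD", "OPTIONS")

def csrf_check(method: str, headers: dict) -> bool:
    """Accept a state-changing request only if Origin (or, as a fallback,
    Referer) matches the site's own origin; reject when both headers are
    absent, since a suppressed header is indistinguishable from a forged
    cross-site request."""
    if not is_state_changing(method):
        return True
    origin = headers.get("Origin")
    if origin is not None:
        return origin == ALLOWED_ORIGIN
    referer = headers.get("Referer")
    if referer is not None:
        parsed = urlparse(referer)
        return f"{parsed.scheme}://{parsed.netloc}" == ALLOWED_ORIGIN
    return False  # strict mode: no header, no write

# A forged cross-site login POST is rejected.
print(csrf_check("POST", {"Origin": "https://attacker.example"}))  # False
```

The strict fallback matches the paper's measurement that Referer suppression is rare over HTTPS, which is what makes this check a plausible login-CSRF defense there.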


Proceedings ArticleDOI
18 May 2008
TL;DR: In this paper, the authors propose techniques for automatic patch-based exploit generation, and show that their techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update.
Abstract: The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenet of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise a significant fraction of vulnerable hosts who have not yet received the patch.

289 citations
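The paper works on binaries with program analysis and constraint solving; the toy below only illustrates the core observation that the input check a patch adds marks the vulnerable path. Both routines and the probing loop are invented for illustration:

```python
def unpatched_copy(src: bytes) -> bytearray:
    """Pre-patch routine: fixed 8-byte buffer, no length check."""
    dst = bytearray(8)
    for i, b in enumerate(src):
        dst[i] = b            # IndexError here models the memory-safety bug
    return dst

def patched_copy(src: bytes) -> bytearray:
    """Post-patch routine: the patch adds the length check below."""
    if len(src) > 8:          # <-- check introduced by the patch
        raise ValueError("rejected")
    return unpatched_copy(src)

def candidate_exploits(inputs):
    """Inputs the patched version rejects point at the code path that is
    still reachable, and vulnerable, in the unpatched version."""
    for src in inputs:
        try:
            patched_copy(src)
        except ValueError:
            yield src

probes = [bytes(n) for n in range(16)]
print([len(s) for s in candidate_exploits(probes)])  # lengths 9..15
```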



Journal ArticleDOI
TL;DR: In its assessment of the earthquake-induced damage of a municipal water system, this paper includes the impact of damage to the supporting electrical power system using a fault tree analysis and a shortest-path algorithm, and it evaluates the effect of uncertainty in seismic intensity and component fragility on network integrity.

227 citations


Journal ArticleDOI
Wolfgang Kröger1
TL;DR: An object-oriented, hybrid modeling approach promising to overcome some of the shortcomings of traditional methods is presented and the need to extend current modeling and simulation techniques in order to cope with the increasing system complexity is elaborated.

224 citations


Patent
24 Mar 2008
TL;DR: In this article, the authors propose a method of securing a network from vulnerability exploits, including the steps of a traffic analysis engine receiving a plurality of packets destined for an internal operating system, selectively forwarding the packets to at least one virtual machine, processing each forwarded packet, identifying a malicious packet from the processed packets, and the rapid analysis engine creating a new signature to identify the malicious packet.
Abstract: A method of securing a network from vulnerability exploits, including the steps of a traffic analysis engine receiving a plurality of packets destined for an internal operating system; the traffic analysis engine selectively forwarding the packets to at least one virtual machine emulating the internal operating system; the virtual machine processing each forwarded packet; a rapid analysis engine identifying a malicious packet from the processed packets; and the rapid analysis engine creating a new signature to identify the malicious packet.

224 citations


Journal ArticleDOI
TL;DR: More fundamental limitations of the product formula are addressed, including its failure to adjust for correlations among its components, nonadditivity of risks estimated using the formula, inability to use risk-scoring results to optimally allocate defensive resources, and intrinsic subjectivity and ambiguity of Threat, Vulnerability, and Consequence numbers.
Abstract: Several important risk analysis methods now used in setting priorities for protecting U.S. infrastructures against terrorist attacks are based on the formula: Risk = Threat x Vulnerability x Consequence. This article identifies potential limitations in such methods that can undermine their ability to guide resource allocations to effectively optimize risk reductions. After considering specific examples for the Risk Analysis and Management for Critical Asset Protection (RAMCAP) framework used by the Department of Homeland Security, we address more fundamental limitations of the product formula. These include its failure to adjust for correlations among its components, nonadditivity of risks estimated using the formula, inability to use risk-scoring results to optimally allocate defensive resources, and intrinsic subjectivity and ambiguity of Threat, Vulnerability, and Consequence numbers. Trying to directly assess probabilities for the actions of intelligent antagonists instead of modeling how they adaptively pursue their goals in light of available information and experience can produce ambiguous or mistaken risk estimates. Recent work demonstrates that two-level (or few-level) hierarchical optimization models can provide a useful alternative to Risk = Threat x Vulnerability x Consequence scoring rules, and also to probabilistic risk assessment (PRA) techniques that ignore rational planning and adaptation. In such two-level optimization models, the defender predicts the attacker's best response to the defender's own actions, and then chooses his or her own actions taking these best responses into account. Such models appear valuable as practical approaches to antiterrorism risk analysis.

213 citations
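The correlation objection can be made concrete with a few lines of arithmetic. In the sketch below (all probabilities invented), threat and vulnerability are positively correlated, as they would be if attackers favor targets they can actually breach, so the product formula understates expected loss:

```python
import random
random.seed(1)

C = 100.0  # consequence, held fixed

def sample_correlated():
    """Threat and vulnerability as correlated Bernoulli draws: an attacked
    target is vulnerable with probability 0.9, an unattacked one with 0.1."""
    t = random.random() < 0.5
    v = t if random.random() < 0.8 else (random.random() < 0.5)
    return t, v

n = 100_000
samples = [sample_correlated() for _ in range(n)]
e_t = sum(t for t, _ in samples) / n
e_v = sum(v for _, v in samples) / n
e_tv = sum(t and v for t, v in samples) / n

print("Risk from the product formula:", e_t * e_v * C)  # about 25
print("Actual expected loss:        ", e_tv * C)        # about 45
```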


Journal ArticleDOI
TL;DR: In this article, the authors developed a framework to analyze the optimal timing of disclosure and showed that the vendor typically releases the patch less expeditiously than is socially optimal. They also showed that a longer protected period does not always result in better patch quality.
Abstract: Software vulnerabilities represent a serious threat to cybersecurity; most cyberattacks exploit known vulnerabilities. Unfortunately, there is no agreed-upon policy for their disclosure. Disclosure policy (which sets a protected period given to a vendor to release the patch for the vulnerability) indirectly affects the speed and quality of the patch that a vendor develops. Thus, CERT/CC and similar bodies acting in the public interest can use disclosure to influence the behavior of vendors and reduce social cost. This paper develops a framework to analyze the optimal timing of disclosure. We formulate a model involving a social planner who sets the disclosure policy and a vendor who decides on the patch release. We show that the vendor typically releases the patch less expeditiously than is socially optimal. The social planner optimally shrinks the protected period to push the vendor to deliver the patch more quickly, and sometimes the patch release time coincides with disclosure. We extend the model to allow the proportion of users implementing patches to depend upon the quality (chosen by the vendor) of the patch. We show that a longer protected period does not always result in a better patch quality. Another extension allows for some fraction of users to use “work-arounds.” We show that the possibility of work-arounds can provide the social planner with more leverage, and hence the social planner shrinks the protected period. Interestingly, the possibility of work-arounds can sometimes increase the social cost due to the negative externalities imposed by the users who are able to use the work-arounds on the users who are not.

127 citations
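A stylized numerical version of the planner-vendor interaction is easy to set up. The cost functions below are our own toy choices, not the paper's model, but they reproduce its qualitative finding that the vendor's patch release can essentially coincide with the disclosure deadline:

```python
import numpy as np

def vendor_patch_time(T, dev_cost=4.0, post_disclosure_loss=10.0):
    """Vendor picks patch time t minimizing its own cost given deadline T:
    rushing is expensive (dev_cost / t), and shipping after disclosure
    incurs a per-unit-time loss."""
    ts = np.linspace(0.1, 12, 500)
    cost = dev_cost / ts + post_disclosure_loss * np.maximum(ts - T, 0)
    return ts[np.argmin(cost)]

def social_cost(T, exposure_rate=1.0):
    """Planner bears exposure until the patch plus harm from any gap
    between disclosure and patch; anticipates the vendor's best response."""
    t = vendor_patch_time(T)
    return exposure_rate * t + 0.5 * max(t - T, 0) ** 2

Ts = np.linspace(0.1, 12, 200)
T_star = Ts[np.argmin([social_cost(T) for T in Ts])]
print(f"planner's optimal protected period: {T_star:.2f}")
print(f"vendor's patch time at that policy: {vendor_patch_time(T_star):.2f}")
# The planner shrinks T until the patch arrives at (or just after) disclosure.
```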


Journal ArticleDOI
TL;DR: In this article, a game-theoretic approach to the analysis of road network vulnerability is presented, where a mixed route strategy is adopted, meaning that the use of the road network is determined by the worst scenario probabilities.
Abstract: The reliability of road networks depends directly on their vulnerability to disruptive incidents, ranging in severity from minor disruptions to terrorist attacks. This paper presents a game-theoretic approach to the analysis of road network vulnerability. The approach posits predefined disruption, attack or failure scenarios and then considers how to use the road network so as to minimize the maximum expected loss in the event of one of these scenarios coming to fruition. A mixed route strategy is adopted, meaning that the use of the road network is determined by the worst-case scenario probabilities. This is equivalent to risk-averse route choice. A solution algorithm suitable for use with standard traffic assignment software is presented, thereby enabling the use of electronic road navigation networks. A variant of this algorithm suitable for risk-averse assignment is developed. A numerical example relating to the central London road network is presented. The results highlight points of vulnerability in the road network. Applications of this form of network vulnerability analysis together with improved solution methods are discussed.

126 citations
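The minimax mixed-route idea reduces, in miniature, to a small linear program: choose route probabilities that minimize the worst-case expected loss over scenarios. The loss matrix below is invented, and this LP stands in for the paper's traffic-assignment algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Toy loss matrix: rows = routes, columns = failure scenarios; each entry
# is the loss on that route if that scenario occurs (made-up numbers).
L = np.array([[2.0, 9.0, 4.0],
              [7.0, 3.0, 5.0],
              [5.0, 6.0, 2.0]])
n_routes, n_scen = L.shape

# Variables: route probabilities p (n_routes entries) and the bound z.
# Minimize z subject to (L.T @ p) <= z for every scenario, sum(p) = 1.
c = np.r_[np.zeros(n_routes), 1.0]
A_ub = np.c_[L.T, -np.ones(n_scen)]
b_ub = np.zeros(n_scen)
A_eq = np.r_[np.ones(n_routes), 0.0].reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_routes + [(None, None)])
p, z = res.x[:n_routes], res.x[-1]
print("mixed route strategy:", p.round(3))
print("worst-case expected loss:", round(z, 3))
```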


Proceedings ArticleDOI
10 Aug 2008
TL;DR: This paper proposes a new attack on square and multiply, based on a manipulation of the control flow, and shows how to realize this attack in practice using non-invasive spike attacks and discusses impacts of different side channel analysis countermeasures on the attack.
Abstract: In order to provide security for a device, cryptographic algorithms are implemented on it. Even devices using a cryptographically secure algorithm may be vulnerable to implementation attacks like side channel analysis or fault attacks. Most fault attacks on RSA concentrate on the vulnerability of the Chinese Remainder Theorem to fault injections. A few other attacks on RSA which do not use this speed-up technique have been published. Nevertheless, these attacks require quite a precise fault injection, such as a bit flip, or target a special operation without any possibility of checking whether the fault was injected in the intended way, as in safe-error attacks. In this paper we propose a new attack on square and multiply, based on a manipulation of the control flow. Furthermore, we show how to realize this attack in practice using non-invasive spike attacks and discuss impacts of different side channel analysis countermeasures on our attack. The attack was performed using low cost equipment.

118 citations
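The control-flow manipulation can be illustrated on the square-and-multiply loop itself. In the toy model below (our own simplification, with a trivially small modulus), suppressing the conditional multiply at iteration i changes the output exactly when exponent bit i is 1, so each injected fault leaks one key bit:

```python
def square_and_multiply(base, exponent_bits, modulus, skip=None):
    """Left-to-right square-and-multiply; `skip` models a control-flow
    fault that suppresses the conditional multiply at one iteration."""
    r = 1
    for i, bit in enumerate(exponent_bits):
        r = (r * r) % modulus
        if bit == 1 and i != skip:
            r = (r * base) % modulus
    return r

bits = [1, 0, 1, 1, 0, 1]          # toy secret exponent, MSB first
base, n = 7, 221                   # toy RSA-like modulus (13 * 17)
correct = square_and_multiply(base, bits, n)

# Fault model: skip the multiply at iteration i. An unchanged result
# means bit i was 0; a changed result means it was 1.
recovered = [int(square_and_multiply(base, bits, n, skip=i) != correct)
             for i in range(len(bits))]
print(recovered == bits)  # True: the exponent leaks bit by bit
```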


Journal ArticleDOI
TL;DR: The method combines dual graph modelling with connectivity analysis and topological measures; it has been tested on the street network of the Helsinki Metropolitan Area in Finland, and the vulnerability risk of the network elements was experimentally defined.
Abstract: Effective management of infrastructural networks in the case of a crisis requires a prior analysis of the vulnerability of spatial networks and identification of critical locations where an interdiction would cause damage and disruption. This article presents a mathematical method for modelling the vulnerability risk of network elements which can be used for identification of critical locations in a spatial network. The method combines dual graph modelling with connectivity analysis and topological measures and has been tested on the street network of the Helsinki Metropolitan Area in Finland. Based on the results of this test the vulnerability risk of the network elements was experimentally defined. Further developments are currently under consideration for eventually developing a risk model not only for one but for a group of co-located spatial networks.
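A miniature version of the dual-graph connectivity analysis can be put together with networkx. The street network below is a toy, and betweenness centrality stands in for the paper's full set of topological measures and its risk model:

```python
import networkx as nx

# Stand-in street network: nodes are junctions, edges are street segments.
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"),
              ("e", "f"), ("f", "c"), ("d", "g"), ("f", "g")])

# Dual (line) graph: each street segment becomes a node and adjacent
# segments are connected, so topological measures rank segments
# rather than junctions.
D = nx.line_graph(G)
centrality = nx.betweenness_centrality(D)

# Segments with high betweenness in the dual graph are candidate critical
# locations: interdicting them disrupts many shortest connections.
for edge, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(edge, round(score, 3))
```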

Journal ArticleDOI
TL;DR: In this article, the authors present an analysis of information security investment from the perspective of a risk-averse decision maker following common economic principles, and find that for a risk averse decision-maker, the maximum security investment increases with, but never exceeds, the potential loss from a security breach, and there exists a minimum potential loss below which the optimal investment is zero.

Journal ArticleDOI
TL;DR: In this article, a new network performance/efficiency measure, which captures demands, flows, costs, and behavior on networks, can be used to assess the importance of network components and their rankings.
Abstract: In this paper, we demonstrate how a new network performance/efficiency measure, which captures demands, flows, costs, and behavior on networks, can be used to assess the importance of network components and their rankings. We provide new results regarding the measure, which we refer to as the Nagurney-Qiang measure, or, simply, the N-Q measure, and a previously proposed one, which did not explicitly consider demands and flows. We apply both measures to such critical infrastructure networks as transportation networks and the Internet and further explore the new measure through an application to an electric power generation and distribution network in the form of a supply chain. The Nagurney and Qiang network performance/efficiency measure that captures flows and behavior can identify which network components, that is, nodes and links, have the greatest impact in terms of their removal and, hence, are important from both vulnerability as well as security standpoints.
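As commonly stated, the N-Q measure averages demand-to-disutility ratios over origin/destination pairs, and a component's importance is the relative efficiency drop when it is removed. The numbers below are invented, and the formula is our reading of the measure rather than a verbatim transcription:

```python
def nq_efficiency(demands, disutilities):
    """Nagurney-Qiang measure (as commonly stated): average over O/D pairs
    w of demand d_w divided by the equilibrium disutility lambda_w."""
    return sum(demands[w] / disutilities[w] for w in demands) / len(demands)

# Toy numbers: two O/D pairs before and after removing a link g;
# removal raises the disutility of pair 'w1'.
d = {"w1": 100.0, "w2": 60.0}
lam_full = {"w1": 10.0, "w2": 12.0}
lam_minus_g = {"w1": 25.0, "w2": 12.0}

eps_full = nq_efficiency(d, lam_full)
eps_minus = nq_efficiency(d, lam_minus_g)
importance = (eps_full - eps_minus) / eps_full
print(f"efficiency drops {importance:.1%} when g is removed")  # 40.0%
```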

Proceedings ArticleDOI
13 Apr 2008
TL;DR: This paper proposes a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network.
Abstract: Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real-life vulnerability data of the past 6 years from the National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.

Proceedings ArticleDOI
26 May 2008
TL;DR: This work designs an efficient algorithm to assign the appropriate version of software to each sensor, so that sensor worms are restrained from propagation, and examines the impact of sensor node deployment errors on worm propagation.
Abstract: Because of cost and resource constraints, sensor nodes do not have a complicated hardware architecture or operating system to protect program safety. Hence, the notorious buffer-overflow vulnerability that has caused numerous Internet worm attacks could also be exploited to attack sensor networks. We call the malicious code that exploits a buffer-overflow vulnerability in a sensor program a sensor worm. Clearly, a sensor worm will be a serious threat, if not the most dangerous one, when an attacker could simply send a single packet to compromise the entire sensor network. Despite its importance, so far little work has focused on sensor worms. In this work, we first illustrate the feasibility of launching sensor worms through real experiments on Mica2 motes. Inspired by the survivability through heterogeneity philosophy, we then explore the technique of software diversity to combat sensor worms. Given a limited number of software versions, we design an efficient algorithm to assign the appropriate version of software to each sensor, so that sensor worms are restrained from propagation. We also examine the impact of sensor node deployment errors on worm propagation, which directs the selection of our system parameters based on percolation theory. Finally, extensive analytical and simulation results confirm the effectiveness of our scheme in containing sensor worms.
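The version-assignment problem has the flavor of graph coloring: if neighboring sensors run different software versions, a worm keyed to one version cannot hop across that link. The sketch below uses a generic greedy coloring as a stand-in for the paper's algorithm, which additionally respects a fixed version budget and deployment errors:

```python
import networkx as nx

# Sensors within radio range share an edge; giving neighbors distinct
# software versions (a proper coloring) removes worm-traversable links.
G = nx.random_geometric_graph(60, radius=0.2, seed=42)
coloring = nx.greedy_color(G, strategy="largest_first")

versions_needed = max(coloring.values()) + 1
monochrome_edges = sum(coloring[u] == coloring[v] for u, v in G.edges)
print(f"software versions used: {versions_needed}")
print(f"same-version links (worm-traversable): {monochrome_edges}")  # 0
```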

01 Jan 2008
TL;DR: This paper revisits the topic of RFID eavesdropping attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic.
Abstract: RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm, and therefore several implemented systems assume that the communication channel is location limited and thus relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper I revisit the topic of RFID eavesdropping attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. I present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. Finally, I discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs.

Journal ArticleDOI
TL;DR: To help quantitatively measure the level of cybersecurity for a computer-based information system, two indices are presented, the threat-impact index and the cyber-vulnerability index, based on vulnerability trees.

01 Jan 2008
TL;DR: The Topological Vulnerability Analysis (TVA) system is described, which analyzes vulnerability to multistep network penetration and shows how TVA attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources.
Abstract: This chapter examines issues and methods for survivability of systems under malicious penetrating attacks. To protect from such attacks, it is necessary to take steps to prevent them from succeeding. At the same time, it is important to recognize that not all attacks can be averted at the outset; those that are partially successful may be unavoidable, and comprehensive support is required for identifying and responding to such attacks. We describe our Topological Vulnerability Analysis (TVA) system, which analyzes vulnerability to multistep network penetration. At the core of the TVA system are graphs that represent known exploit sequences that attackers can use to penetrate computer networks. We show how TVA attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. TVA can also correlate received alerts, hypothesize missing alerts, and predict future alerts. Thus, TVA offers a promising solution for administrators to monitor and predict the progress of an intrusion, and take quick appropriate countermeasures.
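A toy exploit-dependency graph shows how attack-graph analysis can back out hardening sets. The exploits, conditions, and brute-force search below are illustrative only; TVA itself scales with far more sophisticated machinery:

```python
from itertools import combinations

# Tiny exploit-dependency graph in the spirit of TVA (toy data): each
# exploit needs all of its preconditions and grants one postcondition.
exploits = {
    "e1": ({"attacker_on_dmz"}, "code_exec_web"),
    "e2": ({"code_exec_web"}, "creds_db"),
    "e3": ({"attacker_on_dmz"}, "creds_db"),
    "e4": ({"creds_db"}, "root_on_db"),
}

def reachable(initial, disabled=frozenset()):
    """Forward closure: repeatedly fire any enabled, non-hardened exploit."""
    facts = set(initial)
    changed = True
    while changed:
        changed = False
        for name, (pre, post) in exploits.items():
            if name not in disabled and pre <= facts and post not in facts:
                facts.add(post)
                changed = True
    return facts

# Brute-force the smallest hardening set protecting the critical resource.
critical, start = "root_on_db", {"attacker_on_dmz"}
for k in range(len(exploits) + 1):
    hit = next((set(c) for c in combinations(exploits, k)
                if critical not in reachable(start, frozenset(c))), None)
    if hit is not None:
        print("minimal hardening set:", hit)  # {'e4'} for this toy graph
        break
```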

Journal ArticleDOI
TL;DR: The conception of the cryptographic ASIC, devised to foster side-channel cryptanalyses with a view to modeling the strongest possible attacker, is described, and it is shown that WDDL is more vulnerable than SecLib.
Abstract: Logic styles with constant power consumption are promising solutions to counteract side-channel attacks on sensitive cryptographic devices. Recently, one vulnerability has been identified in a standard-cell-based power-constant logic called WDDL. Another logic, nicknamed SecLib, is considered and does not present the flaw of WDDL. In this paper, we evaluate the security level of WDDL and SecLib. The methodology consists in embedding in a dedicated circuit one unprotected DES coprocessor along with two others, implemented in WDDL and in SecLib. One essential part of this paper is to describe the conception of the cryptographic ASIC, devised to foster side-channel cryptanalyses with a view to modeling the strongest possible attacker. The same analyses are carried out successively on the three DES modules. We conclude that, provided that the back-end of the WDDL module is carefully designed, its vulnerability cannot be exploited by the state-of-the-art attacks. Similarly, the SecLib DES module resists all assaults. However, using a principal component analysis, we show that WDDL is more vulnerable than SecLib. The statistical dispersion of WDDL, which reflects the correlation between the secrets and the power dissipation, is proved to be an order of magnitude higher than that of SecLib.

Journal ArticleDOI
TL;DR: This work presents an avalanche risk estimation procedure that combines statistical analysis of snowfall record, iterative simulations of avalanche dynamics and empirically-based vulnerability relations, and provides a risk estimate flexible to boundary and initial condition changes.

Journal ArticleDOI
M. Johnson1
TL;DR: This work characterize the extent of the security risk for a group of large financial institutions using a direct analysis of leaked documents and finds a statistically significant link between leakage and leak sources including the firm employment base and the number of retail accounts.
Abstract: Firms face many different types of information security risk. Inadvertent disclosure of sensitive business information represents one of the largest classes of recent security breaches. We examine a specific instance of this problem-inadvertent disclosures through peer-to-peer file-sharing networks. We characterize the extent of the security risk for a group of large financial institutions using a direct analysis of leaked documents. We also characterize the threat of loss by examining search patterns in peer-to-peer networks. Our analysis demonstrates both a substantial threat and vulnerability for large financial firms. We find a statistically significant link between leakage and leak sources including the firm employment base and the number of retail accounts. We also find a link between firm visibility and threat activity. Finally, we find that firms with more leaks also experience increased threat.

Journal ArticleDOI
TL;DR: This paper identifies the privacy weakness of the third generation partnership project - authentication and key agreement (3GPP-AKA) by showing a practical attack against it, and proposes a scheme that meets these requirements without introducing security vulnerability into the underlying authentication scheme.
Abstract: With the widespread use of mobile devices, the privacy of mobile location information becomes an important issue. In this paper, we present the requirements on protecting mobile privacy in wireless networks, and identify the privacy weakness of the third generation partnership project - authentication and key agreement (3GPP-AKA) by showing a practical attack to it. We then propose a scheme that meets these requirements, and this scheme does not introduce security vulnerability to the underlying authentication scheme. Another feature of the proposed scheme is that on each use of wireless channel, it uses a one-time alias to conceal the real identity of the mobile station with respect to both eavesdroppers and visited (honest or false) location registers. Moreover, the proposed scheme achieves this goal of identity concealment without sacrificing authentication efficiency.
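The abstract does not spell out the alias construction; one generic way to realize a one-time alias, which may well differ from the authors' scheme, is a keyed pseudorandom function over a counter shared with the home network:

```python
import hmac, hashlib

def one_time_alias(shared_key: bytes, counter: int) -> str:
    """Alias for the counter-th channel use; both the mobile station and
    its home network can derive it from the shared key and a synchronized
    counter, so the real identity never crosses the air interface."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"\x01" * 32  # long-term key shared with the home network (toy value)
print([one_time_alias(key, c) for c in range(3)])  # a fresh alias per use
```

An eavesdropper or a false location register sees only unlinkable pseudorandom values, which is the concealment property the paper requires.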

Journal ArticleDOI
TL;DR: In this article, a new method, named Vulnerability of Towers (VULNeT), is presented for assessing the seismic vulnerability of tower structures according to two different levels of accuracy.
Abstract: The methods commonly pursued for vulnerability or risk analysis, when carried out at large scale (territorial, regional), are mainly based on qualitative parameters, due to the necessity of processing a huge number of structures. As a matter of fact, the final target of these methods is the correlation of the most representative parameters of the structural behaviour, so as to finally rank the same structures according to their level of vulnerability or associated risk. These methods are undoubtedly effective for large-scale analyses and statistical elaborations, although affected by a certain level of subjectivity. Moreover, the vulnerability is commonly defined by hybrid indices (more often obtained as score assignment or expert opinions) that do not represent physical entities. A validation of these methods can be pursued by data on earthquake damage, which are very helpful for calibrating the analysis and formulating fragility curves. Experimental data and in situ investigations can also help to enhance the effectiveness of the method and to finally calibrate the same results. These are particularly needed for very slender structures, as towers and similar structures, the behaviour of which is strongly influenced by the dynamic performance. This paper presents a new method, named Vulnerability of Towers (VULNeT), for assessing the seismic vulnerability of slender structures (particularly towers) according to two different levels of accuracy. The method is based on qualitative parameters, collected through a new survey form, purpose-developed with online data storage and capable of running speedy analyses. Once the method has been introduced, some applications on different samples are shown and the results compared with those of other recent works from the literature on the topic. The final part of this paper provides a general framework on possible enhancements of VULNeT. Data on the dynamic behaviour of the structure obtained through experimental campaigns and structural modelling are introduced, and these are conceived for validating the results achieved as well as for checking the feasibility of the possible failure mechanisms included in the form. Finally, the first results obtained on a pilot application on the Febonio tower of Trasacco (Abruzzo, Italy, where in situ investigations and F.E. structural modelling were carried out), are presented, and some conclusions are drawn on the future development of the work. Copyright © 2008 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
05 May 2008
TL;DR: This work adapts the techniques of ACE Analysis to develop a new software-level vulnerability metric called the Program Vulnerability Factor (PVF), which allows insight into the vulnerability of a software resource to hardware faults in a micro-architecture independent way, and can be used to make judgments about the relative reliability of different programs.
Abstract: The technique known as ACE Analysis allows researchers to quantify a hardware structure's Architectural Vulnerability Factor (AVF) using simulation. This allows researchers to understand a hardware structure's vulnerability to soft errors and consider design tradeoffs when running specific workloads. AVF is only applicable to hardware, however, and no corresponding concept has yet been introduced for software. Quantifying vulnerability to hardware faults at a software, or program, level would allow researchers to gain a better understanding of the reliability of a program as run on a particular architecture (e.g., X86, PowerPC), independent of the micro-architecture on which it is executed. This ability can provide a basis for future research into reliability techniques at a software level. In this work, we adapt the techniques of ACE Analysis to develop a new software-level vulnerability metric called the Program Vulnerability Factor (PVF). This metric allows insight into the vulnerability of a software resource to hardware faults in a micro-architecture independent way, and can be used to make judgments about the relative reliability of different programs. We describe in detail how to calculate the PVF of a software resource, and show that the PVF of the architectural register file closely correlates with the AVF of the underlying physical register file and can serve as a good predictor of relative AVF when comparing the AVF of two different programs.
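The flavor of the PVF computation can be shown on a toy register trace: bits count as vulnerable (ACE) from a write until the last read that consumes the value, and dead values contribute nothing. The trace and accounting below are a drastic simplification of the paper's method:

```python
# Toy trace for a two-register architectural register file.
# Each entry: (cycle, op, reg), where op is "w" (write) or "r" (read).
trace = [(0, "w", "r1"), (2, "r", "r1"), (3, "w", "r2"),
         (5, "r", "r1"), (6, "w", "r1"), (9, "r", "r2")]

total_cycles, regs = 10, {"r1", "r2"}
ace = {r: [False] * total_cycles for r in regs}

last_write = {}
for cycle, op, reg in trace:
    if op == "w":
        last_write[reg] = cycle
    else:  # read: everything since the producing write was needed
        for c in range(last_write[reg], cycle + 1):
            ace[reg][c] = True

# r1's final write at cycle 6 is never read, so cycles 6-9 are un-ACE.
ace_slots = sum(sum(bits) for bits in ace.values())
pvf = ace_slots / (total_cycles * len(regs))
print(f"PVF of this toy register file: {pvf:.0%}")  # 65%
```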

Patent
Navjot Singh1, Timothy Tsai1
30 Sep 2008
TL;DR: In this paper, a method and apparatus for automatically determining whether a security vulnerability alert is relevant to a device (e.g., personal computer, server, personal digital assistant [PDA], etc.), and automatically retrieving the associated software patches for relevant alerts, are disclosed.
Abstract: A method and apparatus for automatically determining whether a security vulnerability alert is relevant to a device (e.g., personal computer, server, personal digital assistant [PDA], etc.), and automatically retrieving the associated software patches for relevant alerts, are disclosed. The illustrative embodiment intelligently determines whether the software application specified by a security vulnerability alert is resident on the device, whether the version of the software application on the device matches that of the security vulnerability alert, and whether the device's hardware platform and operating system match those of the security vulnerability alert.
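The three relevance checks the abstract names (application resident, version match, platform and OS match) translate directly into code. Everything below, names and URL included, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Device:
    installed: dict          # application name -> installed version
    os: str
    platform: str

@dataclass
class Alert:
    app: str
    version: str
    os: str
    platform: str
    patch_url: str

def alert_is_relevant(device: Device, alert: Alert) -> bool:
    """Mirror of the abstract's checks: the application is resident, its
    version matches, and the device's OS and hardware platform match."""
    return (device.installed.get(alert.app) == alert.version
            and device.os == alert.os
            and device.platform == alert.platform)

dev = Device({"acme-httpd": "2.4.1"}, os="linux", platform="x86")
alert = Alert("acme-httpd", "2.4.1", "linux", "x86",
              "https://patches.example/acme-httpd-2.4.2")
if alert_is_relevant(dev, alert):
    print("fetch patch from", alert.patch_url)  # automatic retrieval step
```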

Patent
28 Mar 2008
TL;DR: In this article, a computer system comprising at least one controlled execution space hosting an operating system and an application program, a vulnerability monitoring agent coupled to the controlled execution spaces, one or more vulnerability profiles coupled with the monitoring agent, wherein each of the vulnerability profiles comprises an application-program identifier, an operating-system identifier, and a vulnerability specification describing a vulnerability of an application that the application program identifier indicates when executed with an operating operating system that the operating system identifier indicates, and remedial action which when executed will remediate the vulnerability.
Abstract: A computer system, comprising at least one controlled execution space hosting an operating system and an application program; a vulnerability monitoring agent coupled to the controlled execution space; one or more vulnerability profiles coupled to the vulnerability monitoring agent, wherein each of the vulnerability profiles comprises an application program identifier, an operating system identifier, a vulnerability specification describing a vulnerability of an application program that the application program identifier indicates when executed with an operating system that the operating system identifier indicates, and a remedial action which when executed will remediate the vulnerability; wherein the vulnerability monitoring agent is configured to monitor execution of the operating system and the application program in the controlled execution space, to detect an anomaly associated with the vulnerability, to determine the remedial action for the operating system and application program based on one of the vulnerability profiles, and to cause the remedial action.

Proceedings ArticleDOI
12 May 2008
TL;DR: The goal of the workshop was to challenge, establish and debate a far-reaching agenda that broadly and comprehensively outlined a strategy for cyber security and information intelligence that is founded on sound principles and technologies.
Abstract: As our dependence on the cyber infrastructure grows ever larger, more complex and more distributed, the systems that compose it become more prone to failures and/or exploitation. Intelligence is information valued for its currency and relevance rather than its detail or accuracy. Information explosion describes the pervasive abundance of (public/private) information and the effects of such. Gathering, analyzing, and making use of information constitutes a business- / sociopolitical- / military-intelligence gathering activity and ultimately poses significant advantages and liabilities to the survivability of "our" society. The combination of increased vulnerability, increased stakes and increased threats make cyber security and information intelligence (CSII) one of the most important emerging challenges in the evolution of modern cyberspace "mechanization." The goal of the workshop was to challenge, establish and debate a far-reaching agenda that broadly and comprehensively outlined a strategy for cyber security and information intelligence that is founded on sound principles and technologies. We aimed to discuss novel theoretical and applied research focused on different aspects of software security/dependability, as software is at the heart of the cyber infrastructure.

Proceedings ArticleDOI
04 Mar 2008
TL;DR: It is concluded that the VEA-bility can be used to accurately estimate the comparative desirability of a specific network configuration, which can then be used to explore alternate possible configurations and allows an administrator to select one among the given options.
Abstract: In this work, we propose a novel quantitative security metric, VEA-bility, which measures the desirability of different network configurations. An administrator can then use the VEA-bility scores of different configurations to configure a secure network. Based on our findings, we conclude that VEA-bility can be used to accurately estimate the comparative desirability of a specific network configuration. This information can then be used to explore alternative configurations, allowing an administrator to select among the given options. These tools are important to network administrators as they strive to provide secure, yet functional, network configurations.

Book ChapterDOI
01 May 2008
TL;DR: It is shown that for many applications, successful Sybil attacks may be expensive even when the Sybil attack cannot be prevented, and the use of a recurring fee as a deterrent against theSybil attack is proposed.
Abstract: Sybil attacks have been shown to be unpreventable except under the protection of a vigilant central authority. We use an economic analysis to show quantitatively that some applications and protocols are more robust against the attack than others. In our approach, for each distributed application and attacker objective, there is a critical value that determines the cost-effectiveness of the attack. A Sybil attack is worthwhile only when the critical value is exceeded by the ratio of the value of the attacker's goal to the cost of identities. We show that for many applications, successful Sybil attacks may be expensive even when the Sybil attack cannot be prevented. Specifically, we propose the use of a recurring fee as a deterrent against the Sybil attack. As a detailed example, we look at four variations of the Sybil attack against a recurring-fee-based onion-routing anonymity network and quantify its vulnerability.
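The cost-effectiveness condition and the recurring-fee deterrent fit in a few lines; the critical value, fee, and discounting below are invented numbers used only to show the mechanism:

```python
def attack_worthwhile(goal_value, cost_per_identity, critical_value):
    """The paper's condition: attack pays off only when the value-to-cost
    ratio exceeds the application-specific critical value."""
    return goal_value / cost_per_identity > critical_value

def recurring_cost(fee_per_period, discount_rate, periods):
    """A recurring fee turns a one-off identity cost into a present-value
    stream, raising the effective cost of each Sybil identity."""
    return sum(fee_per_period / (1 + discount_rate) ** t
               for t in range(periods))

one_off = 5.0
with_fee = recurring_cost(fee_per_period=5.0, discount_rate=0.05, periods=12)
for cost in (one_off, with_fee):
    ok = attack_worthwhile(goal_value=400.0, cost_per_identity=cost,
                           critical_value=10.0)
    print(f"identity cost {cost:6.2f}: attack worthwhile? {ok}")
# One-off cost: True (ratio 80); recurring fee: False (ratio below 10).
```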

Posted Content
TL;DR: In this paper, a non-uniformity factor was proposed to quantify the unevenness of a vulnerable-host distribution, and the authors showed that a representative network-aware malware can increase the spreading speed by exactly or nearly a non-uniformity factor when compared to a random-scanning malware at an early stage of malware propagation.
Abstract: This work investigates three aspects: (a) a network vulnerability as the non-uniform vulnerable-host distribution, (b) threats, i.e., intelligent malwares that exploit such a vulnerability, and (c) defense, i.e., challenges for fighting the threats. We first study five large data sets and observe consistent clustered vulnerable-host distributions. We then present a new metric, referred to as the non-uniformity factor, which quantifies the unevenness of a vulnerable-host distribution. This metric is essentially the Renyi information entropy and better characterizes the non-uniformity of a distribution than the Shannon entropy. Next, we analyze the propagation speed of network-aware malwares in view of information theory. In particular, we draw a relationship between Renyi entropies and randomized epidemic malware-scanning algorithms. We find that the infection rates of malware-scanning methods are characterized by the Renyi entropies that relate to the information bits in a non-uniform vulnerable-host distribution extracted by a randomized scanning algorithm. Meanwhile, we show that a representative network-aware malware can increase the spreading speed by exactly or nearly a non-uniformity factor when compared to a random-scanning malware at an early stage of malware propagation. This quantifies how much more rapidly the Internet can be infected at the early stage when a malware exploits an uneven vulnerable-host distribution as a network-wide vulnerability. Furthermore, we analyze the effectiveness of defense strategies on the spread of network-aware malwares. Our results demonstrate that counteracting network-aware malwares is a significant challenge for the strategies that include host-based defense and IPv6.
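As we read the construction (the abstract says the metric is essentially a Renyi entropy), the non-uniformity factor compares the order-0 and order-2 Renyi entropies of the vulnerable-host distribution: beta = 2^(H_0 - H_2) = Omega * sum(p_i^2), which equals 1 for a uniform distribution over Omega groups and grows with clustering. A small computation on toy distributions:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy of order q (q != 1) in bits; q -> 1 recovers Shannon."""
    return math.log2(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

def non_uniformity_factor(p):
    """Our reading: beta = 2**(H_0 - H_2) = Omega * sum(p_i^2),
    where Omega is the number of nonempty groups."""
    omega = sum(1 for pi in p if pi > 0)
    return omega * sum(pi ** 2 for pi in p)

uniform = [1 / 8] * 8
clustered = [0.65, 0.15, 0.10, 0.04, 0.03, 0.02, 0.005, 0.005]
for name, dist in (("uniform", uniform), ("clustered", clustered)):
    print(name, "beta =", round(non_uniformity_factor(dist), 2),
          "H2 =", round(renyi_entropy(dist, 2), 2), "bits")
# uniform gives beta = 1.0; the clustered toy gives beta around 3.7,
# i.e., a network-aware scanner could spread several times faster.
```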