
Showing papers on "Vulnerability (computing) published in 2006"


Journal ArticleDOI
01 Nov 2006
TL;DR: The Common Vulnerability Scoring System is a public initiative designed to address this issue by presenting a framework for assessing and quantifying the impact of software vulnerabilities.
Abstract: Historically, vendors have used their own methods for scoring software vulnerabilities, usually without detailing their criteria or processes. This creates a major problem for users, particularly those who manage disparate IT systems and applications. The Common Vulnerability Scoring System (CVSS) is a public initiative designed to address this issue by presenting a framework for assessing and quantifying the impact of software vulnerabilities. Organizations currently generating CVSS scores include Cisco, the US National Institute of Standards and Technology (through the US National Vulnerability Database, NVD), Qualys, Oracle, and Tenable Network Security. CVSS offers the following benefits: 1) standardized vulnerability scores, 2) contextual scoring, and 3) an open framework. The goal is for CVSS to facilitate the generation of consistent scores that accurately represent the impact of vulnerabilities.
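For illustration, the sketch below implements the CVSS version 2 base-score equations in Python. A hedge is needed: v2 was published by FIRST in 2007, shortly after this article, and the CVSS v1 formula the article actually describes uses different weights, so treat this as representative of the framework rather than the paper's exact arithmetic.

```python
# Sketch of the CVSS v2 base-score equations (illustrative: v2 postdates this
# article; the CVSS v1 formula described in 2006 differs in its weights).

AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}    # Access Vector
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}            # Access Complexity
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}      # Authentication
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}    # C/I/A impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# A remote, low-complexity, no-auth flaw with complete C/I/A loss scores 10.0:
print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))
```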

628 citations


Proceedings ArticleDOI
09 Dec 2006
TL;DR: This paper proposes a low overhead, software-only information flow tracking system, called LIFT, which minimizes run-time overhead by exploiting dynamic binary instrumentation and optimizations for detecting various types of security attacks without requiring any hardware changes.
Abstract: Computer security is severely threatened by software vulnerabilities. Prior work shows that information flow tracking (also referred to as taint analysis) is a promising technique to detect a wide range of security attacks. However, current information flow tracking systems are not very practical, because they either require program annotations, source code, non-trivial hardware extensions, or incur prohibitive runtime overheads. This paper proposes a low overhead, software-only information flow tracking system, called LIFT, which minimizes run-time overhead by exploiting dynamic binary instrumentation and optimizations for detecting various types of security attacks without requiring any hardware changes. More specifically, LIFT aggressively eliminates unnecessary dynamic information flow tracking, coalesces information checks, and efficiently switches between target programs and instrumented information flow tracking code. We have implemented LIFT on a dynamic binary instrumentation framework on Windows. Our real-system experiments with two real-world server applications, one client application and eighteen attack benchmarks show that LIFT can effectively detect various types of security attacks. LIFT also incurs very low overhead, only 6.2% for server applications, and 3.6 times on average for seven SPEC INT2000 applications. Our dynamic optimizations are very effective in reducing the overhead by a factor of 5-12 times.
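A minimal sketch of the underlying technique, dynamic taint tracking, over a toy instruction stream. This shows the general idea only; LIFT instruments real x86 binaries and adds the optimizations described above, and the instruction names and alert policy here are illustrative assumptions.

```python
# Taint tracking over a toy instruction stream (illustrative ops and policy;
# LIFT instruments real x86 binaries and coalesces/eliminates checks).

def run(program):
    vals, taint = {}, {}
    for op, dst, *src in program:
        if op == "input":                 # data arriving from the network
            vals[dst], taint[dst] = src[0], True
        elif op == "const":
            vals[dst], taint[dst] = src[0], False
        elif op == "add":                 # taint propagates through computation
            a, b = src
            vals[dst] = vals[a] + vals[b]
            taint[dst] = taint[a] or taint[b]
        elif op == "jmp_indirect":        # security check at control transfer
            if taint[dst]:
                raise RuntimeError(f"tainted jump target {vals[dst]:#x}")

try:
    run([
        ("const", "base", 0x400000),
        ("input", "offset", 0xdeadbeef),  # attacker-controlled value
        ("add", "target", "base", "offset"),
        ("jmp_indirect", "target"),       # flagged before control is hijacked
    ])
except RuntimeError as alert:
    print("alert:", alert)
```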

435 citations


Proceedings ArticleDOI
21 May 2006
TL;DR: The work departs from previous approaches by focusing on the semantics of the program and the vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself, and can automatically generate, from a single exploit, a vulnerability signature of much higher quality than previous exploit-based signatures.
Abstract: In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show that the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate, from a single exploit, a vulnerability signature of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest.
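The distinction between exploit-based and vulnerability-based signatures can be made concrete with a toy example. The protocol, field names, and byte patterns below are hypothetical, and the paper derives its signatures by program analysis rather than by hand.

```python
# Toy contrast between an exploit signature and a vulnerability signature
# (hypothetical protocol and byte patterns).

import re

BUF_SIZE = 64

def vulnerable_parse(packet: bytes):
    """Toy service: copies a declared number of bytes into a 64-byte buffer."""
    m = re.match(rb"LEN:(\d+);", packet)
    if m and int(m.group(1)) > BUF_SIZE:
        raise OverflowError("unsafe copy reached")   # the vulnerability point

# One exploit's byte pattern (NOP sled + breakpoints), as an IDS might match:
exploit_signature = re.compile(rb"\x90{16}.*\xcc\xcc", re.DOTALL)

def vulnerability_signature(packet: bytes) -> bool:
    """Matches the *language* of the vulnerability: every input reaching the
    unsafe copy with a length over 64, including polymorphic variants."""
    m = re.match(rb"LEN:(\d+);", packet)
    return bool(m) and int(m.group(1)) > BUF_SIZE

variant = b"LEN:200;" + b"B" * 200                   # no sled, same bug
print(bool(exploit_signature.search(variant)))       # False: variant evades
print(vulnerability_signature(variant))              # True: still caught
```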

324 citations


Journal ArticleDOI
TL;DR: It is shown that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection.
Abstract: Denial of Service attacks present an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized timeout mechanisms to thwart such low-rate DoS attacks.
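A simplified version of the square-wave attack model makes the effect visible: with TCP's minimum retransmission timeout (minRTO, 1 s per RFC 2988), an attacker bursting once every T seconds leaves a flow roughly (T - minRTO)/T of its ideal rate. The sketch below assumes this simplified formula; the paper's model and experiments are considerably more detailed.

```python
# Simplified square-wave attack model: bursts every T seconds force timeouts,
# so a flow sends only in the (T - minRTO) gap after each recovery.

MIN_RTO = 1.0   # seconds (RFC 2988 recommends a 1 s minimum RTO)

def normalized_throughput(T: float) -> float:
    """Approximate fraction of its ideal rate a TCP flow retains."""
    return 0.0 if T <= MIN_RTO else (T - MIN_RTO) / T

for T in (1.0, 1.1, 1.5, 2.0, 5.0):
    print(f"attack period {T:.1f} s -> {normalized_throughput(T):5.0%} of ideal rate")
```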

215 citations


Book ChapterDOI
17 Aug 2006
TL;DR: This work shows that access-driven cache-based attacks are becoming easier to understand and analyze, and when such attacks are mounted against systems performing AES, only a very limited number of encryptions are required to recover the whole key with a high probability of success.
Abstract: An access-driven attack is a class of cache-based side channel analysis. Like the time-driven attack, the cache's timings are under inspection as a source of information leakage. Access-driven attacks scrutinize the cache behavior with a finer granularity, rather than evaluating the overall execution time. Access-driven attacks leverage the ability to detect whether a cache line has been evicted, or not, as the primary mechanism for mounting an attack. In this paper we focus on the case of AES and we show that the vast majority of processors suffer from this cache-based vulnerability. Our best results are in fact obtained on a processor without multi-threading capabilities, in contrast to previous works in this area that had suggested that multi-threading actually improved, or even made possible, this class of attack. Despite the technical difficulties involved in mounting such attacks, our work shows that access-driven cache-based attacks are becoming easier to understand and analyze. Also, when such attacks are mounted against systems performing AES, only a very limited number of encryptions are required to recover the whole key with a high probability of success, thanks to our last-round analysis from the ciphertext.
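The flavor of the last-round analysis can be reproduced in simulation. In the sketch below, a random bijection stands in for the AES S-box and the cache-line observations that prime-and-probe would supply are simulated: each ciphertext byte plus the cache line touched by the final table lookup constrains the key byte, and a few hundred samples single it out.

```python
# Simulated last-round key-byte recovery (toy S-box and simulated cache-line
# observations; a real attack measures evictions on the target machine).

import random
random.seed(1)

SBOX = list(range(256)); random.shuffle(SBOX)
INV_SBOX = [0] * 256
for x, y in enumerate(SBOX):
    INV_SBOX[y] = x

def line(idx):                 # 16 four-byte entries per 64-byte cache line
    return idx >> 4

SECRET = 0x3A                  # last-round key byte to recover

def observe():
    s = random.randrange(256)              # last-round S-box input
    return SBOX[s] ^ SECRET, line(s)       # (ciphertext byte, observed line)

scores = [0] * 256
for _ in range(300):                       # a few hundred encryptions suffice
    c, seen = observe()
    for g in range(256):
        if line(INV_SBOX[c ^ g]) == seen:  # guess consistent with the line?
            scores[g] += 1

print(hex(max(range(256), key=lambda g: scores[g])))   # -> 0x3a
```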

208 citations


Proceedings ArticleDOI
21 May 2006
TL;DR: A novel vulnerability is presented which allows an attacker to send arbitrary data on a WEP network after having eavesdropped a single data packet and techniques for real-time decryption of data packets are presented, which may be used under common circumstances.
Abstract: The 802.11 encryption standard Wired Equivalent Privacy (WEP) is still widely used today despite the numerous discussions on its insecurity. In this paper, we present a novel vulnerability which allows an attacker to send arbitrary data on a WEP network after having eavesdropped a single data packet. Furthermore, we present techniques for real-time decryption of data packets, which may be used under common circumstances. Vendor-produced mitigation techniques that cause frequent WEP re-keying prevent traditional attacks, whereas our attack remains effective even in such scenarios. We implemented a fully automatic version of this attack which demonstrates its practicality and feasibility in real networks. As even rapidly re-keyed networks can be quickly compromised, we believe WEP must now be abandoned rather than patched yet again.
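A simplified sketch of the keystream-reuse building block: the known LLC/SNAP header yields per-IV keystream, and WEP's linear CRC32 integrity check lets forged frames verify. The paper's actual attack additionally uses 802.11 fragmentation to stretch a short keystream into arbitrary-length injection, which this sketch omits.

```python
# Keystream recovery and frame forgery (simplified; fragmentation-based
# keystream extension, the paper's key step, is omitted).

import binascii, struct

KNOWN_PLAINTEXT = bytes.fromhex("aaaa030000000800")   # LLC/SNAP header

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def recover_keystream(cipher_prefix: bytes) -> bytes:
    """keystream = ciphertext XOR known plaintext; reusable per IV in WEP."""
    return xor(cipher_prefix, KNOWN_PLAINTEXT)

def forge(keystream: bytes, payload: bytes) -> bytes:
    """Encrypt payload plus CRC32 ICV; CRC32 is linear, so the AP accepts it."""
    assert len(payload) + 4 <= len(keystream)
    icv = struct.pack("<I", binascii.crc32(payload) & 0xFFFFFFFF)
    return xor(payload + icv, keystream)

# Eight keystream bytes from one sniffed frame allow four injected bytes per
# fragment; chaining fragments yields arbitrary data.
ks = recover_keystream(bytes.fromhex("1122334455667788"))
print(forge(ks, b"ping").hex())
```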

205 citations


Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper examines how vulnerabilities are handled at large scale, analyzing more than 80,000 security advisories published since 1995, and quantifies the performance of the security industry as a whole.
Abstract: The security level of networks and systems is determined by the software vulnerabilities of its elements. Defending against large-scale attacks requires a quantitative understanding of the vulnerability lifecycle. Specifically, one has to understand how exploitation and remediation of vulnerabilities, as well as the distribution of information thereof, is handled by industry. In this paper, we examine how vulnerabilities are handled at large scale, analyzing more than 80,000 security advisories published since 1995. Based on this information, we quantify the performance of the security industry as a whole. We discover trends and discuss their implications. We quantify the gap between exploit and patch availability and provide an analytical representation of our data which lays the foundation for further analysis and risk management.
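The exploit-to-patch gap the authors quantify amounts to date arithmetic over advisory records; a minimal sketch with hypothetical field names and made-up dates:

```python
# Exploit-to-patch gap from advisory records (hypothetical fields and dates).

from datetime import date
from statistics import median

advisories = [
    {"exploit": date(2006, 1, 12), "patch": date(2006, 2, 1)},
    {"exploit": date(2006, 3, 5),  "patch": date(2006, 3, 20)},
    {"exploit": date(2006, 6, 30), "patch": date(2006, 6, 3)},
]

# Positive gap: days during which an exploit existed but no patch did.
gaps = [(a["patch"] - a["exploit"]).days for a in advisories]
print("median exploit-to-patch gap:", median(gaps), "days")
print(f"share with exploit before patch: {sum(g > 0 for g in gaps) / len(gaps):.0%}")
```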

194 citations


Proceedings ArticleDOI
20 Apr 2006
TL;DR: It is shown how the mixed qualitative and quantitative approach can be used to evaluate effectiveness and economic profitability of countermeasures as well as their deterrent effect on attackers, thus providing decision makers with a useful tool for performing better evaluation of IT security investments during the risk management process.
Abstract: In this paper we present a mixed qualitative and quantitative approach for evaluation of information technology (IT) security investments. For this purpose, we model security scenarios by using defense trees, an extension of attack trees with attack countermeasures and we use economic quantitative indexes for computing the defender's return on security investment and the attacker's return on attack. We show how our approach can be used to evaluate effectiveness and economic profitability of countermeasures as well as their deterrent effect on attackers, thus providing decision makers with a useful tool for performing better evaluation of IT security investments during the risk management process.
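The two economic indexes can be sketched with the standard return-on-security-investment formulation below. The paper attaches its exact index definitions to nodes of a defense tree, so the formulas and numbers here are illustrative rather than the paper's own.

```python
# Standard ROSI/ROA-style indexes (illustrative numbers and formulas).

def roi_defender(ale, risk_mitigated, cost):
    """Return on security investment: (ALE * RM - CSI) / CSI."""
    return (ale * risk_mitigated - cost) / cost

def roa_attacker(expected_gain, base_attack_cost, added_cost):
    """Return on attack once a countermeasure raises the attacker's cost."""
    total = base_attack_cost + added_cost
    return (expected_gain - total) / total

# A $20k countermeasure mitigating 80% of a $100k annual loss expectancy:
print(f"defender ROI: {roi_defender(100_000, 0.8, 20_000):.2f}")    # 3.00
# The same countermeasure raises attack cost from $5k to $15k, deterring
# attackers whose expected gain no longer justifies the effort:
print(f"attacker ROA: {roa_attacker(30_000, 5_000, 10_000):.2f}")   # 1.00
```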

163 citations


Journal ArticleDOI
TL;DR: A quantitative hierarchical threat evaluation model is developed that provides an intuitive security threat status at three levels: services, hosts, and local networks, freeing system administrators from tedious analysis of alarm datasets while giving them an overall view of the security status of the entire system.
Abstract: Evaluating security threat status is very important in network security management and analysis. A quantitative hierarchical threat evaluation model is developed in this paper to evaluate the security threat status of a computer network system; the computational method is based on the structure of the network and the importance of services and hosts. The model adopts a bottom-to-top, local-to-global evaluation policy. The threat indexes of services, hosts and local networks are calculated by weighting the importance of services and hosts based on attack frequency, severity and network bandwidth consumption, and the security threat status is then evaluated. The experimental results show that this model provides an intuitive security threat status at three levels: services, hosts and local networks, freeing system administrators from tedious analysis of alarm datasets while giving them an overall view of the security status of the entire system. It also enables them to identify the security behaviors of the system, to adjust security strategies and to enhance system security. This model is valuable for guiding security engineering practice and for developing security risk evaluation tools.
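A minimal sketch of the bottom-up aggregation described here; the weights, index scales, and alert fields are illustrative assumptions, not the paper's calibrated values.

```python
# Bottom-up weighted threat aggregation (illustrative weights and scales).

def service_threat(alerts):
    """alerts: (severity 1-10, frequency, bandwidth share 0-1) per alert type."""
    return sum(sev * freq * (1 + bw) for sev, freq, bw in alerts)

def host_threat(services):      # services: (importance weight, alerts)
    return sum(w * service_threat(a) for w, a in services)

def network_threat(hosts):      # hosts: (importance weight, services)
    return sum(w * host_threat(s) for w, s in hosts)

web  = [(7, 12, 0.3), (4, 50, 0.1)]     # e.g. injection probes, port scans
mail = [(5, 8, 0.05)]
dmz      = (0.6, [(0.7, web), (0.3, mail)])
internal = (0.4, [(1.0, mail)])
print(f"network threat index: {network_threat([dmz, internal]):.1f}")
```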

156 citations


Journal ArticleDOI
23 Jan 2006
TL;DR: This paper presents recent results of attacks attempted against standard encryption algorithms, provides a theoretical estimation of these attacks based on simple statistical parameters and evaluates the cost and security of different possible countermeasures.
Abstract: Since their introduction by Kocher in 1998, power analysis attacks have attracted significant attention within the cryptographic community. While early works in the field mainly threatened the security of smart cards and simple processors, several recent publications have shown the vulnerability of hardware implementations as well. In particular, field programmable gate arrays are attractive options for hardware implementation of encryption algorithms, but their security against power analysis is a serious concern, as we discuss in this paper. For this purpose, we present recent results of attacks attempted against standard encryption algorithms, provide a theoretical estimation of these attacks based on simple statistical parameters and evaluate the cost and security of different possible countermeasures.
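The statistical core of such attacks is correlation power analysis: correlate measured power traces against the leakage predicted under each key guess. The sketch below simulates traces with a toy S-box standing in for the AES S-box; it is a textbook CPA loop, not the paper's FPGA measurement setup. Requires numpy.

```python
# Textbook correlation power analysis on simulated traces (toy S-box; real
# attacks use traces measured from the device under test).

import numpy as np
rng = np.random.default_rng(0)

SBOX = rng.permutation(256)        # stand-in for the AES S-box
KEY, N = 0x5C, 2000

plaintexts = rng.integers(0, 256, N)
hw = np.array([bin(x).count("1") for x in range(256)])
# Simulated leakage: Hamming weight of the S-box output plus Gaussian noise.
traces = hw[SBOX[plaintexts ^ KEY]] + rng.normal(0, 2.0, N)

# The correct key guess maximizes correlation between model and measurement:
best = max(range(256),
           key=lambda g: abs(np.corrcoef(hw[SBOX[plaintexts ^ g]], traces)[0, 1]))
print(hex(best))                   # -> 0x5c with high probability
```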

152 citations


Journal ArticleDOI
TL;DR: The analysis of weighted properties shows that centrality-driven attacks are capable of shattering the network's communication or transport properties even at a very low level of damage in the connectivity pattern, and the inclusion of weight and traffic provides evidence for the extreme vulnerability of complex networks to any targeted strategy.
Abstract: In real networks complex topological features are often associated with a diversity of interactions as measured by the weights of the links. Moreover, spatial constraints may also play an important role, resulting in a complex interplay between topology, weight, and geography. In order to study the vulnerability of such networks to intentional attacks, these attributes must therefore be considered along with the topological quantities. In order to tackle this issue, we consider the case of the worldwide airport network, which is a weighted heterogeneous network whose evolution and structure are influenced by traffic and geographical constraints. We first characterize relevant topological and weighted centrality measures and then use these quantities as selection criteria for the removal of vertices. We consider different attack strategies and different measures of the damage achieved in the network. The analysis of weighted properties shows that centrality-driven attacks are capable of shattering the network's communication or transport properties even at a very low level of damage in the connectivity pattern. The inclusion of weight and traffic therefore provides evidence for the extreme vulnerability of complex networks to any targeted strategy and the need to consider these attributes as key features in the design and development of defensive strategies.
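The attack-strategy methodology can be sketched on a synthetic weighted graph. The network and traffic proxy below are assumptions (the paper uses the worldwide airport network with real traffic data); requires networkx.

```python
# Strength-driven node removal on a synthetic weighted graph: traffic
# collapses long before the topology fragments.

import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=42)
for u, v in G.edges:
    G[u][v]["weight"] = (G.degree(u) * G.degree(v)) ** 0.5   # traffic proxy

total_traffic = G.size(weight="weight")
# Attack order: decreasing strength (weighted degree), a centrality measure.
order = sorted(G, key=lambda n: G.degree(n, weight="weight"), reverse=True)

H = G.copy()
for step, node in enumerate(order[:10], 1):
    H.remove_node(node)
    giant = max(nx.connected_components(H), key=len)
    print(f"{step:2d} hubs removed: giant component {len(giant)/200:4.0%}, "
          f"traffic left {H.size(weight='weight')/total_traffic:4.0%}")
```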

Journal ArticleDOI
TL;DR: In this article, the authors present a methodology to estimate the risk of an aquifer being polluted from concentrated and/or dispersed sources, applying an overlay and index method involving several parameters.
Abstract: The assessment of groundwater vulnerability to pollution aims at highlighting areas at a high risk of being polluted. This study presents a methodology to estimate the risk of an aquifer being polluted from concentrated and/or dispersed sources, applying an overlay and index method involving several parameters. The parameters are categorized into three factor groups: factor group 1 includes parameters relevant to the internal aquifer system’s properties, thus determining the intrinsic aquifer vulnerability to pollution; factor group 2 comprises parameters relevant to the external stresses to the system, such as human activities and rainfall effects; factor group 3 incorporates specific geological settings, such as the presence of geothermal fields or salt intrusion zones, into the computation process. Geographical information systems have been used for data acquisition and processing, coupled with a multicriteria evaluation technique enhanced with fuzzy factor standardization. Moreover, besides assigning weights to factors, a second set of weights, i.e., order weights, has been applied to factors on a pixel by pixel basis, thus allowing control of the level of risk in the vulnerability determination and the enhancement of local site characteristics. Individual analysis of each factor group resulted in three intermediate groundwater vulnerability to pollution maps, which were combined in order to produce the final composite groundwater vulnerability map for the study area. The method has been applied in the region of Eastern Macedonia and Thrace (Northern Greece), an area of approximately 14,000 km2. The methodology has been tested and calibrated against the measured nitrate concentration in wells, in the northwest part of the study area, providing results related to the aggregation and weighting procedure.
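The overlay-and-index computation with order weights reduces to per-pixel weighted sums. A toy sketch with illustrative factor names, weights, and a 2x2 raster follows; the study's factor groups and fuzzy standardization are far richer. Requires numpy.

```python
# Per-pixel overlay-and-index with factor weights and order weights.

import numpy as np

depth_to_water = np.array([[0.9, 0.4], [0.7, 0.2]])   # standardized to [0, 1]
land_use       = np.array([[0.8, 0.6], [0.3, 0.5]])
rainfall       = np.array([[0.5, 0.5], [0.6, 0.9]])

stack = np.stack([depth_to_water, land_use, rainfall])
factor_w = np.array([0.5, 0.3, 0.2])[:, None, None]   # factor weights

# Order weights apply to the per-pixel *ranked* weighted factors; emphasizing
# the largest values tilts the result toward risk aversion.
order_w = np.array([0.6, 0.3, 0.1])
ranked = np.sort(stack * factor_w, axis=0)[::-1]      # descending per pixel
vulnerability = np.tensordot(order_w, ranked, axes=1)
print(vulnerability.round(2))
```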

Patent
Christina Woody Mercier1
24 May 2006
TL;DR: In this article, out-of-band information from a storage area network (SAN) device is analyzed to identify a vulnerability in the SAN, and the analysis can be conducted by a policy-based data path analyzer device.
Abstract: Characterizing a storage area network (SAN). Out-of-band information can be received from a SAN device. The information describes a SAN device type to which the SAN device belongs. Out-of-band information is received from the SAN device describing a performance characteristic of the SAN device. Relationships between the SAN device and other devices within the SAN are identified based on the out-of-band information received. The out-of-band information received is analyzed to identify a vulnerability in the SAN. In-band-data can also be received and analyzed to identify the vulnerability. The analysis can be conducted by a policy based data path analyzer device. Automated provisioning can be conducted based on the vulnerability identified.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation and describes an implementation that uses a strong cipher, the Advanced Encryption Standard (AES), to perform randomization.
Abstract: One of the most common forms of security attacks involves exploiting a vulnerability to inject malicious code into an executing application and then cause the injected code to be executed. A theoretically strong approach to defending against any type of code-injection attack is to create and use a process-specific instruction set that is created by a randomization algorithm. Code injected by an attacker who does not know the randomization key will be invalid for the randomized processor, effectively thwarting the attack. This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation. The paper makes three contributions beyond previous work on ISR. First, we describe an implementation that uses a strong cipher, the Advanced Encryption Standard (AES), to perform randomization. AES is generally believed to be impervious to known attack methodologies. Second, we demonstrate that ISR using AES can be implemented practically and efficiently (considering both execution time and code size overheads) without requiring special hardware support. The third contribution is that our approach detects malicious code before it is executed. Previous approaches relied on probabilistic arguments that execution of non-randomized foreign code would eventually cause a fault or runtime exception.
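The mechanism can be illustrated on a toy bytecode VM. A hash-based XOR keystream stands in for AES here, and, like the earlier ISR schemes the paper improves on, this toy catches injected code only probabilistically (when junk decodes to an invalid opcode); the paper's AES-based design instead validates code before executing it.

```python
# ISR on a toy bytecode VM (XOR keystream as a stand-in for AES).

import hashlib, itertools

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def randomize(code: bytes, key: bytes) -> bytes:      # XOR: also de-randomizes
    return bytes(c ^ k for c, k in zip(code, keystream(key, len(code))))

def vm_run(enc_code: bytes, key: bytes):
    code = randomize(enc_code, key)                   # de-randomize at fetch
    i, stack = 0, []
    while i < len(code):
        op = code[i]; i += 1
        if op == 0x01: stack.append(code[i]); i += 1  # push immediate
        elif op == 0x02: stack.append(stack.pop() + stack.pop())
        elif op == 0x03: print(stack.pop())
        elif op == 0x00: return
        else: raise RuntimeError(f"invalid opcode {op:#04x}")

key = b"process-specific-key"
program = bytes([0x01, 2, 0x01, 3, 0x02, 0x03, 0x00])    # computes 2 + 3
vm_run(randomize(program, key), key)                     # prints 5
try:
    vm_run(program, key)      # injected (un-randomized) code decodes to junk
except Exception as e:
    print("injection blocked:", e)
```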

Proceedings ArticleDOI
04 Sep 2006
TL;DR: A P2P-based overlay for intrusion detection (overlay IDS) that addresses the insider threat by means of a trust-aware engine for correlating alerts and an adaptive scheme for managing trust is proposed.
Abstract: Collaborative intrusion detection systems (IDSs) have a great potential for addressing the challenges posed by the increasing aggressiveness of current Internet attacks. However, one of the major concerns with the proposed collaborative IDSs is their vulnerability to the insider threat. Malicious intruders, infiltrating such a system, could poison the collaborative detectors with false alarms, disrupting the intrusion detection functionality and placing at risk the whole system. In this paper, we propose a P2P-based overlay for intrusion detection (Overlay IDS) that addresses the insider threat by means of a trust-aware engine for correlating alerts and an adaptive scheme for managing trust. We have implemented our system using JXTA framework and we have evaluated its effectiveness for preventing the spread of a real Internet worm over an emulated network. The evaluation results show that our Overlay IDS significantly increases the overall survival rate of the network.
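A sketch of trust-weighted alert correlation with a beta-reputation style update. The two-counter trust model and the alarm threshold are illustrative assumptions; the paper's adaptive trust-management scheme is richer.

```python
# Trust-aware alert correlation: an insider flooding false alarms loses
# influence as its trust decays.

class Peer:
    def __init__(self):
        self.confirmed = 0      # alerts later confirmed as true
        self.false = 0          # alerts that proved false

    @property
    def trust(self):            # expected reliability; new peers start at 0.5
        return (self.confirmed + 1) / (self.confirmed + self.false + 2)

def correlate(alerts, peers, threshold=1.0):
    """Raise a global alarm only when trust-weighted evidence is sufficient."""
    return sum(peers[pid].trust for pid, _sig in alerts) >= threshold

peers = {"a": Peer(), "b": Peer(), "c": Peer()}
peers["c"].false = 8            # an infiltrated insider with a poor record

print(correlate([("c", "worm"), ("c", "worm")], peers))                 # False
print(correlate([("a", "worm"), ("b", "worm"), ("c", "worm")], peers))  # True
```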

Journal ArticleDOI
26 Jun 2006
TL;DR: An overview of phishing education is provided, focusing on context-aware attacks, and a new strategy for educating users by combining phishing IQ tests and class discussions is introduced.
Abstract: Phishing, e-mails sent out by hackers to lure unsuspecting victims into giving up confidential information, has been the cause of countless security breaches and has experienced in the last year an increase in frequency and diversity. While regular phishing attacks are easily thwarted, designing the attack to include user context information could potentially increase the user's vulnerability. To prevent this, phishing education needs to be considered. In this paper we provide an overview of phishing education, focusing on context-aware attacks, and introduce a new strategy for educating users by combining phishing IQ tests and class discussions. The technique encompasses displaying both legitimate and fraudulent e-mails to users and having them identify the phishing attempts from the authentic e-mails. Proper implementation of this system helps teach users what to look for in e-mails, and how to protect their confidential information from being caught in the nets of phishers. The strategy was applied in Introduction to Computing courses as part of the computer security component. Class assessment indicates an increased level of awareness and better recognition of attacks.

Patent
02 Mar 2006
TL;DR: In this article, a service-level security risk analysis system, methods, and Graphical User Interfaces (GUI) is described, and at least one security risk to the service is determined by analyzing security vulnerabilities associated with the identified assets.
Abstract: Information system service-level security risk analysis systems, methods, and Graphical User Interfaces are disclosed. Assets of an information system that have relationships with a service provided by the information system are identified, and at least one security risk to the service is determined by analyzing security vulnerabilities associated with the identified assets. A consolidated representation of the service is provided, and includes an indication of the determined security risk(s) and an indication of a relationship between the service and at least one of the identified assets. The security risk indication may include indications of multiple security parameters. Security risks may be represented differently depending on whether they arise from a security vulnerability of an asset that has a relationship with the service or a security vulnerability of an asset that has a relationship with the service only through a relationship with an asset that has a relationship with the service.

Proceedings ArticleDOI
30 Oct 2006
TL;DR: This paper proposes a packet vaccine mechanism that randomizes address-like strings in packet payloads to carry out fast exploit detection, vulnerability diagnosis and signature generation, shielding the underlying vulnerability from further attacks.
Abstract: In biology, a vaccine is a weakened strain of a virus or bacterium that is intentionally injected into the body for the purpose of stimulating antibody production. Inspired by this idea, we propose a packet vaccine mechanism that randomizes address-like strings in packet payloads to carry out fast exploit detection, vulnerability diagnosis and signature generation. An exploit with a randomized jump address behaves like a vaccine: it will likely cause an exception in a vulnerable program's process when attempting to hijack the control flow, and thereby expose itself. Taking that exploit as a template, our signature generator creates a set of new vaccines to probe the program, in an attempt to uncover the necessary conditions for the exploit to happen. A signature is built upon these conditions to shield the underlying vulnerability from further attacks. In this way, packet vaccine detects and filters exploits in a black-box fashion, i.e., avoiding the expense of tracking the program's execution flow. We present the design of the packet vaccine mechanism and an example of its application. We also describe our proof-of-concept implementation and the evaluation of our technique using real exploits.
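The vaccine-generation step reduces to scrambling anything address-like in a payload so a hijacked control transfer faults and exposes itself. In the sketch below, the 32-bit little-endian word format and the stack-address range are illustrative assumptions.

```python
# Vaccine generation: randomize address-like words in a payload.

import random, struct

SUSPECT_LO, SUSPECT_HI = 0xBF000000, 0xBFFFFFFF   # assumed stack-address range

def vaccinate(payload: bytes) -> bytes:
    out = bytearray(payload)
    for i in range(len(out) - 3):
        (word,) = struct.unpack_from("<I", out, i)
        if SUSPECT_LO <= word <= SUSPECT_HI:
            # Still address-like, but the exploit's jump target is destroyed:
            struct.pack_into("<I", out, i, random.randint(SUSPECT_LO, SUSPECT_HI))
    return bytes(out)

exploit = b"GET /" + b"A" * 16 + struct.pack("<I", 0xBFFFF7A4) + b" HTTP/1.0"
probe = vaccinate(exploit)
# Feeding `probe` to the service: a crash flags both the exploit attempt and
# the vulnerable control transfer, without tracking the program's execution.
print(probe.hex())
```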

Journal ArticleDOI
TL;DR: Using a novel data set, estimates on attack propensity and how it changes with disclosure and patching of vulnerabilities are provided and suggest that on average both secret and published vulnerabilities attract fewer attacks than patched (published and patched) vulnerabilities.
Abstract: Research in information security, risk management and investment has grown in importance over the last few years. However, without reliable estimates of attack probabilities, risk management is difficult to do in practice. Using a novel data set, we provide estimates of attack propensity and how it changes with disclosure and patching of vulnerabilities. Disclosure of software vulnerabilities has been controversial. On one hand are those who propose full and instant disclosure, whether the patch is available or not, and on the other hand are those who argue for limited or no disclosure. Which of the two policies is socially optimal depends critically on how attack frequency changes with disclosure and patching. In this paper, we empirically explore the impact of vulnerability information disclosure and availability of patches on attacks targeting the vulnerability. Our results suggest that on average both secret (non-published) and published (published and not patched) vulnerabilities attract fewer attacks than patched (published and patched) vulnerabilities. When we control for time since publication and patches, we find that patching an already known vulnerability decreases the number of attacks, although attacks gradually increase with time after patch release. Patching an unknown vulnerability, however, causes a spike in attacks, which then gradually decline after patch release. Attacks on secret vulnerabilities slowly increase with time until the vulnerability is published and then rapidly decrease with time after publication.

Journal ArticleDOI
01 Mar 2006
TL;DR: A Genetic Algorithm (GA)-based approach enabling organizations to choose the minimal-cost security profile providing the maximal vulnerability coverage is presented, and this approach is compared to an enumerative approach for a given test set.
Abstract: Organizations are making substantial investments in information security to reduce the risk presented by vulnerabilities in their information technology (IT) infrastructure. However, each security technology only addresses specific vulnerabilities and potentially creates additional vulnerabilities. The objective of this research is to present and evaluate a Genetic Algorithm (GA)-based approach enabling organizations to choose the minimal-cost security profile providing the maximal vulnerability coverage. This approach is compared to an enumerative approach for a given test set. The GA-based approach provides favorable results, eventually leading to improved tools for supporting information security investment decisions.
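A compact sketch of the GA formulation with hypothetical cost and coverage data: a candidate security profile is a bit-vector over technologies, and fitness rewards vulnerability coverage while penalizing cost. The paper's encoding and operators may differ.

```python
# GA for picking a minimal-cost, maximal-coverage security profile.

import random
random.seed(7)

COST   = [30, 55, 20, 45, 25, 60]                       # per technology
COVERS = [{0, 1}, {1, 2, 3}, {4}, {3, 4, 5}, {0, 5}, {2, 5}]
N_VULNS, LAMBDA = 6, 0.4

def fitness(bits):
    covered = set().union(*(COVERS[i] for i, b in enumerate(bits) if b))
    cost = sum(c for c, b in zip(COST, bits) if b)
    return len(covered) / N_VULNS - LAMBDA * cost / sum(COST)

def evolve(pop_size=30, gens=60):
    pop = [[random.randint(0, 1) for _ in COST] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[: pop_size // 2], []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(COST))
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < 0.2:                   # mutation
                j = random.randrange(len(COST))
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, f"fitness = {fitness(best):.3f}")
```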

Patent
02 Mar 2006
TL;DR: In this article, security vulnerability information aggregation techniques are disclosed, where the vulnerability information associated with one or more security vulnerabilities is obtained from multiple sources and aggregated into respective unified vulnerability definitions.
Abstract: Security vulnerability information aggregation techniques are disclosed. Vulnerability information associated with one or more security vulnerabilities is obtained from multiple sources and aggregated into respective unified vulnerability definitions for the one or more security vulnerabilities. Aggregation may involve format conversion, content aggregation, or both in some embodiments. Unified vulnerability definitions may be distributed to vulnerability information consumers in accordance with consumer-specific policies. Storage of vulnerability information received from the sources may allow the aggregation process to be performed on existing vulnerability information "retro-actively". Related data structures and Graphical User Interfaces (GUIs) are also disclosed.

Book ChapterDOI
TL;DR: This paper provides a first attempt to structure the field by proposing a terminology for distinct concepts and defining criteria to allow better comparability between different approaches.
Abstract: Practical computer (in)security is largely driven by the existence of and knowledge about vulnerabilities, which can be exploited to breach security mechanisms. Although the discussion on details of responsible vulnerability disclosure is controversial, there is a sort of consensus that better information sharing is socially beneficial. In recent years we have observed the emergence of “vulnerability markets” as a means to stimulate the exchange of information. However, this term subsumes a broad range of different concepts, which are prone to confusion. This paper provides a first attempt to structure the field by (1) proposing a terminology for distinct concepts and (2) defining criteria to allow for better comparability between different approaches. An application of this framework to four market types shows notable differences between the approaches.

Journal ArticleDOI
TL;DR: In contrast to the usual emphasis on coordination and capacity, the authors argue for conceptualizing local imperatives attendant to homeland security as collective action problems requiring the construction of local performance regimes. Performance regimes must engage three challenges: (1) to enlist diverse stakeholders around a collective local security goal despite varying perceptions of its immediacy; (2) to persuade participants to sustain their involvement in the face of competing demands; and (3) to overcome collective action problems to create a durable coalition around performance goals necessary for reducing local vulnerability.
Abstract: Paradoxically, the greater the national security threats, the more important the role of local policy in the United States. In this article we examine homeland security initiatives, particularly the tension between risk and vulnerability, and the governance dilemmas they pose for local communities. In contrast to the usual emphasis on coordination and capacity, we argue for conceptualizing local imperatives attendant to homeland security as collective action problems requiring the construction of local performance regimes. Performance regimes must engage three challenges: (1) to enlist diverse stakeholders around a collective local security goal despite varying perceptions of its immediacy; (2) to persuade participants to sustain their involvement in the face of competing demands; and (3) to create a durable coalition around performance goals necessary for reducing local vulnerability. Using these analytic categories casts local homeland security issues in strategic terms; it also encourages comparisons of local governance arrangements to respond to risk and vulnerability. Regardless of the national character of homeland security policy, the reality is that all terrorism is local; ultimately, so are all security initiatives.

Proceedings ArticleDOI
23 Apr 2006
TL;DR: This paper presents a simulation-based study of the impacts of different types of attacks on mesh-based multicast in MANETs under various security threats, and studies how the processing delay of legitimate nodes, the number of attackers and their positions affect the performance metrics of a multicast session.
Abstract: Security is an essential requirement in mobile ad hoc networks (MANETs). Compared to wired networks, MANETs are more vulnerable to security attacks due to the lack of a trusted centralized authority, easy eavesdropping, dynamic network topology, and limited resources. The security issue of MANETs in group communications is even more challenging because of the involvement of multiple senders and multiple receivers. In this paper, we present a simulation-based study of the impacts of different types of attacks on mesh-based multicast in MANETs. We consider the most common types of attacks, namely rushing attack, blackhole attack, neighbor attack and jellyfish attack. Specifically we study how the processing delay of legitimate nodes, the number of attackers and their positions affect the performance metrics of a multicast session such as packet delivery ratio, throughput, end-to-end delay, and delay jitter. To the best of our knowledge, this is the first paper that studies the vulnerability and the performance of multicast in MANETs under various security threats.

Patent
31 Mar 2006
TL;DR: In this paper, a system and method for managing security testing from plural vendors is presented, which relates to maintaining a security database by correlating multiple sources of vulnerability data and also to managing security test from plural vendor.
Abstract: The subject matter relates generally to a system and method for managing security testing. Particularly, this invention relates to maintaining a security database by correlating multiple sources of vulnerability data and also to managing security testing from plural vendors. This invention also relates to providing secure session tracking by performing plural authentications of a user.

Proceedings ArticleDOI
30 Oct 2006
TL;DR: This work uses the National Vulnerability Database, NVD, provided by NIST, to find the vulnerability history of the services running on the system, and from the frequency and severity of the past vulnerabilities, it measures the historical vulnerability of the policy using a decay factor.
Abstract: Evaluation of security policies, specifically access control policies, plays an important part in securing the network by ensuring that policies are correct and consistent. Quality of protection (QoP) of a policy depends on a number of factors. Thus it is desirable to have one unified score based on these factors to judge the quality of the policy and to compare policies. In this context, we present our method of calculating a metric based on a number of factors like the vulnerabilities present in the system, vulnerability history of the services and their exposure to the network, and traffic patterns. We measure the existing vulnerability by combining the severity scores of the vulnerabilities present in the system. We mine the National Vulnerability Database, NVD, provided by NIST, to find the vulnerability history of the services running on the system, and from the frequency and severity of the past vulnerabilities, we measure the historical vulnerability of the policy using a decay factor. In both cases, we take into account the exposure of the service to the network and the traffic volume handled by the service. Finally, we combine these scores into one unified score - the Policy Security Score.
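The historical-vulnerability component with a decay factor can be sketched directly from this description. The severities, ages, the 0.8 decay, and the combination rule below are illustrative assumptions; the paper mines NVD for the real per-service data.

```python
# Historical-vulnerability scoring with decay, weighted by exposure/traffic.

def historical_vulnerability(history, decay=0.8):
    """history: (severity 0-10, age in years) per past vulnerability."""
    return sum(sev * decay ** age for sev, age in history)

def policy_score(services):
    """services: (exposure 0-1, traffic share 0-1, current severity, history)."""
    return sum(exposure * (1 + traffic) *
               (current + historical_vulnerability(history))
               for exposure, traffic, current, history in services)

ssh  = (1.0, 0.2, 4.3, [(7.5, 1), (5.0, 3)])    # internet-facing service
smtp = (0.5, 0.6, 0.0, [(9.0, 2), (6.5, 4)])    # partially filtered service
print(f"policy security score: {policy_score([ssh, smtp]):.1f}")
```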

Patent
02 Mar 2006
TL;DR: In this paper, a definition of a security vulnerability, which includes multiple asset characteristics such as an asset platform that may be exploited via the security vulnerability and an asset platforms that is affected when the exploited asset platform is exploited by the security vulnerabilities, is compared with definitions of one or more assets of an information system.
Abstract: Systems and methods of associating security vulnerabilities and assets, and related Graphical User Interfaces (GUIs) and data structures, are disclosed. A definition of a security vulnerability, which includes multiple asset characteristics such as an asset platform that may be exploited via the security vulnerability and an asset platform that is affected when the exploited asset platform is exploited via the security vulnerability, is compared with definitions of one or more assets of an information system. An association between the security vulnerability and an asset is made if the definition of the asset includes a first asset characteristic of the security vulnerability definition and either the definition of the asset or the definition of another asset that has a relationship with the asset includes a second asset characteristic of the security vulnerability definition. The security vulnerability definition may also identify an asset platform that protects against the vulnerability.

Proceedings ArticleDOI
06 Mar 2006
TL;DR: Results show that the L2 tag array can be as susceptible as first-level instruction and data caches (IL1/DL1) to soft errors.
Abstract: Memory elements are the most vulnerable system component to soft errors. Since memory elements in cache arrays consume a large fraction of the die in modern microprocessors, the probability of particle strikes in these elements is high and can significantly impact overall processor reliability. Previous work (Asadi et al., 2005) has developed effective metrics to accurately measure the vulnerability of cache memory elements. Based on these metrics, we have developed a reliability-performance evaluation framework, which has been built upon the Simplescalar simulator. In this work, we focus on the reliability aspects of L1 and L2 caches. Specifically, we present algorithms for tag vulnerability computation and investigate and report in detail on the vulnerability of data, tag, and status bits in the L2 array. Experiments on SPECint2K and SPECfp2K benchmarks show that one class of error, replacement error, makes up almost 85% of the total tag vulnerability of a 1MB write-back L2 cache. In addition, the vulnerability of L2 tag-addresses significantly increases as the size of the memory address space increases. Results show that the L2 tag array can be as susceptible as first-level instruction and data caches (IL1/DL1) to soft errors.
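The residency-based flavor of such vulnerability metrics can be sketched as the fraction of time a structure holds bits whose corruption matters; the interval model below is illustrative, not the paper's detailed tag-array analysis. A dirty line's tag is vulnerable from fill until write-back, since a flipped tag bit sends the data to the wrong address (the replacement error that dominates the paper's results).

```python
# Residency-based vulnerability: fraction of line-cycles during which a tag
# must stay correct (dirty fill-to-writeback intervals; illustrative numbers).

def tag_vulnerability(intervals, total_cycles):
    """intervals: (fill cycle, writeback/evict cycle, dirty?) per cached line."""
    vulnerable = sum(wb - fill for fill, wb, dirty in intervals if dirty)
    return vulnerable / (len(intervals) * total_cycles)

lines = [(100, 4100, True), (200, 900, False), (500, 9500, True)]
print(f"tag vulnerability: {tag_vulnerability(lines, 10_000):.0%}")
```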

Journal ArticleDOI
TL;DR: This paper cryptanalyzes Yang's protocol, presents a DoS attack against it, and proposes a Secure Identification and Key agreement protocol with user Anonymity (SIKA) that overcomes this limitation while achieving security features such as identification, authentication, key agreement and user anonymity.

Journal ArticleDOI
TL;DR: With the potential of a fully- or semi-automatic inventory of the buildings and their parameters, high-resolution satellite data and techniques for their processing are a useful supporting tool for the assessment of vulnerability.
Abstract: High-resolution space-borne remote sensing data are investigated for their potential to extract relevant parameters for a vulnerability analysis of buildings in European countries. For an evaluation of large earthquake scenarios, the number of parameters in models for vulnerability is reduced to a minimum of relevant information such as the type of building (age, material, number of storeys) and the geological and spatial context. Building-related parameters can be derived from remote sensing data either directly (e.g. height) or indirectly based on the recognition of the urban structure type in which the buildings are located. With the potential of a fully- or semi-automatic inventory of the buildings and their parameters, high-resolution satellite data and techniques for their processing are a useful supporting tool for the assessment of vulnerability.