
Showing papers on "Vulnerability (computing) published in 2007"


Journal ArticleDOI
TL;DR: In this paper, the authors study a strategic model in which a defender must allocate defensive resources to a collection of locations and an attacker must choose a location to attack. They show that the defender prefers to allocate resources in a centralized (rather than decentralized) manner, that the optimal allocation of resources can be non-monotonic in the value of the attacker's outside option, and that the defender prefers her defensive allocation to be public rather than secret.
Abstract: We study a strategic model in which a defender must allocate defensive resources to a collection of locations and an attacker must choose a location to attack. In equilibrium, the defender sometimes optimally leaves a location undefended and sometimes prefers a higher vulnerability at a particular location even if a lower risk could be achieved at zero cost. The defender prefers to allocate resources in a centralized (rather than decentralized) manner, the optimal allocation of resources can be non-monotonic in the value of the attacker's outside option, and the defender prefers her defensive allocation to be public rather than secret. Copyright 2007 Blackwell Publishing, Inc.

332 citations


DOI
01 May 2007
TL;DR: The next generation of the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) methodology, OCTAVE Allegro, is introduced, which leads the organization to consider people, technology, and facilities in the context of their relationship to information and the business processes and services they support.
Abstract: This technical report introduces the next generation of the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) methodology, OCTAVE Allegro. OCTAVE Allegro is a methodology to streamline and optimize the process of assessing information security risks so that an organization can obtain sufficient results with a small investment in time, people, and other limited resources. It leads the organization to consider people, technology, and facilities in the context of their relationship to information and the business processes and services they support. This report highlights the design considerations and requirements for OCTAVE Allegro based on field experience with existing OCTAVE methods and provides guidance, worksheets, and examples that an organization can use to begin performing OCTAVE Allegro-based risk assessments.

257 citations


Journal ArticleDOI
TL;DR: A broad overview of cyber security and risk assessment for SCADA and DCS is provided, the main industry organizations and government groups working in this area are introduced, and a comprehensive review of the literature to date is given.
Abstract: The growing dependence of critical infrastructures and industrial automation on interconnected physical and cyber-based control systems has resulted in a growing and previously unforeseen cyber security threat to supervisory control and data acquisition (SCADA) and distributed control systems (DCSs). It is critical that engineers and managers understand these issues and know how to locate the information they need. This paper provides a broad overview of cyber security and risk assessment for SCADA and DCS, introduces the main industry organizations and government groups working in this area, and gives a comprehensive review of the literature to date. Major concepts related to the risk assessment methods are introduced with references cited for more detail. Included are risk assessment methods such as HHM, IIM, and RFRM, which have been applied successfully to SCADA systems with many interdependencies and have highlighted the need for quantifiable metrics. Presented in broad terms is probabilistic risk analysis (PRA), which includes methods such as FTA, ETA, and FMEA. The paper concludes with a general discussion of two recent methods (one based on compromise graphs and one on augmented vulnerability trees) that quantitatively determine the probability of an attack, the impact of the attack, and the reduction in risk associated with a particular countermeasure.

246 citations


Journal ArticleDOI
TL;DR: It is shown that vulnerability announcements lead to a negative and significant change in a software vendor's market value, which is more negative if the vendor fails to provide a patch at the time of disclosure.
Abstract: Security defects in software cost millions of dollars to firms in terms of downtime, disruptions, and confidentiality breaches. However, the economic implications of these defects for software vendors are not well understood. Lack of legal liability and the presence of switching costs and network externalities may protect software vendors from incurring significant costs in the event of a vulnerability announcement, unlike such industries as auto and pharmaceuticals, which have been known to suffer significant loss in market value in the event of a defect announcement. Although research in software economics has studied firms' incentives to improve overall quality, there have not been any studies which show that software vendors have an incentive to invest in building more secure software. The objectives of this paper are twofold. 1) We examine how a software vendor's market value changes when a vulnerability is announced. 2) We examine how firm and vulnerability characteristics mediate the change in the market value of a vendor. We collect data from leading national newspapers and industry sources, such as the Computer Emergency Response Team (CERT), by searching for reports on published software vulnerabilities. We show that vulnerability announcements lead to a negative and significant change in a software vendor's market value. In our sample, on average, a vendor loses around 0.6 percent of its stock price when a vulnerability is reported. We find that a software vendor loses more market value if the market is competitive or if the vendor is small. To provide further insight, we use the information content of the disclosure announcement to classify vulnerabilities into various types. We find that the change in stock price is more negative if the vendor fails to provide a patch at the time of disclosure. Also, more severe flaws have a significantly greater impact. Our analysis provides many interesting implications for software vendors as well as policy makers.

232 citations
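
The 0.6 percent average loss is estimated with a standard event-study design: a market model is fit over an estimation window, and the abnormal return on the disclosure day is the vendor's actual return minus the model's prediction. A minimal sketch of that calculation, using synthetic returns purely for illustration (the paper's data and exact specification are not reproduced here):

    # Market-model event study sketch (synthetic returns, illustrative only).
    # Abnormal return on the event day = actual return - (alpha + beta * market).

    def ols(x, y):
        """Ordinary least squares fit of y = alpha + beta * x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
               sum((a - mx) ** 2 for a in x)
        return my - beta * mx, beta  # alpha, beta

    # Estimation window: hypothetical daily returns of the vendor and the market.
    market = [0.001, -0.002, 0.003, 0.000, 0.002, -0.001, 0.004, -0.003]
    vendor = [0.002, -0.001, 0.004, 0.001, 0.003, -0.002, 0.005, -0.004]
    alpha, beta = ols(market, vendor)

    # Event day: vulnerability disclosed, no patch available.
    market_event, vendor_event = 0.001, -0.005
    abnormal = vendor_event - (alpha + beta * market_event)
    print(f"abnormal return on disclosure day: {abnormal:.4%}")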


Journal ArticleDOI
TL;DR: This publication contains reprint articles, available through the Ask*IEEE Document Delivery Service, for which IEEE does not hold copyright.

215 citations


Journal ArticleDOI
TL;DR: In this article, the authors characterize the conditions under which OSS communities develop and sustain pro-social commitments, and point out the vulnerability of these conditions to developments in patent legislation.

213 citations


Book
Lisa C. Smith, Ali Subandoro
01 Jan 2007
TL;DR: In this article, the authors present a new avenue for measuring food security, for both small and large populations, based on the data collected as part of household expenditure surveys on the quantities of food acquired by households.
Abstract: Food is one of the most basic needs for human survival. Access to it is a basic human right. Moreover, the pursuit of the Millennium Development Goal to cut hunger requires a sound understanding of the related food security issues. For these reasons, accurate measurement of the food security status of populations, or their ability to gain access to sufficient high-quality food to enable them to live an active, healthy life, is imperative to all international development efforts. It is necessary for effectively targeting food-insecure populations, researching and planning appropriate interventions, and monitoring progress. As past efforts have shown, accurately estimating the amount of food people eat is costly in terms of time and money, and such measurements have thus been carried out mostly in small populations. Where measurement has been extended to large populations, such as entire countries, it has been necessary to rely on less accurate, indirect techniques based on the availability of food at the national level. This technical guide presents a new avenue for measuring food security, for both small and large populations, based on the data collected as part of household expenditure surveys on the quantities of food acquired by households. It shows how these data can be used to measure a variety of food security indicators, including the prevalence of food energy deficiency and indicators of dietary quality and economic vulnerability to food insecurity. In keeping with the approach of IFPRI's Food Security in Practice series for practitioners, the manual guides readers step by step through the process of assessing the food security status of a population. It begins by offering guidance on choosing an appropriate strategy for calculating quantities of foods acquired by households, given time constraints, financial constraints, and the nature of the population's diet. The guide then leads the practitioner through the steps of collecting the data, processing and cleaning the data, and calculating the indicators. It concludes by illustrating how to conduct some basic food security analyses. I hope that this guide will assist practitioners in increasing the accuracy of the measurement of food insecurity for a greater number of populations, including those at the country level. Greater accuracy at the country level will provide the necessary foundations for overcoming food insecurity globally.

205 citations


Proceedings ArticleDOI
24 Jun 2007
TL;DR: A methodology to evaluate the cybersecurity vulnerability using attack trees based on power system control networks is proposed and can be extended to security investment analysis.
Abstract: By penetrating the SCADA system, an intruder may remotely operate a power system using supervisory control privileges. Hence, cybersecurity has been recognized as a major concern due to the potential for intrusion into the online system. This paper proposes a methodology to evaluate the cybersecurity vulnerability using attack trees. The attack tree formulation based on power system control networks is used to evaluate the system, scenario, and leaf vulnerabilities. The measure of vulnerabilities in the power system control framework is determined based on existing cybersecurity conditions before the vulnerability indices are evaluated. After the indices are evaluated, an upper bound is imposed on each scenario vulnerability in order to determine the pivotal attack leaves that require countermeasure improvements. The proposed framework can be extended to security investment analysis.

200 citations
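
The roll-up from leaf vulnerabilities to scenario and system indices in such attack-tree formulations is conventionally done through the tree's AND/OR gates. A minimal Python sketch of that convention follows; the gate rule and the leaf values here are illustrative assumptions, not the paper's exact vulnerability index:

    # Minimal attack-tree roll-up (hypothetical structure and values).
    # Convention: an OR node succeeds if any child does; an AND node needs all.

    def evaluate(node):
        """Return the vulnerability (success probability) of a node."""
        if "leaf" in node:
            return node["leaf"]
        child_vals = [evaluate(c) for c in node["children"]]
        if node["gate"] == "OR":
            fail = 1.0
            for v in child_vals:
                fail *= 1.0 - v
            return 1.0 - fail
        prod = 1.0  # AND gate
        for v in child_vals:
            prod *= v
        return prod

    # Hypothetical SCADA intrusion scenario: dial-up OR (VPN AND weak password).
    tree = {"gate": "OR", "children": [
        {"leaf": 0.10},                      # unsecured dial-up modem
        {"gate": "AND", "children": [
            {"leaf": 0.30},                  # reachable VPN gateway
            {"leaf": 0.50},                  # guessable credentials
        ]},
    ]}

    print(f"scenario vulnerability: {evaluate(tree):.3f}")  # 0.235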


Journal ArticleDOI
TL;DR: A strategic model in which a defender must allocate defensive resources to a collection of locations, and an attacker must choose a location to attack finds that the defender prefers to allocate resources in a centralized rather than decentralized manner.
Abstract: We study a strategic model in which a defender must allocate defensive resources to a collection of locations, and an attacker must choose a location to attack. The defender does not know the attacker's preferences, while the attacker observes the defender's resource allocation. The defender's problem gives rise to negative externalities, in the sense that increasing the resources allocated to one location increases the likelihood of an attack at other locations. In equilibrium, the defender exploits these externalities to manipulate the attacker's behavior, sometimes optimally leaving a location undefended, and sometimes preferring a higher vulnerability at a particular location even if a lower risk could be achieved at zero cost. Key results of our model are as follows: (1) the defender prefers to allocate resources in a centralized (rather than decentralized) manner; (2) as the number of locations to be defended grows, the defender can cost effectively reduce the probability of a successful attack only if the number of valuable targets is bounded; (3) the optimal allocation of resources can be nonmonotonic in the relative value of the attacker's outside option; and (4) the defender prefers his or her defensive allocation to be public rather than secret.

161 citations
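
A toy numerical version of the game shows how leaving a location undefended can be optimal: the attacker observes the allocation and strikes the most attractive target, so the defender picks the allocation that minimizes the attacker's best payoff. The success function and the values below are illustrative assumptions, not the paper's parameterization:

    # Toy observed-allocation defense game (illustrative, not the paper's model).
    # Assumed success probability at a site: 1 / (1 + resources allocated there).
    from itertools import product

    values = [10.0, 6.0, 1.0]  # hypothetical target values
    budget = 4                 # defensive units to distribute

    def attacker_payoff(alloc, i):
        return values[i] / (1.0 + alloc[i])

    best = None
    for alloc in product(range(budget + 1), repeat=len(values)):
        if sum(alloc) != budget:
            continue
        target = max(range(len(values)), key=lambda i: attacker_payoff(alloc, i))
        loss = attacker_payoff(alloc, target)  # defender's expected loss
        if best is None or loss < best[1]:
            best = (alloc, loss)

    # Prints (3, 1, 0): the low-value third site is optimally left undefended.
    print("optimal allocation:", best[0], f"expected loss: {best[1]:.2f}")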


Proceedings ArticleDOI
20 Mar 2007
TL;DR: It is shown how attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources, and how they offer a promising solution for administrators to monitor and predict the progress of an intrusion and take appropriate countermeasures in a timely manner.
Abstract: This talk will discuss issues and methods for survivability of systems under malicious attacks. To protect from such attacks, it is necessary to take steps to prevent attacks from succeeding. At the same time, it is important to recognize that not all attacks can be averted at the outset; attacks that are successful to some degree must be recognized as unavoidable, and comprehensive support for identifying and responding to attacks is required. In my talk, I will describe recent research on attack graphs that represent known attack sequences attackers can use to penetrate computer networks. I will show how attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. Attack graphs can also be used to correlate received alerts, hypothesize missing alerts, and predict future alerts, all at the same time. Thus, they offer a promising solution for administrators to monitor and predict the progress of an intrusion, and take appropriate countermeasures in a timely manner.

161 citations
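
One way to make the hardening-measure computation concrete: treat the attack graph as a set of exploits with precondition and postcondition facts, then look for a smallest set of disable-able initial conditions whose removal makes the critical resource unreachable. A minimal Python sketch over a made-up two-host example (not one from the talk):

    # Hardening-set search over a toy attack graph (hypothetical example).
    # An exploit fires when all of its precondition facts hold and derives a
    # new fact; a hardening set is a set of initial facts whose removal
    # leaves the goal fact underivable.
    from itertools import combinations

    initial = {"ftp(A)", "ssh(A)", "ftp(B)"}   # disable-able initial conditions
    exploits = [                                # (preconditions, postcondition)
        ({"ftp(A)"}, "user(A)"),
        ({"ssh(A)"}, "user(A)"),
        ({"user(A)", "ftp(B)"}, "user(B)"),
        ({"user(B)"}, "root(B)"),
    ]
    goal = "root(B)"

    def goal_reachable(facts):
        facts, changed = set(facts), True
        while changed:
            changed = False
            for pre, post in exploits:
                if pre <= facts and post not in facts:
                    facts.add(post)
                    changed = True
        return goal in facts

    for k in range(len(initial) + 1):           # smallest hardening sets first
        hits = [c for c in combinations(sorted(initial), k)
                if not goal_reachable(initial - set(c))]
        if hits:
            print("minimal hardening sets:", hits)  # [('ftp(B)',)]
            break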


Proceedings ArticleDOI
26 Apr 2007
TL;DR: This paper investigates the principal security issues for wireless mesh networks (WMNs), identifies the new challenges and opportunities posed by this new networking environment, and explores approaches to secure its communication.
Abstract: The wireless mesh network (WMN) is a new wireless networking paradigm. Unlike traditional wireless networks, WMNs do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. Wireless Internet service providers are choosing WMNs to offer Internet connectivity, as they allow fast, easy, and inexpensive network deployment. One main challenge in the design of these networks is their vulnerability to security attacks. In this paper, we investigate the principal security issues for WMNs. We study the threats a WMN faces and the security goals to be achieved. We identify the new challenges and opportunities posed by this new networking environment and explore approaches to secure its communication.

Journal ArticleDOI
TL;DR: A quantitative risk assessment and management framework that supports strategic asset-level resource allocation decision making for critical infrastructure and key resource protection is proposed; extensions of this model to support strategic portfolio-level analysis and tactical risk analysis are suggested.
Abstract: This article proposes a quantitative risk assessment and management framework that supports strategic asset-level resource allocation decision making for critical infrastructure and key resource protection. The proposed framework consists of five phases: scenario identification, consequence and criticality assessment, security vulnerability assessment, threat likelihood assessment, and benefit-cost analysis. Key innovations in this methodology include its initial focus on fundamental asset characteristics to generate an exhaustive set of plausible threat scenarios based on a target susceptibility matrix (which we refer to as asset-driven analysis) and an approach to threat likelihood assessment that captures adversary tendencies to shift their preferences in response to security investments based on the expected utilities of alternative attack profiles assessed from the adversary perspective. A notional example is provided to demonstrate an application of the proposed framework. Extensions of this model to support strategic portfolio-level analysis and tactical risk analysis are suggested.

Book
01 Jan 2007
TL;DR: In this book, the authors present a model-based case study of the road network in Stockholm, Sweden, and examine the survivability of commercial backbones with peering with respect to attacks and natural disasters.
Abstract: Contents:
Overview of Reliability and Vulnerability in Critical Infrastructure
Transport Network Vulnerability: a Method for Diagnosis of Critical Locations in Transport Infrastructure Systems
A Framework for Vulnerability Assessment of Electric Power Systems
Spatio-Temporal Models for Network Economic Loss Analysis Under Unscheduled Events: A Conceptual Design
Vulnerability: A Model-Based Case Study of the Road Network in Stockholm
Survivability of Commercial Backbones with Peering: A Case Study of Korean Networks
Railway Capacity and Train Delay Relationships
A Reliability-based User Equilibrium Model for Traffic Assignment
Reliability Analysis of Road Networks and Preplanning of Emergency Rescue Paths
Continuity in Critical Network Infrastructures: Accounting for Nodal Disruptions
Analysis of Facility Systems' Reliability When Subject to Attack or a Natural Disaster
Bounding Network Interdiction Vulnerability Through Cutset Identification
Models for Reliable Supply Chain Network Design
Moving from Protection to Resiliency: A Path to Securing Critical Infrastructure

Journal ArticleDOI
TL;DR: In this paper, a stochastic programming approach to optimally reinforce and expand the transmission network so that the impact of deliberate attacks is mitigated is provided, where the network planner selects the new lines to be built accounting for the vulnerability of the transmission system against a set of credible intentional outages.
Abstract: This paper provides a stochastic programming approach to optimally reinforce and expand the transmission network so that the impact of deliberate attacks is mitigated. The network planner selects the new lines to be built accounting for the vulnerability of the transmission network against a set of credible intentional outages. The vulnerability of the transmission network is measured in terms of the expected load shed. An instance of the previously reported terrorist threat problem is solved to generate the set of credible deliberate attacks. The proposed model is formulated as a mixed-integer linear program for which efficient solvers are available. Results from a case study based on the IEEE Two Area Reliability Test System are provided and analyzed.
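
Schematically, the proposed model reads as a two-stage stochastic program: first-stage binary decisions on which candidate lines to build, second-stage load shedding under each credible attack. In simplified notation (not the paper's exact symbols):

$$ \min_{x \in \{0,1\}^{|L|}} \;\; \sum_{\ell \in L} c_\ell \, x_\ell \;+\; \sum_{k} \pi_k \, \Delta P^{\mathrm{shed}}_k(x) $$

where $x_\ell = 1$ if candidate line $\ell$ is built at cost $c_\ell$, $\pi_k$ weights credible attack $k$, and $\Delta P^{\mathrm{shed}}_k(x)$ is the minimum load shed after attack $k$ on the reinforced network. Because the inner load-shedding problem is a DC-power-flow linear program, the combined problem can be written as the single mixed-integer linear program the authors solve.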

Journal ArticleDOI
TL;DR: It is found that the characteristics of the vulnerability (vulnerability risk before and after disclosure), cost structure of the software user population, and vendor's incentives to develop a patch determine the optimal (responsible) vulnerability disclosure.
Abstract: Security vulnerabilities in software are one of the primary reasons for security breaches, and an important challenge from a knowledge management perspective is to determine how to manage the disclosure of knowledge about those vulnerabilities. The security community has proposed several disclosure mechanisms, such as full vendor, immediate public, and hybrid, and has debated the merits and demerits of these alternatives. In this paper, we study how vulnerabilities should be disclosed to minimize the social loss. We find that the characteristics of the vulnerability (vulnerability risk before and after disclosure), the cost structure of the software user population, and the vendor's incentives to develop a patch determine the optimal (responsible) vulnerability disclosure. We show that, unlike some existing vulnerability disclosure mechanisms that fail to motivate the vendor to release its patch, a responsible vulnerability disclosure policy always ensures the release of a patch. However, we find that this is not because of the threat of public disclosure, as argued by some security practitioners. In fact, not restricting the vendor with a time constraint can ensure the patch release. This result runs counter to the argument of some that setting a grace period always pushes the vendor to develop a patch. When the vulnerability affects multiple vendors, we show that the responsible disclosure policy cannot ensure that every vendor will release a patch. However, when the optimal policy does elicit a patch from each vendor, we show that the coordinator's grace period in the multiple-vendor case falls between the grace periods that it would set individually for the vendors in the single-vendor case. This implies that the coordinator does not necessarily increase the grace period to accommodate more vendors. We then extend our base model to analyze the impact on responsible vulnerability disclosure of 1) early discovery and 2) an early warning system that provides privileged vulnerability knowledge to selected users before the release of a patch. We show that while early discovery always improves the social welfare, an early warning system does not necessarily improve the social welfare.

Journal ArticleDOI
01 Aug 2007 - EPL
TL;DR: A network efficiency measure for congested networks, that captures demands, costs, flows, and behavior, is applied to the Braess paradox network in which the demands are varied over the horizon and explicit formulae are derived for the importance values of the network nodes and links.
Abstract: In this paper, we propose a network efficiency measure for congested networks, that captures demands, costs, flows, and behavior. The network efficiency/performance measure can identify which network components, that is, nodes and links, have the greatest impact in terms of their removal, due to, for example, natural disasters, structural failures, terrorist attacks, etc., and, hence, are important from both vulnerability as well as security standpoints. The new measure is applied to the Braess paradox network in which the demands are varied over the horizon and explicit formulae are derived for the importance values of the network nodes and links. This measure is applicable to such congested networks as urban transportation networks and the Internet.
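
For reference, the measure the authors propose has a simple closed form (notation lightly adapted): with demand $d_w$ and equilibrium minimal cost $\lambda_w$ for each origin/destination pair $w$, the efficiency of network $\mathcal{G}$ under demand vector $d$ is

$$ \mathcal{E}(\mathcal{G},d) \;=\; \frac{1}{n_W} \sum_{w \in W} \frac{d_w}{\lambda_w}, $$

and the importance of a component $g$ (a node, a link, or a combination) is the relative efficiency drop caused by its removal,

$$ I(g) \;=\; \frac{\mathcal{E}(\mathcal{G},d) - \mathcal{E}(\mathcal{G}-g,d)}{\mathcal{E}(\mathcal{G},d)}, $$

where $n_W$ is the number of origin/destination pairs and $\mathcal{G}-g$ denotes the network with $g$ removed.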

Patent
30 Jan 2007
TL;DR: In this paper, a method and system of determining and/or managing potential privilege escalation attacks in a system or network comprising one or more potentially heterogeneous hosts is presented, where a user interface can be used to render the potential privilege escalations as an appropriate representation.
Abstract: Disclosed herein is a method and system of determining and/or managing potential privilege escalation attacks in a system or network comprising one or more potentially heterogeneous hosts. The step of configuration scanning optionally includes making a list of operating-system-specific protection mechanisms on each host. Vulnerability scanning optionally includes the step of identifying the vulnerability position of each identified program. The transitive closure of all security attacks on the network and potential privilege escalations can be determined. A user interface optionally renders the potential privilege escalations as an appropriate representation. The method may include none or one or more of several pre-emptive mechanisms and reactive mechanisms. Further, the method may optionally include a mechanism for a periodic safety check on the system ensuring continued security on the network.
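
The transitive-closure step in such a system amounts to reachability over a "can escalate to" relation between privilege levels. A minimal Python sketch with hypothetical principals and edges (not taken from the patent):

    # Transitive closure of a toy privilege-escalation relation (hypothetical).
    # An edge (a, b) means: a principal holding privilege a can obtain b,
    # e.g. via a vulnerable program identified by the vulnerability scan.

    edges = {
        ("web_user", "db_user"),   # SQL injection in the web application
        ("db_user", "os_user"),    # command execution from the database
        ("os_user", "root"),       # local privilege-escalation vulnerability
    }

    def closure(relation):
        closed, changed = set(relation), True
        while changed:
            changed = False
            for a, b in list(closed):
                for c, d in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        return closed

    for a, b in sorted(closure(edges)):
        print(f"{a} -> {b}")       # includes the derived chain web_user -> root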

Posted Content
TL;DR: A new network performance/efficiency measure that captures flows and behavior can identify which network components, that is, nodes and links, have the greatest impact in terms of their removal and, hence, are important from both vulnerability as well as security standpoints.
Abstract: In this paper, we demonstrate how a new network performance/efficiency measure, which captures demands, flows, costs, and behavior on networks, can be used to assess the importance of network components and their rankings. We provide new results regarding the measure, which we refer to as the Nagurney-Qiang measure, or, simply, the N-Q measure, and a previously proposed one, which did not explicitly consider demands and flows. We apply both measures to such critical infrastructure networks as transportation networks and the Internet and further explore the new measure through an application to an electric power generation and distribution network in the form of a supply chain. The Nagurney and Qiang network performance/efficiency measure that captures flows and behavior can identify which network components, that is, nodes and links, have the greatest impact in terms of their removal and, hence, are important from both vulnerability as well as security standpoints.

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter explains and demonstrates the different tools available for performing vulnerability assessments, and provides examples from the most common industry-leading tools on the market today.
Abstract: This chapter explains and demonstrates the different tools available for performing vulnerability assessments. It provides examples from the most common industry-leading tools on the market today. A vulnerability is defined as a software or hardware bug or misconfiguration that a malicious individual can exploit, thereby impacting a system's confidentiality and integrity. It is the assessment tool's job to identify these bugs and misconfigurations. A vulnerability assessment tool probes a system for a specific condition that represents a vulnerability. Some tools operate by using an agent, which is a piece of software that must run on every system to be scanned; other tools operate without the use of agents, and some use a combination of the two configurations. The architecture of the scanning engines, agents, and systems will vary from product to product, but it is this architecture that affects overall scanning performance.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: The paper teaches that the mere presence of particular multithreading implementations requires a deep understanding of the interplay between the underlying hardware and software in order to appropriately judge the implied security consequences.
Abstract: The paper presents a new aspect of the PC-oriented side-channel attack arena. Specifically, we present a novel square-vs.-multiplication side-channel attack that is unique to certain simultaneous multithreading (SMT) CPU architectures and apparently cannot be carried out on CPU architectures without SMT hardware assistance. The reason for this uniqueness is that the attack does not rest, as all previous microarchitectural side-channel attacks do, upon a shared resource whose state persists across context/process switches (e.g., caches, BTBs). Instead, it exploits the fact that Intel's hyper-threading technology shares the ALU's large parallel integer (floating-point) multiplier between its two hardware threads, and the multiplier clearly does not preserve its state during context switches. As the latest OpenSSL protections against side-channel attacks are already in place, cf. (Brickell et al., 2006), our paper does not introduce a new vulnerability into the OpenSSL library. Nevertheless, our attack has an unintuitive property: longer key sizes make the attack easier, not more difficult as one might assume at first sight. The paper thus teaches that the mere presence of particular multithreading implementations requires a deep understanding of the interplay between the underlying hardware and software in order to appropriately judge the implied security consequences.

Book ChapterDOI
20 Jun 2007
TL;DR: This paper presents techniques that exploit the Tor exit policy system to greatly simplify traffic analysis, and argues that the fundamental vulnerability exposed is not specific to Tor but inherent to the problem of anonymous web browsing itself.
Abstract: This paper describes a new attack on the anonymity of web browsing with Tor. The attack tricks a user's web browser into sending a distinctive signal over the Tor network that can be detected using traffic analysis. It is delivered by a malicious exit node using a man-in-the-middle attack on HTTP. Both the attack and the traffic analysis can be performed by an adversary with limited resources. While the attack can only succeed if the attacker controls one of the victim's entry guards, the method reduces the time required for a traffic analysis attack on Tor from O(nk) to O(n + k), where n is the number of exit nodes and k is the number of entry guards. This paper presents techniques that exploit the Tor exit policy system to greatly simplify the traffic analysis. The fundamental vulnerability exposed by this paper is not specific to Tor but rather to the problem of anonymous web browsing itself. This paper also describes a related attack on users who toggle the use of Tor with the popular Firefox extension Torbutton.

Journal ArticleDOI
TL;DR: A quantitative all-hazards framework for critical asset and portfolio risk analysis (CAPRA) that considers both natural and human-caused hazards is developed that resembles the traditional model based on the notional product of consequence, vulnerability, and threat.
Abstract: This article develops a quantitative all-hazards framework for critical asset and portfolio risk analysis (CAPRA) that considers both natural and human-caused hazards. Following a discussion on the nature of security threats, the need for actionable risk assessments, and the distinction between asset and portfolio-level analysis, a general formula for all-hazards risk analysis is obtained that resembles the traditional model based on the notional product of consequence, vulnerability, and threat, though with clear meanings assigned to each parameter. Furthermore, a simple portfolio consequence model is presented that yields first-order estimates of interdependency effects following a successful attack on an asset. Moreover, depending on the needs of the decisions being made and available analytical resources, values for the parameters in this model can be obtained at a high level or through detailed systems analysis. Several illustrative examples of the CAPRA methodology are provided.
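
The "notional product" the article refers to is commonly written R = T x V x C: threat likelihood times the probability an attack succeeds times its consequence. A toy Python sketch with made-up inputs, including a crude first-order interdependency adjustment of the kind the portfolio consequence model suggests (the article assigns these parameters precise meanings that this sketch does not reproduce):

    # Notional risk product R = threat x vulnerability x consequence
    # (illustrative numbers only).

    assets = {
        # asset: (annual attack likelihood, P(attack succeeds), direct loss $M)
        "substation": (0.02, 0.60, 50.0),
        "control_center": (0.01, 0.30, 200.0),
    }
    # First-order interdependency: fraction of the rest of the portfolio's
    # consequence triggered by a successful attack on this asset.
    interdependency = {"substation": 0.10, "control_center": 0.25}

    total_consequence = sum(c for _, _, c in assets.values())

    for name, (t, v, c) in assets.items():
        indirect = interdependency[name] * (total_consequence - c)
        risk = t * v * (c + indirect)
        print(f"{name}: expected annual loss = ${risk:.2f}M")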

Proceedings ArticleDOI
03 Jan 2007
TL;DR: This paper identifies a taxonomy of software security assurance tools and defines one type of tool: Web application scanner, i.e., an automated program that examines Web applications for security vulnerabilities.
Abstract: There are many commercial software security assurance tools that claim to detect and prevent vulnerabilities in application software. However, a closer look at the tools often leaves one wondering which tools find what vulnerabilities. This paper identifies a taxonomy of software security assurance tools and defines one type of tool: the Web application scanner, i.e., an automated program that examines Web applications for security vulnerabilities. We describe the types of functions that are generally found in a Web application scanner and how to test it.

Will Drewry, Tavis Ormandy
06 Aug 2007
TL;DR: This paper presents an effective fault injection testing technique and an automation library, LibFlayer, and explores techniques for vulnerability patch analysis and guided source code auditing.
Abstract: Flayer is a tool for dynamically exposing application innards for security testing and analysis. It is implemented on the dynamic binary instrumentation framework Valgrind [17] and its memory error detection plug-in, Memcheck [21]. This paper focuses on the implementation of Flayer, its supporting libraries, and their application to software security. Flayer provides tainted, or marked, data flow analysis and instrumentation mechanisms for arbitrarily altering that flow. Flayer improves upon prior taint tracing tools with bit-precision. Taint propagation calculations are performed for each value-creating memory or register operation. These calculations are embedded in the target application's running code using dynamic instrumentation. The same technique has been employed to allow the user to control the outcome of conditional jumps and step over function calls. Flayer's functionality provides a robust foundation for the implementation of security tools and techniques. In particular, this paper presents an effective fault injection testing technique and an automation library, LibFlayer. Alongside these contributions, it explores techniques for vulnerability patch analysis and guided source code auditing. Flayer finds errors in real software. In the past year, its use has yielded the expedient discovery of flaws in security-critical software, including OpenSSH and OpenSSL.
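
Flayer's bit-precision means taint is tracked per bit rather than per byte or word. A toy Python illustration of why that matters for an AND operation; this is a conceptual sketch only, since Flayer itself performs these calculations on binary code inside Valgrind:

    # Toy bit-precise taint propagation (conceptual sketch, not Flayer's code).

    class Tainted:
        """A value plus a shadow mask: shadow bit i = 1 iff bit i is tainted."""
        def __init__(self, value, shadow):
            self.value, self.shadow = value, shadow

        def __and__(self, other):
            # A result bit stays tainted only if some operand bit is tainted
            # and no untainted 0 bit in either operand forces the result to 0.
            v = self.value & other.value
            s = (self.shadow | other.shadow)
            s &= (self.value | self.shadow) & (other.value | other.shadow)
            return Tainted(v, s)

    user_input = Tainted(0b10110000, 0b11111111)  # fully attacker-controlled
    mask       = Tainted(0b00001111, 0b00000000)  # untainted constant mask

    result = user_input & mask
    print(f"value={result.value:08b} shadow={result.shadow:08b}")
    # shadow=00001111: only the low four bits remain attacker-influenced;
    # byte-granular taint would over-approximate the result as fully tainted.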

Journal ArticleDOI
TL;DR: The results from this study indicate that the Equal Error Rate (EER) is significantly influenced by the attribute selection process and, to a lesser extent, by the authentication algorithm employed; they also provide evidence that a Probabilistic Neural Network (PNN) can be superior in terms of reduced training time and classification accuracy when compared with a typical MLFN back-propagation trained neural network.
Abstract: The majority of computer systems employ a login ID and password as the principal method for access security. In stand-alone situations, this level of security may be adequate, but when computers are connected to the internet, the vulnerability to a security breach is increased. In order to reduce vulnerability to attack, biometric solutions have been employed. In this paper, we investigate the use of a behavioural biometric based on keystroke dynamics. Although there are several implementations of keystroke dynamics available, their effectiveness is variable and dependent on the data sample and its acquisition methodology. The results from this study indicate that the Equal Error Rate (EER) is significantly influenced by the attribute selection process and to a lesser extent on the authentication algorithm employed. Our results also provide evidence that a Probabilistic Neural Network (PNN) can be superior in terms of reduced training time and classification accuracy when compared with a typical MLFN back-propagation trained neural network.
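
The Equal Error Rate reported in the study is the operating point at which the false accept rate equals the false reject rate. A minimal sketch of how an EER is read off genuine and impostor score samples; the scores here are synthetic, and the paper's features and classifiers are not reproduced:

    # Equal Error Rate from genuine/impostor score samples (synthetic data).
    # Convention assumed here: higher score = more likely the genuine user.
    import random

    def eer(genuine, impostor, steps=1000):
        lo, hi = min(genuine + impostor), max(genuine + impostor)
        best = (2.0, None)
        for i in range(steps + 1):
            t = lo + (hi - lo) * i / steps
            far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
            frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
            if abs(far - frr) < best[0]:
                best = (abs(far - frr), (t, far, frr))
        return best[1]

    random.seed(0)
    genuine  = [random.gauss(0.8, 0.10) for _ in range(500)]
    impostor = [random.gauss(0.5, 0.15) for _ in range(500)]

    t, far, frr = eer(genuine, impostor)
    print(f"threshold={t:.3f} FAR={far:.2%} FRR={frr:.2%}")  # FAR ~ FRR at EER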

Book ChapterDOI
02 Jul 2007
TL;DR: It is shown that DAA places an unnecessarily large burden on the TPM host, and it is demonstrated how corrupt administrators can exploit this weakness to violate privacy.
Abstract: The Direct Anonymous Attestation (DAA) scheme provides a means for remotely authenticating a trusted platform whilst preserving the user's privacy. The protocol has been adopted by the Trusted Computing Group (TCG) in the latest version of its Trusted Platform Module (TPM) specification. In this paper we show that DAA places an unnecessarily large burden on the TPM host. We demonstrate how corrupt administrators can exploit this weakness to violate privacy. The paper provides a fix for the vulnerability. Further privacy issues concerning linkability are identified and a framework for their resolution is developed. In addition, an optimisation to reduce the number of messages exchanged is proposed.

Patent
13 Oct 2007
TL;DR: A facility for analyzing access control configurations is described, comprising an operating system having resources and identifications of principals, the principals having access control privileges relating to the resources, the access control privileges described by access control metadata; and an access control scanner component that receives the access control metadata, determines relationships between the principals and the resources, and emits access control relations information.
Abstract: A facility is described for analyzing access control configurations. In various embodiments, the facility comprises an operating system having resources and identifications of principals, the principals having access control privileges relating to the resources, the access control privileges described by access control metadata; an access control scanner component that receives the access control metadata, determines relationships between the principals and the resources, and emits access control relations information; and an access control inference engine that receives the emitted access control relations information and an access control policy model, analyzes the received information and model, and emits a vulnerability report. In various embodiments, the facility generates an information flow based on access control relations, an access control mechanism model, and an access control policy model; determines, based on the generated information flow, whether privilege escalation is possible; and when privilege escalation is possible, indicates in a vulnerability report that the privilege escalation is possible.

Patent
20 May 2007
TL;DR: In this paper, a method for evaluating access rule violations is proposed, which includes: receiving a model of a computer network; and determining security metrics associated with a violation of an access rule in response to the model of the computer network, multiple network nodes of the network accessible according to at least one violated access rule or according to the network model, at least one vulnerability associated with the multiple nodes, and damage associated with an exploitation of the at least one vulnerability.
Abstract: A method for evaluating access rule violations, the method includes: receiving a model of a computer network; and determining security metrics associated with a violation of an access rule in response to: the model of the computer network, multiple network nodes of the computer network accessible according to at least one violated access rule or according to the network model, at least one vulnerability associated with the multiple network nodes, and damage associated with an exploitation of the at least one vulnerability.

01 Jan 2007
TL;DR: This emerging "0-day market" has unique aspects that make fair trading particularly difficult to accomplish; the issues are illustrated by two case studies of attempted sales of 0-day exploits.
Abstract: Trading of 0-day computer exploits between hackers has been taking place for as long as computer exploits have existed. A black market for these exploits has developed around their illegal use. Recently, a trend has developed toward buying and selling these exploits as a source of legitimate income for security researchers. However, this emerging "0-day market" has some unique aspects that make trading particularly difficult to accomplish in a fair manner. These problems, along with possible solutions, will be discussed. The issues are illustrated by following two case studies of attempted sales of 0-day exploits.

Journal ArticleDOI
TL;DR: In this paper, strategies for developing and implementing a successful information security awareness program are presented; the article also provides an introduction to the subject of human hacking and discusses the various countermeasures available to minimize the likelihood of such occurrences and their financial, reputational, psychological, and legal ramifications.
Abstract: Human hacking is a nontechnical kind of intrusion that relies heavily on human manipulation. Its impact is a continuing source of serious concern in the information technology arena, yet it is often underestimated because of the ease with which the technique is used to infiltrate networks through unsuspecting individuals, who are undeniably considered the "weakest link" in the security circle. Security awareness that brings about behavioral change reduces employees' vulnerability and protects against threats that exploit it, with a positive overall impact on risks related to information assets. Strategies for developing and implementing a successful information security awareness program are presented in this article, which also provides an introduction to the subject of human hacking while discussing the various countermeasures available to minimize the likelihood of such occurrences and their financial, reputational, psychological, and legal ramifications.