
Showing papers on "Vulnerability (computing) published in 2009"


Journal ArticleDOI
TL;DR: As institutions become increasingly global and interconnected and dependence on cyberspace grows, systems and processes become more vulnerable; no aspect of this phenomenon is as alarming and challenging as the need to understand and address the various risks to the security of the IS on which the authors depend.
Abstract: Modern global economic and political conditions, technological infrastructure, and socio-cultural developments all contribute to an increasingly turbulent and dynamic environment for organizations, which maintain information systems (IS) for use in business, government, and other domains. As our institutions (economic, political, military, legal, social) become increasingly global and inter-connected; as we rely more on automated control systems to provide us with energy and services; and as we establish internet-based mechanisms for coordinating this global interaction, we introduce greater vulnerability to our systems and processes. This increased dependence on cyberspace also inflates our vulnerability – isolation is no longer an option. Perhaps no aspect of this phenomenon is as alarming and challenging as the need to understand and address the various risks to the security of the IS on which we depend.

377 citations


BookDOI
12 Feb 2009
TL;DR: In this paper, the authors examine the existing research literature on self-disclosure and the Internet and propose three critical issues that unite the ways in which we can best understand the links between privacy, self-disclosure, and new technology: trust and vulnerability, costs and benefits, and control over personal information.
Abstract: This article examines the extant research literature on self-disclosure and the Internet, in particular by focusing on disclosure in computer-mediated communication and web-based forms - both in surveys and in e-commerce applications. It also considers the links between privacy and self-disclosure, and the unique challenges (and opportunities) that the Internet poses for the protection of privacy. Finally, the article proposes three critical issues that unite the ways in which we can best understand the links between privacy, self-disclosure, and new technology: trust and vulnerability, costs and benefits, and control over personal information. Central to the discussion is the notion that self-disclosure is not simply the outcome of a communication encounter: rather, it is both a product and process of interaction, as well as a way of regulating interaction dynamically. By adopting a privacy approach to understanding disclosure online, it becomes possible to consider not only media effects that encourage disclosure, but also the wider context and implications of such communicative behaviours.

300 citations


Journal ArticleDOI
TL;DR: The premise of this article is that risk to a system can be understood, defined, and quantified most effectively through a systems‐based philosophical and methodological approach, and by recognizing the central role of the system states in this process.
Abstract: The premise of this article is that risk to a system, as well as its vulnerability and resilience, can be understood, defined, and quantified most effectively through a systems-based philosophical and methodological approach, and by recognizing the central role of the system states in this process. A universally agreed-upon definition of risk has been difficult to develop; one reason is that the concept is multidimensional and nuanced. It requires an understanding that risk to a system is inherently and fundamentally a function of the initiating event, the states of the system and of its environment, and the time frame. In defining risk, this article posits that: (a) the performance capabilities of a system are a function of its state vector; (b) a system's vulnerability and resilience vectors are each a function of the input (e.g., initiating event), its time of occurrence, and the states of the system; (c) the consequences are a function of the specificity and time of the event, the vector of the states, the vulnerability, and the resilience of the system; (d) the states of a system are time-dependent and commonly fraught with variability uncertainties and knowledge uncertainties; and (e) risk is a measure of the probability and severity of consequences. The above implies that modeling must evaluate consequences for each risk scenario as functions of the threat (initiating event), the vulnerability and resilience of the system, and the time of the event. This fundamentally complex modeling and analysis process cannot be performed correctly and effectively without relying on the states of the system being studied.
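The article's points (a)-(e) can be illustrated with a minimal numeric sketch in which risk is the expected severity over initiating-event scenarios and the consequence of each scenario depends on the state of the system; all probabilities and consequence values below are hypothetical.

```python
# Risk as expected severity over scenarios, with state-dependent consequences.
# All numbers are hypothetical illustrations, not values from the article.
scenarios = [
    # (probability, consequence in degraded state, consequence in nominal state)
    (0.01, 100.0, 40.0),
    (0.10, 20.0, 5.0),
]

def expected_risk(degraded):
    """Expected consequence given the current system state."""
    return sum(p * (c_deg if degraded else c_nom)
               for p, c_deg, c_nom in scenarios)

# The same initiating events carry very different risk in different states.
print(expected_risk(True), expected_risk(False))
```

The point of the sketch is that the event probabilities alone do not determine risk; the system state vector changes the consequences and hence the risk measure.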

293 citations


Proceedings ArticleDOI
07 Dec 2009
TL;DR: In this paper, the authors describe substantial enhancements to the NetSPA attack graph system required to model additional present-day threats (zero-day exploits and client-side attacks) and countermeasures (intrusion prevention systems, proxy firewalls, personal firewall, and host-based vulnerability scans).
Abstract: By accurately measuring risk for enterprise networks, attack graphs allow network defenders to understand the most critical threats and select the most effective countermeasures. This paper describes substantial enhancements to the NetSPA attack graph system required to model additional present-day threats (zero-day exploits and client-side attacks) and countermeasures (intrusion prevention systems, proxy firewalls, personal firewalls, and host-based vulnerability scans). Point-to-point reachability algorithms and structures were extensively redesigned to support "reverse" reachability computations and personal firewalls. Host-based vulnerability scans are imported and analyzed. Analysis of an operational network with 84 hosts demonstrates that client-side attacks pose a serious threat. Experiments on larger simulated networks demonstrated that NetSPA's previous excellent scaling is maintained. Less than two minutes are required to completely analyze a four-enclave simulated network with more than 40,000 hosts protected by personal firewalls.
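The core of an attack-graph reachability computation can be sketched as a forward search over hosts: starting from the attacker's foothold, repeatedly take over any reachable host with an exploitable vulnerability. This is a toy illustration with hypothetical hosts and firewall rules, not NetSPA's actual algorithm or data structures.

```python
from collections import deque

# Hypothetical network: host -> hosts reachable through (simulated) firewall
# rules, plus the set of hosts carrying a remotely exploitable vulnerability.
reachable_from = {
    "attacker": {"web"},
    "web": {"db", "app"},
    "app": {"db"},
    "db": set(),
}
exploitable = {"web", "app", "db"}

def compromised_hosts(start):
    """Forward reachability: hosts an attacker at `start` can take over."""
    owned, frontier = set(), deque([start])
    while frontier:
        h = frontier.popleft()
        for nxt in reachable_from.get(h, ()):
            if nxt in exploitable and nxt not in owned:
                owned.add(nxt)
                frontier.append(nxt)
    return owned

print(sorted(compromised_hosts("attacker")))  # ['app', 'db', 'web']
```

The "reverse" reachability mentioned in the abstract runs the analogous search backwards from a target, asking which footholds could reach it.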

229 citations


Journal ArticleDOI
01 Aug 2009
TL;DR: A general model for privacy-aware mobile services, which regards the attack resilience and the query-processing cost as two critical measures for designing location privatization solutions, and proposes a robust and scalable location anonymization model, XStar, which best leverages the two measures.
Abstract: Consider a mobile client who travels over roads and wishes to receive location-based services (LBS) from untrusted service providers. How might the user obtain such services without exposing her private position information? Meanwhile, how could the privacy protection mechanism incur no disincentive, e.g., excessive computation or communication cost, for any service provider or mobile user to participate in such a scheme? We detail this problem and present a general model for privacy-aware mobile services. A series of key features distinguish our solution from existing ones: a) it adopts the network-constrained mobility model (instead of the conventional random-waypoint model) to capture the privacy vulnerability of mobile users; b) it regards the attack resilience (for mobile users) and the query-processing cost (for service providers) as two critical measures for designing location privatization solutions, and provides corresponding analytical models; c) it proposes a robust and scalable location anonymization model, XStar, which best leverages the two measures; d) it introduces multi-folded optimizations in implementing XStar, which lead to further performance improvement. A comprehensive experimental evaluation is conducted to validate the analytical models and the efficacy of XStar.

163 citations


01 Jan 2009
TL;DR: In this article, an extended topological method is proposed that incorporates specific features of power systems such as electrical distance, power transfer distribution factors, and line flow limits; starting from the extended efficiency metric named net-ability, it defines an extended betweenness and proposes a joint method of extended betweenness and net-ability to rank the most critical lines and buses in an electrical power grid.
Abstract: Vulnerability analysis of power systems is a key issue in modern society and many efforts have contributed to the analysis. Complex network metrics for the assessment of the vulnerability of networked systems have recently been applied to power systems. Complex network theory may come in handy for vulnerability analysis of power systems due to a close link between the topological structure and physical properties of power systems. However, a pure topological approach fails to capture the engineering features of power systems. So an extended topological method has been proposed by incorporating several specific features of power systems such as electrical distance, power transfer distribution factors, and line flow limits. Starting from the extended metric for efficiency named net-ability, this paper defines an extended betweenness and proposes a joint method of extended betweenness and net-ability to rank the most critical lines and buses in an electrical power grid. The method is illustrated on the IEEE 118-bus and IEEE 300-bus test systems as well as the Italian power grid.
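A rough sketch of line ranking by betweenness, using plain shortest-path betweenness on a toy 4-bus grid: the paper's net-ability metric additionally folds in power transfer distribution factors and line flow limits, which this sketch omits, and all edge weights below are hypothetical stand-ins for electrical distance.

```python
import heapq
from itertools import combinations

# Toy 4-bus grid; weights are hypothetical "electrical distances".
edges = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 2.5, ("c", "d"): 1.0}
adj = {}
for (u, v), w in edges.items():
    adj.setdefault(u, {})[v] = w
    adj.setdefault(v, {})[u] = w

def shortest_path(src, dst):
    """Dijkstra returning the node sequence of one shortest path."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (d + w, nxt, path + [nxt]))

# Line betweenness: count the shortest bus-to-bus paths crossing each line.
usage = dict.fromkeys(edges, 0)
for s, t in combinations(adj, 2):
    path = shortest_path(s, t)
    for line in zip(path, path[1:]):
        key = line if line in usage else (line[1], line[0])
        usage[key] += 1

print(max(usage, key=usage.get))  # ('b', 'c') -- the most loaded line
```

Here line (b, c) carries every shortest path between the two halves of the toy grid, so it ranks as most critical; the long parallel line (a, c) carries none.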

144 citations


Journal ArticleDOI
TL;DR: A critical presentation of the results of a set of investigations evaluating the potential of object-oriented modeling as a simulation framework to capture the detailed dynamics of operational scenarios involving the most vulnerable parts of the critical infrastructure identified by a preceding network analysis.

127 citations


Proceedings ArticleDOI
17 May 2009
TL;DR: Four attacks that can be executed by an adversary having only wireless access to just a card (and not to a legitimate reader) are proposed and the most serious of them recovers a secret key in less than a second on ordinary hardware.
Abstract: The Mifare Classic is the most widely used contactless smartcard on the market. The stream cipher CRYPTO1 used by the Classic has recently been reverse engineered and serious attacks have been proposed. The most serious of them retrieves a secret key in under a second. In order to clone a card, previously proposed attacks require that the adversary either has access to an eavesdropped communication session or executes a message-by-message man-in-the-middle attack between the victim and a legitimate reader. Although this is already disastrous from a cryptographic point of view, system integrators maintain that these attacks cannot be performed undetected. This paper proposes four attacks that can be executed by an adversary having only wireless access to just a card (and not to a legitimate reader). The most serious of them recovers a secret key in less than a second on ordinary hardware. Besides the cryptographic weaknesses, we exploit other weaknesses in the protocol stack. A vulnerability in the computation of parity bits allows an adversary to establish a side channel. Another vulnerability regarding nested authentications provides enough plaintext for a speedy known-plaintext attack.

118 citations


Proceedings ArticleDOI
15 Mar 2009
TL;DR: In this article, the state of the art of vulnerability assessment methods in the context of cascading failures is reviewed, and the impact of emerging technologies including phasor technology, high-performance computing techniques, and visualization techniques on the vulnerability assessment of power grid failures is addressed, and future research directions are presented.
Abstract: Cascading failures present severe threats to power grid security, and thus vulnerability assessment of power grids is of significant importance. Focusing on analytic methods, this paper reviews the state of the art of vulnerability assessment methods in the context of cascading failures. These methods are based on steady-state power grid modeling or high-level probabilistic modeling. The impact of emerging technologies including phasor technology, high-performance computing techniques, and visualization techniques on the vulnerability assessment of cascading failures is then addressed, and future research directions are presented.

112 citations


Proceedings ArticleDOI
09 Nov 2009
TL;DR: This work analytically calculate the anonymity provided by ShadowWalker and shows that it performs well for moderate levels of attackers, and is much better than the state of the art.
Abstract: Peer-to-peer approaches to anonymous communication promise to eliminate the scalability concerns and central vulnerability points of current networks such as Tor. However, the P2P setting introduces many new opportunities for attack, and previous designs do not provide an adequate level of anonymity. We propose ShadowWalker: a new low-latency P2P anonymous communication system, based on a random walk over a redundant structured topology. We base our design on shadows that redundantly check and certify neighbor information; these certifications enable nodes to perform random walks over the structured topology while avoiding route capture and other attacks.We analytically calculate the anonymity provided by ShadowWalker and show that it performs well for moderate levels of attackers, and is much better than the state of the art. We also design an extension that improves forwarding performance at a slight anonymity cost, while at the same time protecting against selective DoS attacks. We show that our system has manageable overhead and can handle moderate churn, making it an attractive new design for P2P anonymous communication.

100 citations


Proceedings ArticleDOI
13 Apr 2009
TL;DR: The ontology for vulnerability management (OVM) has been populated with all vulnerabilities in NVD with additional inference rules, knowledge representation, and data-mining mechanisms and provides a promising pathway to making ISAP successful.
Abstract: In order to reach the goals of the Information Security Automation Program (ISAP) [1], we propose an ontological approach to capturing and utilizing the fundamental concepts in information security and their relationship, retrieving vulnerability data and reasoning about the cause and impact of vulnerabilities. Our ontology for vulnerability management (OVM) has been populated with all vulnerabilities in NVD [2] with additional inference rules, knowledge representation, and data-mining mechanisms. With the seamless integration of common vulnerabilities and their related concepts such as attacks and countermeasures, OVM provides a promising pathway to making ISAP successful.

Journal ArticleDOI
TL;DR: This paper proposes a general framework based on the principles of epidemic theory, for vulnerability analysis of current broadcast protocols in wireless sensor networks, and develops a common mathematical model for the propagation that incorporates important parameters derived from the communication patterns of the protocol under test.
Abstract: While multi-hop broadcast protocols, such as Trickle, Deluge and MNP, have gained tremendous popularity as a means for fast and convenient propagation of data/code in large scale wireless sensor networks, they can, unfortunately, serve as potential platforms for virus spreading if the security is breached. To understand the vulnerability of such protocols and design defense mechanisms against piggy-backed virus attacks, it is critical to investigate the propagation process of these protocols in terms of their speed and reachability. In this paper, we propose a general framework based on the principles of epidemic theory, for vulnerability analysis of current broadcast protocols in wireless sensor networks. In particular, we develop a common mathematical model for the propagation that incorporates important parameters derived from the communication patterns of the protocol under test. Based on this model, we analyze the propagation rate and the extent of spread of a malware over typical broadcast protocols proposed in the literature. The overall result is an approximate but convenient tool to characterize a broadcast protocol in terms of its vulnerability to malware propagation. We have also performed extensive simulations which have validated our model.
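The epidemic-theory view can be sketched with a discrete susceptible-infected (SI) model, where the effective contact rate beta stands in for the protocol-specific parameters the paper derives from communication patterns; the values below are hypothetical.

```python
# Minimal SI epidemic sketch of piggy-backed malware spreading over a
# broadcast protocol.  beta, n_nodes, and steps are hypothetical values,
# not parameters from the paper.
def si_curve(n_nodes, beta, steps, infected0=1):
    """Discrete logistic spread: dI = beta * I * (N - I) / N per step."""
    infected = float(infected0)
    history = [infected]
    for _ in range(steps):
        infected += beta * infected * (n_nodes - infected) / n_nodes
        history.append(min(infected, n_nodes))
    return history

curve = si_curve(n_nodes=1000, beta=0.5, steps=40)
print(round(curve[-1]))  # saturation: essentially the whole network infected
```

The propagation rate (the slope of the curve) and the extent of spread (its asymptote) are exactly the two quantities the paper uses to characterize a broadcast protocol's vulnerability.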

Proceedings ArticleDOI
08 Jul 2009
TL;DR: The ASPIER tool is implemented and used to verify authentication and secrecy properties of a part of an industrial strength protocol implementation -- the handshake in OpenSSL -- for configurations consisting of up to 3 servers and 3 clients.
Abstract: We present ASPIER -- the first framework that combines software model checking with a standard protocol security model to automatically analyze authentication and secrecy properties of protocol implementations in C. The technical approach extends the iterative abstraction-refinement methodology for software model checking with a domain-specific protocol and symbolic attacker model. We have implemented the ASPIER tool and used it to verify authentication and secrecy properties of a part of an industrial strength protocol implementation -- the handshake in OpenSSL -- for configurations consisting of up to 3 servers and 3 clients. We have also implemented two distinct methods for reasoning about attacker message derivations, and evaluated them in the context of OpenSSL verification. ASPIER detected the "version-rollback" vulnerability in OpenSSL 0.9.6c source code and successfully verified the implementation when clients and servers are only willing to run SSL 3.0.

Book ChapterDOI
10 Sep 2009
TL;DR: This paper proposes a new concept, namely Public-key Encryption with Registered Keyword Search (PERKS), which requires a sender to register a keyword with a receiver before the sender can generate a tag for this keyword.
Abstract: Public-key Encryption with Keyword Search (PEKS) enables a server to test whether a tag from a sender and a trapdoor from a receiver contain the same keyword. In this paper, we highlight a potential security concern, i.e., a curious server is able to answer whether any selected keyword corresponds to a given trapdoor or not (called an offline keyword guessing attack). The existing semantic security definition for PEKS does not capture this vulnerability. We propose a new concept, namely Public-key Encryption with Registered Keyword Search (PERKS), which requires a sender to register a keyword with a receiver before the sender can generate a tag for this keyword. Clearly, the keyword preregistration is a disadvantage. The payback is that the semantic security definition for PERKS proposed in this paper is immune to the offline keyword guessing attack. We also propose a construction of PERKS and prove its security. The construction supports testing multiple tags in batch mode, which can significantly reduce the computational complexity in some situations.

Proceedings Article
01 Jan 2009
TL;DR: It is discovered that a MiFare classic card can be cloned in a much more practical card-only scenario, where the attacker only needs to be in the proximity of the card for a number of minutes, therefore making usurpation of identity through pass cloning feasible at any moment and under any circumstances.
Abstract: MiFare Classic is the most popular contactless smart card, with about 200 million copies in circulation worldwide. At Esorics 2008, Dutch researchers showed that the underlying cipher Crypto-1 can be cracked in as little as 0.1 seconds if the attacker can access or eavesdrop the RF communications with the (genuine) reader. We discovered that a MiFare Classic card can be cloned in a much more practical card-only scenario, where the attacker only needs to be in the proximity of the card for a number of minutes, therefore making usurpation of identity through pass cloning feasible at any moment and under any circumstances. For example, anybody sitting next to the victim on a train or on a plane is now able to clone his/her pass. Other researchers have also (independently from us) discovered this vulnerability (Garcia et al., 2009); however, our attack requires fewer queries to the card and does not require any precomputation. In addition, we discovered that certain versions or clones of MiFare Classic are even weaker, and can be cloned in 1 second. The main security vulnerability that we need to address with regard to MiFare Classic is not about cryptography, RFID protocols, or software vulnerabilities. It is a systemic one: we need to understand how much our economy is vulnerable to sophisticated forms of electronic subversion where potentially one smart card developer can intentionally (or not), but quite easily in fact, compromise the security of governments, businesses, and financial institutions worldwide.

Journal ArticleDOI
TL;DR: The paper analyzes the efficiency of deploying false targets as part of a defense strategy and obtains the optimal number of false targets and the attacked targets for the case of fixed and variable resources of the defender and the attacker as solutions of a non-cooperative game between the two agents.

Book ChapterDOI
21 Sep 2009
TL;DR: This paper uncovers a vulnerability which not only allows an attacker to penetrate CDN's protection, but to actually use a content delivery network to amplify the attack against a customer Web site.
Abstract: Content Delivery Networks (CDNs) are commonly believed to offer their customers protection against application-level denial of service (DoS) attacks. Indeed, a typical CDN with its vast resources can absorb these attacks without noticeable effect. This paper uncovers a vulnerability which not only allows an attacker to penetrate a CDN's protection, but to actually use a content delivery network to amplify the attack against a customer Web site. We show that leading commercial CDNs - Akamai and Limelight - and an influential research CDN - Coral - can be recruited for this attack. By mounting an attack against our own Web site, we demonstrate an order-of-magnitude attack amplification through leveraging the Coral CDN. We present measures that both content providers and CDNs can take to defend against our attack. We believe it is important that CDN operators and their customers be aware of this attack so that they can protect themselves accordingly.

Book
31 Dec 2009
TL;DR: This report aims to address the concern that this reliance on computers and computer networks raises the vulnerability of the nation’s critical infrastructures to “cyber” attacks.
Abstract: The nation’s health, wealth, and security rely on the production and distribution of certain goods and services. The array of physical assets, processes and organizations across which these goods and services move are called critical infrastructures (e.g. electricity, the power plants that generate it, and the electric grid upon which it is distributed). Computers and communications, themselves critical infrastructures, are increasingly tying these infrastructures together. This report aims to address the concern that this reliance on computers and computer networks raises the vulnerability of the nation’s critical infrastructures to “cyber” attacks.

Journal ArticleDOI
TL;DR: This paper proposes an approach based on fuzzy logic and an expert system for network forensics that can analyze computer crimes in network environments and produce digital evidence automatically, and shows that the system can classify most kinds of attack types and provide analyzable and comprehensible information for forensic experts.

Journal ArticleDOI
01 Sep 2009
TL;DR: A mixed-strategy game-theory model able to capture the strategic interactions between malicious agents that may be willing to attack power systems and the system operators, with its related bodies, that are in charge of defending them is presented.
Abstract: The new scenarios of malicious attack prompt for their deeper consideration and mainly when critical systems are at stake. In this framework, infrastructural systems, including power systems, represent a possible target due to the huge impact they can have on society. Malicious attacks are different in their nature from other more traditional cause of threats to power system, since they embed a strategic interaction between the attacker and the defender (characteristics that cannot be found in natural events or systemic failures). This difference has not been systematically analyzed by the existent literature. In this respect, new approaches and tools are needed. This paper presents a mixed-strategy game-theory model able to capture the strategic interactions between malicious agents that may be willing to attack power systems and the system operators, with its related bodies, that are in charge of defending them. At the game equilibrium, the different strategies of the two players, in terms of attacking/protecting the critical elements of the systems, can be obtained. The information about the attack probability to various elements can be used to assess the risk associated with each of them, and the efficiency of defense resource allocation is evidenced in terms of the corresponding risk. Reference defense plans related to the online defense action and the defense action with a time delay can be obtained according to their respective various time constraints. Moreover, risk sensitivity to the defense/attack-resource variation is also analyzed. The model is applied to a standard IEEE RTS-96 test system for illustrative purpose and, on the basis of that system, some peculiar aspects of the malicious attacks are pointed out.
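A heavily simplified instance of the attacker-defender interaction is a 2x2 zero-sum game: the attacker picks one of two grid elements to hit, the defender picks one to protect, and both randomize at equilibrium. The damage values are hypothetical, and the paper's actual model (applied to the IEEE RTS-96 system) is much richer.

```python
# Hedged sketch, not the paper's model: payoff to the attacker is damage[i]
# when element i is attacked but unprotected, 0 otherwise (hypothetical values).
damage = [10.0, 6.0]
A = [[0.0 if i == j else damage[i] for j in range(2)] for i in range(2)]

def solve_2x2_zero_sum(A):
    """Closed-form mixed equilibrium of a 2x2 zero-sum game with no saddle point."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom              # attacker's probability on element 0
    q = (d - b) / denom              # defender's probability on element 0
    value = (a * d - b * c) / denom  # expected damage at equilibrium
    return p, q, value

p, q, value = solve_2x2_zero_sum(A)
print(p, q, value)  # 0.375 0.625 3.75
```

Note how the defender concentrates probability (0.625) on the higher-value element, while the attack probabilities themselves give the per-element risk figures the abstract refers to.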

Journal ArticleDOI
TL;DR: The security proof for the route discovery algorithm endairA is flawed, and moreover, this algorithm is vulnerable to a hidden channel attack, and it is argued that composability is an essential feature for ubiquitous applications.
Abstract: Mobile ad hoc networks (MANETs) are collections of wireless mobile devices with restricted broadcast range and resources, and no fixed infrastructure. Communication is achieved by relaying data along appropriate routes that are dynamically discovered and maintained through collaboration between the nodes. Discovery of such routes is a major task, both from efficiency and security points of view. Recently, a security model tailored to the specific requirements of MANETs was introduced by Acs, Buttyan, and Vajda. Among the novel characteristics of this security model is that it promises security guarantee under concurrent executions, a feature of crucial practical implication for this type of distributed computation. A novel route discovery algorithm called endairA was also proposed, together with a claimed security proof within the same model. In this paper, we show that the security proof for the route discovery algorithm endairA is flawed, and moreover, this algorithm is vulnerable to a hidden channel attack. We also analyze the security framework that was used for route discovery and argue that composability is an essential feature for ubiquitous applications. We conclude by discussing some of the major security challenges for route discovery in MANETs.

Journal ArticleDOI
01 Oct 2009
TL;DR: This study uses a large questionnaire survey among end-users to examine the password behavior of end-users and finds that end-user behavior can increase the vulnerability of computer and information systems.
Abstract: Considering that many organizations today are extremely dependent on information technology, computer and information security (CIS) has become a critical concern from a business viewpoint. CIS is concerned with protecting the confidentiality, integrity, and accessibility of information when using computer systems. Much research has been conducted on CIS in past years. However, the attention has been primarily focused on technical problems and solutions. Only recently has the role of human factors in CIS been recognized. End-user behavior can increase the vulnerability of computer and information systems. In this study, using a large questionnaire survey among end-users, we examine the password behavior of end-users.

Proceedings ArticleDOI
20 Jan 2009
TL;DR: This work analyzed the 0Day lifespans of 491 vulnerabilities and conservatively estimated that in the worst year there were on average 2500 0Day vulnerabilities in existence on any given day and made a more aggressive estimate of 4500.
Abstract: We define a 0Day vulnerability to be any vulnerability, in deployed software, that has been discovered by at least one person but has not yet been publicly announced or patched. These 0Day vulnerabilities are of particular interest when assessing the risk to a system from exploit of vulnerabilities which are not generally known to the public or, most importantly, to the owners of the system. Using the 0Day definition given above, we analyzed the 0Day lifespans of 491 vulnerabilities and conservatively estimated that in the worst year there were on average 2500 0Day vulnerabilities in existence on any given day. Then using a small but intriguing set of 15 0Day vulnerability lifespans representing the time from actual discovery to public disclosure, we made a more aggressive estimate. In this case, we estimated that in the worst year there were, on average, 4500 0Day vulnerabilities in existence on any given day.
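The in-existence estimate has the flavor of Little's law: the average number of 0Days alive at any instant equals the discovery rate times the mean 0Day lifespan. The rates below are hypothetical, chosen only to land near the conservative 2500 figure, and are not the paper's data.

```python
# Back-of-the-envelope version of the estimate (Little's law):
#   average 0Days in existence = discovery rate x mean lifespan.
# Both inputs are hypothetical illustrations.
discoveries_per_year = 9125   # assumed ~25 new 0Days discovered per day
mean_lifespan_days = 100      # assumed mean time from discovery to disclosure

avg_in_existence = discoveries_per_year / 365 * mean_lifespan_days
print(avg_in_existence)  # 2500.0
```

The paper's more aggressive 4500 estimate corresponds to plugging in the longer lifespans observed in its set of 15 discovery-to-disclosure intervals.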

Journal ArticleDOI
TL;DR: The paper analyses the optimal distribution of the defense resources between protecting the genuine system elements and deploying false elements (targets) in a series system, which is destroyed when any genuine element is destroyed.

Journal ArticleDOI
TL;DR: The computational complexities of the attacks are low enough to make them practicable, so Chien et al.'s protocol does not enhance RFID security any more than the original EPC standard.

Book ChapterDOI
TL;DR: In this article, a new approach for transmission network expansion planning that accounts for increasingly plausible deliberate outages is presented, and two vulnerability-constrained transmission expansion models are formulated as mixed-integer linear programs for which efficient solvers are available.

Journal ArticleDOI
TL;DR: Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
Abstract: Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
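The inconsistency between the two principles shows up already in a tiny budgeted-selection example: funding from the top of a standalone-benefit ranking can be beaten by optimizing over feasible portfolios. Costs and benefits below are hypothetical, and the correlation effects the paper emphasizes are omitted for brevity.

```python
from itertools import combinations

# Hypothetical remediation activities: name -> (cost, risk-reduction benefit).
activities = {"A": (2, 10.0), "B": (1, 6.0), "C": (1, 6.0)}
budget = 2

# Principle (2): fund from the top of the benefit ranking down.
ranked = sorted(activities, key=lambda k: -activities[k][1])
spent, ranked_benefit = 0, 0.0
for name in ranked:
    cost, benefit = activities[name]
    if spent + cost <= budget:
        spent += cost
        ranked_benefit += benefit

# Principle (1): choose the best feasible portfolio outright.
best = max(
    (subset for r in range(len(activities) + 1)
     for subset in combinations(activities, r)
     if sum(activities[n][0] for n in subset) <= budget),
    key=lambda s: sum(activities[n][1] for n in s),
)

print(ranked_benefit, sorted(best))  # 10.0 ['B', 'C']
```

The ranking rule spends the whole budget on A for a benefit of 10, while the optimized portfolio {B, C} achieves 12 at the same cost, illustrating why priority lists can be suboptimal.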

01 Jan 2009
TL;DR: In this article, the authors make a theoretical contribution to the usage of theories from criminology in supply chain risk management to handle antagonistic threats against the transport network, where the authors define the legal descriptions and criminal threats against and within supply chain management activities that entail both the systems context and boundaries.
Abstract: The World Trade Centre terror attack in 2001 changed the world and, with it, the conditions for logistics worldwide. The aftermath of the attack brought needed attention to the vulnerability of modern supply chains. This thesis addresses the antagonistic threats that exploit vulnerability in a supply chain. Antagonistic threats are a limited array of risks and uncertainties and can be addressed with risk management tools and strategies. Three key traits demarcate antagonistic threats from other risks and uncertainties: they are deliberate (caused), illegal (defined by law), and hostile (with negative impact, in this thesis, on transport network activities). This thesis makes a theoretical contribution by applying theories from criminology to supply chain risk management in order to handle antagonistic threats against the transport network. The recognition that antagonistic threats toward the transport network are a problem leads to verification of the research questions against the background and the theoretical framework, which places the research questions closer to their context. It also leads to the conclusion that the answers may contain competing and/or incompatible parts, which differ depending on the perspective or viewpoint taken. One of the most important insights is that antagonistic threats toward freight have always been a feature of both business and politics. Because stakeholders differ in function and goals, they may use similar methods to manage antagonistic threats, but the effects and consequences will change with the circumstances. The system approach in this thesis is soft-systems thinking, in which reality is described in subjective terms and the whole system has the distinctive trait of vague or undefined boundaries between system components and the surrounding environment.
Therefore, this thesis uses a complex-systems approach in which paradoxes and bounded rationality describe the system's behaviour. The thesis defines the legal descriptions of, and criminal threats against and within, supply chain management activities, which entail both the system's context and its boundaries. Management of antagonistic threats from the risk management perspective is separated into pre-event and post-event measures; in logistics terms, this means the system needs to be both robust and resilient. It should be robust enough to handle small risks (normally high-likelihood, low-impact) automatically. It also needs resilience in order to adapt, improvise, and overcome any disturbance greater than its robustness can handle. Both robustness and resilience can draw on the full range of prevention, mitigation, and transfer tools and methods. Regardless of the perspective or viewpoint chosen for analysing the problem, the same basic set of tools and methods is valid, but in practical use they must be adapted to each actor's needs for managing exposure to antagonistic threats.

Patent
15 Jul 2009
TL;DR: In this paper, a detection method based on security vulnerability patterns is presented: the source code of the program under test is preprocessed and parsed into an abstract syntax tree; a control flow graph, symbol table, function call graph, and ud/du chains are constructed; and a security-vulnerability state machine is run over the control flow graph, reporting any node that drives the machine into a defect state.
Abstract: The present invention discloses a detection method based on security vulnerability patterns. The method comprises the following steps: reading and preprocessing the source code of the program under test; parsing the security-vulnerability state machine description document corresponding to each security vulnerability pattern; performing lexical and syntax analysis on the code to construct an abstract syntax tree of the program under test; constructing a control flow graph from the abstract syntax tree and generating a symbol table; computing and updating the value-range set of each variable, analyzing the function-call relationships of the program under test according to the symbol table to generate a function call graph, and then establishing ud/du chains; traversing the control flow graph with the security-vulnerability state machine while consulting the ud/du chains, and computing the state transitions of the state machine at each node of the control flow graph; if the state machine enters a defect state, reporting the corresponding node; and finally outputting a security vulnerability testing report. The method of the invention offers a high degree of automation and high testing accuracy.
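The core mechanism of the patent — driving a vulnerability state machine over program statements and reporting when it enters a defect state — can be sketched as follows. This is a heavily simplified, hypothetical illustration: a real implementation walks an AST/control flow graph with ud/du chains, whereas here the "program" is a flat list of (operation, variable) events and the pattern is a use-after-free.

```python
# Per-variable states for a hypothetical use-after-free pattern.
START, FREED, DEFECT = "start", "freed", "defect"

# Transition table: (current_state, operation) -> next_state.
# Any (state, op) pair not listed leaves the state unchanged.
TRANSITIONS = {
    (START, "free"):   FREED,
    (FREED, "use"):    DEFECT,  # use after free: a defect state
    (FREED, "assign"): START,   # re-assignment clears the freed state
}

def check(events):
    """Run the state machine over (op, var) events; return defect reports."""
    state, reports = {}, []
    for i, (op, var) in enumerate(events):
        cur = state.get(var, START)
        nxt = TRANSITIONS.get((cur, op), cur)
        if nxt == DEFECT:
            reports.append((i, var))  # report the checking node
            nxt = START               # reset so later defects are still found
        state[var] = nxt
    return reports

program = [("assign", "p"), ("free", "p"), ("use", "p"), ("use", "q")]
print(check(program))  # reports the use-after-free on "p"
```

The same table-driven structure generalizes to other patterns (format strings, unchecked return values) by swapping in a different transition table, which is essentially what the patent's per-pattern state machine description documents provide.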

Journal ArticleDOI
TL;DR: In this paper, the authors present a concept overview of an automatic operator of electrical networks (AOEN) for real-time alleviation of component overloads and increase of system static loadability, based on state-estimator data only.
Abstract: This paper presents a concept overview of an automatic operator of electrical networks (AOEN) for real-time alleviation of component overloads and increase of system static loadability, based on state-estimator data only. The control used for this purpose is real-power generation rescheduling, although any other control input could fit the new framework. The key performance metrics are the vulnerability index of a generation unit (VIGS) and its sensitivity (SVIGS), accurately computed using a realistic ac power flow incorporating the AGC model (AGC-PF). Transmission overloads, vulnerability indices, and their sensitivities with respect to generation control are translated into fuzzy-set notation to formulate, transparently, the relationships between incremental line flows and the active power output of each controllable generator. A fuzzy-rule-based system is formed to select the best controllers, their direction of movement, and their step size, so as to minimize the overall vulnerability of the generating system while eliminating overflows. The controller's performance is illustrated on the IEEE 39-bus (New England) network and the three-area IEEE-RTS96 network subjected to severe line-outage contingencies. A key result is that minimizing the proposed vulnerability metric in real time yields a substantial increase in loadability (prevention) in addition to overload elimination (correction).
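The fuzzy selection step described in the abstract — fuzzifying overload severity and generator sensitivities, then firing a rule to pick which generator to redispatch — can be sketched in miniature. This is an illustrative toy, not the paper's formulation: the membership function, the loading and sensitivity numbers, and the rule "IF overload is severe AND sensitivity is high THEN reschedule that generator" are all invented stand-ins for the VIGS/SVIGS machinery.

```python
def ramp(x, lo, hi):
    """Piecewise-linear membership rising from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Line loading as a fraction of the line's rating; > 1.0 means overloaded.
line_loading = 1.15
overload = ramp(line_loading, 1.0, 1.3)  # overload severity grade in [0, 1]

# Hypothetical sensitivities of the overloaded line's flow to each
# generator's real-power output (stand-ins for SVIGS-like values).
sensitivities = {"G1": 0.05, "G2": 0.42, "G3": 0.18}

# Rule strength per generator: min() plays the role of the fuzzy AND in
# "IF overload is severe AND sensitivity is high THEN reschedule".
strengths = {g: min(overload, ramp(s, 0.0, 0.5))
             for g, s in sensitivities.items()}
best = max(strengths, key=strengths.get)

print(best, round(strengths[best], 2))  # generator chosen for rescheduling
```

In the paper's framework the analogous rule strengths would also drive the movement direction and step size of each controller; here only the controller-selection step is shown.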