
Showing papers on "Vulnerability (computing) published in 1999"


Proceedings ArticleDOI
18 Jul 1999
TL;DR: Weaknesses in using deterministic methods for performing security assessment for bulk transmission systems are described, motivation for using probabilistic risk is presented, and fundamental relations for making the associated calculations are provided.
Abstract: We describe weaknesses in using deterministic methods for performing security assessment for bulk transmission systems. We also present motivation for using probabilistic risk and provide fundamental relations for making the associated calculations. The benefits and applications of using a risk index for security assessment are discussed, and an illustration is provided for line overload security assessment in the operational context.
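
As a rough illustration of the risk-index idea (not taken from the paper), a probabilistic risk for line overload can be formed by weighting each contingency's overload severity by its probability; the contingency data below are invented.

```python
# Minimal sketch of a probabilistic risk index for line-overload security
# assessment: risk = sum over contingencies of P(contingency) * severity.
# Contingencies, probabilities, flows, and limits are illustrative only.

def overload_severity(flow_mw: float, limit_mw: float) -> float:
    """Severity grows with loading above the line limit and is zero below it."""
    return max(0.0, (flow_mw - limit_mw) / limit_mw)

contingencies = [
    # (name, hourly probability, post-contingency flow in MW, line limit in MW)
    ("loss of line A", 1e-4, 980.0, 900.0),
    ("loss of line B", 5e-5, 850.0, 900.0),
    ("loss of generator G1", 2e-4, 1020.0, 900.0),
]

risk_index = sum(p * overload_severity(flow, limit)
                 for _, p, flow, limit in contingencies)

print(f"Line-overload risk index: {risk_index:.6f}")
```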

160 citations


01 Jan 1999
TL;DR: These notes describe how the design of TCP/IP and the 4.2BSD implementation allow users on untrusted and possibly very distant hosts to masquerade as users on trusted hosts, and suggest that steps be taken to reduce hosts' vulnerability to each other.
Abstract: The 4.2 Berkeley Software Distribution of the Unix operating system (4.2BSD for short) features an extensive body of software based on the "TCP/IP" family of protocols. In particular, each 4.2BSD system "trusts" some set of other systems, allowing users logged into trusted systems to execute commands via a TCP/IP network without supplying a password. These notes describe how the design of TCP/IP and the 4.2BSD implementation allow users on untrusted and possibly very distant hosts to masquerade as users on trusted hosts. Bell Labs has a growing TCP/IP network connecting machines with varying security needs; perhaps steps should be taken to reduce their vulnerability to each other.

143 citations


Proceedings Article
01 Jan 1999
TL;DR: This paper reports on some ways to try to deceive a state-of-the-art speaker verification (SV) system and defines a worst case scenario where the impostor has extensive knowledge about the person to deceive as well as the system to attack.
Abstract: This paper reports on some ways to try to deceive a state-of-the-art speaker verification (SV) system. In order to evaluate the risk in SV systems one has to take into account the possible intentional impostors who know whom they are attacking. We defined a worst case scenario where the impostor has extensive knowledge about the person to deceive as well as the system to attack. In this framework we tested our SV system against concatenated client speech, re-synthesis of the client speech and diphone synthesis of the client.

140 citations


Patent
30 Jul 1999
TL;DR: Describes detecting (206) attempted intrusions in a telecommunications signaling network (202) and assessing (212) the network's vulnerability to those intrusions in real time, by applying intrusion rules to received messages, using a known protocol for the network, in order to detect anomalies tending to indicate an attempted intrusion.
Abstract: Detecting (206) attempted intrusions in a telecommunications signaling network (202) and assessing (212) the vulnerability of the network to the attempted intrusions. Intrusion rules are applied to received messages in the network in real-time, using a known protocol for the network, in order to detect anomalies tending to indicate an attempted intrusion. In order to assess the vulnerability of the network, the vulnerability rules are applied to rankings of particular parameters relating to elements in the network. The rankings provide an indication of susceptibility of a network element to an attempted intrusion relative to other network elements.
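
A hedged sketch of the two rule layers described above, not the patented implementation; message fields, rules, element names, and rankings are invented for illustration.

```python
# Illustrative sketch: apply simple intrusion rules to signaling messages and
# combine per-element parameter rankings into a relative susceptibility score.

messages = [
    {"opcode": "route_update", "source": "STP-1", "rate_per_s": 200},
    {"opcode": "query", "source": "SCP-2", "rate_per_s": 3},
]

def intrusion_rules(msg):
    """Return a list of anomaly descriptions for one received message."""
    anomalies = []
    if msg["opcode"] == "route_update" and msg["rate_per_s"] > 50:
        anomalies.append("route-update flood")
    if msg["source"] not in {"STP-1", "SCP-2", "SSP-3"}:
        anomalies.append("unknown signaling point")
    return anomalies

# Vulnerability rules: rankings of parameters per element (higher = more exposed).
element_rankings = {
    "STP-1": {"exposure": 3, "patch_level": 2, "criticality": 3},
    "SCP-2": {"exposure": 1, "patch_level": 1, "criticality": 2},
}
susceptibility = {e: sum(r.values()) for e, r in element_rankings.items()}

for msg in messages:
    print(msg["source"], intrusion_rules(msg))
print("Relative susceptibility:", susceptibility)
```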

135 citations


Book ChapterDOI
12 Apr 1999
TL;DR: The Legion security architecture is presented, a flexible, adaptable framework for solving the metacomputing security problem and it is demonstrated that this framework is flexible enough to implement a wide range of security mechanisms and high-level policies.
Abstract: A metacomputing environment is a collection of geographically distributed resources (people, computers, devices, databases) connected by one or more high-speed networks and potentially spanning multiple administrative domains. Security is an essential part of metasystem design -- high-level resources and services defined by the metacomputer must be protected from one another and from possibly corrupted underlying resources, while those underlying resources must minimize their vulnerability to attacks from the metacomputer level. We present the Legion security architecture, a flexible, adaptable framework for solving the metacomputing security problem. We demonstrate that this framework is flexible enough to implement a wide range of security mechanisms and high-level policies.

70 citations


Proceedings ArticleDOI
01 Nov 1999
TL;DR: These interactive authentication protocols yield new constructions for non-interactive group signature schemes that use the higher-residuosity assumption, which leads to greater efficiency and more natural security proofs than previous constructions.
Abstract: We develop new schemes for anonymous authentication that support identity escrow. Our protocols also allow a prover to demonstrate membership in an arbitrary subset of users; key revocation is an important special case of this feature. Using the Fiat-Shamir heuristic, our interactive authentication protocols yield new constructions for non-interactive group signature schemes. We use the higher-residuosity assumption, which leads to greater efficiency and more natural security proofs than previous constructions. It also leads to an increased vulnerability to collusion attacks, although countermeasures are available.
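
For background on the Fiat-Shamir heuristic mentioned above (a hash of the commitment replaces the verifier's random challenge), here is a minimal sketch on a Schnorr-style proof of knowledge of a discrete log; the toy parameters are for illustration only and this is not the paper's group-signature scheme.

```python
import hashlib
import secrets

# Fiat-Shamir sketch: make an interactive proof non-interactive by deriving
# the challenge from a hash of the public values and the commitment.
# Toy 11-bit parameters; a real scheme uses cryptographically sized groups.

q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # 2039, also prime
g = 4                    # generator of the order-q subgroup of Z_p*

def hash_to_challenge(*ints) -> int:
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Key generation: secret x, public y = g^x mod p.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# Non-interactive proof: commit, hash for the challenge, respond.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)                       # commitment
c = hash_to_challenge(g, y, t)         # challenge = H(g, y, t)
s = (r + c * x) % q                    # response

# Verification: g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("non-interactive proof verified")
```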

45 citations


Journal ArticleDOI
TL;DR: Discusses traditional approaches to information systems security, namely putting controls and mechanisms in place that protect confidentiality, integrity, and availability by stopping users from doing bad things, and examines their limitations.
Abstract: The past few years have seen governmental, military, and commercial organizations widely adopt Web-based commercial technologies because of their convenience, ease of use, and ability to take advantage of rapid advances in the commercial market. With this increasing reliance on internetworked computer resources comes an increasing vulnerability to information warfare. In today's heavily networked environment, safety demands protection from both obvious and subtle intrusions that can delete or corrupt vital data. Traditionally, information systems security focuses primarily on prevention: putting controls and mechanisms in place that protect confidentiality, integrity, and availability by stopping users from doing bad things. Moreover, most mechanisms are powerless against misbehavior by legitimate users who perform functions for which they are authorized. The paper discusses traditional approaches and their limitations.

36 citations


Proceedings ArticleDOI
01 Sep 1999
TL;DR: The overall objective is to arrive at a general and clear-cut framework that would describe how trustable (dependable, secure) a system is, regardless of the reason for its not being totally trustable.
Abstract: Problems related to security and dependability/reliability are still treated separately in many contexts. It has been shown that there is a considerable conceptual overlap, however, and an integrated framework for the two disciplines has already been suggested. This paper shows that there is also a conceptual overlap of impairments from these areas and suggests an integrated approach that clarifies the functional relation between these, both from dependability and security viewpoints. The overall objective is to arrive at a general and clear-cut framework that would describe how trustable (dependable, secure) a system is, regardless of the reason for its not being totally trustable. For example, it should be possible to treat a system failure caused by an intentional intrusion or a hardware fault using the same methodology. A few examples from real-world situations are given to support the suggested approach.

34 citations


Journal ArticleDOI
TL;DR: The security services required to counteract security threats and the cryptographic techniques used to provide these services are introduced and a comprehensive review of the authentication protocols proposed by standards organizations and by independent researchers are provided.

25 citations


01 Jul 1999
TL;DR: In this paper, the authors present a view of the state of the art in anomaly detection and reaction (ADR) technology, which encompasses the automated capabilities that can detect or find anomalies in computer systems, report them in useful ways, remove discovered anomalies, and repair damage they may have caused.
Abstract: This paper presents a view of the state of the art in anomaly detection and reaction (ADR) technology. The paper develops the view from six sources: three prior reports (two national, one MITRE), a survey of commercially available software, a survey of government software, and a survey of government-funded research projects. ADR encompasses the automated capabilities that can detect or find anomalies in computer systems, report them in useful ways, remove discovered anomalies, and repair damage they may have caused. Included in this scope of interest are traditional intrusion detection and reaction tools. The broader scope of anomaly detection and reaction also includes vulnerability scanners, infraction scanners, and security compliance monitors. These tools protect not only against intruders but against errors and carelessness in administration and operation of end systems and network components. This synopsis draws on the following sources of information: (1) the National Info-Sec Technical Baseline report on intrusion detection and response; (2) the description of the state of the art in network-based intrusion detection systems in a report of Hill and Aguirre; (3) the report of the Intrusion Detection Subgroup of the National Security Telecommunications Advisory Committee on the implications of intrusion detection technology research and development on national security and emergency preparedness; (4) product descriptions of commercial off-the-shelf (COTS) and government off-the-shelf (GOTS) ADR systems; and (5) descriptions of current research in anomaly detection and reaction. Tables show intrusion detection tools by product type and architecture, provide commentary on issues in ADR, present the main thrust of numerous research efforts in ADR, and provide a condensation of the state of the art in ADR.

23 citations


Proceedings ArticleDOI
05 Oct 1999
TL;DR: This discussion leads to an identification of several levels of automated tool sophistication, from simple calculators to artificial intelligence application, and a discussion of the FAA's plans and approach to establishing standard airport assessment methods throughout the US domestic airports.
Abstract: This paper presents the results of an evaluation of several different approaches to conducting quantitative airport vulnerability and risk assessment. Field tests of seven methodologies applied to a total of thirteen major US domestic airports provided the results and reports used to evaluate the various methodologies. The process of evaluation used a rigorous decision technology approach, which involves evaluation criteria, several different weighting schemes, and computation of the overall desirability value for each methodology. The evaluation criteria will be identified and the final results presented. Additional insight into the types of automation demonstrated in the project will also be provided. This discussion leads to an identification of several levels of automated tool sophistication, from simple calculators to artificial intelligence application. Overall analysis of the airport vulnerability assessments generated trends in terms of commonly identified security upgrades. Prioritized security upgrade categories are derived based on their frequency of occurrence in these analyses. This paper concludes with a discussion of the FAA's plans and approach to establishing standard airport assessment methods throughout the US domestic airports. Automated tool development and refinement initiatives are presented, along with on-going verification program plans and results.
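
As a hedged illustration of the weighted-sum step in such a decision-technology evaluation (not the FAA study's actual criteria or data), each methodology's criterion scores can be combined under several weighting schemes into an overall desirability value; all names and numbers below are invented.

```python
# Weighted-sum multi-criteria sketch: desirability = sum(weight_i * score_i).

criteria = ["accuracy", "cost", "usability", "automation"]

scores = {  # criterion scores on a 0-10 scale, higher is better
    "Methodology A": [8, 5, 7, 9],
    "Methodology B": [6, 8, 8, 4],
}

weighting_schemes = {
    "balanced":       [0.25, 0.25, 0.25, 0.25],
    "accuracy-heavy": [0.50, 0.15, 0.15, 0.20],
}

for scheme, weights in weighting_schemes.items():
    print(f"-- {scheme} weights --")
    for method, vals in scores.items():
        desirability = sum(w * v for w, v in zip(weights, vals))
        print(f"{method}: overall desirability {desirability:.2f}")
```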

Journal ArticleDOI
TL;DR: Results show that including the vulnerability MOE as a second criterion to complement the reliability MOE allows one to formulate component-hardening strategies, and the Pareto-efficient frontier generated by trading off these two MOEs is very small.
Abstract: As the information revolution continues, those who depend upon secure information-networks but cannot adequately protect them will become more vulnerable to tampering by an adversary. Prescriptive models used to recommend improvements to networks usually use reliability or flow as the Measure Of Effectiveness (MOE). Such measures will not give value to efforts that make a network component more difficult to exploit. Similarly, Risk Assessment Models (RAMs) are used to quantify the importance of a component to overall network performance (again measured in terms of reliability or flow) but do not prescribe improvement strategies. This study develops a prescriptive RAM that includes an MOE called invulnerability. This gives value to efforts that make a component more difficult to exploit. Results show that including the vulnerability MOE as a second criterion to complement the reliability MOE allows one to formulate component-hardening strategies. Furthermore, the Pareto-efficient frontier generated by trading off these two MOEs is very small. This helps to pinpoint specific components that should be improved or hardened for information security.
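
A minimal sketch of the two-MOE trade-off described above, not the paper's RAM: enumerate candidate hardening options, each scored on a reliability MOE and an invulnerability MOE, and keep the Pareto-efficient ones. All option data are invented.

```python
# Pareto filtering over two measures of effectiveness (MOEs).

options = {
    "harden router R1":  (0.95, 0.80),   # (reliability, invulnerability)
    "harden link L3":    (0.97, 0.60),
    "harden gateway G2": (0.93, 0.85),
    "do nothing":        (0.90, 0.50),
}

def dominated(a, b):
    """True if option b is at least as good as a on both MOEs and not identical."""
    return b[0] >= a[0] and b[1] >= a[1] and b != a

pareto = {name: moes for name, moes in options.items()
          if not any(dominated(moes, other) for other in options.values())}

print("Pareto-efficient hardening options:", sorted(pareto))
```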

Journal ArticleDOI
TL;DR: The authors describe how the growing reliance of the electric power industry on information technologies introduces a new class of cyber vulnerability and argue that the key to meeting that challenge successfully is to recognize the mutually supportive roles public and private sectors can play.
Abstract: The authors describe how the growing reliance of the electric power industry on information technologies introduces a new class of cyber vulnerability. The principal challenge is to determine how best to counter cyber threats posed by malicious elements, be they terrorists bent on destruction, vandals hacking their way into control or data exchange systems, or even commercial competitors, stealing their adversaries' data or sabotaging their operations. They argue that the key to meeting that challenge successfully is to recognize the mutually supportive roles public and private sectors can play.

Book ChapterDOI
30 Sep 1999
TL;DR: A structured approach to a limited risk analysis of an Internet connection is described, in order to assess the threats which will be encountered if the organisation decides to connect to the Internet, and to determine which measures are necessary to protect against the relevant threats.
Abstract: Many organisations use risk analysis to analyse the vulnerability of their information technology. However, the majority of existing risk analysis methods and tools cannot deal adequately with the variable complex of measures against Internet threats, which depend on Internet services rather than on installed equipment or information systems. This paper describes a structured approach to a limited risk analysis of an Internet connection, in order to assess the threats which will be encountered if the organisation decides to connect to the Internet, and to determine which measures are necessary to protect against the relevant threats. This is useful both in the design phase, for selecting a suitable set of security measures, and in the testing phase, to audit the adequacy of a chosen set of measures.
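
A minimal sketch of the service-driven idea, assuming invented service, threat, and countermeasure names: the relevant threats and measures follow from the Internet services the organisation plans to use rather than from installed equipment.

```python
# Map planned Internet services to threats, and threats to candidate measures.

service_threats = {
    "e-mail":       ["malware attachments", "spoofed senders"],
    "web browsing": ["malicious downloads", "information leakage"],
    "remote login": ["password guessing", "session hijacking"],
}

threat_measures = {
    "malware attachments": "content scanning at the mail gateway",
    "spoofed senders":     "sender authentication",
    "malicious downloads": "proxy with download filtering",
    "information leakage": "outbound content policy",
    "password guessing":   "strong authentication and lockout",
    "session hijacking":   "encrypted sessions",
}

planned_services = ["e-mail", "remote login"]
for service in planned_services:
    for threat in service_threats[service]:
        print(f"{service}: {threat} -> {threat_measures[threat]}")
```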

Journal Article
TL;DR: The theory and method described include two methods for computing the vulnerability measures of combat aircraft corresponding to multiple hit kills, with several examples given to show the calculation process and the efficiency of the methods.
Abstract: The theory and method described in this paper include two methods for computing the vulnerability measures of combat aircraft corresponding to multiple hit kills. Several examples are given to show the calculation process and the efficiency of the methods presented in this paper.
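
The paper's two methods are not reproduced here; as general background, one common way to express vulnerability to multiple hits assumes independent hits, each killing with a single-hit probability pk, so that P_kill(n) = 1 - (1 - pk)^n. The sketch below uses invented numbers.

```python
# Kill probability after n independent hits, each killing with pk_single.

def p_kill_multiple_hits(pk_single: float, n_hits: int) -> float:
    return 1.0 - (1.0 - pk_single) ** n_hits

for n in (1, 2, 5):
    print(n, "hits:", round(p_kill_multiple_hits(0.15, n), 3))
```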

Book ChapterDOI
01 Jan 1999
TL;DR: The objective of doing risk analysis in real time is to find a method through which dynamically to determine the vulnerability of, for example, a TCP/IP packet in terms of generic threat categories such as interception and fabrication.
Abstract: In current times, sending confidential data over the Internet is becoming more commonplace every day. The process of sending confidential data over the Internet is, however, concomitant with great effort: encryption algorithms have to be incorporated, and encryption key management and distribution have to take place. Wouldn’t it be easier, more secure and faster if only technology could be introduced to do risk analysis in real time? The objective of doing risk analysis in real time is to find a method through which dynamically to determine the vulnerability of, for example, a TCP/IP packet in terms of generic threat categories such as interception and fabrication. Once the vulnerability of the packet has been determined, the appropriate countermeasures can be activated to secure the packet before it is sent off to its original destination. The countermeasures are activated according to certain data that is found in and extracted from the TCP/IP packets. In order to be able to obtain this data, each TCP/IP packet flowing through a certain point in a network is intercepted and analysed.
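
A hedged sketch of the real-time idea described above, not the chapter's implementation: rate an intercepted packet's vulnerability per generic threat category from extracted fields, then activate matching countermeasures. Field names, thresholds, and countermeasures are invented.

```python
# Per-packet vulnerability rating and countermeasure selection.

def assess_packet(packet: dict) -> dict:
    """Return a vulnerability rating per generic threat category."""
    vulnerability = {"interception": "low", "fabrication": "low"}
    if packet["dst_port"] in {23, 80, 110}:      # cleartext protocols
        vulnerability["interception"] = "high"
    if not packet.get("checksum_ok", True):
        vulnerability["fabrication"] = "high"
    return vulnerability

def countermeasures(vulnerability: dict) -> list:
    actions = []
    if vulnerability["interception"] == "high":
        actions.append("encrypt payload before forwarding")
    if vulnerability["fabrication"] == "high":
        actions.append("attach integrity check or drop packet")
    return actions

packet = {"src": "10.0.0.5", "dst": "192.0.2.7", "dst_port": 23,
          "checksum_ok": True}
rating = assess_packet(packet)
print(rating, countermeasures(rating))
```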

Proceedings ArticleDOI
05 Oct 1999
TL;DR: This paper provides a project synopsis and research results of the FAA Research grant No 9550.7 A, titled "Air Cargo Security Access System", which explored a biometric smart card security access system that was operationally tested at Chicago's O'Hare International Airport.
Abstract: The increasing emphasis that has been placed over the last few years on building a global economy and creating seamless political boundaries requires governments to augment all aspects of aviation security. This is becoming increasingly important in the air cargo sector of aviation, given that truck-to-air cargo movements are growing at a rate of nearly ten per cent per year. The concern is exacerbated by the fact that nearly 60% of air cargo is transported on passenger planes. In responding to these factors, the Gore Commission on Aviation Security identified air cargo security as an area of vulnerability. Consequently, the Federal Aviation Administration is operationally testing a variety of technology applications designed to improve aviation security, decrease processing time, and reduce human error. This paper provides a project synopsis and research results of the FAA Research grant No 9550.7 A, titled "Air Cargo Security Access System". This grant explored a biometric smart card security access system that was operationally tested at Chicago's O'Hare International Airport. The information presented in this paper reflects the findings of the research grantees, ATAF, etc. The views and findings expressed do not represent an endorsement on the part of the FAA. The United States Government assumes no liability for the contents or use.

Journal ArticleDOI
TL;DR: Solid Modelling CAD techniques have been modified to develop techniques to perform the two main standard vulnerability assessments, namely the shotline and vulnerable area methods.
Abstract: A methodology has been developed to integrate the vulnerability discipline into the conceptual/preliminary design process of combat aircraft. An interactive and programmable solid modelling Computer Aided Design (CAD) system is used to generate a CAD solid model of the aircraft’s critical components. The aircraft’s components’ sizes and shapes are pre-defined by a conceptual/preliminary design synthesis computer model. A systematic Child-Parent assembly process is used to model the aircraft vulnerability, by defining the criticality degree of each component in the aircraft assembly. Solid Modelling CAD techniques have been modified to develop techniques to perform the two main standard vulnerability assessments, namely the shotline and vulnerable area methods.
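
As a hedged illustration of the shotline idea (not the paper's CAD-based implementation), a shotline can be reduced to the ordered list of components it intersects, with the kill probability accumulated from each component's conditional kill probability given a hit; the component data below are invented.

```python
# Shotline-style kill probability and a vulnerable-area contribution.

# Components intersected by one shotline, in penetration order:
# (name, probability of component kill given a hit, criticality weight)
shotline_components = [
    ("fuel tank wall", 0.10, 0.6),
    ("hydraulic line", 0.30, 0.4),
    ("flight computer", 0.70, 1.0),
]

def shotline_kill_probability(components):
    """P(aircraft kill) assuming independent component contributions."""
    p_survive = 1.0
    for _, pk_given_hit, criticality in components:
        p_survive *= 1.0 - pk_given_hit * criticality
    return 1.0 - p_survive

pk = shotline_kill_probability(shotline_components)
print(f"Kill probability along this shotline: {pk:.3f}")

# Vulnerable-area style estimate for the presented area covered by this shotline.
presented_area_m2 = 0.25
print(f"Vulnerable area contribution: {presented_area_m2 * pk:.4f} m^2")
```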

21 Jul 1999
TL;DR: In this paper, the authors survey formal threat- and risk-assessment methods, present an overview of all potential types of sabotage at nuclear power plants, and discuss potential consequences of sabotage acts, including economic and political consequences, not just those that may result in unacceptable radiological exposure to the public.
Abstract: Recently there has been a noted worldwide increase in violent actions including attempted sabotage at nuclear power plants. Several organizations, such as the International Atomic Energy Agency and the US Nuclear Regulatory Commission, have guidelines, recommendations, and formal threat- and risk-assessment processes for the protection of nuclear assets. Other examples are the former Defense Special Weapons Agency, which used a risk-assessment model to evaluate force-protection security requirements for terrorist incidents at DOD military bases. The US DOE uses a graded approach to protect its assets based on risk and vulnerability assessments. The Federal Aviation Administration and Federal Bureau of Investigation conduct joint threat and vulnerability assessments on high-risk US airports. Several private companies under contract to government agencies use formal risk-assessment models and methods to identify security requirements. The purpose of this paper is to survey these methods and present an overview of all potential types of sabotage at nuclear power plants. The paper discusses emerging threats and current methods of choice for sabotage, especially vehicle bombs and chemical attacks. Potential consequences of sabotage acts, including economic and political consequences and not just those that may result in unacceptable radiological exposure to the public, are also discussed. The applicability of risk-assessment methods and mitigation techniques is also presented.

Proceedings ArticleDOI
20 Dec 1999
TL;DR: This paper suggests that, to reduce system vulnerability in open, untrusted networks, network services should provide a second line of defense to catch those attackers who are not excluded by the first line, the conventional signon process.
Abstract: In this paper we identify a number of security problems encountered in open, untrusted networks and motivate why some of these problems are going to remain with us for the foreseeable future. In order to reduce system vulnerability in such environments, we suggest that network services should provide a second line of defense to catch those attackers who are not excluded by the first line, the conventional signon process. Part of this fallback position could adapt anomaly detection (a concept borrowed from conventional network intrusion detection systems) to provide a means of gradually and continuously authenticating users and modulating their access rights accordingly.
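
A minimal sketch of the "second line of defence" idea under invented events and thresholds: an anomaly score per session rises on suspicious behaviour and decays otherwise, and the user's access rights are modulated as the score changes.

```python
# Continuous authentication: anomaly score drives the granted access level.

def update_score(score: float, event: str) -> float:
    """Raise the anomaly score for suspicious events, decay it otherwise."""
    suspicious = {"failed_sudo": 0.3, "unusual_hours": 0.2, "bulk_download": 0.4}
    return min(1.0, score + suspicious.get(event, -0.05))

def access_level(score: float) -> str:
    if score < 0.3:
        return "full access"
    if score < 0.7:
        return "read-only, re-authentication requested"
    return "session suspended"

score = 0.0
for event in ["login", "unusual_hours", "bulk_download", "failed_sudo"]:
    score = max(0.0, update_score(score, event))
    print(f"{event:14s} score={score:.2f} -> {access_level(score)}")
```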

Proceedings ArticleDOI
01 Jan 1999
TL;DR: Preliminary work in the modelling and simulation of these C3I networks to analyse their vulnerability to different forms of attack is reported, using a discrete event model to simulate the complex traffic flow across the network.
Abstract: The increasing use of distributed electronic networks to enhance C3I capability raises the issue of vulnerability to network attack and its impact on C3I. This paper reports preliminary work in the modelling and simulation of these C3I networks to analyse their vulnerability to different forms of attack. A discrete event model is used to simulate the complex traffic flow across the network. Standard measures of network performance are used, such as capacity, throughput, loss probability and delay, as well as the impact on C3I systems themselves. The vulnerability of the C3I system is expressed in terms of the degradations in network and C3I performance when exposed to different attack scenarios resulting in loss of system components.
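
As a hedged, much-simplified illustration of the simulation approach (not the paper's C3I model), a single relay node can be simulated as a FIFO queue and its mean delay and throughput compared between a baseline and an attack scenario that degrades service capacity; the rates below are invented.

```python
import random

# Toy single-server FIFO queue: departure_n = max(arrival_n, departure_{n-1}) + service_n.

def simulate(arrival_rate, service_rate, n_messages=5000, seed=1):
    rng = random.Random(seed)
    t = 0.0
    server_free_at = 0.0
    delays = []
    for _ in range(n_messages):
        t += rng.expovariate(arrival_rate)          # next arrival time
        start = max(t, server_free_at)
        service = rng.expovariate(service_rate)
        server_free_at = start + service            # departure time
        delays.append(server_free_at - t)           # queueing + service delay
    throughput = n_messages / server_free_at
    return sum(delays) / len(delays), throughput

for label, mu in [("baseline", 12.0), ("node degraded by attack", 7.0)]:
    delay, thr = simulate(arrival_rate=6.0, service_rate=mu)
    print(f"{label}: mean delay {delay:.3f}, throughput {thr:.2f} msg/s")
```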

01 Jan 1999
TL;DR: MEVA-GF is a graphical user interface (GUI) based program that provides an architecture for assembling an assessment model or simulation in modular fashion; it may be used to configure a fast-running stochastic model for Monte-Carlo type calculations that require hundreds of runs for statistical accuracy.
Abstract: The Modular Effectiveness Vulnerability Assessment - Ground Fixed (MEVA-GF) is an engineering tool for assessing the vulnerability of fixed ground targets to conventional weapon attack. MEVA-GF is a graphical user interface (GUI) based program that provides an architecture for assembling an assessment model or simulation in modular fashion. Individual modules representing the weapon, target, weapon delivery, penetration, blast, fragmentation, etc. are linked together using a data flow paradigm that creates the assessment network. The modularity inherent in the architecture provides the user flexibility in the design of networks by offering modules with varying levels of fidelity. MEVA-GF may be used to configure a fast running stochastic model for Monte-Carlo type calculations that require hundreds of runs for statistical accuracy. Higher fidelity models may be constructed for more deterministic type studies where longer run times are not a consideration and more precision is desired. Critical components within targets may be modeled and assigned to fault trees, providing a means for assessing functional damage to a target. The target's response (i.e., damage) to the weapon effects (i.e., penetration, blast, fragmentation) is output into data files and can be visualized using a three-dimensional graphical representation.
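
A hedged sketch of the fault-tree step in such a modular assessment, not MEVA-GF itself: invented component damage probabilities feed an invented fault tree, and Monte Carlo sampling estimates the probability of functional kill.

```python
import random

# Monte Carlo over component damage states, combined through a small fault tree.

p_damage = {"power unit": 0.40, "backup power": 0.25,
            "radar": 0.55, "comms": 0.30}

def functional_kill(damaged: dict) -> bool:
    """Invented tree: kill if the radar is lost, or both power sources are lost."""
    power_lost = damaged["power unit"] and damaged["backup power"]
    return damaged["radar"] or power_lost

def monte_carlo(runs=20000, seed=7):
    rng = random.Random(seed)
    kills = 0
    for _ in range(runs):
        damaged = {c: rng.random() < p for c, p in p_damage.items()}
        kills += functional_kill(damaged)
    return kills / runs

print(f"Estimated probability of functional kill: {monte_carlo():.3f}")
```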

Stephen F. Bush
01 Jan 1999
TL;DR: This white paper proposes incorporating methods and algorithms into an existing prototype tool for using vulnerability information collected from an actual network and simulating the results of an attack so that command and control strategies can be studied.
Abstract: This precis describes a tool for quantifying the vulnerability of a communications network. It is important that information warfare studies include both offensive and defensive strategies in an integrated manner, since neither can be studied in isolation. It is assumed that an attacker has a finite amount of resources with which to discover faults in the network security of a data communications network and that each fault discovery consumes the attacker's resources. Network security actions may be taken to increase security in strategic areas of the network and to actively deter an attack. Reactions such as these by network security in response to an attack have both a monetary cost and a cost in terms of reduction of network resources and degradation of services to network consumers. An optimal course of action by network security in response to an attack is to minimize network access to an attacker while also minimizing the impact to legitimate network consumers. This requires precise assessment of network security vulnerability and quantification of effects on network consumers by actions taken by network security in response to an attack. This white paper proposes incorporating methods and algorithms into an existing prototype tool that we have developed for using vulnerability information collected from an actual network and simulating the results of an attack so that command and control strategies can be studied.
Keywords: Information Warfare Strategy and Control, Network Security, Vulnerability Analysis.
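
As a hedged illustration of the defender's trade-off described above (not the prototype tool), each candidate response can be scored by the attacker access it leaves plus a weighted penalty for the degradation it imposes on legitimate consumers; all actions and numbers are invented.

```python
# Pick the response minimising attacker access plus weighted consumer impact.

# (action, attacker access remaining 0-1, consumer service degradation 0-1)
responses = [
    ("do nothing",            0.90, 0.00),
    ("block external subnet", 0.40, 0.15),
    ("disable file service",  0.20, 0.45),
    ("isolate attacked host", 0.30, 0.05),
]

weight_on_consumers = 1.0   # relative importance of consumer impact

def combined_cost(access, degradation):
    return access + weight_on_consumers * degradation

best = min(responses, key=lambda r: combined_cost(r[1], r[2]))
for name, access, degr in responses:
    print(f"{name:22s} cost={combined_cost(access, degr):.2f}")
print("Chosen response:", best[0])
```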

01 Sep 1999
TL;DR: The research presented here demonstrates that independent detection agents can be run in a distributed fashion, each operating mostly independent of the others, yet cooperating and communicating to provide a truly distributed detection mechanism without a single point of failure.
Abstract: Because computer security in today's networks is one of the fastest expanding areas of the computer industry, protecting resources from intruders is an arduous task that must be automated to be efficient and responsive. Most intrusion-detection systems currently rely on some type of centralized processing to analyze the data necessary to detect an intruder in real time. A centralized approach can be vulnerable to attack. If an intruder can disable the central detection system, then most, if not all, protection is subverted. The research presented here demonstrates that independent detection agents can be run in a distributed fashion, each operating mostly independently of the others, yet cooperating and communicating to provide a truly distributed detection mechanism without a single point of failure. The agents can run along with user and system software without noticeable consumption of system resources, and without generating an overwhelming amount of network traffic during an attack.

ReportDOI
07 Apr 1999
TL;DR: In this paper, the authors present an assessment of America's interests, threats, and requirements in this emerging world order, arguing that the likely near-term threats to our security will avoid America's military strengths and be directed toward more accessible targets: our national resolve and economy.
Abstract: Accompanied by a new play of forces and dynamics, the age of geopolitics is giving way to the age of geoeconomics. Within our national security apparatus a strong tendency still exists to view foreign and domestic problems from a nineteenth century perspective. America's predominant leadership role, national resolve and power are being tested more frequently in a world free of the bipolar constraints of the Cold War. To obtain the desired synergistic relationship among economic, diplomatic, and military elements of power our National Security Strategy must conduct an unambiguous assessment of our interests, threats, and requirements in this emerging world order. The likely near term threats to our security will avoid America's military strengths and be directed toward the more accessible targets, our national resolve and economy. An asymmetric strike against our critical infrastructures seems the most likely means of attack. Electric power, telecommunications and transportation are among those systems whose incapacity or destruction would have a debilitating impact on the defense and economic security of our nation. In recognition of America's dependency and vulnerability, the Department of Defense should be brought center stage in a role of Homeland Defense to protect our national infrastructures.

Journal Article
TL;DR: In this article, the authors describe what is involved in preparing for an initial audit that assesses a hospital's vulnerability, include a sample security audit report, and explain how the audit's findings can guide and direct the full-fledged security survey process.
Abstract: Prior to conducting a full-fledged security survey in order to meet requirements of JCAHO and other agencies, it is essential to conduct an initial audit that will assess your hospital's vulnerability. The information from this audit can guide and direct the survey process. This article describes what is involved in preparing for the initial audit and includes a sample security audit report.

Journal Article
TL;DR: In this article, the authors propose a six-step program to protect the nation's buildings and infrastructure from terrorist chemical and biological attack, which involves the following: identify the potential chemical and biological agents; assess the characteristics of potential targets; deny attackers access to vulnerable areas; deter attackers by surveillance devices, closed-circuit televisions, and alarms; detect the attempted intrusion of chemical/biological agents by developing broad spectrum, multiple agent, and toxic material detectors; and design the building or infrastructure to defend its occupants.
Abstract: This paper proposes a six-step program to protect the nation's buildings and infrastructure from terrorist chemical and biological attack. It involves the following: identify the potential chemical and biological agents; assess the characteristics of potential targets; deny attackers access to vulnerable areas; deter attackers by surveillance devices, closed-circuit televisions, and alarms; detect the attempted intrusion of chemical/biological agents by developing broad spectrum, multiple agent, and toxic material detectors; and design the building or infrastructure to defend its occupants.

Proceedings ArticleDOI
28 Jun 1999
TL;DR: This work uses a compositional framework to model security architectures involving heterogeneous and distributed security functions and proposes constraints that security functions should guarantee in order to interact consistently and securely with other functions.
Abstract: We use a compositional framework to model security architectures involving heterogeneous and distributed security functions. Our goal is to assist the ITSEC evaluation of suitability, binding, and vulnerability of a set of security functions. We propose constraints that security functions should guarantee in order to interact consistently and securely with other functions. To illustrate these notions we study the interactions of various components of a secure LAN.

ReportDOI
01 Oct 1999
TL;DR: A small effort was conducted at Sandia National Laboratories to explore the use of a number of modern analytic technologies in the assessment of terrorist actions and the prediction of trends; the work focuses on Bayesian networks as a means of capturing correlations between groups, tactics, and targets.
Abstract: A small effort was conducted at Sandia National Laboratories to explore the use of a number of modern analytic technologies in the assessment of terrorist actions and to predict trends. This work focuses on Bayesian networks as a means of capturing correlations between groups, tactics, and targets. The data that was used as a test of the methodology was obtained by using a special parsing algorithm written in Java to create records in a database from information articles captured electronically. As a vulnerability assessment technique, the approach proved very useful. The technology also proved to be a valuable development medium because of the ability to integrate blocks of information into a deployed network rather than waiting to fully deploy only after all relevant information has been assembled.
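
A minimal sketch of the correlation idea, far simpler than a full Bayesian network: conditional distributions of tactic and target given group are estimated by counting parsed incident records. Groups, tactics, targets, and records are invented.

```python
from collections import Counter, defaultdict

# Estimate P(tactic | group) and P(target | group) by relative frequency.

records = [
    {"group": "Group A", "tactic": "vehicle bomb", "target": "embassy"},
    {"group": "Group A", "tactic": "vehicle bomb", "target": "pipeline"},
    {"group": "Group A", "tactic": "kidnapping",   "target": "personnel"},
    {"group": "Group B", "tactic": "sabotage",     "target": "power grid"},
    {"group": "Group B", "tactic": "sabotage",     "target": "pipeline"},
]

def conditional(records, given, of):
    """P(of | given) estimated from counts of the parsed incident records."""
    counts = defaultdict(Counter)
    for r in records:
        counts[r[given]][r[of]] += 1
    return {g: {v: n / sum(c.values()) for v, n in c.items()}
            for g, c in counts.items()}

print("P(tactic | group):", conditional(records, "group", "tactic"))
print("P(target | group):", conditional(records, "group", "target"))
```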

Journal ArticleDOI
TL;DR: A probabilistic model for initiation of an accident involving a potentially dangerous object by a missile attack is developed, and a formulation is given of the problem of optimizing protection with constraints on the mass of the protection, taking account of differences in the probabilities of effects and the vulnerability of the object with respect to the directions of fire.
Abstract: A probabilistic model for initiation of an accident involving a potentially dangerous object by a missile attack is developed. A formulation of the problem of optimization of protection with constraints on the mass of the protection, taking account of differences in the probabilities of effects and the vulnerability of an object with respect to the directions of fire, is given. Examples of solutions of the problem by the linear-programming method are presented for means of transportation and for transport containers.
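
A hedged sketch of the optimisation, assuming (unlike the paper, whose exact model is not reproduced here) that the initiation probability for each firing direction falls linearly with the protection mass allocated to it, so that the allocation reduces to a linear program; attack weights, probabilities, and effectiveness values are invented, and scipy.optimize.linprog is used as the solver.

```python
import numpy as np
from scipy.optimize import linprog

# Allocate a limited protection mass across firing directions to minimise the
# expected accident-initiation probability, assumed linear in allocated mass.

attack_prob   = np.array([0.5, 0.3, 0.2])     # probability of fire per direction
base_p_init   = np.array([0.40, 0.30, 0.20])  # initiation probability, unprotected
effectiveness = np.array([0.02, 0.03, 0.01])  # probability reduction per kg
mass_budget   = 20.0                          # kg of protection available

# Minimise sum_i w_i * (p_i - k_i * m_i)  <=>  minimise -sum_i (w_i * k_i) * m_i,
# subject to sum_i m_i <= mass_budget and 0 <= m_i <= p_i / k_i.
c = -(attack_prob * effectiveness)
A_ub = [np.ones_like(c)]
b_ub = [mass_budget]
bounds = [(0.0, p / k) for p, k in zip(base_p_init, effectiveness)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
masses = res.x
expected_p = float(attack_prob @ (base_p_init - effectiveness * masses))
print("Protection mass per direction (kg):", np.round(masses, 2))
print(f"Expected initiation probability: {expected_p:.3f}")
```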