
Showing papers presented at "Computational Science and Engineering in 1996"


Journal ArticleDOI
01 Mar 1996
TL;DR: The article discusses the motivations behind the development of ANNs, describes the basic biological neuron and the artificial computational model, outlines network architectures and learning processes, and presents some of the most commonly used ANN models.
Abstract: Artificial neural nets (ANNs) are massively parallel systems with large numbers of interconnected simple processors. The article discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model. It outlines network architectures and learning processes, and presents some of the most commonly used ANN models. It concludes with character recognition, a successful ANN application.

4,281 citations


Journal ArticleDOI
01 Sep 1996
TL;DR: This paper considers how Adifor 2.0, which won the 1995 Wilkinson Prize for Numerical Software, can automatically differentiate complicated Fortran code much faster than a programmer can do it by hand.
Abstract: Numerical codes that calculate not only a result, but also the derivatives of the variables with respect to each other, facilitate sensitivity analysis, inverse problem solving, and optimization. The paper considers how Adifor 2.0, which won the 1995 Wilkinson Prize for Numerical Software, can automatically differentiate complicated Fortran code much faster than a programmer can do it by hand. The Adifor system has three main components: the Adifor preprocessor, the ADIntrinsics exception-handling system, and the SparsLinC library.
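
Automatic differentiation as Adifor performs it is a source transformation of Fortran, but the underlying chain-rule bookkeeping is easy to see in miniature. Below is a hedged sketch of forward-mode AD using dual numbers in Python; the Dual class is unrelated to Adifor's machinery and only illustrates the propagation rule such tools automate:

```python
import math

class Dual:
    """Carry a value and its derivative together through a computation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def dsin(x):
    # chain rule for an intrinsic, roughly the role ADIntrinsics
    # plays for Fortran's built-in functions
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

x = Dual(2.0, 1.0)       # seed dx/dx = 1
y = x * dsin(x)          # f(x) = x*sin(x)
print(y.val, y.der)      # f(2) and f'(2) = sin(2) + 2*cos(2)
```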

397 citations


Journal ArticleDOI
01 Mar 1996
TL;DR: This algorithm uses symbolic rules to represent the network decision process and extracts rules from a neural network with discretized hidden unit activation values to preserve network accuracy and explain the prediction process.
Abstract: Neural networks often surpass decision trees in predicting pattern classifications, but their predictions cannot be explained. This algorithm's symbolic representations make each prediction explicit and understandable. Our approach to understanding a neural network uses symbolic rules to represent the network decision process. The algorithm, NeuroRule, extracts these rules from a neural network. The network can be interpreted by the rules which, in general, preserve network accuracy and explain the prediction process. We based NeuroRule on a standard three layer feed forward network. NeuroRule consists of four phases. First, it builds a weight decay backpropagation network so that weights reflect the importance of the network's connections. Second, it prunes the network to remove irrelevant connections and units while maintaining the network's predictive accuracy. Third, it discretizes the hidden unit activation values by clustering. Finally, it extracts rules from the network with discretized hidden unit activation values.
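
As a concrete illustration of the third phase, here is a minimal sketch of discretizing hidden-unit activations by one-dimensional clustering. The greedy epsilon merging below is an illustrative stand-in chosen for brevity, not the paper's exact clustering procedure:

```python
def discretize(activations, eps=0.1):
    """Group sorted activation values whose gaps are <= eps; return cluster means."""
    clusters = []
    for a in sorted(activations):
        if not clusters or a - clusters[-1][-1] > eps:
            clusters.append([a])       # start a new cluster
        else:
            clusters[-1].append(a)     # absorb into the current cluster
    # rules are then extracted over these few discrete values
    # instead of raw continuous activations
    return [sum(c) / len(c) for c in clusters]

acts = [0.02, 0.05, 0.48, 0.51, 0.97, 0.99]   # hypothetical hidden-unit outputs
print(discretize(acts))                        # -> [0.035, 0.495, 0.98]
```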

209 citations


Journal ArticleDOI
01 Mar 1996
TL;DR: A novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), is proposed, and its superior performance on neural network learning problems is demonstrated.
Abstract: We propose a novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), and demonstrate its superior performance on neural network learning problems. The goal is improved learning of application problems that achieves either smaller networks or less error-prone networks of the same size. This training method combines global and local searches to find a good local minimum. In benchmark comparisons against the best global optimization algorithms, it demonstrates superior performance.
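
NOVEL's specific trajectory-based mechanism is not spelled out in the abstract, but the division of labor it describes (a global search to locate promising regions, a local search to refine them) can be sketched generically. The random-restart descent below is a plain stand-in for that idea, not NOVEL itself:

```python
import random

def local_descent(grad, x, lr=0.01, steps=500):
    # local phase: plain gradient descent from a given start point
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

def global_then_local(f, grad, dim, restarts=20):
    # global phase: sample start points widely, keep the best refined result
    best = None
    for _ in range(restarts):
        x = local_descent(grad, [random.uniform(-2, 2) for _ in range(dim)])
        if best is None or f(x) < f(best):
            best = x
    return best

# toy multimodal objective f(x) = x^4 - 3x^2 + x with two local minima
f = lambda x: x[0]**4 - 3*x[0]**2 + x[0]
grad = lambda x: [4*x[0]**3 - 6*x[0] + 1]
print(global_then_local(f, grad, dim=1))  # tends to land near the deeper minimum, x ~ -1.3
```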

187 citations


Journal ArticleDOI
01 Sep 1996
TL;DR: In this article, the authors present some background on software libraries and problem-solving environments and discuss the long path to this vision of scientific software's future, and the roadblocks in the way.
Abstract: As more scientists and engineers adopt computation as a primary tool, they will want more problem-solving help from easy-to-use, comprehensive software systems. A workshop discussed the long path to this vision of scientific software's future, and the roadblocks in the way. In order to understand the findings of the workshop, the paper presents some background on software libraries and problem solving environments.

138 citations


Journal ArticleDOI
01 Mar 1996
TL;DR: A prototype full custom vector microprocessor, T0, is packaged as the Spert-II (Synthetic Perceptron Testbed II) workstation accelerator system, to accelerate multiparameter neural network training for speech recognition research.
Abstract: The Spert-II fixed point vector microprocessor system performs training and recall faster than commercial workstations for neural networks used in speech recognition research. We have packaged a prototype full custom vector microprocessor, T0, as the Spert-II (Synthetic Perceptron Testbed II) workstation accelerator system. We originally developed Spert-II to accelerate multiparameter neural network training for speech recognition research. Our speech research algorithms constantly change. Also, neural nets are often integrated with other tasks to form complete applications. We thus desired a general purpose, easily programmable accelerator that could speed up a range of tasks.

103 citations


Journal ArticleDOI
01 Mar 1996
TL;DR: Lucas (Land-Use Change Analysis System), presented in this paper, models and predicts land-use change; building it in the C++ object-oriented language allowed a flexible, modular approach.
Abstract: Changes in land use depend on both natural processes and human behavior, complicating the job of natural resource managers. Building Lucas (Land-Use Change Analysis System) to model and predict land-use change required expertise from ecologists, economists, sociologists and computer scientists. The C++ object-oriented language allowed a flexible, modular approach.

93 citations


Journal ArticleDOI
01 Dec 1996
TL;DR: SMD, a system for interactively steering molecular dynamics calculations of protein molecules, includes computation, visualization, and communication components and provides a new tool for biochemists to use in exploring the structure of proposed designs.
Abstract: SMD, a system for interactively steering molecular dynamics calculations of protein molecules, includes computation, visualization, and communication components. Biochemists can "tug" molecules into different shapes by specifying external forces in the graphical interface, which are added to internal forces representing atomic bonds and nonbonded interactions. SMD provides a new tool for biochemists to use in exploring the structure of proposed designs, as well as in more general applications such as exploring the molecular dynamics model itself. Its primary use is in modeling single large biomolecules in a bath of water acting as the solvent.
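
The "tug" mechanism reduces to one line in the integrator: the user's force is added to the physical forces at each step. A minimal sketch under that assumption, with a toy harmonic internal force standing in for SMD's real molecular mechanics:

```python
def md_step(pos, vel, internal_force, tug, mass=1.0, dt=0.01):
    # steering: the external user force is simply summed with internal forces
    total = [fi + ti for fi, ti in zip(internal_force(pos), tug)]
    vel = [v + dt * f / mass for v, f in zip(vel, total)]
    pos = [p + dt * v for p, v in zip(pos, vel)]
    return pos, vel

spring = lambda pos: [-5.0 * p for p in pos]   # toy bond-like restoring force

pos, vel = [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]
tug = [0.0, 2.0, 0.0]                          # the user drags the atom in +y
for _ in range(100):
    pos, vel = md_step(pos, vel, spring, tug)
print(pos)                                     # displaced toward +y by the tug
```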

85 citations


Journal ArticleDOI
01 Mar 1996
TL;DR: Simple rules of thumb remain the most common estimating approach, but most of these tools are far superior to manual estimating methods in both ease of use and repeatability, and many are also more accurate.
Abstract: ccurate software estimating is too difficult for simple rules of thumb. Yet in spite of their inadequacy and the availability of more than 50 commercial software-estimating tools-simple rules of thumb remain the most common estimating approach. Many companies develop and market software cost and quality estimating tools. Including my own firm's two proprietary tools, there are at least 50 commercial software-estimating tools on the market in the United States, and the worldwide total exceeds 75. Most of these tools are far superior to manual estimating methods in both ease of use and repeatability, and many are also more accurate.

85 citations


Journal ArticleDOI
01 Jun 1996
TL;DR: The article explores the possibility of rapidly designing an appropriate neural net (NN) for time series prediction based on information obtained from stochastic modeling, and possibly initial values for the NN parameters, according to the most adequate linear model.
Abstract: The article explores the possibility of rapidly designing an appropriate neural net (NN) for time series prediction based on information obtained from stochastic modeling. Such an analysis could provide some initial knowledge regarding the choice of an NN architecture and parameters, as well as regarding an appropriate data sampling rate. Stochastic analysis provides a complementary approach to previously proposed dynamical system analysis for NN design. Based on F. Takens's theorem (1981), an estimate of the dimension m of the manifold from which the time series originated can be used to construct an NN model using 2m+1 external inputs. This design is further extended by M.A.S. Potts and D.S. Broomhead (1991) who first embed the state space of a discrete time dynamical system in a manifold of dimension n>>2m+1, which is further projected to its 2m+1 principal components used as external inputs in a radial basis function NN model for time series prediction. Our approach is to perform an initial stochastic analysis of the data and to choose an appropriate NN architecture, and possibly initial values for the NN parameters, according to the most adequate linear model.
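
The 2m+1 input rule is easy to make concrete: given an estimate of m, a delay embedding turns the series into training pairs for the network. The embedding itself is standard; choosing m (and the sampling rate) is where the article's stochastic analysis enters:

```python
def delay_embed(series, m, tau=1):
    """Build (inputs, targets): windows of 2m+1 lagged values -> next value."""
    d = 2 * m + 1                       # number of external NN inputs
    X, y = [], []
    for t in range(d * tau, len(series)):
        X.append([series[t - k * tau] for k in range(1, d + 1)])
        y.append(series[t])
    return X, y

series = [0.1 * t for t in range(20)]   # placeholder time series
X, y = delay_embed(series, m=1)         # m = 1 -> 3 external inputs
print(X[0], y[0])                       # [0.2, 0.1, 0.0] -> 0.3 (approx.)
```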

76 citations


Journal ArticleDOI
01 Sep 1996
TL;DR: Camel, an interactive parallel programming environment based on cellular automata, lets users develop high-performance applications in science and engineering; examples in geology, traffic planning, image processing, and genetic algorithms show its usefulness.
Abstract: The paper discusses Camel, an interactive parallel programming environment based on cellular automata. With Camel users can develop high-performance applications in science and engineering. Examples in geology, traffic planning, image processing, and genetic algorithms show its usefulness.
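
The cellular-automata model underneath Camel is what makes such applications parallelize naturally: every cell updates from its neighborhood by the same local rule. The 1D toy below illustrates that model only; it is not Camel's actual programming interface:

```python
def step(cells, rule):
    n = len(cells)
    # each cell sees (left, self, right); toroidal boundary conditions
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

majority = lambda l, c, r: 1 if l + c + r >= 2 else 0   # a simple local rule

state = [0, 1, 0, 1, 1, 0, 0, 1]
for _ in range(4):
    state = step(state, majority)
print(state)   # noise smooths into homogeneous blocks
```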

Journal ArticleDOI
01 Mar 1996
TL;DR: Two approaches are explored: parallel simulation on general purpose computers, and simulation/emulation on neurohardware, which tries to imitate the neuroparadigm using the best of silicon technology.
Abstract: Parallelization is necessary to cope with the high computational and communication demands of neuroapplications, but general purpose parallel machines soon reach performance limitations. The article explores two approaches: parallel simulation on general purpose computers, and simulation/emulation on neurohardware. Different parallelization methods are discussed, and the most popular techniques are explained. While the software approach looks for an optimal programming model for neural processing, the hardware approach tries to imitate the neuroparadigm using the best of silicon technology.

Journal ArticleDOI
01 Jun 1996
TL;DR: Practical, basic rules of thumb that can help you predict if parallelism might be worthwhile, given your application and the effort you want to invest are offered.
Abstract: This article offers practical, basic rules of thumb that can help you predict if parallelism might be worthwhile, given your application and the effort you want to invest. The techniques presented for estimating likely performance gains are drawn from the experiences of hundreds of computational scientists and engineers at national labs, universities, and research facilities. The information is more anecdotal than experimental, but it reflects the very real problems that must be overcome if parallel programming is to yield useful benefits.
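
One classic back-of-envelope estimate in this spirit is Amdahl's law: if a fraction p of the runtime parallelizes, n processors give at best a speedup of 1/((1-p) + p/n). (The article's own rules of thumb are broader; this only shows the flavor of calculation involved.)

```python
def amdahl_speedup(p, n):
    # p: parallelizable fraction of runtime, n: number of processors
    return 1.0 / ((1.0 - p) + p / n)

for n in (4, 16, 64):
    print(n, round(amdahl_speedup(0.90, n), 2))
# 90%-parallel code: 3.08x on 4 CPUs, 6.4x on 16, only 8.77x on 64;
# the serial 10% soon dominates, so adding processors stops paying off
```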

Journal ArticleDOI
G. Cats, L. Wolters1
01 Dec 1996
TL;DR: The article outlines the analysis unit, forecast model, and postprocessor of the Hirlam system, as well as its implementation on massively parallel processing systems.
Abstract: In 1985 the national meteorological institutes of Denmark, Finland, Iceland, The Netherlands, Norway, and Sweden joined forces to develop and maintain a numerical short range weather forecasting system. Later Ireland and Spain joined the project, and the Hirlam (high resolution limited area modeling) project is now in its third phase. Version 2 of the Hirlam system, built during the second phase of the project, is now used in routine weather forecasting at almost all participating institutes. Modern numerical weather forecasting systems have three basic components: an analysis unit, a forecast model, and a postprocessor. Analysis provides the initial conditions for the model, based on recent observations and other sources of information. The forecast model spatially and temporally discretizes and then integrates the classical equations-Newton's second law, mass conservation, and thermodynamics. In the postprocessing step, the relevant weather phenomena (for example, wind speed at 10 m height) are calculated from the model variables. The article outlines the analysis unit, forecast model, and postprocessor of the Hirlam system, as well as its implementation on massively parallel processing systems.

Journal ArticleDOI
01 Mar 1996
TL;DR: The SPEC/HPG benchmark suite, as discussed in this paper, is the first suite of codes representative of real-world high-performance computing (HPC) applications released by SPEC/HPG (the Standard Performance Evaluation Corporation's High-Performance Group).
Abstract: Users need benchmarks that are representative of real-world high-performance computing (HPC) applications in order to make good performance comparisons across platforms on a level playing field. This information can only enhance the usefulness of new HPC technology in the marketplace. SPEC/HPG (Standard Performance Evaluation Corporation/High-Performance Group) has released its first suite of codes and welcomes feedback.

Journal ArticleDOI
01 Mar 1996
TL;DR: Parameters of certain neural net training algorithms and classification procedures are often chosen or adapted using heuristics that contain fuzzy descriptors, quantified into a fuzzy inference engine, which can take the human out of the loop and provide for faster convergence or improved performance.
Abstract: Parameters of certain neural net training algorithms and classification procedures are often chosen or adapted using heuristics that contain fuzzy descriptors. Such heuristics, quantified into a fuzzy inference engine, can take the human out of the loop and provide for faster convergence or improved performance. Other applications, outside of neural nets, are also possible.
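
A hypothetical sketch of the idea: a heuristic such as "if the error is dropping steadily, raise the learning rate; if it is worsening, cut it sharply" can be quantified with membership grades and defuzzified into a multiplier. The rule set and constants below are invented for illustration, not taken from the paper:

```python
def fuzzy_lr_update(lr, err_prev, err_now):
    change = (err_prev - err_now) / max(err_prev, 1e-12)
    # membership grades for the fuzzy descriptors "improving" / "worsening"
    improving = min(max(change / 0.05, 0.0), 1.0)    # full grade at a 5% drop
    worsening = min(max(-change / 0.05, 0.0), 1.0)
    # defuzzify: blend the "grow lr" and "shrink lr" actions by their grades
    return lr * (1.0 + 0.1 * improving - 0.5 * worsening)

lr = 0.1
print(fuzzy_lr_update(lr, err_prev=1.00, err_now=0.93))  # improving -> lr grows
print(fuzzy_lr_update(lr, err_prev=1.00, err_now=1.08))  # worsening -> lr shrinks
```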

Journal ArticleDOI
Craig C. Douglas1
01 Dec 1996
TL;DR: By combining computation from several scales of mesh fineness, multigrid and multilevel methods can improve speed and accuracy in a wide variety of science and engineering applications.
Abstract: By combining computation from several scales of mesh fineness, multigrid and multilevel methods can improve speed and accuracy in a wide variety of science and engineering applications. The article sketches the history of the techniques, explains the basics, and gives pointers to the literature and current research.
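
The core mechanism is easiest to see in a two-grid cycle for the 1D Poisson problem -u'' = f: smooth the high-frequency error on the fine mesh, solve for the remaining smooth error on a coarser mesh, interpolate the correction back, and smooth again. A full V-cycle recurses on the coarse solve. The sketch below is a textbook illustration, not tuned production code:

```python
def residual(u, f, h):
    n = len(u)
    return [0.0] + [f[i] - (2*u[i] - u[i-1] - u[i+1]) / h**2
                    for i in range(1, n - 1)] + [0.0]

def jacobi(u, f, h, sweeps, w=2/3):
    # weighted Jacobi: a smoother that damps high-frequency error quickly
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [ui + w * (h**2 / 2) * ri for ui, ri in zip(u, r)]
    return u

def two_grid(u, f, h, nu=3):
    u = jacobi(u, f, h, nu)                      # pre-smooth
    r = residual(u, f, h)
    rc = [r[2*i] for i in range((len(u)+1)//2)]  # restrict residual to coarse grid
    ec = jacobi([0.0]*len(rc), rc, 2*h, 50)      # approximately solve coarse error eq.
    e = [0.0]*len(u)                             # prolong error by linear interpolation
    for i in range(len(rc)):
        e[2*i] = ec[i]
    for i in range(1, len(u) - 1, 2):
        e[i] = 0.5 * (e[i-1] + e[i+1])
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, nu)                   # post-smooth

u = two_grid([0.0]*9, [1.0]*9, 1.0/8)            # 9-point mesh, f = 1
print(max(abs(ri) for ri in residual(u, [1.0]*9, 1.0/8)))  # residual drops each cycle
```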

Journal ArticleDOI
01 Mar 1996
TL;DR: The Neural Logic Network (Neulonet) system models a wide range of human decision making behaviors by combining the strengths of rule based expert systems and neural networks.
Abstract: The Neural Logic Network (Neulonet) system models a wide range of human decision making behaviors by combining the strengths of rule based expert systems and neural networks. Neulonet differs from other neural networks by having an ordered pair of numbers associated with each node and connection. Let Q be the output node and P_1, P_2, ..., P_N be input nodes. Also, let the values associated with the node P_i be denoted by (a_i, b_i), and the weight for the connection from P_i to Q be (α_i, β_i). Each node's ordered pair takes one of three values: (1,0) for true, (0,1) for false, or (0,0) for "don't know"; (1,1) is undefined.
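
A node can then be evaluated from its weighted inputs. The firing rule used below (output true if the net sum of a_i*α_i - b_i*β_i reaches 1, false if it reaches -1, "don't know" otherwise) follows common neural-logic formulations and should be read as an assumption rather than a quotation of the paper; the weights that make the node act like a three-valued AND are likewise only illustrative:

```python
TRUE, FALSE, UNKNOWN = (1, 0), (0, 1), (0, 0)

def neulonet_node(inputs, weights):
    # net input: positive evidence minus negative evidence (assumed rule)
    s = sum(a * alpha - b * beta
            for (a, b), (alpha, beta) in zip(inputs, weights))
    if s >= 1:
        return TRUE
    if s <= -1:
        return FALSE
    return UNKNOWN

w = [(0.5, 2.0), (0.5, 2.0)]              # illustrative AND-like weights
print(neulonet_node([TRUE, TRUE], w))     # (1, 0): true
print(neulonet_node([TRUE, FALSE], w))    # (0, 1): false
print(neulonet_node([TRUE, UNKNOWN], w))  # (0, 0): don't know
```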

Journal ArticleDOI
George Cybenko1
01 Mar 1996
TL;DR: An artificial neural network (ANN) is a computational system inspired by the structure, processing method and learning ability of a biological brain.
Abstract: An artificial neural network (ANN) is a computational system inspired by the structure, processing method and learning ability of a biological brain. In a commonly accepted model of the brain, a given neuron receives electrochemical input signals from many neurons through synapses-some inhibitory, some excitatory-at its receiving branches, or dendrites. If and when the net sum of the signals reaches a threshold, the neuron fires, transmitting a new signal through its axon, across the synapses to the dendrites of the many neurons it is in turn connected with. In the artificial system, "neurons", essentially tiny virtual processors, are usually implemented in software. Given an input, an artificial neuron uses some function to compute an output. As the output signal is propagated to other neurons, it is modified by "synaptic weights" or inter-neuron connection strengths. The weights determine the final output of the network, and can thus be adjusted to encode a desired functionality.
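
The description above is exactly a weighted sum followed by a squashing function, which a few lines make concrete (the sigmoid is one common activation choice, not the only one):

```python
import math

def neuron(inputs, weights, bias):
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))        # sigmoid "firing" strength

print(neuron([1.0, 0.5], [0.8, -0.3], bias=0.1))   # ~0.68: the unit weakly fires
```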

Journal ArticleDOI
01 Dec 1996
TL;DR: Parallel volume rendering (PVR) as mentioned in this paper is a system for high performance visualization with the speed required for interactivity and steering that was developed in a collaboration between the State University of New York at Stony Brook and Sandia National Laboratories.
Abstract: Traditional volume rendering methods are too slow to provide interactive visualization, especially for large 3D data sets. The PVR (parallel volume rendering) system implements parallel volume rendering techniques that speed up the visualization process. Moreover, it helps computational scientists, engineers, and physicians to more effectively apply volume rendering to visualization tasks. The authors describe the PVR system that they have developed in a collaboration between the State University of New York at Stony Brook and Sandia National Laboratories. PVR is an attempt to provide an easy-to-use portable system for high performance visualization with the speed required for interactivity and steering. The current version of PVR consists of about 25000 lines of C and Tcl/Tk code. It has been used at Stony Brook, Sandia, and Brookhaven National Labs to visualize large data sets for over a year.

Journal ArticleDOI
01 Dec 1996
TL;DR: Two of the emerging visualization paradigms most useful to computational scientists, interactive visualization and computational steering, are presented, along with an example of an interactive application for studying fluid flow that shows how simulation and visualization can combine in real time for a better understanding of a phenomenon.

Journal ArticleDOI
01 Mar 1996
TL;DR: In the "problem-solving environments" that will one day free scientists and engineers from many algorithmic and computing details, computational intelligence techniques such as neural networks and fuzzy logic can help automate the choice of the best solution methods.
Abstract: In the "problem-solving environments" that will one day free scientists and engineers from many algorithmic and computing details, computational intelligence techniques such as neural networks and fuzzy logic can help automate the choice of the best solution methods. Classifying the problems, illustrated in this article by using partial differential equations (PDEs) as an example, is a first step.

Journal ArticleDOI
01 Mar 1996
TL;DR: The timing attack, as discussed in this paper, is a way of deciphering a user's private key information by measuring the time it takes to carry out cryptographic operations; factors such as branching and conditional statements, RAM cache hits, processor instructions that run in nonfixed time, and performance optimizations that bypass unnecessary operations all contribute to predictability and therefore to the probability of key decryption.
Abstract: Although most encryption algorithms are theoretically secure and remain impervious to even the most sophisticated cryptanalytic techniques, new attacks like the timing attack exploit the engineering side of network security. A timing attack is basically a way of deciphering a user's private key information by measuring the time it takes to carry out cryptographic operations. Factors such as branching and conditional statements, RAM cache hits, processor instructions that run in nonfixed time, as well as performance optimizations to bypass unnecessary operations, all contribute to predictability and therefore to the probability of key decryption.
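
A toy model of the leak: a comparison that exits at the first mismatching byte takes time proportional to how many leading bytes of a guess are correct, letting an attacker recover a secret byte by byte; the constant-time version always scans everything. This illustrates the general vulnerability class, not the paper's specific RSA timing analysis:

```python
def naive_equal(secret, guess):
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False       # early exit: running time depends on the data
    return True

def constant_time_equal(secret, guess):
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g          # accumulate differences; never exit early
    return diff == 0

print(naive_equal(b"key123", b"kex999"),
      constant_time_equal(b"key123", b"key123"))   # False True
```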

Journal ArticleDOI
G.V. Wilson1, R.H. Landau, S. McConnell
01 Jun 1996
TL;DR: The author chose one week as the length of the idealized course because it is long enough to permit discussion of several topics, but short enough to force stringent prioritization.
Abstract: To help clarify the issues involved in deciding what computing skills to teach to physical scientists and engineers, the article presents a thought experiment. Imagine that every new graduate student in science and engineering at your institution, or every new employee in your company's R&D division, has to take an intensive one week computing course. What would you want that course to cover? Should it concentrate on algorithms and data structures, such as multigrid methods and adaptively refined meshes? Should it introduce students to one or two commonly used packages, such as Matlab and SAS? Or should it try to teach students the craft of programming, giving examples to show why modularity is important and how design cycles work? The author chose one week as the length of the idealized course because it is long enough to permit discussion of several topics, but short enough to force stringent prioritization.

Journal ArticleDOI
01 Dec 1996
TL;DR: The article analyzes the components of lag for four applications that use virtual environments: Monte, a simple application to calculate π using a parallel Monte Carlo algorithm; Automotive Disk Brake, which uses a parallel finite element code to allow users to design and analyze an automotive disk braking system under different conditions; and BoilerMaker, which lets users design and analyze the placement of pollution control system injectors in boilers and incinerators.
Abstract: When coupling supercomputer simulations to "virtual reality" for real time interactive visualization, the critical performance metric is the end to end lag time in system response. Measuring the simulation, tracking, rendering, network, and synchronization components of lag time shows the feasibility of coupling supercomputers with virtual environments for some applications. For others, simulation time makes interactivity difficult. The article analyzes the components of lag for four applications that use virtual environments: Monte, a simple application to calculate π using a parallel Monte Carlo algorithm; Automotive Disk Brake, which uses a parallel finite element code to allow users to design and analyze an automotive disk braking system under different conditions; BoilerMaker, which lets users design and analyze the placement of pollution control system injectors in boilers and incinerators; and Calvin (Collaborative Architectural Layout Via Immersive Navigation), which allows people at different sites to work collaboratively on the design and viewing of architectural spaces.

Journal ArticleDOI
01 Mar 1996
TL;DR: The raison d'être of this mode of computation is to exploit the tolerance for imprecision and uncertainty in real-world problems to achieve tractability, robustness, and low cost.
Abstract: Several novel modes of computation have recently emerged that are collectively known as soft computing. The raison d'être of this mode is to exploit the tolerance for imprecision and uncertainty in real-world problems to achieve tractability, robustness, and low cost. Soft computing is usually used to find an approximate solution to a precisely (or an imprecisely) formulated problem. Neurocomputing, with its artificial neural networks, is one of the major components of this approach. Although this emerging technology is rooted in various disciplines, the concept of artificial neural networks was inspired by biological neural networks. Biological neurons, believed to be the structural constituents of the brain, are much slower than silicon logic gates. But inferencing in biological neural networks is faster than the fastest computer. The brain compensates for the relatively slower operation by having an enormous number of massively interconnected neurons. A biological neural network is a nonlinear, highly parallel device characterized by robustness and fault tolerance. It also can learn by adapting its synaptic weights to changes in the surrounding environment; handle imprecise, fuzzy, noisy, and probabilistic information; and generalize from known tasks or examples to unknown ones. Artificial neural networks (ANNs) are an attempt to mimic some, or all, of these characteristics. This soft computational paradigm differs from a programmed instruction sequence in that information is stored in the synaptic connections. Each neuron is an elementary processor with primitive operations, like summing the weighted inputs coming to it and then amplifying or thresholding the sum. Even a synchronous assembly of McCulloch-Pitts neurons can, in principle, perform universal computations for suitably chosen weights. Such an assembly can perform the same computations as an ordinary digital computer. A neural network is characterized by the network topology, the connection strength between pairs of neurons (weights), node properties, and the status-updating rules. The updating or learning rules control weights and/or states of the processing elements (neurons). Normally, an objective function is defined that represents the complete status of the network, and its set of minima corresponds to different stable states of the network. There are three broad paradigms of learning: supervised, unsupervised (or self-organized), and reinforcement. (Reinforcement is sometimes viewed as a special case of supervised learning.) Each category has many algorithms. In supervised learning (learning with a teacher), adaptation occurs when the system directly compares the network output with a known correct or desired answer. In unsupervised learning, the network is tuned …
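
The universality remark is easy to make concrete: a McCulloch-Pitts neuron is a threshold unit, and AND, OR, and NOT (hence any Boolean circuit) fall out of particular weight and threshold choices:

```python
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))   # 1 1 0
```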

Journal ArticleDOI
01 Mar 1996
TL;DR: In a nutshell, programming by contract is about providing just such specifications for software components (that is, classes), and it provides the best hope of a basis for a true software component industry.
Abstract: "Why can't software be more like hardware?" has been the software engineer's lament for nearly as long as there have been large software systems. In particular, why isn't there a software components industry to rival the existing hardware components industry? Hardware components come with the following attributes: an interface that hides detail that would only confuse or at least distract me; an unambiguous interface specification written in a language I can understand (in the case of the integrated circuit, this may be a fairly complex language, but it's one I expect to learn if I'm going to work with that hardware); a guarantee-the component has been tested and/or validated against its specification. All three items-especially the last one-are notably lacking for software components. Indeed, software tends to come with an antiguarantee, otherwise known as a disclaimer. All of the above points rely on a rigorous specification of the hardware component's interface. In a nutshell, programming by contract is about providing just such specifications for software components (that is, classes), and it provides the best hope of a basis for a true software component industry. The discussion focuses on object oriented software.

Journal ArticleDOI
01 Dec 1996
TL;DR: The article discusses the use of pV3 for investigating thermal convection in the interior of the Earth, used to help understand details of the physics of variable viscosity and phase transitions in high Rayleigh number convection.
Abstract: In 3D simulation of thermal convective flows that change with time, each run can generate gigabytes of data. A client server visualization tool called pV3, running on a parallel computer, allows data exploration during computation instead of by postprocessing. While pV3 has been primarily used in the area of flow visualization designed with aerospace applications in mind, it has easily been applied in other flow visualization areas, such as geoscience. It has also been used for 3D wave propagation problems, including seismic waves and electromagnetics. More importantly, many of the concepts of pV3's client server interactive parallel visualization model could be used as a prototype for other application areas, such as enhanced MRI in medicine. The key to this kind of interactive visualization lies in being able to scale the compute part of the visualization as well as the application is scaled, while still minimizing network traffic. The article discusses the use of pV3 for investigating thermal convection in the interior of the Earth. pV3 was used to help understand details of the physics of variable viscosity and phase transitions in high Rayleigh number convection.

Journal ArticleDOI
01 Jun 1996
TL;DR: In this article, the authors discuss the removal of atmospheric effects due to scattering and absorption, i.e., atmospheric correction, which consists of two major steps: the optical characteristics of the atmosphere are estimated first, and the remotely sensed imagery is then corrected by inversion procedures that derive the surface reflectance.
Abstract: The varied features of the earth's surface each reflect sunlight and other wavelengths of solar radiation in a highly specific way. This principle provides the foundation for the science of satellite based remote sensing. A vexing problem confronting remote sensing researchers, however, is that the reflected radiation observed from remote locations is significantly contaminated by atmospheric particles. These aerosols and molecules scatter and absorb the solar photons reflected by the surface in such a way that only part of the surface radiation can be detected by a sensor. The article discusses the removal of atmospheric effects due to scattering and absorption, i.e., atmospheric correction. Atmospheric correction algorithms basically consist of two major steps. First, the optical characteristics of the atmosphere are estimated. Various quantities related to the atmospheric correction can then be computed by radiative transfer algorithms, given the atmospheric optical properties. Second, the remotely sensed imagery is corrected by inversion procedures that derive the surface reflectance. We focus on the second step, describing our work on improving the computational efficiency of the existing atmospheric correction algorithms. We discuss a known atmospheric correction algorithm and then introduce a substantially more efficient version which we have devised. We have also developed a parallel implementation of our algorithm.
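
For the inversion step, a standard plane-parallel relation links top-of-atmosphere and surface reflectance: rho_toa = rho_path + T*rho_s / (1 - S*rho_s), with T the total two-way transmittance and S the atmosphere's spherical albedo. Solving for rho_s gives a closed-form per-pixel correction; treating this as the authors' exact formulation is an assumption, and the numbers below are invented:

```python
def surface_reflectance(rho_toa, rho_path, T, S):
    a = rho_toa - rho_path   # signal left after removing the path-radiance term
    return a / (T + S * a)   # closed-form inversion for the surface reflectance

# illustrative single-band optical properties (made-up values)
print(surface_reflectance(rho_toa=0.18, rho_path=0.06, T=0.80, S=0.10))  # ~0.148
```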

Journal ArticleDOI
01 Dec 1996
TL;DR: One of the authors has been leading an effort to design monitoring systems that use causal networks, which use a directed graph and probability theory to produce continuous probabilistic information on why a system has abnormal readings at some sensors.
Abstract: Complex industrial systems, such as utility turbine generators, are usually monitored by observing data recorded by sensors placed at various locations in the system. Typically, data are collected continuously and an expert, or a team of experts, monitors the readings. From the readings they assess the "health" of the system. Should readings at some sensors become unusual, the experts then use their diagnostic skills to determine the cause of the problem. It is better to detect problems early and correct them rather than waiting for more serious problems or a major failure. However, there are several problems associated with using human expertise to monitor complex systems, which the article outlines. There have been considerable efforts to develop expert computer systems that can perform the monitoring and diagnosis. These efforts include the use of rule-based artificial intelligence. At General Electric corporate R&D, one of the authors has been leading an effort to design monitoring systems that use a causal network. They have been shown to deliver much of the diagnostic ability needed in various GE applications. Indeed, the GE work has a wide range of applications, and can be used in complex systems such as power generators, transportation equipment (planes, trains, and automobiles), medical equipment, and production plants. Causal networks use a directed graph and probability theory to produce continuous probabilistic information on why a system has abnormal readings at some sensors.
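
A single link of such a network reduces to Bayes' rule: combine the prior fault rate with how the sensor behaves under fault and no-fault to get the probability that a fault explains an abnormal reading. Real causal networks chain many such nodes over a directed graph; the numbers here are invented:

```python
def posterior_fault(prior, p_abn_given_fault, p_abn_given_ok):
    # P(fault | abnormal reading) by Bayes' rule
    evidence = p_abn_given_fault * prior + p_abn_given_ok * (1 - prior)
    return p_abn_given_fault * prior / evidence

# 1% base fault rate; abnormal reading 90% likely under fault, 5% otherwise
print(round(posterior_fault(0.01, 0.90, 0.05), 3))   # ~0.154
```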