Proceedings of the ASME 2012 International Mechanical Engineering Congress & Exposition
IMECE2012
November 9-15, 2012, Houston, Texas, USA
IMECE2012-85236
EVALUATING MODEL UNCERTAINTY BASED ON PROBABILISTIC ANALYSIS AND
COMPONENT OUTPUT UNCERTAINTY DESCRIPTIONS
Magnus Carlsson
Saab Aeronautics
Linköping, Sweden, SE-581 88
Email: magnus.carlsson@liu.se
Hampus Gavel
Saab Aeronautics
Linköping, Sweden, SE-581 88
Johan Ölvander
Div. of Machine Design
Dept. of Management and Engineering
Linköping University
Linköping, Sweden, SE-581 83
ABSTRACT
To support early model validation, this paper describes a
method utilizing information obtained from the common
practice component level validation to assess uncertainties on
model top level. Initiated in previous research, a generic output
uncertainty description component, intended for power-port
based simulation models of physical systems, has been
implemented in Modelica. A set of model components has been
extended with the generic output uncertainty description, and
the concept of using component level output uncertainty to
assess model top level uncertainty has been applied to a
simulation model of a radar liquid cooling system. The focus of
this paper is on investigating the applicability of combining the
output uncertainty method with probabilistic techniques, not
only to provide upper and lower bounds on model uncertainties
but also to accompany the uncertainties with estimated
probabilities.
It is shown that the method may result in a significant
improvement in the conditions for conducting an assessment of
model uncertainties. The primary use of the method, in
combination with either deterministic or probabilistic
techniques, is in the early development phases when system
level measurement data are scarce. The method may also be
used to point out which model components contribute most to
the uncertainty on model top level. Such information can be
used to concentrate physical testing activities to areas where it
is needed most. In this context, the method supports the concept
of Virtual Testing.
INTRODUCTION
Simulation models of physical systems, with or without control
software, are widely used in the aeronautic industry, with
applications ranging from system development to verification
and end-user training. In the effort to reduce the cost of
physical testing related to the certification process, the
aeronautic industry strives to expand the usage of modeling and
simulation (M&S) further by introducing the concept of Virtual
Testing (VT). While no compact and broadly agreed definition
of VT has been found, the term VT in this paper refers to the
structured use of M&S to critically evaluate a product’s design
against specified requirements. In the case of certification, the
requirements are set by certification authorities, typically the
Federal Aviation Administration in the US or the European
Aviation Safety Agency in Europe [1,2]. When VT is used as
an Acceptable Means of Compliance in certification, this may
be termed Virtual Certification (VC). There is an intuitive
analogy between physical testing and VT in terms of the test
article and the actual test execution: the test article in physical
testing corresponds to a validated simulation model in VT, and
the physical test execution corresponds to the simulation in VT.
In both cases, it is equally important that test procedures and
test setups are well defined.
At the time of writing, EU-funded VT-related research
projects are ongoing in all major transportation sectors, from
the aeronautic sector to the automotive, railway, and maritime
sectors. One example from the aeronautic sector is the
CRESCENDO project, in which methodologies and tools
intended to enable collaborative design, VT, and VC are being
developed [3]. It should be emphasized that the CRESCENDO
VT and VC approaches are intended to support the current
certification process, and that VT will not replace physical
testing. Instead, VT is intended to be the means to better plan
physical testing, to reduce the number of physical tests, and to
reduce risk associated with physical testing.
The importance of Verification and Validation (V&V) of
simulation models is well known and the V&V research field
has a long history, see for example Naylor and Finger [4] who
propose a method named multi-stage verification, and Sargent
[5] who provides an overview of the subject and describes a set
of validation techniques. In today’s developments of VT
towards VC, the challenging task of assessing a model’s
validity is nonetheless of greater importance than ever. In a
broader perspective, model validation is only one factor in the
assessment of the credibility of an M&S activity. For examples
of credibility assessment methods, see the Credibility
Assessment Scale proposed in the NASA Standard for Models
and Simulations [6], the Predictive Capability Maturity Model
proposed by Sandia National Laboratories [7], and the
Validation Process Maturity Model proposed by Harmon and
Youngblood [8]. A brief summary of these three methods is
provided by Carlsson et al. [9].
With the above credibility scope in mind, this paper zooms
into model validation, and more specifically into early model
validation, which here refers to assessment of a model’s
validity in the absence of system level measurement data. A main
research question is: Is there an industrially applicable way to
use information on component level uncertainty to draw
conclusions on model top level uncertainty? As an answer, this
paper proposes a pragmatic approach to utilizing
uncertainty information obtained from the common practice of
component validation to assess uncertainties on model top
level. Previous research has shown that the method may result
in a significant reduction of the number of uncertain parameters
that require consideration in a simulation model, and the
method has been tested in combination with a set of
deterministic techniques [10]. When the number of uncertain
parameters to take into account has been successfully reduced,
probabilistic techniques may be considered even for
computationally expensive models. The method is primarily
intended for large scale mathematical 1-D dynamic simulation
models of physical systems with or without control software,
typically described by Ordinary Differential Equations (ODE)
or Differential Algebraic Equations (DAE).
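For reference, the explicit ODE and implicit DAE forms referred to above can be written as (a textbook formulation, not specific to the models discussed here):

\[
\dot{x}(t) = f\big(x(t), u(t), t\big) \quad \text{(ODE)}, \qquad
F\big(\dot{x}(t), x(t), u(t), t\big) = 0 \quad \text{(DAE)},
\]

where x denotes the model states, u the inputs, and t time.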
The following section introduces the reader to early model
validation and provides the context of the proposed method.
The proposed method is then combined with probabilistic
techniques and applied in an uncertainty analysis of a
simulation model of a radar liquid cooling system. The final
section contains conclusions and recommendations to consider
when applying the proposed method in uncertainty analysis of
simulation models.
EARLY MODEL VALIDATION
Several definitions of the terms verification and validation
exist, some of them collected in the Generic Methodology for
Verification and Validation (GM-VV) [11]. As formulated by
Balci [12], verification concerns building the model right, i.e.
determining whether the model is compliant with the model
specification and if it accurately represents the underlying
mathematical model. Validation concerns building the right
model, i.e. determining whether the model is a sufficiently
accurate representation of the real system of interest from the
perspective of the intended use of the model. This brief
description of V&V terminology is in line with definitions used
by NASA [6], ITOP [13], and the US DoD [14].
Balci [12] lists more than 75 techniques for verification,
validation, and testing (VV&T), divided into four groups:
informal, formal, static, and dynamic. These are further
described in Balci [15]. Another well-established set of
validation techniques is provided by Sargent; see Ref. [16] for
an up-to-date version. As indicated above, Sargent’s list
concerns validation techniques only, while Balci’s list contains
a mix of VV&T techniques, and it is not always easy to
determine whether a specific technique should be considered to
be directed towards verification or validation. It is the authors’
understanding that informal techniques like face validation and
reviews are generic and may concern both verification and
validation. Informal techniques are of great importance and
often easy to apply, but will not be further discussed in this
paper. Formal techniques based on mathematical proof of
correctness may also cover both verification and validation
aspects. However, as indicated by Balci [15], formal methods
are rarely applicable where complex simulation models are
concerned. Static techniques like interface analysis and
structural analysis are believed to be directed more towards
verification than validation. This leaves the group of dynamic
techniques, which, as clarified in the sections below, are of most
interest in this paper.
V&V of simulation models is sometimes seen as activities
carried out at the end of the modeling process, in particular the
validation activity, which may require a large amount of
measurement data from the real system of interest. When using
M&S to take early model-based design decisions, when no
physical prototype of the system exists, it is still important to
assess the uncertainty in the simulation results. In addition to
this, the authors' experience from M&S of aircraft vehicle
systems is that there tends to be a persistent lack of system level
measurement data for validation purposes, even in the later
development stages. In practice, when modeling for example an
aircraft subsystem, one never has access to system level
measurement data covering all points in the flight envelope. To
what extent may the results from the validation against
measurement data then be interpolated/extrapolated? Since this
question may be hard to answer, it is important to be able to
assess model uncertainties with only limited system level
measurement data available. Such an assessment would
constitute an important part of early model validation.
With the purpose of facilitating early model validation, this
paper proposes a method based mainly on a combination of the
dynamic techniques denoted by Balci as sub-model/module
testing, bottom-up testing, and predictive validation. As
described in the following sections, the proposed method may
be combined with sensitivity analysis and/or optimization
techniques, and applied in deterministic as well as probabilistic
frameworks to enable simulation model uncertainty analysis.
Uncertainty analysis in this paper refers to the process of
identifying, quantifying, and assessing the impact of
uncertainty sources introduced throughout the development and
usage of simulation models. A few examples of potential
sources of uncertainty are model parameters, model boundary
conditions, model simplifications, and the numerical method
used by the solver. According to Roy and Oberkampf [17], all
uncertainties originate from three key sources: model inputs,
numerical approximations, and model form uncertainty. This is
in line with the definitions provided by Coleman and Steele
[18]. Commonly, a distinction is made between aleatory
uncertainty (due to statistical variations, also referred to as
variability, inherent uncertainty, irreducible uncertainty, or
stochastic uncertainty) and epistemic uncertainty (due to lack of
information, also referred to as reducible uncertainty or
subjective uncertainty). See Padulo [19] for an extensive
literature review of uncertainty taxonomies.
THE OUTPUT UNCERTAINTY METHOD
To help the reader understand the proposed method, a
simulation model of a radar liquid cooling system is used as an
industrial application example. The method was originally
described by Carlsson et al. [10] by means of a scenario
description. The following sub-sections introduce the industrial
application example and describe the method using a short
version of the scenario.
Industrial Application Example
A simulation model of the radar liquid cooling system in a Saab
Gripen Demonstrator Aircraft is used as an illustrative example.
The model was developed in the Modelica based M&S tool
Dymola [20,21]. The main components in the system are pump,
accumulator, liquid-to-air heat exchanger, piping, and a sub-
system of heat loads including the radar antenna and related
electronic equipment. The simulation model layout is shown in
Figure 1, which also includes information to
distinguish between components and sub-models. In the figure,
a component is a model of a single piece of equipment,
and a sub-model includes several components.
Figure 1: LAYOUT OF THE RADAR LIQUID COOLING
SYSTEM.
From a system simulation perspective, this model may
appear fairly simple. Yet it is a component based model of a
physical system, including a number of components and one
sub-model. This 1-D dynamic simulation model is used to
predict pressure, mass flow, and temperature levels at different
points in the system. The components include equations
describing pressure variations due to g-loads and fluid thermal
expansion, internal heat exchange between equipment and
fluid, external heat exchange between equipment and
surrounding equipment bays, temperature dynamics in
equipment and fluid, as well as fluid dynamics due to transport
delays in the piping arrangement. The model includes
approximately 200 equations, 100 parameters, and 50 states.
The radar liquid loop model was developed using a sub-
package of a component library developed at Saab Aeronautics
and uses a connector interface that includes information about
pressure, mass flow, and specific enthalpy (p, ṁ, h).
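As an illustration, a minimal Modelica connector of this form could look like the sketch below. The connector name and the use of plain Real variables with unit attributes are assumptions made for readability; the actual Saab library interface is not reproduced in this paper.

connector FluidPort
  "Simplified thermal-fluid connector carrying pressure, mass flow, and specific enthalpy"
  Real p(unit="Pa") "Pressure";
  flow Real m_flow(unit="kg/s") "Mass flow rate";
  Real h(unit="J/kg") "Specific enthalpy";
end FluidPort;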
Motivation of Method
Prior to initiating the development of a simulation model’s
components and sub-models, there are normally activities such
as specifying the intended use of the model, deriving model
requirements, defining model layout and interfaces, and
producing a V&V plan [9]. In the following short scenario,
these initial activities are assumed to be completed and we
move straight on to what one may call the core of model
development. Briefly described, a typical approach in
component based modeling is to a) model each component or, if
possible, select suitable components from a component library,
b) perform V&V activities on component level, which is often
an iterative process including tuning of component parameters,
and c) assemble sub-models up to model top level.
Available information on component level typically used
in steps a) and b) may for example be datasheets, rig test data
for similar components, or component level CFD simulation
results. Thus, after carrying out the component V&V activities
in step b), there is indeed uncertainty information available for
the individual components and sub-models. However, in the
authors’ experience this uncertainty information on component
level is not always utilized at model top level. To summarize
the problem: uncertainties of the components are known to
some degree, but what is the uncertainty on model top level?
For example, what is the uncertainty in the pressure at the heat
load input port in the liquid cooling model? Reasonably, it
should be possible to utilize our knowledge of the uncertainties
on component level and sub-model level to estimate the
uncertainties on top level.
Where system level measurement data are unavailable, a
common approach is to perform a sensitivity analysis, e.g. by
varying component parameters and performing a simulation for
each parameter change to determine how different parameters
affect the model output. However, in the scenario described
above we have knowledge of the uncertainties of the
component characteristics (output), but we do not know the
uncertainties in the component parameters (input). Due to lack
of information on parameter uncertainty, quantifying
uncertainties in component parameters is often a difficult task.
As an example: what is a suitable range for the roughness
coefficient in component “Pipe 1”, or what does its probability
density function look like?
in models with many parameters is thus not always feasible.
From an uncertainty analysis point of view there is a
drawback if the only thing that is varied in the sensitivity
analysis is a model’s original component parameters: the
uncertainties in a model’s original component parameters only
cover one aspect of the total model uncertainty. In that case,
other kinds of uncertainties, like uncertainties of underlying
equations or uncertainties due to model simplifications, are
ignored.
In addition to this, sensitivity analysis applied to models
with many parameters requires a large number of simulations.
One approach to mitigating the computational burden of the
sensitivity analysis is to use simplified models, also known as
meta-models or surrogate models, e.g. response surfaces of
varying order [22]. By definition there is a discrepancy between
the surrogate model and the original model of interest. In this
approach, additional V&V tasks therefore need to be performed.
If a sensitivity analysis is carried out on the surrogate model,
knowledge is gained of how the parameters affect the surrogate
model output and not the output of the original model.
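As an illustration of such a surrogate, a second-order polynomial response surface in the uncertain parameters \(\theta = (\theta_1, \ldots, \theta_n)\) has the standard form (a textbook formulation, not taken from this paper):

\[
\hat{y}(\theta) = \beta_0 + \sum_{i=1}^{n} \beta_i \theta_i + \sum_{i=1}^{n} \sum_{j \geq i} \beta_{ij} \theta_i \theta_j,
\]

where the coefficients \(\beta\) are typically fitted by least squares to simulation results obtained at a set of sampled parameter points.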
Description of Method
To answer the question “What is the uncertainty on model top
level?”, given the constraints regarding large scale physical
models as well as the lack of system level measurement data,
this section proposes an approach based on the original model
components extended with an uncertainty description utilizing
available information on component output uncertainty. As the
model components may be legacy code or originate from a
Commercial Off The Shelf (COTS) component library, it is
favorable to keep them unmodified. Andersson [23] describes
how a fault injection block may be implemented in signal flow
models. At Saab Aeronautics, this kind of fault injection feature
has proven to be useful for simulation of different kinds of faults
in mid-scale and large-scale simulators, for example sensor
failures of various kinds. The method proposed in this paper is
similar to the fault injection feature for signal flow models,
except that consideration must be given to the power port
concept commonly used in physical modeling.
The idea is to develop a new uncertain component by
including an original component and adding an uncertainty
description component. The uncertainties are introduced in the
uncertainty description component by including equations for
modifying one or more of the variables in the connector
interface. The uncertainties may be expressed in absolute terms
or relative to some characteristic of the original component. As
this approach enables uncertainties to be defined for a
component’s outputs rather than its inputs, the method is
termed output uncertainty. A brief description is given below of
how the method is implemented in the thermal-fluid component
library used in the liquid cooling model. For equations and
further implementation aspects, see Ref. [10].
In the component library used for the liquid cooling model,
the connector interface includes information on pressure, mass
flow, and specific enthalpy (p, ṁ, h). With the aim of achieving an
intuitive uncertainty description, it has been chosen to add
uncertainties in terms of pressure and temperature (the latter
implicitly meaning specific enthalpy). This is appropriate since
pressure and temperature are two commonly used entities when
measuring or specifying system characteristics. In line with the
discussion above, two types of uncertainty descriptions have
been implemented absolute and relative. The absolute
uncertainty component introduces two parameters; pressure
uncertainty p
UC
[Pa] and temperature uncertainty T
UC
[K]. The
relative uncertainty component uses similar parameters, but
relative to the pressure difference and temperature difference
over the original component; relative pressure uncertainty p
RUC
[-] and relative temperature uncertainty T
RUC
[-].
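To make the description concrete, the following Modelica fragment sketches what an absolute output uncertainty description component could look like, using the simplified FluidPort connector sketched earlier. The parameter names p_UC and T_UC follow the paper, but the internal equations, the assumed constant specific heat cp, and the component name are illustrative assumptions; the actual equations are given in Ref. [10].

model AbsoluteOutputUncertainty
  "Sketch: shifts pressure and temperature (via enthalpy) between its two ports"
  parameter Real p_UC(unit="Pa") = 0 "Pressure uncertainty";
  parameter Real T_UC(unit="K") = 0 "Temperature uncertainty";
  parameter Real cp(unit="J/(kg.K)") = 3600 "Assumed constant coolant specific heat";
  FluidPort port_a;
  FluidPort port_b;
equation
  port_a.m_flow + port_b.m_flow = 0;  // no mass storage in the uncertainty component
  port_b.p = port_a.p + p_UC;         // absolute pressure offset
  port_b.h = port_a.h + cp*T_UC;      // temperature offset expressed as an enthalpy offset
end AbsoluteOutputUncertainty;

A relative variant would instead scale the offsets with the pressure and temperature differences over the original component (using p_RUC and T_RUC), which requires the uncertainty description to have access to those differences.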
It should be noted that, just as when varying for example a
component’s pressure loss coefficient, varying a component’s
pressure uncertainty parameter corresponds to a variation of the
component’s pressure drop characteristics. Thus, introducing
uncertainties in pressure implies uncertainties in mass flow.
The figure below shows an example of the pressure drop
characteristics of a pipe component with absolute and relative
uncertainty, respectively.
Figure 2: ABSOLUTE UNCERTAINTY VERSUS RELATIVE
UNCERTAINTY.
[Figure 2 plots pressure drop p [kPa] versus mass flow [kg/s] for the reference pipe characteristic, a 10 kPa absolute uncertainty, and a 10% relative uncertainty.]
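In equation form, one plausible reading of the two descriptions applied to a pipe's pressure drop characteristic is (an assumption for illustration; the exact formulation is given in Ref. [10]):

\[
\Delta p_{\mathrm{abs}}(\dot{m}) = \Delta p_{\mathrm{ref}}(\dot{m}) + p_{UC}, \qquad
\Delta p_{\mathrm{rel}}(\dot{m}) = \left(1 + p_{RUC}\right) \Delta p_{\mathrm{ref}}(\dot{m}),
\]

so that the absolute description shifts the whole curve by a constant, while the relative description scales it.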

Based on existing components, a new component library
with uncertainty descriptions is created. As an example, a new
component UncertainPipe is created by including an original
pipe component and a relative uncertainty component, and
propagating all parameters to component top level. From a user
point of view, the UncertainPipe looks like the original pipe
component with the two additional parameters p_RUC and T_RUC.
Note that this is done for all flow type components in the model
(pump, HEX, pipe1, AESA, and pipe2). Figure 3 shows
the liquid cooling model updated with the appropriate uncertain
components, as well as how the uncertainty description
components are connected with the original components.
Figure 3: RADAR LIQUID COOLING MODEL, UPDATED WITH COMPONENTS INCLUDING AN OUTPUT UNCERTAINTY
DESCRIPTION. THE TWO NEW PARAMETERS IN THE PARAMETER DIALOG ARE MARKED WITH A RED ELLIPSE.
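A sketch of what such a wrapper could look like in Modelica is given below. The class names Pipe and RelativeOutputUncertainty, and the way parameters are propagated, are illustrative assumptions; the point is that the original pipe component itself is left unmodified, as described above.

model UncertainPipe
  "Sketch: original pipe component in series with a relative output uncertainty description"
  parameter Real p_RUC = 0 "Relative pressure uncertainty [-]";
  parameter Real T_RUC = 0 "Relative temperature uncertainty [-]";
  // ...original pipe parameters (length, diameter, roughness, ...) propagated here...
  FluidPort port_a;
  FluidPort port_b;
  Pipe pipe;  // unmodified original library component (hypothetical name)
  RelativeOutputUncertainty unc(p_RUC=p_RUC, T_RUC=T_RUC);  // hypothetical relative uncertainty component
equation
  connect(port_a, pipe.port_a);
  connect(pipe.port_b, unc.port_a);
  connect(unc.port_b, port_b);
end UncertainPipe;

From a user point of view, the wrapper then exposes the original pipe parameters together with p_RUC and T_RUC, as in the parameter dialog referred to in Figure 3.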
Context of Method
To define the context of the output uncertainty method and to
clarify the difference compared to alternative methods, Figure 4
is provided. The figure aims to visualize that uncertainty
analysis of simulation models may be carried out in several
different ways, by combining a set of techniques. The figure
does not claim to show all possible ways of performing an
uncertainty analysis, but is intended to show alternatives
closely related to the proposed output uncertainty method. As
indicated in the figure, one approach to assess simulation model
uncertainties is to use the nominal (or “original”) model in
combination with some deterministic or probabilistic technique.
In the case of sensitivity analysis (SA), simply using upper and
lower bounds on parameter values would imply a deterministic
uncertainty analysis, while using probability density functions
would imply a probabilistic uncertainty analysis.
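As an example of a simple probabilistic tool chain of this kind, plain Monte Carlo sampling over the uncertainty parameters may be used (a standard formulation, not specific to this paper): draw N samples of the vector of component uncertainty parameters, run one simulation per sample, and summarize the resulting top-level output y,

\[
\hat{\mu}_y = \frac{1}{N} \sum_{k=1}^{N} y\big(\theta^{(k)}\big), \qquad \theta^{(k)} \sim f(\theta), \qquad
\theta = \big(p_{RUC,1}, T_{RUC,1}, \ldots, p_{RUC,m}, T_{RUC,m}\big),
\]

with, for example, an approximate 95% uncertainty interval on y taken as the 2.5th and 97.5th percentiles of the sampled results.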
Starting from the top of the figure and following the arrows
down to the bottom, a set of different tool chains are obtained.
Naturally, each tool chain has its own benefits and drawbacks
regarding for example execution time, management effort,
availability of uncertainty information, and results information
content. However, assessing the benefits and drawbacks of each
alternative tool chain is beyond the scope of this paper.
