
Showing papers on "Uncertainty quantification" published in 2005


Journal ArticleDOI
01 Feb 2005
TL;DR: A statistical approach for characterizing uncertainty in predictions made with the aid of a computer simulation model; the approach uses a Bayesian formulation and relies on Gaussian process models for the unknown functions of the model inputs.
Abstract: We develop a statistical approach for characterizing uncertainty in predictions that are made with the aid of a computer simulation model. Typically, the computer simulation code models a physical system and requires a set of inputs---some known and specified, others unknown. A limited amount of field data from the true physical system is available to inform us about the unknown inputs and also to inform us about the uncertainty that is associated with a simulation-based prediction. The approach given here allows for the following: uncertainty regarding model inputs (i.e., calibration); accounting for uncertainty due to limitations on the number of simulations that can be carried out; discrepancy between the simulation code and the actual physical system; uncertainty in the observation process that yields the actual field data on the true physical system. The resulting analysis yields predictions and their associated uncertainties while accounting for multiple sources of uncertainty. We use a Bayesian formulation and rely on Gaussian process models to model unknown functions of the model inputs. The estimation is carried out using a Markov chain Monte Carlo method. This methodology is applied to two examples: a charged particle accelerator and a spot welding process.
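
As a rough illustration of the kind of workflow described above (a GP surrogate for an expensive simulator, Bayesian calibration of an unknown input via MCMC), the following Python sketch uses a toy simulator, an invented prior and noise level, and omits the model-discrepancy term; it is an assumption-laden simplification (and assumes NumPy and scikit-learn), not the authors' implementation.

```python
# Simplified sketch of GP-emulator-based Bayesian calibration (toy problem, no discrepancy term).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Toy computer model: one known input x, one unknown input theta."""
    return np.sin(2.0 * x) + theta * x

# --- build a GP emulator from a limited budget of simulator runs ---
X_design = rng.uniform([0.0, 0.0], [3.0, 2.0], size=(40, 2))      # (x, theta) pairs
y_design = simulator(X_design[:, 0], X_design[:, 1])
gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X_design, y_design)

# --- synthetic field data at a "true" but unknown theta (invented values) ---
theta_true, noise_sd = 0.7, 0.05
x_field = np.linspace(0.2, 2.8, 10)
y_field = simulator(x_field, theta_true) + rng.normal(0.0, noise_sd, x_field.size)

def log_post(theta):
    """Gaussian likelihood through the emulator + uniform prior on [0, 2]."""
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    mu = gp.predict(np.column_stack([x_field, np.full_like(x_field, theta)]))
    return -0.5 * np.sum((y_field - mu) ** 2) / noise_sd ** 2

# --- random-walk Metropolis over the unknown calibration input ---
theta, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[1000:])                      # discard burn-in
print(f"posterior mean of theta = {chain.mean():.3f} +/- {chain.std():.3f}")
```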

629 citations


Journal ArticleDOI
TL;DR: In this article, the sensitivity of the resulting ground-motion models to these conversions is shown to be pronounced for some parameters, especially the measure of source-to-site distance, highlighting the need to take into account any incompatibilities among the selected equations.
Abstract: Logic trees are widely used in probabilistic seismic hazard analysis as a tool to capture the epistemic uncertainty associated with the seismogenic sources and the ground-motion prediction models used in estimating the hazard. Combining two or more ground-motion relations within a logic tree will generally require several conversions to be made, because there are several definitions available for both the predicted ground-motion parameters and the explanatory parameters within the predictive ground-motion relations. Procedures for making conversions for each of these factors are presented, using a suite of predictive equations in current use for illustration. The sensitivity of the resulting ground-motion models to these conversions is shown to be pronounced for some of the parameters, especially the measure of source-to-site distance, highlighting the need to take into account any incompatibilities among the selected equations. Procedures are also presented for assigning weights to the branches in the ground-motion section of the logic tree in a transparent fashion, considering both intrinsic merits of the individual equations and their degree of applicability to the particular application.

326 citations


Journal ArticleDOI
TL;DR: These methods are discussed in the specific context of a quasi-one-dimensional nozzle flow with uncertainty in inlet conditions and nozzle shape and it is shown that both stochastic approaches efficiently handle uncertainty propagation.
Abstract: This paper discusses two stochastic approaches to computing the propagation of uncertainty in numerical simulations: polynomial chaos and stochastic collocation. Chebyshev polynomials are used in both cases for the conventional, deterministic portion of the discretization in physical space. For the stochastic parameters, polynomial chaos utilizes a Galerkin approximation based upon expansions in Hermite polynomials, whereas stochastic collocation rests upon a novel transformation between the stochastic space and an artificial space. In our present implementation of stochastic collocation, Legendre interpolating polynomials are employed. These methods are discussed in the specific context of a quasi-one-dimensional nozzle flow with uncertainty in inlet conditions and nozzle shape. It is shown that both stochastic approaches efficiently handle uncertainty propagation. Furthermore, these approaches enable computation of statistical moments of arbitrary order in a much more effective way than other usual techniques such as the Monte Carlo simulation or perturbation methods. The numerical results indicate that the stochastic collocation method is substantially more efficient than the full Galerkin, polynomial chaos method. Moreover, the stochastic collocation method extends readily to highly nonlinear equations. An important application is to the stochastic Riemann problem, which is of particular interest for spectral discontinuous Galerkin methods.
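
The non-intrusive flavour of the two ideas compared above can be illustrated in a few lines of Python: a Hermite polynomial-chaos projection and a collocation-style quadrature estimate of the first two moments of a toy model with one Gaussian uncertain parameter, checked against Monte Carlo. The model u, the expansion order, and the sample sizes are assumptions made only for this sketch; the paper's nozzle-flow setting is far richer.

```python
# Moments of u(xi), xi ~ N(0,1), via Hermite PC projection, quadrature "collocation", and Monte Carlo.
import numpy as np
from math import factorial, sqrt, pi, exp

a = 0.5
u = lambda xi: np.exp(a * xi)          # toy "simulation" output

# probabilists' Gauss-Hermite rule (weight exp(-x^2/2), total mass sqrt(2*pi))
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
weights = weights / sqrt(2.0 * pi)     # now a discrete N(0,1) expectation rule

# --- polynomial chaos: project u onto Hermite polynomials He_k ---
order = 6
coeffs = []
for k in range(order + 1):
    ck = np.eye(order + 1)[k]                              # selects He_k
    He_k = np.polynomial.hermite_e.hermeval(nodes, ck)
    coeffs.append(np.sum(weights * u(nodes) * He_k) / factorial(k))
pc_mean = coeffs[0]
pc_var = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)

# --- collocation-style estimate: integrate the moments directly on the nodes ---
col_mean = np.sum(weights * u(nodes))
col_var = np.sum(weights * u(nodes) ** 2) - col_mean ** 2

# --- Monte Carlo reference ---
xi = np.random.default_rng(1).standard_normal(200_000)
mc_mean, mc_var = u(xi).mean(), u(xi).var()

exact_mean = exp(a ** 2 / 2.0)
print(f"mean: PC {pc_mean:.5f}  collocation {col_mean:.5f}  MC {mc_mean:.5f}  exact {exact_mean:.5f}")
print(f"var : PC {pc_var:.5f}  collocation {col_var:.5f}  MC {mc_var:.5f}")
```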

287 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric probabilistic approach to random uncertainties is presented for linear dynamical systems and for nonlinear dynamical systems consisting of a linear part with additional localized nonlinearities.

269 citations


Journal ArticleDOI
TL;DR: In this article, a general non-parametric probabilistic approach to model uncertainties for dynamical systems is presented using random matrix theory, together with a comprehensive overview that develops its foundations in simple terms and illustrates all the concepts and tools introduced in the general theory by means of a simple example.

226 citations


Journal ArticleDOI
TL;DR: In this article, the authors look at the difference between natural and epistemic uncertainty in flood frequency analysis and show how more data steepen the cumulative distribution function (cdf) of the annual flood probability (AFP), and therefore decrease the uncertainty about AFP.
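
A tiny sketch of the point summarized above, under the assumption that AFP can be given a Beta posterior (an illustrative choice, not necessarily the paper's formulation): the credible interval for AFP narrows, i.e. its cdf steepens, as the record length grows, while the natural variability of flooding itself remains.

```python
# Epistemic uncertainty about AFP shrinks as the flood record lengthens (illustrative Beta posterior).
from scipy.stats import beta

true_afp = 0.05                      # assumed chance of exceeding the threshold in any year
for years in (10, 50, 200):
    exceedances = round(true_afp * years)                        # idealized record
    post = beta(0.5 + exceedances, 0.5 + years - exceedances)    # Jeffreys-type posterior
    lo, hi = post.ppf([0.05, 0.95])
    print(f"{years:4d} years of data: 90% credible interval for AFP = [{lo:.3f}, {hi:.3f}]")
```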

205 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the case against the use of the mean hazard curve in the current practice of probabilistic seismic hazard analysis using logic trees, and explain why this practice should be discontinued and, where necessary, removed from regulations.
Abstract: In the current practice of probabilistic seismic hazard analysis (PSHA) using logic trees, it is common to use the mean hazard curve to determine ground motions for engineering design. We present the case against the use of the mean hazard curve and explain why this practice should be discontinued and, where necessary, removed from regulations. The identification and quantification of uncertainties is integral to modern seismic hazard analysis. In probabilistic seismic hazard studies, the variability of the earthquake magnitude, earthquake location, and ground-motion level (expressed as the number of logarithmic standard deviations above the logarithmic mean) is considered explicitly in the computation of the hazard. In major seismic hazard projects, the scientific uncertainty in the models of the distributions of earthquake magnitude, location, and ground motion is also considered, using logic trees (Kulkarni et al. 1984, Coppersmith and Youngs 1986, Reiter 1990, Bommer et al. 2005). The inherent variability considered directly in the hazard computation is called the aleatory variability, and the scientific uncertainty in the models of earthquake occurrence and ground motion is called the epistemic uncertainty. The terms randomness and uncertainty have also been used for aleatory variability and epistemic uncertainty, respectively; however, because those terms are commonly used interchangeably, they are often mixed up in hazard analysis. The terms "aleatory variability" and "epistemic uncertainty" are therefore used to provide an unambiguous terminology. This is not simply semantics: distinguishing between the two types of uncertainty is fundamental to the way they are dealt with in the hazard calculations and to how uncertainty is handled in decision making on the basis of the hazard analysis. In application, the key difference is that aleatory variability determines the shape of the hazard curve, whereas epistemic uncertainty leads to alternative hazard curves. There is no dilemma regarding the inclusion of the aleatory variability in the hazard calculations, particularly the variability associated with ground-motion prediction equations: a "hazard curve" calculated using only median values from the equations and neglecting the standard deviation has little meaning and cannot be considered a genuine hazard curve. The hazard analyst does, however, have control over the branches of the logic tree and the weights assigned to these, and hence over the degree to which epistemic uncertainty is represented in the analysis.
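
The distinction drawn above can be made concrete with a short sketch: epistemic uncertainty (logic-tree branches) yields alternative hazard curves, from which either a weighted mean curve or fractile curves can be reported. The branch curves and weights below are invented placeholders, not results from any real hazard study.

```python
# Weighted mean vs fractile hazard curves from a toy three-branch logic tree.
import numpy as np

pga = np.linspace(0.05, 1.0, 20)                      # ground-motion levels (g)

# three alternative hazard curves (annual exceedance frequency), one per branch
branch_curves = np.array([1e-2 * np.exp(-pga / s) for s in (0.10, 0.15, 0.22)])
weights = np.array([0.3, 0.5, 0.2])                   # logic-tree branch weights

mean_curve = weights @ branch_curves                  # weighted mean hazard curve

def fractile(curves, w, q):
    """Weighted q-fractile across branches, computed level by level."""
    out = np.empty(curves.shape[1])
    for j in range(curves.shape[1]):
        order = np.argsort(curves[:, j])
        cum = np.cumsum(w[order])
        out[j] = curves[order, j][np.searchsorted(cum, q)]
    return out

median_curve = fractile(branch_curves, weights, 0.5)
p84_curve = fractile(branch_curves, weights, 0.84)

# report the annual frequency of exceeding 0.5 g under each summary curve
for name, c in [("mean", mean_curve), ("median", median_curve), ("84th", p84_curve)]:
    print(f"{name:>6}: exceedance of 0.5 g = {np.interp(0.5, pga, c):.2e} /yr")
```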

184 citations


Journal ArticleDOI
TL;DR: The results indicate that the Bayesian inference method can provide accurate point estimates as well as uncertainty quantification for the solution of the inverse radiation problem.

136 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose to treat the ground-motion sections of a complete logic tree for seismic hazard as a single composite model representing the complete state-of-knowledge and belief of a particular analyst on ground motion in a particular target region.
Abstract: Logic trees have become a popular tool in seismic hazard studies. Commonly, the models corresponding to the end branches of the complete logic tree in a probabilistic seismic hazard analysis (PSHA) are treated separately until the final calculation of the set of hazard curves. This comes at the price that information regarding sensitivities and uncertainties in the ground-motion sections of the logic tree is only obtainable after disaggregation. Furthermore, from this end-branch model perspective even the designers of the logic tree cannot directly tell what ground-motion scenarios would most likely result from their logic trees for a given earthquake at a particular distance, nor how uncertain these scenarios might be or how they would be affected by the choices of the hazard analyst. On the other hand, all this information is already implicitly present in the logic tree. Therefore, with the ground-motion perspective that we propose in the present article, we treat the ground-motion sections of a complete logic tree for seismic hazard as a single composite model representing the complete state-of-knowledge-and-belief of a particular analyst on ground motion in a particular target region. We implement this view by resampling the ground-motion models represented in the ground-motion sections of the logic tree by Monte Carlo simulation (separately for the median values and the sigma values) and then recombining the sets of simulated values in proportion to their logic-tree branch weights. The quantiles of this resampled composite model provide the hazard analyst and the decision maker with a simple, clear, and quantitative representation of the overall physical meaning of the ground-motion section of a logic tree and the accompanying epistemic uncertainty. Quantiles of the composite model also provide an easy way to analyze the sensitivities and uncertainties related to a given logic-tree model. We illustrate this for a composite ground-motion model for central Europe. Further potential fields of applications are seen wherever individual best estimates of ground motion have to be derived from a set of candidate models, for example, for hazard maps, sensitivity studies, or for modeling scenario earthquakes.
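
A minimal sketch of the resampling idea described above: draw median and sigma values from the ground-motion branches in proportion to their logic-tree weights, recombine them, and read off quantiles of the composite model. The three "ground-motion models" below are fabricated placeholders, not real predictive equations.

```python
# Composite ground-motion model from Monte Carlo resampling of logic-tree branches (toy branches).
import numpy as np

rng = np.random.default_rng(42)
M, R = 6.0, 20.0   # scenario magnitude and source-to-site distance (km)

# each branch: (logic-tree weight, median ln PGA as a function of M and R, sigma of ln PGA)
branches = [
    (0.25, lambda m, r: -3.0 + 0.9 * m - 1.3 * np.log(r + 10.0), 0.55),
    (0.50, lambda m, r: -2.8 + 0.8 * m - 1.2 * np.log(r + 8.0),  0.60),
    (0.25, lambda m, r: -3.2 + 1.0 * m - 1.4 * np.log(r + 12.0), 0.65),
]
weights = np.array([b[0] for b in branches])

# resample medians and sigmas separately, in proportion to the branch weights
n = 100_000
ln_median = np.array([branches[i][1](M, R)
                      for i in rng.choice(len(branches), size=n, p=weights)])
sigma = np.array([branches[i][2]
                  for i in rng.choice(len(branches), size=n, p=weights)])

# quantiles of the recombined ("composite") model summarize the epistemic spread
q16, q50, q84 = np.exp(np.percentile(ln_median, [16, 50, 84]))
s16, s50, s84 = np.percentile(sigma, [16, 50, 84])
print(f"median PGA for M{M:.1f} at {R:.0f} km: 16/50/84% = {q16:.3f} / {q50:.3f} / {q84:.3f} g")
print(f"sigma (ln units):                    16/50/84% = {s16:.2f} / {s50:.2f} / {s84:.2f}")
```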

115 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared two techniques for uncertainty quantification in chemistry computations, one based on sensitivity analysis and error propagation, and the other on stochastic analysis using polynomial chaos techniques.
Abstract: This study compares two techniques for uncertainty quantification in chemistry computations, one based on sensitivity analysis and error propagation, and the other on stochastic analysis using polynomial chaos techniques. The two constructions are studied in the context of H2O2 ignition under supercritical-water conditions. They are compared in terms of their prediction of uncertainty in species concentrations and the sensitivity of selected species concentrations to given parameters. The formulation is extended to one-dimensional reacting-flow simulations. The computations are used to study sensitivities to both reaction rate pre-exponentials and enthalpies, and to examine how this information must be evaluated in light of known, inherent parametric uncertainties in simulation parameters. The results indicate that polynomial chaos methods provide similar first-order information to conventional sensitivity analysis, while preserving higher-order information that is needed for accurate uncertainty quantification and for assigning confidence intervals on sensitivity coefficients. These higher-order effects can be significant, as the analysis reveals substantial uncertainties in the sensitivity coefficients themselves. © 2005 Wiley Periodicals, Inc. Int J Chem Kinet 37: 368–382, 2005

88 citations


Book ChapterDOI
09 Mar 2005
TL;DR: A density measure for uncertain objectives is proposed to maintain diversity in the nondominated set, and the approach is demonstrated on the reliability optimization problem, where uncertain component failure rates are common and exhaustive tests are often not possible.
Abstract: Multi-objective evolutionary algorithms (MOEAs) have proven to be a powerful tool for global optimization of deterministic problem functions. Yet, in many real-world problems, uncertainty about the correctness of the system model and environmental factors does not allow clear objective values to be determined. Stochastic sampling as applied in noisy EAs neglects the fact that this so-called epistemic uncertainty is not an inherent property of the system and cannot be reduced by sampling methods. Therefore, some extensions of MOEAs to handle epistemic uncertainty in objective functions are proposed. The extensions are generic and applicable to most common MOEAs. A density measure for uncertain objectives is proposed to maintain diversity in the nondominated set. The approach is demonstrated on the reliability optimization problem, where uncertain component failure rates are common and exhaustive tests are often not possible for reasons of time and budget.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the expected costs incurred when model uncertainty is ignored with those incurred when this uncertainty is explicitly considered using robust optimization, and demonstrate how explicitly considering uncertainty may limit worst-case MR&R expenditures.
Abstract: The network-level infrastructure management problem involves selecting and scheduling maintenance, repair, and rehabilitation (MR&R) activities on networks of infrastructure facilities so as to maintain the level of service provided by the network in a cost-effective manner. This problem is frequently formulated as a Markov decision problem (MDP) solved via linear programming (LP). The conditions of facilities are represented by elements of discrete condition rating sets, and transition probabilities are employed to describe deterioration processes. Epistemic and parametric uncertainties not considered within the standard MDP/LP framework are associated with the transition probabilities used in infrastructure management optimization routines. This paper contrasts the expected costs incurred when model uncertainty is ignored with those incurred when this uncertainty is explicitly considered using robust optimization. A case study involving a network-level pavement management MDP/LP problem demonstrates how explicitly considering uncertainty may limit worst-case MR&R expenditures. The methods and results can also be used to identify the costs of uncertainty in transition probability matrices used in infrastructure management systems.
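
For readers unfamiliar with the standard (non-robust) formulation referred to above, the following sketch sets up a small steady-state MDP/LP with SciPy's linprog; the states, actions, costs, transition matrices, and performance constraint are all invented for illustration and are not from the paper's case study.

```python
# Toy network-level MDP/LP: steady-state facility fractions as decision variables.
import numpy as np
from scipy.optimize import linprog

states = ["good", "fair", "poor"]
actions = ["do-nothing", "rehab"]
P = {  # transition matrices P[a][s, s'] (invented)
    "do-nothing": np.array([[0.8, 0.2, 0.0],
                            [0.0, 0.7, 0.3],
                            [0.0, 0.0, 1.0]]),
    "rehab":      np.array([[1.0, 0.0, 0.0],
                            [0.9, 0.1, 0.0],
                            [0.8, 0.2, 0.0]]),
}
cost = {"do-nothing": [0.0, 0.0, 2.0],      # assumed user cost for leaving "poor" untreated
        "rehab":      [5.0, 10.0, 25.0]}

# decision variables w[s, a] = long-run fraction of facilities in state s under action a
idx = {(s, a): i for i, (s, a) in enumerate((s, a) for s in range(3) for a in actions)}
n = len(idx)
c = np.array([cost[a][s] for (s, a) in idx])

# Chapman-Kolmogorov balance: inflow to each state equals its share of facilities
A_eq = np.zeros((4, n))
b_eq = np.zeros(4)
for s_next in range(3):
    for (s, a), i in idx.items():
        A_eq[s_next, i] = (1.0 if s == s_next else 0.0) - P[a][s, s_next]
A_eq[3, :] = 1.0          # fractions sum to one
b_eq[3] = 1.0

# performance constraint: at most 10% of the network in "poor" condition
A_ub = np.zeros((1, n))
for (s, a), i in idx.items():
    if s == 2:
        A_ub[0, i] = 1.0
b_ub = [0.10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
print("expected long-run cost per facility:", round(res.fun, 3))
for (s, a), i in idx.items():
    if res.x[i] > 1e-6:
        print(f"  {states[s]:>4} / {a:<10}: {res.x[i]:.3f}")
```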


Journal ArticleDOI
TL;DR: In this paper, a computational framework is presented that integrates a high-fidelity aeroelastic model into reliability-based design optimization, and the system reliability is evaluated by a first-order reliability analysis method.

Journal ArticleDOI
TL;DR: This presentation investigates and illustrates formal procedures for assessing the uncertainty in the probability that a safety system will fail to operate as intended in an accident environment and suggests that evidence theory provides a potentially valuable representational tool for the display of the implications of significant epistemic uncertainty in inputs to complex analyses.
Abstract: Safety systems are important components of high-consequence systems that are intended to prevent the unintended operation of the system and thus the potentially significant negative consequences that could result from such an operation. This presentation investigates and illustrates formal procedures for assessing the uncertainty in the probability that a safety system will fail to operate as intended in an accident environment. Probability theory and evidence theory are introduced as possible mathematical structures for the representation of the epistemic uncertainty associated with the performance of safety systems, and a representation of this type is illustrated with a hypothetical safety system involving one weak link and one strong link that is exposed to a high temperature fire environment. Topics considered include (1) the nature of diffuse uncertainty information involving a system and its environment, (2) the conversion of diffuse uncertainty information into the mathematical structures associated with probability theory and evidence theory, and (3) the propagation of these uncertainty structures through a model for a safety system to obtain representations in the context of probability theory and evidence theory of the uncertainty in the probability that the safety system will fail to operate as intended. The results suggest that evidence theory provides a potentially valuable representational tool for the display of the implications of significant epistemic uncertainty in inputs to complex analyses.
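
A minimal sketch of the evidence-theory representation mentioned above: interval-valued focal elements with basic probability assignments are pushed through a simple monotone failure condition, and belief and plausibility bound the probability of the event of interest. The intervals, masses, and failure condition are hypothetical and much simpler than the paper's weak-link/strong-link system.

```python
# Belief/plausibility for a failure event from interval-valued evidence (hypothetical inputs).
focal = [((300.0, 400.0), 0.2),     # focal elements for a failure temperature (deg C) and masses
         ((350.0, 500.0), 0.5),
         ((450.0, 600.0), 0.3)]

fire_temp = 480.0                   # assumed accident-environment temperature

def fails(t_fail_lo, t_fail_hi):
    """Interval image of the indicator 'component fails' (fire temperature exceeds t_fail)."""
    lo = 1.0 if fire_temp > t_fail_hi else 0.0     # fails even for the highest t_fail in the interval
    hi = 1.0 if fire_temp > t_fail_lo else 0.0     # fails for at least some t_fail in the interval
    return lo, hi

# event A = "component fails in the fire environment"
bel = sum(m for (a, b), m in focal if fails(a, b)[0] == 1.0)   # focal set implies A
pl  = sum(m for (a, b), m in focal if fails(a, b)[1] == 1.0)   # focal set consistent with A

print(f"belief (lower bound) that the component fails:       {bel:.2f}")
print(f"plausibility (upper bound) that the component fails: {pl:.2f}")
```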

Journal ArticleDOI
01 Feb 2005
TL;DR: It is evident that the computational cost of the KLME approach is significantly lower than that of the Monte Carlo and CME approaches; moreover, while the computational cost of the CME approach depends on the number of grid nodes, the cost of the KLME approach is independent of the number of grid nodes.
Abstract: Geological formations are ubiquitously heterogeneous, and the equations that govern flow and transport in such formations can be treated as stochastic partial differential equations. The Monte Carlo method is a straightforward approach for simulating flow in heterogeneous porous media; an alternative based on the moment-equation approach has been developed in the last two decades to reduce the high computational expense required by the Monte Carlo method. However, the computational cost of the moment-equation approach is still high. For example, to solve for the head covariance up to first order in terms of $\sigma_Y^2$, the variance of log hydraulic conductivity Y = ln Ks, one must solve sets of linear algebraic equations with N unknowns 2N times (N being the number of grid nodes). The cost is even higher if higher-order approximations are needed. Zhang and Lu [J. Comput. Phys., 194 (2004), pp. 773-794] developed a new approach to evaluate high-order moments (fourth order for the mean head in terms of $\sigma_Y$, and third order for the head variances in terms of $\sigma_Y^2$) of flow quantities based on the combination of Karhunen-Loeve decomposition and perturbation methods. In this study, we systematically investigate the computational efficiency and solution accuracy of three approaches: Monte Carlo simulations, the conventional moment-equation (CME) approach, and the moment-equation approach based on Karhunen-Loeve decomposition (KLME). It is evident that the computational cost of the KLME approach is significantly lower than those required by the Monte Carlo and CME approaches. More importantly, while the computational costs (in terms of the number of times that linear algebraic equations with N unknowns must be solved) for the CME approach depend on the number of grid nodes, the cost for the KLME approach is independent of the number of grid nodes. This makes it possible to apply the KLME method to solve more realistic large-scale flow problems.
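
The Karhunen-Loeve machinery underlying the KLME approach can be sketched as follows, assuming an exponential covariance for Y = ln Ks on a 1-D grid; the domain size, variance, and correlation length are illustrative. The point is that the number of retained modes, not the number of grid nodes, drives the cost of the downstream moment equations.

```python
# Discrete Karhunen-Loeve decomposition of ln Ks with an assumed exponential covariance.
import numpy as np

n, L = 200, 10.0                       # grid nodes, domain length (illustrative)
sigma2_Y, corr_len = 1.0, 2.0          # variance of Y = ln Ks and its correlation length
x = np.linspace(0.0, L, n)
C = sigma2_Y * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# KL modes = eigenpairs of the covariance matrix (weighted by the grid spacing)
dx = L / (n - 1)
eigval, eigvec = np.linalg.eigh(C * dx)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]        # descending order

m = np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95) + 1
print(f"{m} KL terms capture 95% of the variance on a {n}-node grid")

# a random realization of Y from the truncated expansion
rng = np.random.default_rng(3)
xi = rng.standard_normal(m)
Y = (eigvec[:, :m] / np.sqrt(dx)) @ (np.sqrt(eigval[:m]) * xi)
print("realization mean / std:", Y.mean().round(3), Y.std().round(3))
```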

Journal ArticleDOI
01 May 2005
TL;DR: In this paper, an equation- and Galerkin-free computational approach to uncertainty quantification for dynamical systems is presented; it conducts UQ computations using short bursts of appropriately initialized ensembles of simulations.
Abstract: The authors' equation- and Galerkin-free computational approach to uncertainty quantification for dynamical systems conducts UQ computations using short bursts of appropriately initialized ensembles of simulations. Their basic procedure estimates the quantities arising in stochastic Galerkin computations.


Proceedings ArticleDOI
01 Jan 2005
TL;DR: In this paper, an overall method is presented for selecting a rapid manufacturing (RM) technology under the geometric uncertainty inherent to mass customization; at small lot sizes, such as with customized products, traditional manufacturing technologies become infeasible due to the high costs of tooling and setup, and RM offers the opportunity to produce such products economically.
Abstract: Rapid Prototyping (RP) is the process of building three-dimensional objects, in layers, using additive manufacturing. Rapid Manufacturing (RM) is the use of RP technologies to manufacture end-use, or finished, products. At small lot sizes, such as with customized products, traditional manufacturing technologies become infeasible due to the high costs of tooling and setup. RM offers the opportunity to produce these customized products economically. Coupled with the customization opportunities afforded by RM is a certain degree of uncertainty. This uncertainty is mainly attributed to the lack of information about the customer's specific requirements and preferences at the time of production. In this paper, we present an overall method for selection of an RM technology under the geometric uncertainty inherent to mass customization. Specifically, we define the types of uncertainty inherent to RM (epistemic), propose a method to account for this uncertainty in a selection process (interval analysis), and propose a method to select a technology under uncertainty (Hurwicz selection criterion). We illustrate our method with an example on the selection of an RM technology to produce custom caster wheels.
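
The selection logic described above (interval analysis plus the Hurwicz criterion) reduces to a few lines; the technology names, score intervals, and optimism coefficient below are made up for illustration and are not taken from the paper.

```python
# Hurwicz selection over interval-valued scores (hypothetical candidates and intervals).
candidates = {
    # technology: (worst-case score, best-case score) over the geometric uncertainty
    "SLA": (0.55, 0.80),
    "SLS": (0.60, 0.75),
    "FDM": (0.40, 0.90),
}
alpha = 0.4   # Hurwicz optimism coefficient: 0 = pure pessimism, 1 = pure optimism

def hurwicz(interval, alpha):
    worst, best = interval
    return alpha * best + (1.0 - alpha) * worst

ranked = sorted(candidates.items(), key=lambda kv: hurwicz(kv[1], alpha), reverse=True)
for name, iv in ranked:
    print(f"{name}: interval {iv}, Hurwicz score {hurwicz(iv, alpha):.3f}")
print("selected technology:", ranked[0][0])
```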

Proceedings ArticleDOI
01 Jul 2005
TL;DR: In this paper, the authors formulated the stochastic (i.e. robust) water distribution system design problem as a multiobjective optimization problem under uncertainty and solved it by a genetic algorithm.
Abstract: The problem of stochastic (i.e. robust) water distribution system (WDS) design is formulated and solved here as a multiobjective optimization problem under uncertainty. The objectives are to minimize two parameters: (a) the cost of the network design/rehabilitation and (b) the probability of network failure due to uncertainty in input parameters. The sources of uncertainty analyzed here are future water consumption and pipe roughnesses. All uncertain model input parameters are assumed to be random variables following some known probability density function (PDF). We also assume that those random variables are not necessarily independent and that the matrix giving the correlations between all pairs of uncertain parameters (correlation matrix) is specified. To avoid using a computationally demanding sampling-based technique for uncertainty quantification, the original stochastic formulation is replaced by a deterministic one. After some simplifications, a fast numerical integration method is used to quantify the uncertainties. The optimization problem is solved by a genetic algorithm (GA), which finds the Pareto front using the non-dominated sorting GA (NSGA-II) for multi-objective optimisation. The proposed methodology was tested on the New York tunnel problem.

01 Sep 2005
TL;DR: An efficient solution algorithm is developed to solve robust counterparts of the asset management problem, and a case study shows how the proposed approach may reduce maintenance and rehabilitation (M&R) expenditures.
Abstract: Asset management systems help public works agencies decide when and how to maintain and rehabilitate infrastructure facilities in a cost-effective manner. Many sources of error, some difficult to quantify, can limit the ability of asset management systems to accurately predict how built systems will deteriorate. This paper introduces the use of robust optimization to deal with epistemic uncertainty. The Hurwicz criterion is employed to ensure management policies are never 'too conservative'. An efficient solution algorithm is developed to solve robust counterparts of the asset management problem. A case study demonstrates how the consideration of uncertainty alters optimal management policies and shows how the proposed approach may reduce maintenance and rehabilitation (M&R) expenditures.

Journal ArticleDOI
TL;DR: In this paper, a polynomial response surface model is developed, relating system parameters to measurable output features, and the response surface is used in an inverse sense to identify system parameters from measured output features.
Abstract: Metamodels have been used with success in many areas of engineering for decades but only recently in the field of structural dynamics. A metamodel is a fast-running surrogate that is typically used to aid an analyst or test engineer in the fast and efficient exploration of the design space. Response surface metamodels are used in this work to perform parameter identification of a simple five-degree-of-freedom system, motivated by their low training requirements and ease of use. In structural dynamics applications, response surface metamodels have been utilized in a forward sense, for activities such as sensitivity analysis or uncertainty quantification. In this study a polynomial response surface model is developed, relating system parameters to measurable output features. Once this relationship is established, the response surface is used in an inverse sense to identify system parameters from measured output features. A design of experiments is utilized to choose points, representing a fraction of the full design space of interest, for fitting the response surface metamodel. Two parameters commonly used to characterize damage in a structural system, stiffness and damping, are identified. Changes are first identified and located successfully in a linear 5DOF system; parameter identification is then attempted for a nonlinear 5DOF system, with limited success. This work demonstrates that the use of response surface metamodels in an inverse sense shows promise for system parameter identification for both linear and weakly nonlinear systems and that the method has potential for use in damage identification applications.
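
A compact sketch of the forward/inverse use of a polynomial response surface described above, on a single-DOF stand-in for the identification problem: fit a quadratic surface from (stiffness, damping) to measurable modal features, then invert it to recover the parameters from "measured" features. All physical values, ranges, and noise levels are placeholders, not the paper's 5DOF setup.

```python
# Quadratic response surface fitted forward, then used in an inverse sense (toy SDOF system).
import numpy as np
from scipy.optimize import minimize

m_mass = 1.0
def features(k, c):
    """Measured outputs: natural frequency (Hz) and damping ratio."""
    wn = np.sqrt(k / m_mass)
    return np.array([wn / (2.0 * np.pi), c / (2.0 * np.sqrt(k * m_mass))])

def basis(k, c):
    """Quadratic polynomial basis in the two parameters."""
    return np.array([1.0, k, c, k * c, k ** 2, c ** 2])

# design of experiments: random points over the plausible parameter ranges
rng = np.random.default_rng(7)
K = rng.uniform(0.5e4, 2.0e4, 60)
C = rng.uniform(1.0, 20.0, 60)
X = np.array([basis(k, c) for k, c in zip(K, C)])
Y = np.array([features(k, c) for k, c in zip(K, C)])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)          # one column of coefficients per feature

surface = lambda k, c: basis(k, c) @ coef             # fast-running metamodel

# inverse use: recover parameters from noisy "measured" features
k_true, c_true = 1.2e4, 8.0
measured = features(k_true, c_true) * (1.0 + 0.01 * rng.standard_normal(2))
obj = lambda p: np.sum((surface(p[0], p[1]) - measured) ** 2)
fit = minimize(obj, x0=[1.0e4, 10.0], method="Nelder-Mead")
print(f"identified k = {fit.x[0]:.0f} (true {k_true}), c = {fit.x[1]:.2f} (true {c_true})")
```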

Journal ArticleDOI
TL;DR: A probabilistic approach to the NLP method is proposed, which allows one to estimate the probability distribution of the predicted discharge values and to quantify the total uncertainty related to the forecast.
Abstract: In the recent past the nonlinear prediction (NLP) method, initially developed in the context of nonlinear time series analysis, has been successfully applied to river flow deterministic forecasting. In this work a probabilistic approach to the NLP method is proposed, which allows one to estimate the probability distribution of the predicted discharge values and to quantify the total uncertainty related to the forecast. An ensemble technique is also proposed in order to optimize the choice of the parameter values and to provide robustness to the model calibration. The probabilistic NLP method is applied to a river flow time series, giving results that confirm the effectiveness and reliability of the proposed approach.
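
A hedged sketch of the probabilistic nearest-neighbour idea outlined above: embed the discharge series in delay coordinates, collect the successors of the k nearest past states, and use their spread as the predictive distribution. The synthetic "river flow" series, the embedding parameters, and the number of neighbours are all illustrative assumptions.

```python
# Probabilistic nearest-neighbour (local) one-step forecast of a synthetic discharge series.
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(2000)
flow = 50 + 20 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 2.0, t.size)   # toy series

m, tau, k = 3, 1, 30                    # embedding dimension, delay, number of neighbours
X = np.array([flow[i:i + m * tau:tau] for i in range(len(flow) - m * tau)])  # delay vectors
y = flow[m * tau:]                       # one-step-ahead successor of each delay vector

query = X[-1]                            # current state of the river
dist = np.linalg.norm(X[:-1] - query, axis=1)
nbrs = np.argsort(dist)[:k]
ensemble = y[nbrs]                       # empirical predictive distribution

lo, med, hi = np.percentile(ensemble, [5, 50, 95])
print(f"one-step forecast: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
print(f"(next observed value in the toy series: {y[-1]:.1f})")
```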

Proceedings ArticleDOI
TL;DR: This paper proposes a method for adjusting the shape of relative permeability curves during history-matching at the coarse scale, using the Neighbourhood Approximation algorithm and B-spline parameterisation; the approach aims at encapsulating sub-grid heterogeneity in multi-phase functions directly in the coarse-scale model and at predicting uncertainty.
Abstract: Reservoir simulation to predict production performance requires two steps: one is history-matching, and the other is uncertainty quantification in forecasting. In the process of history-matching, rock relative permeability curves are often altered to reproduce production data. However, guidelines for changing the shape of the curves have not been clearly established. The aim of this paper is to clarify the possible influence of relative permeabilities on reservoir simulation using the uncertainty envelope. We propose a method for adjusting the shape of relative permeability curves during history-matching at the coarse scale, using the Neighbourhood Approximation algorithm and B-spline parameterisation. After generating multiple history-matched models, we quantify the uncertainty envelope in a Bayesian framework. Our approach aims at encapsulating sub-grid heterogeneity in multi-phase functions directly in the coarse-scale model, and predicting uncertainty. In this sense, the framework differs from conventional procedures which perturb fine-scale features, upscale the models and evaluate each performance. In addition, B-spline parameterisation is flexible, allowing the capture of local features in the relative permeability curves. The results of synthetic cases showed that the lack of knowledge of the subgrid permeability and the insufficient ...

Journal ArticleDOI
TL;DR: A method is developed in this paper to accelerate the convergence in computing the solution of stochastic algebraic systems of equations by computing a polynomial chaos decomposition of a stochastic preconditioner to the system of equations.
Abstract: A method is developed in this paper to accelerate the convergence in computing the solution of stochastic algebraic systems of equations. The method is based on computing, via statistical sampling, a polynomial chaos decomposition of a stochastic preconditioner to the system of equations. This preconditioner can subsequently be used in conjunction with either chaos representations of the solution or with approaches based on Monte Carlo sampling. In addition to presenting the supporting theory, the paper also presents a convergence analysis and an example to demonstrate the significance of the proposed algorithm.


Proceedings ArticleDOI
16 May 2005
TL;DR: The nonparametric approach to vehicle uncertainty modelling shows the sensitivity of the vibroacoustic frequency responses to structural and cavity uncertainties as well as coupling interface uncertainties, with flexible parts appearing to be more sensitive to random uncertainties than stiff parts.
Abstract: In order to improve the robustness of vibroacoustic numerical predictions, a model of random uncertainties is introduced. The random uncertainty modelling relies on a nonparametric approach providing random system realizations with maximum entropy. This approach only requires a few uncertainty parameters but takes into account data errors as well as model errors. It appears to be well adapted to studying the variability of structural-acoustic systems; the implementation of the method for this class of problem is presented here for the first time. Practically, the paper deals with a classical low-frequency vibroacoustic model such as is used for booming noise predictions. The application of the nonparametric approach to vehicle uncertainty modelling shows the sensitivity of the vibroacoustic frequency responses to structural and cavity uncertainties as well as coupling interface uncertainties. Flexible parts appear to be more sensitive to random uncertainties than stiff parts. The sensitivity of the structural modes to structural random uncertainties is also shown in a stochastic MAC table.
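
The nonparametric idea described above can be sketched with a Wishart ensemble standing in for the maximum-entropy random-matrix model (a deliberate simplification of the actual construction): the reduced stiffness matrix is randomized as a whole, capturing model as well as data uncertainty, rather than through individual physical parameters. The 3-DOF matrices and the dispersion level are invented for illustration.

```python
# Randomizing a reduced stiffness matrix as a whole (Wishart stand-in for the max-entropy ensemble).
import numpy as np
from scipy.stats import wishart
from scipy.linalg import eigh

# nominal reduced mass and stiffness matrices of a toy 3-DOF model (placeholders)
M = np.eye(3)
K = 1.0e4 * np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  1.0]])

nu = 60                                           # dispersion control: larger nu, smaller scatter
L = np.linalg.cholesky(K)                         # K = L @ L.T
ensemble = wishart(df=nu, scale=np.eye(3) / nu)   # normalized random matrix with E[G0] = I

rng = np.random.default_rng(5)
freqs = []
for _ in range(2000):
    G0 = ensemble.rvs(random_state=rng)           # positive-definite random "germ"
    K_rand = L @ G0 @ L.T                         # random SPD stiffness with mean K
    w2 = eigh(K_rand, M, eigvals_only=True)       # generalized eigenvalues (rad/s)^2
    freqs.append(np.sqrt(w2) / (2.0 * np.pi))
freqs = np.array(freqs)

for i, (mean, cov) in enumerate(zip(freqs.mean(axis=0),
                                    freqs.std(axis=0) / freqs.mean(axis=0))):
    print(f"mode {i + 1}: mean frequency {mean:6.2f} Hz, coefficient of variation {cov:.3f}")
```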

04 Sep 2005
TL;DR: Both parametric and non-parametric probabilistic approaches are applied to a complex aerospace system consisting of a satellite coupled with its launcher, in order to quantify the sensitivity of the structure to data uncertainties as well as model uncertainties.
Abstract: The dynamical analysis of complex mechanical systems is in general very sensitive to random uncertainties. In order to treat the latter in a rational way and to increase the robustness of the dynamical predictions, the random uncertainties can be represented by probabilistic models. The structural complexity of the dynamical systems arising in these fields results in large finite element models with significant random uncertainties. Parametric probabilistic models capture the uncertainty in the parameters of the numerical model of the structure, which are often directly related to physical parameters in the actual structure, e.g. Young's modulus. Model uncertainties would have to be modeled separately. On the other hand, the proposed nonparametric model of random uncertainties represents a global probabilistic approach which, in addition, takes directly into account model uncertainty, such as that related to the choice of a particular type of finite element. The uncertain parameters of the structure are not modeled directly by random variables (r.v.'s); instead, the probability model is directly introduced from the generalized matrices of a mean reduced matrix model of the structure by using the maximum entropy principle (Soize 2001). In this formulation the global scatter of each random matrix is controlled by one real positive scalar called dispersion parameter. An example problem from aerospace engineering, specifically the FE model of the scientific satellite INTEGRAL of the European Space Agency (ESA) (Alenia 1998) is used to elucidate the two approaches. First the analysis based on the parametric formulation is carried out; the associated results are then used to calibrate the dispersion parameters and to construct the reduced matrices of the non-parametric model.

01 Feb 2005
TL;DR: The propagation of parameter uncertainty through a model to obtain the uncertain vibration response is becoming more practical for industrial-scale finite element models due to the increase in available computing power.
Abstract: The propagation of parameter uncertainty through a model to obtain the uncertain vibration response is becoming more practical for industrial-scale finite element models due to the increase in available computing power. In some cases the parametric uncertainty may be measured directly, for example the thickness of a panel. However, the parameters for joint models (for example) must be estimated from measurements using the techniques of finite element model updating. In these cases the techniques of model updating must be extended to allow for uncertainty quantification from a series of measurements on nominally identical structures. The validation of these methods requires laboratory experiments where the uncertain parameter is measured directly and also estimated by updating. This paper outlines the results of experiments that may be used for this purpose, namely a moving mass on a free-free beam and a copper pipe with uncertain internal pressure. The data from these experiments will be freely available on the associated website at Bristol.