
Showing papers on "Uncertainty quantification published in 2015"


Book
15 Dec 2015
TL;DR: This book introduces the mathematics of uncertainty quantification, covering measure and probability theory, Banach and Hilbert spaces, optimization, measures of information and uncertainty, Bayesian inverse problems, filtering and data assimilation, orthogonal polynomials, numerical integration, sensitivity analysis and model reduction, spectral expansions, stochastic Galerkin and non-intrusive methods, and distributional uncertainty.
Abstract: Introduction.- Measure and Probability Theory.- Banach and Hilbert Spaces.- Optimization Theory.- Measures of Information and Uncertainty.- Bayesian Inverse Problems.- Filtering and Data Assimilation.- Orthogonal Polynomials and Applications.- Numerical Integration.- Sensitivity Analysis and Model Reduction.- Spectral Expansions.- Stochastic Galerkin Methods.- Non-Intrusive Methods.- Distributional Uncertainty.- References.- Index.

401 citations


MonographDOI
01 May 2015
TL;DR: Adopting a general dynamical systems approach, this book is an ideal introduction to probabilistic forecasting and Bayesian data assimilation for graduate students in applied mathematics, computer science, engineering, geoscience and other emerging application areas.
Abstract: In this book the authors describe the principles and methods behind probabilistic forecasting and Bayesian data assimilation. Instead of focusing on particular application areas, the authors adopt a general dynamical systems approach, with a profusion of low-dimensional, discrete-time numerical examples designed to build intuition about the subject. Part I explains the mathematical framework of ensemble-based probabilistic forecasting and uncertainty quantification. Part II is devoted to Bayesian filtering algorithms, from classical data assimilation algorithms such as the Kalman filter, variational techniques, and sequential Monte Carlo methods, through to more recent developments such as the ensemble Kalman filter and ensemble transform filters. The McKean approach to sequential filtering in combination with coupling of measures serves as a unifying mathematical framework throughout Part II. Assuming only some basic familiarity with probability, this book is an ideal introduction for graduate students in applied mathematics, computer science, engineering, geoscience and other emerging application areas.

353 citations


Journal ArticleDOI
TL;DR: In this paper, a non-probabilistic fuzzy approach and a probabilistic Bayesian approach to model updating for non-destructive damage assessment are presented; the model updating problem is an inverse problem prone to ill-posedness and ill-conditioning.

338 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic shape of the posterior distribution in high-dimensional linear regression under sparsity constraints is characterized and employed in the construction and study of credible sets for uncertainty quantification; the prior is a mixture of point masses at zero and continuous distributions.
Abstract: We study full Bayesian procedures for high-dimensional linear regression under sparsity constraints. The prior is a mixture of point masses at zero and continuous distributions. Under compatibility conditions on the design matrix, the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector, and to give optimal prediction of the response vector. It is also shown to select the correct sparse model, or at least the coefficients that are significantly different from zero. The asymptotic shape of the posterior distribution is characterized and employed to the construction and study of credible sets for uncertainty quantification.
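To make the point-mass-plus-slab prior concrete, the toy sketch below enumerates all sparse models in a low-dimensional problem and computes posterior inclusion probabilities exactly. The settings (known noise level, Gaussian slab, independent Bernoulli inclusion prior) are illustrative assumptions, not the high-dimensional asymptotic regime analyzed in the paper.

```python
# Toy spike-and-slab posterior by enumeration (illustrative assumptions, low dimension).
import itertools
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p, sigma, tau, w = 50, 6, 1.0, 3.0, 0.2      # w = prior inclusion probability
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])   # sparse truth
y = X @ beta_true + sigma * rng.standard_normal(n)

log_post = {}
for S in itertools.product([0, 1], repeat=p):   # enumerate all 2^p sparsity patterns
    idx = [j for j in range(p) if S[j]]
    # Marginal likelihood: y ~ N(0, sigma^2 I + tau^2 X_S X_S^T) after integrating out the slab.
    cov = sigma**2 * np.eye(n)
    if idx:
        cov += tau**2 * X[:, idx] @ X[:, idx].T
    log_ml = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)
    log_prior = sum(np.log(w) if s else np.log(1 - w) for s in S)
    log_post[S] = log_ml + log_prior

log_Z = np.logaddexp.reduce(list(log_post.values()))
inclusion = np.zeros(p)
for S, lp in log_post.items():                  # posterior inclusion probabilities
    inclusion += np.array(S) * np.exp(lp - log_Z)
print("posterior inclusion probabilities:", np.round(inclusion, 3))
```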

252 citations


Journal ArticleDOI
TL;DR: This paper provides a self-contained review of the mathematical principles and methods of uncertainty quantification and their application to highly complex, multi-parameter kinetic problems in combustion chemistry.

233 citations


Journal ArticleDOI
TL;DR: The Chaospy software toolbox is compared to similar packages and demonstrates a stronger focus on defining reusable software building blocks that can easily be assembled to construct new, tailored algorithms for uncertainty quantification.
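As an illustration of the building-block style mentioned above, a minimal non-intrusive polynomial chaos workflow in Chaospy might look like the sketch below. It assumes the chaospy 4.x point-collocation API (generate_expansion, fit_regression, E, Std); the model function and input distributions are placeholders.

```python
# Minimal non-intrusive PCE with Chaospy (sketch; function names assume the chaospy 4.x API).
import numpy as np
import chaospy

def model(q):
    # Placeholder black-box model of the two uncertain inputs.
    return np.exp(-q[0]) * np.cos(q[1])

joint = chaospy.J(chaospy.Uniform(0.0, 1.0), chaospy.Normal(0.0, 0.5))
expansion = chaospy.generate_expansion(3, joint)              # orthogonal polynomial basis
samples = joint.sample(200, rule="halton")                    # collocation points, shape (2, 200)
evals = np.array([model(s) for s in samples.T])               # model runs
surrogate = chaospy.fit_regression(expansion, samples, evals) # least-squares PCE fit

print("mean:", chaospy.E(surrogate, joint))
print("std :", chaospy.Std(surrogate, joint))
```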

228 citations


Journal ArticleDOI
TL;DR: PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging: a sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output.
Abstract: Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. Optimization and uncertainty quantification problems typically require a large number of runs of the computational model at hand, which may not be feasible with high-fidelity models directly. Thus surrogate models (a.k.a metamodels) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive metamodelling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables where polynomials are chosen in coherency with the probability distributions of those input variables. A least-square minimization technique may be used to determine the coefficients of the PCE. On the other hand, Kriging assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have been developed more or less in parallel so far with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model whereas Kriging manages the local variability of the model output. An adaptive algorithm similar to the least angle regression algorithm determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions which are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs better than or at least as good as the two distinct meta-modeling techniques. A larger gain in accuracy is obtained when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
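A much-simplified, two-stage flavor of this idea can be sketched with standard Python tooling: fit a (here non-sparse) Hermite polynomial trend by least squares, then let a Gaussian process model the residuals. The actual PC-Kriging formulation embeds a sparse PCE as the trend of a universal Kriging model and selects the basis with a least-angle-regression-type algorithm; the sketch below only conveys the global-trend-plus-local-residual intuition, with a placeholder model and a one-dimensional standard-normal input.

```python
# Two-stage "PCE trend + Kriging residual" sketch (a simplified stand-in for PC-Kriging).
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def model(x):                        # placeholder for the expensive computational model
    return np.sin(3 * x) + 0.3 * x**2

x_train = rng.standard_normal(30)    # experimental design for a standard-normal input
y_train = model(x_train)

# Stage 1: global trend via probabilists' Hermite polynomials, least-squares coefficients.
degree = 4
Phi = hermevander(x_train, degree)
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
trend = Phi @ coef

# Stage 2: Kriging (a GP) on the residuals captures the local variability.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4))
gp.fit(x_train[:, None], y_train - trend)

x_new = np.linspace(-2, 2, 5)
prediction = hermevander(x_new, degree) @ coef + gp.predict(x_new[:, None])
print("prediction:", np.round(prediction, 3))
print("truth     :", np.round(model(x_new), 3))
```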

220 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo ensemble approach was adopted to characterize parametric uncertainty, because initial experiments indicate the existence of significant nonlinear interactions, and the resulting ensemble exhibits a wider uncertainty range before 1900, as well as an uncertainty maximum around World War II.
Abstract: Described herein is the parametric and structural uncertainty quantification for the monthly Extended Reconstructed Sea Surface Temperature (ERSST) version 4 (v4). A Monte Carlo ensemble approach was adopted to characterize parametric uncertainty, because initial experiments indicate the existence of significant nonlinear interactions. Globally, the resulting ensemble exhibits a wider uncertainty range before 1900, as well as an uncertainty maximum around World War II. Changes at smaller spatial scales in many regions, or for important features such as Nino-3.4 variability, are found to be dominated by particular parameter choices. Substantial differences in parametric uncertainty estimates are found between ERSST.v4 and the independently derived Hadley Centre SST version 3 (HadSST3) product. The largest uncertainties are over the mid and high latitudes in ERSST.v4 but in the tropics in HadSST3. Overall, in comparison with HadSST3, ERSST.v4 has larger parametric uncertainties at smaller spatial and shorter time scales and smaller parametric uncertainties at longer time scales, which likely reflects the different sources of uncertainty quantified in the respective parametric analyses. ERSST.v4 exhibits a stronger globally averaged warming trend than HadSST3 during the period of 1910-2012, but with a smaller parametric uncertainty. These global-mean trend estimates and their uncertainties marginally overlap. Several additional SST datasets are used to infer the structural uncertainty inherent in SST estimates. For the global mean, the structural uncertainty, estimated as the spread between available SST products, is more often than not larger than the parametric uncertainty in ERSST.v4. Neither parametric nor structural uncertainties call into question that on the global-mean level and centennial time scale, SSTs have warmed notably.

218 citations


Journal ArticleDOI
TL;DR: This work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method, the particle disparity approach, the peak ratio criterion and the correlation statistics method.
Abstract: A posteriori uncertainty quantification of particle image velocimetry (PIV) data is essential to obtain accurate estimates of the uncertainty associated with a given experiment. This is particularly relevant when measurements are used to validate computational models or in design and decision processes. In spite of the importance of the subject, the first PIV uncertainty quantification (PIV-UQ) methods have been developed only in the last three years. The present work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method (Timmins et al 2012), the particle disparity approach (Sciacchitano et al 2013), the peak ratio criterion (Charonko and Vlachos 2013) and the correlation statistics method (Wieneke 2015). The analysis is based upon experiments conducted for this specific purpose, where several measurement techniques are employed simultaneously. The performances of the above approaches are surveyed across different measurement conditions and flow regimes.

213 citations


Journal ArticleDOI
TL;DR: Results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken.
Abstract: Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map $a\mapsto u(a)$ where $a$ is the parameter value and $u(a)$ is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs, the parametric smoothness of this map is typically holomorphic and also highly anisotropic in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of $n$-term approximations to the solution map for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best $n$-term approximation, sparsity, and $n$-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second one searches for suitable low dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.

212 citations


Journal ArticleDOI
TL;DR: In this paper, a probabilistic finite element (FE) model updating technique based on Hierarchical Bayesian modeling is proposed for identification of civil structural systems under changing ambient/environmental conditions.

Journal ArticleDOI
TL;DR: It is shown that for a given computational budget, basis selection produces a more accurate PCE than would be obtained if the basis were fixed a priori.

DOI
01 Jan 2015
TL;DR: The UQLAB project as mentioned in this paper is a MATLAB-based software framework for uncertainty quantification, which includes a highly optimized core probabilistic modelling engine and a simple programming interface that provides unified access to heterogeneous high performance computing resources.
Abstract: Uncertainty quantification is a rapidly growing field in computer simulation-based scientific applications. The UQLAB project aims at the development of a MATLAB-based software framework for uncertainty quantification. It is designed to encourage both academic researchers and field engineers to use and develop advanced and innovative algorithms for uncertainty quantification, possibly exploiting modern distributed computing facilities. Ease of use, extendibility and handling of non-intrusive stochastic methods are core elements of its development philosophy. The modular platform comprises a highly optimized core probabilistic modelling engine and a simple programming interface that provides unified access to heterogeneous high performance computing resources. Finally, it provides a content-management system that allows users to easily develop additional custom modules within the framework. In this contribution, we intend to demonstrate the features of the platform at its current development stage.

Journal ArticleDOI
TL;DR: Stochastic methods are used as subgrid-scale parameterizations for model error representation, uncertainty quantification, data assimilation, and ensemble prediction in weather and climate models as mentioned in this paper.
Abstract: Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.

Journal ArticleDOI
TL;DR: A generalized framework for estimating discharge uncertainty at many gauging stations with different errors in the stage‐discharge relationship is presented and is shown to be robust, versatile and able to capture place‐specific uncertainties for a number of different examples.
Abstract: A generalized framework for discharge uncertainty estimation is presented. It allows estimation of place-specific discharge uncertainties for many catchments. Local conditions dominate in determining discharge uncertainty magnitudes.
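A toy version of place-specific discharge uncertainty can be illustrated with a power-law rating curve whose parameters are treated as uncertain: sample parameter sets and report percentile bounds on discharge at a given stage. The curve form and parameter spreads below are assumptions for illustration, not the paper's generalized framework.

```python
# Toy rating-curve ensemble: propagate stage-discharge parameter uncertainty to discharge.
import numpy as np

rng = np.random.default_rng(2)
n_ens = 5000
# Power-law rating curve Q = a * (h - h0)^b with assumed (illustrative) parameter spreads.
a = rng.normal(5.0, 0.5, n_ens)
b = rng.normal(1.6, 0.1, n_ens)
h0 = rng.normal(0.2, 0.05, n_ens)

stage = 1.5                                   # observed stage [m]
discharge = a * np.clip(stage - h0, 0.0, None) ** b

lo, med, hi = np.percentile(discharge, [5, 50, 95])
print(f"discharge: {med:.1f} m^3/s (90% band {lo:.1f}-{hi:.1f})")
```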

Journal ArticleDOI
TL;DR: A computationally efficient machine learning framework is developed based on multi-level recursive co-kriging with sparse precision matrices of Gaussian–Markov random fields that is able to construct response surfaces of complex dynamical systems by blending multiple information sources via auto-regressive stochastic modelling.
Abstract: We propose a new framework for design under uncertainty based on stochastic computer simulations and multi-level recursive co-kriging. The proposed methodology simultaneously takes into account multi-fidelity in models, such as direct numerical simulations versus empirical formulae, as well as multi-fidelity in the probability space (e.g. sparse grids versus tensor product multi-element probabilistic collocation). We are able to construct response surfaces of complex dynamical systems by blending multiple information sources via auto-regressive stochastic modelling. A computationally efficient machine learning framework is developed based on multi-level recursive co-kriging with sparse precision matrices of Gaussian–Markov random fields. The effectiveness of the new algorithms is demonstrated in numerical examples involving a prototype problem in risk-averse design, regression of random functions, as well as uncertainty quantification in fluid mechanics involving the evolution of a Burgers equation from a random initial state, and random laminar wakes behind circular cylinders.
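A stripped-down, two-level version of the auto-regressive multi-fidelity idea can be sketched with scikit-learn Gaussian processes: fit a GP to many cheap low-fidelity runs, estimate a scaling factor, and fit a second GP to the discrepancy at the few high-fidelity points. The recursive multi-level scheme with sparse Gauss-Markov precision matrices in the paper is considerably more general; the functions and sample sizes below are placeholders.

```python
# Two-fidelity AR(1) co-kriging sketch: y_hi(x) ~ rho * GP_low(x) + GP_delta(x).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_low(x):    # cheap, biased model (placeholder)
    return np.sin(8 * x)

def f_high(x):   # expensive, accurate model (placeholder)
    return 1.8 * np.sin(8 * x) + (x - 0.5) ** 2

rng = np.random.default_rng(3)
x_lo = np.sort(rng.uniform(0, 1, 40))[:, None]   # many cheap runs
x_hi = np.sort(rng.uniform(0, 1, 8))[:, None]    # few expensive runs
y_lo, y_hi = f_low(x_lo).ravel(), f_high(x_hi).ravel()

gp_low = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_lo, y_lo)

low_at_hi = gp_low.predict(x_hi)
rho = np.linalg.lstsq(low_at_hi[:, None], y_hi, rcond=None)[0][0]   # AR(1) scaling factor
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_hi, y_hi - rho * low_at_hi)

x_new = np.linspace(0, 1, 5)[:, None]
y_pred = rho * gp_low.predict(x_new) + gp_delta.predict(x_new)
print("multi-fidelity prediction:", np.round(y_pred, 3))
print("high-fidelity truth      :", np.round(f_high(x_new).ravel(), 3))
```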

Journal ArticleDOI
TL;DR: An abstract, problem-dependent theorem on the cost of the new multilevel estimator is given based on a set of simple, verifiable assumptions; for a typical model problem in subsurface flow, the analysis shows significant gains over the standard Metropolis-Hastings estimator.
Abstract: In this paper we address the problem of the prohibitively large computational cost of existing Markov chain Monte Carlo methods for large-scale applications with high-dimensional parameter spaces, e.g., in uncertainty quantification in porous media flow. We propose a new multilevel Metropolis-Hastings algorithm and give an abstract, problem-dependent theorem on the cost of the new multilevel estimator based on a set of simple, verifiable assumptions. For a typical model problem in subsurface flow, we then provide a detailed analysis of these assumptions and show significant gains over the standard Metropolis-Hastings estimator. Numerical experiments confirm the analysis and demonstrate the effectiveness of the method with consistent reductions of more than an order of magnitude in the cost of the multilevel estimator over the standard Metropolis-Hastings algorithm for tolerances $\varepsilon < 10^{-2}$.
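The flavor of the multilevel idea, namely screening proposals with a cheap coarse model so that the expensive fine model is evaluated rarely, can be sketched as a two-level delayed-acceptance Metropolis-Hastings on a toy one-dimensional posterior. This is a simplification under assumed toy densities, not the authors' multilevel estimator or its cost analysis.

```python
# Two-level (delayed-acceptance style) Metropolis-Hastings sketch on a toy 1D posterior.
import numpy as np

rng = np.random.default_rng(4)

def log_post_coarse(theta):   # cheap approximation of the log-posterior (placeholder)
    return -0.5 * (theta - 1.0) ** 2 / 1.2

def log_post_fine(theta):     # expensive "true" log-posterior (placeholder)
    return -0.5 * (theta - 1.0) ** 2

theta, chain, fine_evals = 0.0, [], 0
lp_c, lp_f = log_post_coarse(theta), log_post_fine(theta)
for _ in range(20000):
    prop = theta + 0.8 * rng.standard_normal()
    lp_c_prop = log_post_coarse(prop)
    # Stage 1: accept/reject with the coarse model only (symmetric random-walk proposal).
    if np.log(rng.random()) < lp_c_prop - lp_c:
        # Stage 2: correct with the fine model; the ratio removes the coarse-level bias.
        lp_f_prop = log_post_fine(prop)
        fine_evals += 1
        if np.log(rng.random()) < (lp_f_prop - lp_f) - (lp_c_prop - lp_c):
            theta, lp_c, lp_f = prop, lp_c_prop, lp_f_prop
    chain.append(theta)

print(f"posterior mean: {np.mean(chain[2000:]):.3f} | fine-model evaluations: {fine_evals}")
```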

Journal ArticleDOI
TL;DR: In this paper, a numerical strategy for the efficient estimation of set-valued failure probabilities, coupling Monte Carlo with optimization methods, is presented in order to both speed up the reliability analysis, and provide a better estimate for the lower and upper bounds of the failure probability.

Journal ArticleDOI
TL;DR: The watershed between tractability and intractability in ambiguity-averse uncertainty quantification and chance constrained programming is delineated; using tools from distributionally robust optimization, explicit conic reformulations are derived for tractable problem classes and efficiently computable conservative approximations are suggested for intractable ones.
Abstract: The objective of uncertainty quantification is to certify that a given physical, engineering or economic system satisfies multiple safety conditions with high probability. A more ambitious goal is to actively influence the system so as to guarantee and maintain its safety, a scenario which can be modeled through a chance constrained program. In this paper we assume that the parameters of the system are governed by an ambiguous distribution that is only known to belong to an ambiguity set characterized through generalized moment bounds and structural properties such as symmetry, unimodality or independence patterns. We delineate the watershed between tractability and intractability in ambiguity-averse uncertainty quantification and chance constrained programming. Using tools from distributionally robust optimization, we derive explicit conic reformulations for tractable problem classes and suggest efficiently computable conservative approximations for intractable ones.
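The simplest tractable instance in this setting is a single chance constraint under first- and second-moment information, where the worst-case (Cantelli-type) bound gives a second-order cone constraint: $P(\xi^T x > b) \le \varepsilon$ for every distribution with mean $\mu$ and covariance $\Sigma$ if and only if $\mu^T x + \sqrt{(1-\varepsilon)/\varepsilon}\,\|\Sigma^{1/2} x\| \le b$. The CVXPY sketch below solves a toy problem with this classical reformulation; the data are invented, and this is only one of the many ambiguity sets and structural properties treated in the paper.

```python
# Distributionally robust chance constraint via the Cantelli SOC reformulation (toy data).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n, eps, b = 4, 0.05, 10.0
mu = rng.uniform(1.0, 2.0, n)                       # mean of the uncertain coefficients xi
A = rng.standard_normal((n, n))
Sigma = A @ A.T + 0.1 * np.eye(n)                   # covariance of xi
L = np.linalg.cholesky(Sigma)                       # Sigma = L L^T, so ||L^T x|| = sqrt(x^T Sigma x)

x = cp.Variable(n, nonneg=True)
kappa = np.sqrt((1 - eps) / eps)
# Worst-case P(xi^T x > b) <= eps over all distributions with moments (mu, Sigma):
constraints = [mu @ x + kappa * cp.norm(L.T @ x, 2) <= b]
problem = cp.Problem(cp.Maximize(cp.sum(x)), constraints)   # toy objective
problem.solve()
print("optimal x:", np.round(x.value, 3))
```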

Journal ArticleDOI
TL;DR: A novel methodology based on active subspaces is employed to characterize the effects of the input uncertainty on the scramjet performance, and this dimension reduction enables otherwise infeasible uncertainty quantification.

Journal ArticleDOI
TL;DR: The proposed dictionary matching approach permits segmentation, anomaly detection, and indexing to be performed in a unified manner with the additional benefit of uncertainty quantification.
Abstract: We propose a framework for indexing of grain and subgrain structures in electron backscatter diffraction patterns of polycrystalline materials. We discretize the domain of a dynamical forward model onto a dense grid of orientations, producing a dictionary of patterns. For each measured pattern, we identify the most similar patterns in the dictionary, and identify boundaries, detect anomalies, and index crystal orientations. The statistical distribution of these closest matches is used in an unsupervised binary decision tree (DT) classifier to identify grain boundaries and anomalous regions. The DT classifies a pattern as an anomaly if it has an abnormally low similarity to any pattern in the dictionary. It classifies a pixel as being near a grain boundary if the highly ranked patterns in the dictionary differ significantly over the pixel’s neighborhood. Indexing is accomplished by computing the mean orientation of the closest matches to each pattern. The mean orientation is estimated using a maximum likelihood approach that models the orientation distribution as a mixture of Von Mises–Fisher distributions over the quaternionic three sphere. The proposed dictionary matching approach permits segmentation, anomaly detection, and indexing to be performed in a unified manner with the additional benefit of uncertainty quantification.
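The core dictionary-matching loop can be sketched compactly: normalize patterns, rank dictionary entries by inner product, and average the orientations of the top matches. The sketch below uses the standard principal-eigenvector method for averaging unit quaternions rather than the Von Mises-Fisher mixture maximum-likelihood estimator of the paper, and the array sizes and simulated patterns are placeholders.

```python
# Dictionary-matching sketch: rank dictionary patterns by correlation, average top orientations.
import numpy as np

rng = np.random.default_rng(6)
n_dict, n_pix = 500, 60 * 60
dictionary = rng.standard_normal((n_dict, n_pix))                  # simulated patterns (placeholder)
quaternions = rng.standard_normal((n_dict, 4))
quaternions /= np.linalg.norm(quaternions, axis=1, keepdims=True)  # dictionary orientations

def index_pattern(pattern, k=10):
    """Return the mean orientation of the k dictionary patterns closest to `pattern`."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = d @ (pattern / np.linalg.norm(pattern))               # normalized inner products
    q = quaternions[np.argsort(scores)[-k:]].copy()
    q[q[:, 0] < 0] *= -1                                           # fix the q ~ -q sign ambiguity
    # Mean orientation = principal eigenvector of the quaternion outer-product matrix.
    _, vecs = np.linalg.eigh(q.T @ q)
    m = vecs[:, -1]
    return m if m[0] >= 0 else -m

measured = dictionary[123] + 0.5 * rng.standard_normal(n_pix)      # noisy copy of entry 123
print("estimated orientation:", np.round(index_pattern(measured), 3))
print("dictionary entry 123 :", np.round(quaternions[123], 3))
```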

Journal ArticleDOI
TL;DR: The work required for executing this data-to-prediction process, measured in the number of forward (and adjoint) ice sheet model solves, is independent of the state dimension, parameter dimension, data dimension, and the number of processor cores.

Journal ArticleDOI
TL;DR: This paper investigates the classical (frequentist) and subjective (Bayesian) interpretations of uncertainty and their implications for prognostics, and argues that the Bayesian interpretation of uncertainty is more suitable for condition-based prognostics and health monitoring.

Journal ArticleDOI
TL;DR: This paper develops an analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level, and employs tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points.
Abstract: Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 min in MATLAB on a regular personal computer.

Journal ArticleDOI
TL;DR: This study makes a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods and finds that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters.
Abstract: Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
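The emulator-based propagation step described here, replacing the expensive model by a Gaussian process trained on a modest number of runs and pushing posterior parameter samples through it, can be sketched generically as below. The model, the assumed Gaussian parameter posterior, and the sample sizes are placeholders, not the Skyrme-functional setup of the paper.

```python
# Generic GP-emulator propagation sketch: train on model runs, propagate parameter samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(7)

def expensive_model(theta):           # placeholder for a costly physics calculation
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

theta_train = rng.uniform(-1, 1, (40, 2))       # design points in parameter space
y_train = expensive_model(theta_train)

emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5]))
emulator.fit(theta_train, y_train)

# Propagate an (assumed) Gaussian parameter posterior through the emulator.
posterior_samples = rng.multivariate_normal([0.2, -0.1], 0.05 * np.eye(2), size=5000)
pred = emulator.predict(posterior_samples)
print(f"predicted observable: {pred.mean():.3f} +/- {pred.std():.3f}")
```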

Journal ArticleDOI
TL;DR: In this paper, the authors present Π4U, an extensible framework for nonintrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the goals of uncertainty quantification in nuclear EFT calculations and outline a recipe to obtain statistically meaningful error bars for their predictions, arguing that the different sources of theory error can be accounted for within a Bayesian framework, as they illustrate using a toy model.
Abstract: The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model.
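A minimal version of the reasoning behind such error bars: if the order-by-order corrections scale like $c_n Q^n$ with coefficients of natural size, then the spread of the observed coefficients estimates $\bar{c}$, and $\bar{c}\,Q^{k+1}$ sets the truncation uncertainty at order $k$. The numbers below are invented for illustration, and the sketch ignores the full Bayesian treatment developed in the paper.

```python
# Toy truncation-error estimate for an EFT-like expansion X = X_ref * sum_n c_n Q^n.
import numpy as np

X_ref = 1.0
Q = 0.3                                   # assumed expansion parameter
c = np.array([1.0, -0.6, 0.9, -0.4])      # invented dimensionless coefficients, orders 0..3

partial_sums = X_ref * np.cumsum(c * Q ** np.arange(len(c)))
c_bar = np.sqrt(np.mean(c ** 2))          # "natural size" of the observed coefficients
truncation_error = X_ref * c_bar * Q ** len(c)   # estimated size of the first omitted term

for k, s in enumerate(partial_sums):
    print(f"order {k}: X = {s:.4f}")
print(f"truncated result: X = {partial_sums[-1]:.4f} +/- {truncation_error:.4f}")
```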

Journal ArticleDOI
TL;DR: In this paper, an open-box, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations is developed, where uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints.
Abstract: Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering applications. For many practical flows, the turbulence models are by far the largest source of uncertainty. In this work we develop an open-box, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Simulation results suggest that, even with very sparse observations, the posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, and has potential implications in many fields in which the model uncertainty comes from unresolved physical processes. A notable example is climate modeling, where high-consequence decisions are made based on predictions (e.g., projected temperature rise) with major uncertainties originating from closure models that are used to account for unresolved or unknown physics including radiation, cloud, and boundary layer processes.
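The workhorse inside an iterative ensemble Kalman method is the ensemble Kalman analysis step, which fits in a few lines of numpy; the sketch below shows one generic stochastic update of an ensemble given noisy observations. The dimensions, the linear observation operator, and the noise level are placeholders, not the Reynolds-stress parameterization of the paper.

```python
# One stochastic ensemble Kalman analysis step (generic sketch, placeholder dimensions).
import numpy as np

rng = np.random.default_rng(8)
n_state, n_obs, n_ens = 20, 5, 100

X = rng.standard_normal((n_state, n_ens))          # prior ensemble (columns are members)
H = rng.standard_normal((n_obs, n_state)) / 5.0    # linear observation operator (placeholder)
R = 0.1 * np.eye(n_obs)                            # observation-error covariance
d = H @ rng.standard_normal(n_state)               # synthetic observation vector

# Sample covariances from the ensemble.
Xp = X - X.mean(axis=1, keepdims=True)
HXp = H @ Xp
P_hh = HXp @ HXp.T / (n_ens - 1)                   # covariance of predicted observations
P_xh = Xp @ HXp.T / (n_ens - 1)                    # state/observation cross-covariance

K = P_xh @ np.linalg.inv(P_hh + R)                 # Kalman gain
D = d[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((n_obs, n_ens))  # perturbed obs
X_post = X + K @ (D - H @ X)                       # analysis ensemble

print(f"prior spread    : {X.std(axis=1).mean():.3f}")
print(f"posterior spread: {X_post.std(axis=1).mean():.3f}")
```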

Journal ArticleDOI
TL;DR: In this paper, a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample was performed, relying on a fully probabilistic, physical model of the non-linearly evolved density field.
Abstract: We present a chrono-cosmography project, aiming at the inference of the four dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly-detailed and accurate reconstructions of the present density field on scales larger than ~ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.

31 Mar 2015
TL;DR: In this article, a reference model (UNISIM-I-R) was built as a high-resolution geocellular model using public data from Namorado Field, Campos Basin, Brazil, and a simulation model with uncertainties (UNISIM-I-D) was created at a medium numerical grid resolution by upscaling a geomodel realization, using some information from the reference model.
Abstract: Several methodologies related to reservoir management applications have been created recently. It is often difficult to assess the applicability of these methodologies when they are applied to real reservoirs, whose properties are unknown. In order to test them, a synthetic model (UNISIM-I-R) was created in which the real reservoir is substituted by a reference model with known properties, so that methodologies can be tested and compared. The reference model was built as a high-resolution geocellular model, using public data from Namorado Field, Campos Basin, Brazil. The level of detail is high to ensure that the geological model is reliable and that the derived simulation models honor the data used. In addition, a simulation model with uncertainties (UNISIM-I-D) was created at a medium numerical grid resolution by upscaling a geomodel realization, using some information from the reference model. The reference and simulation models were then submitted to a production and injection scheme to compare the results. The focus of the application was an initial stage of field management. The comparison between the reference and simulation models was done to check consistency and highlight the differences, using a base production strategy to ensure the quality and reliability of UNISIM-I-R. The main result of this work is thus a model that can be used for future comparative projects.