
Showing papers on "Stochastic simulation" published in 2010


Journal ArticleDOI
TL;DR: The basic theory of kriging is extended, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables.
Abstract: We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.
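Illustration (not the authors' code): a minimal numpy sketch of the stochastic kriging idea described above, in which the metamodel covariance is the sum of an extrinsic spatial covariance and a diagonal of intrinsic simulation-noise variances. The kernel, its parameters, and the constant-trend estimate are illustrative assumptions.

```python
import numpy as np

def gauss_cov(X1, X2, tau2=1.0, theta=2.0):
    """Extrinsic (spatial) covariance: Gaussian correlation kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return tau2 * np.exp(-theta * d2)

def stochastic_kriging_predict(X, ybar, noise_var, x0):
    """Predict the response surface at x0 from stochastic simulation output.

    X         : (n, d) design points
    ybar      : (n,) sample means of the simulation output at each design point
    noise_var : (n,) intrinsic variance of each sample mean (s_i^2 / n_i)
    x0        : (m, d) prediction points
    """
    Sigma = gauss_cov(X, X) + np.diag(noise_var)   # extrinsic + intrinsic uncertainty
    k0 = gauss_cov(X, x0)                          # covariance to prediction points
    beta0 = ybar.mean()                            # crude constant-trend estimate
    w = np.linalg.solve(Sigma, ybar - beta0)
    return beta0 + k0.T @ w

# toy example: noisy simulation output of f(x) = sin(x)
rng = np.random.default_rng(0)
X = np.linspace(0, np.pi, 8).reshape(-1, 1)
reps = 20
sims = np.sin(X) + rng.normal(0, 0.3, size=(8, reps))
ybar, nv = sims.mean(axis=1), sims.var(axis=1, ddof=1) / reps
print(stochastic_kriging_predict(X, ybar, nv, np.array([[1.0], [2.0]])))
```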

576 citations


Book
04 Jun 2010
TL;DR: Stochastic Simulation Optimization addresses the pertinent efficiency issue via smart allocation of computing resource in the simulation experiments for optimization, and aims to provide academic researchers and industrial practitioners with a comprehensive coverage of OCBA approach for stochastic simulation optimization.
Abstract: With the advance of new computing technology, simulation is becoming very popular for designing large, complex and stochastic engineering systems, since closed-form analytical solutions generally do not exist for such problems. However, the added flexibility of simulation often creates models that are computationally intractable. Moreover, to obtain a sound statistical estimate at a specified level of confidence, a large number of simulation runs (or replications) is usually required for each design alternative. If the number of design alternatives is large, the total simulation cost can be very expensive. Stochastic Simulation Optimization addresses the pertinent efficiency issue via smart allocation of computing resource in the simulation experiments for optimization, and aims to provide academic researchers and industrial practitioners with comprehensive coverage of the OCBA approach for stochastic simulation optimization. Starting with an intuitive explanation of computing budget allocation and a discussion of its impact on optimization performance, a series of OCBA approaches developed for various problems are then presented, from the selection of the best design to optimization with multiple objectives. Finally, this book discusses the potential extension of the OCBA notion to different applications such as data envelopment analysis, design of experiments and rare-event simulation.
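For context, a small sketch of the classical OCBA allocation rule (for minimization with normally distributed output), which splits a simulation budget across design alternatives in proportion to noise-to-gap ratios. The numbers and the rounding scheme are illustrative; the book's procedures are more elaborate.

```python
import numpy as np

def ocba_allocation(means, variances, total_budget):
    """OCBA budget split across k designs (minimization, normal noise assumed):
    N_i / N_j = (sigma_i / delta_i)^2 / (sigma_j / delta_j)^2 for non-best designs,
    N_b = sigma_b * sqrt(sum_i N_i^2 / sigma_i^2) for the current best design b."""
    means, variances = np.asarray(means, float), np.asarray(variances, float)
    b = int(np.argmin(means))                       # current best design
    delta = means - means[b]                        # optimality gaps
    ratio = np.ones_like(means)
    nonbest = np.arange(len(means)) != b
    ratio[nonbest] = variances[nonbest] / delta[nonbest] ** 2
    ratio[b] = np.sqrt(variances[b] * np.sum(ratio[nonbest] ** 2 / variances[nonbest]))
    alloc = total_budget * ratio / ratio.sum()
    return np.round(alloc).astype(int)              # rounding may miss the budget by a replication or two

# example: 5 design alternatives, 1000 further replications to allocate
print(ocba_allocation(means=[1.0, 1.2, 1.5, 2.0, 2.4],
                      variances=[1.0, 1.0, 4.0, 1.0, 9.0],
                      total_budget=1000))
```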

337 citations


Journal ArticleDOI
TL;DR: A novel, principled and unified technique for pattern analysis and generation that ensures computational efficiency and enables a straightforward incorporation of domain knowledge will be presented and has the potential to reduce computational time significantly.
Abstract: The advent of multiple-point geostatistics (MPS) gave rise to the integration of complex subsurface geological structures and features into the model by the concept of training images. Initial algorithms generate geologically realistic realizations by using these training images to obtain conditional probabilities needed in a stochastic simulation framework. More recent pattern-based geostatistical algorithms attempt to improve the accuracy of the training image pattern reproduction. In these approaches, the training image is used to construct a pattern database. Consequently, sequential simulation will be carried out by selecting a pattern from the database and pasting it onto the simulation grid. One of the shortcomings of the present algorithms is the lack of a unifying framework for classifying and modeling the patterns from the training image. In this paper, an entirely different approach will be taken toward geostatistical modeling. A novel, principled and unified technique for pattern analysis and generation that ensures computational efficiency and enables a straightforward incorporation of domain knowledge will be presented. In the developed methodology, patterns scanned from the training image are represented as points in a Cartesian space using multidimensional scaling. The idea behind this mapping is to use distance functions as a tool for analyzing variability between all the patterns in a training image. These distance functions can be tailored to the application at hand. Next, by significantly reducing the dimensionality of the problem and using kernel space mapping, an improved pattern classification algorithm is obtained. This paper discusses the various implementation details to accomplish these ideas. Several examples are presented and a qualitative comparison is made with previous methods. An improved pattern continuity and data-conditioning capability is observed in the generated realizations for both continuous and categorical variables. We show how the proposed methodology is much less sensitive to the user-provided parameters, and at the same time has the potential to reduce computational time significantly.
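A rough sketch of the pattern-scanning and mapping step the abstract describes: extract all small patterns from a training image, compute pairwise Euclidean pattern distances, and embed the patterns as points in a low-dimensional Cartesian space with classical multidimensional scaling. The pattern size, distance function, and toy training image are assumptions for illustration only; the kernel-mapping and simulation steps are omitted.

```python
import numpy as np

def scan_patterns(ti, size=3):
    """Extract all size-by-size patterns from a 2-D training image."""
    n, m = ti.shape
    pats = [ti[i:i + size, j:j + size].ravel()
            for i in range(n - size + 1) for j in range(m - size + 1)]
    return np.array(pats, dtype=float)

def classical_mds(patterns, dim=2):
    """Embed patterns as points in a low-dimensional Cartesian space
    (classical multidimensional scaling on Euclidean pattern distances)."""
    D2 = ((patterns[:, None, :] - patterns[None, :, :]) ** 2).sum(axis=2)
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2 @ J                        # double-centred squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(1)
training_image = (rng.random((20, 20)) > 0.6).astype(float)   # toy binary training image
coords = classical_mds(scan_patterns(training_image))
print(coords.shape)   # (number of patterns, 2)
```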

287 citations


Journal ArticleDOI
01 Sep 2010
TL;DR: A general optimization framework GA-META is proposed, which integrates metamodels into the Genetic Algorithm to improve the efficiency and reliability of the decision-making process; the results indicate that GA-Support Vector Regression achieves the best solution among the metamodels.
Abstract: Simulation is a widely applied tool to study and evaluate complex systems. Due to the stochastic and complex nature of real world systems, simulation models for these systems are often difficult to build and time consuming to run. Metamodels are mathematical approximations of simulation models, and have been frequently used to reduce the computational burden associated with running such simulation models. In this paper, we propose to incorporate metamodels into Decision Support Systems to improve their efficiency and enable larger and more complex models to be effectively analyzed with Decision Support Systems. To evaluate the different metamodel types, a systematic comparison is first conducted to analyze the strengths and weaknesses of five popular metamodeling techniques (Artificial Neural Network, Radial Basis Function, Support Vector Regression, Kriging, and Multivariate Adaptive Regression Splines) for stochastic simulation problems. The results show that Support Vector Regression achieves the best performance in terms of accuracy and robustness. We further propose a general optimization framework GA-META, which integrates metamodels into the Genetic Algorithm, to improve the efficiency and reliability of the decision making process. This approach is illustrated with a job shop design problem. The results indicate that GA-Support Vector Regression achieves the best solution among the metamodels.
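A hedged sketch of the surrogate-assisted search idea behind GA-META, assuming scikit-learn's SVR is available: candidates are ranked cheaply on a Support Vector Regression metamodel fitted to cached simulation results, and only the most promising ones are re-simulated. Random sampling stands in for the GA's crossover and mutation operators, and all functions and parameters here are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a stochastic simulation run (noisy objective)."""
    return np.sum((x - 0.3) ** 2) + rng.normal(0, 0.05)

# initial designs evaluated with the real simulation
X = rng.random((40, 3))
y = np.array([expensive_simulation(x) for x in X])

for gen in range(20):
    surrogate = SVR(C=10.0, epsilon=0.01).fit(X, y)      # metamodel of the simulation
    pop = rng.random((200, 3))                            # candidate solutions ("offspring")
    ranked = pop[np.argsort(surrogate.predict(pop))]      # rank cheaply on the surrogate
    elite = ranked[:5]                                    # re-simulate only the best few
    y_elite = np.array([expensive_simulation(x) for x in elite])
    X, y = np.vstack([X, elite]), np.concatenate([y, y_elite])

best = X[np.argmin(y)]
print("best design found:", best, "objective:", y.min())
```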

175 citations


Journal ArticleDOI
TL;DR: A new approach is proposed based on stochastic simulation methods that quantifies the whole range of possible peak loads and the probability of each interval and identifies the most important uncertainties.

166 citations


Journal ArticleDOI
TL;DR: Small-sample validity of the statistical test and ranking-and-selection procedure is proven for normally distributed data, and ISC is compared to the commercial optimization via simulation package OptQuest on five test problems that range from 2 to 20 decision variables and on the order of 10^4 to 10^20 feasible solutions.
Abstract: Industrial Strength COMPASS (ISC) is a particular implementation of a general framework for optimizing the expected value of a performance measure of a stochastic simulation with respect to integer-ordered decision variables in a finite (but typically large) feasible region defined by linear-integer constraints. The framework consists of a global-search phase, followed by a local-search phase, and ending with a “clean-up” (selection of the best) phase. Each phase provides a probability 1 convergence guarantee as the simulation effort increases without bound: Convergence to a globally optimal solution in the global-search phase; convergence to a locally optimal solution in the local-search phase; and convergence to the best of a small number of good solutions in the clean-up phase. In practice, ISC stops short of such convergence by applying an improvement-based transition rule from the global phase to the local phase; a statistical test of convergence from the local phase to the clean-up phase; and a ranking-and-selection procedure to terminate the clean-up phase. Small-sample validity of the statistical test and ranking-and-selection procedure is proven for normally distributed data. ISC is compared to the commercial optimization via simulation package OptQuest on five test problems that range from 2 to 20 decision variables and on the order of 10^4 to 10^20 feasible solutions. These test cases represent response-surface models with known properties and realistic system simulation problems.
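A schematic skeleton of the three-phase structure only, with crude stand-ins for each phase; the actual ISC uses COMPASS for the local phase, improvement-based transition rules, and a formal ranking-and-selection procedure, none of which is reproduced here. The objective, bounds, and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, reps=10):
    """Stand-in stochastic simulation: noisy objective over an integer decision variable."""
    return np.mean((x - 7) ** 2 + rng.normal(0, 2, size=reps))

def global_phase(lo, hi, budget=200):
    """Crude global search: uniform sampling over the integer-ordered feasible region."""
    cand = rng.integers(lo, hi + 1, size=budget)
    return cand[np.argmin([simulate(x) for x in cand])]

def local_phase(x, lo, hi, max_iter=50):
    """Crude local search: move to the best simulated neighbour until no improvement."""
    for _ in range(max_iter):
        nbrs = [n for n in (x - 1, x, x + 1) if lo <= n <= hi]
        best = min(nbrs, key=lambda n: simulate(n, reps=30))
        if best == x:
            return x        # a statistical convergence test would trigger the transition here
        x = best
    return x

def cleanup_phase(candidates):
    """Selection of the best: re-simulate the shortlisted solutions with a large sample."""
    return min(candidates, key=lambda n: simulate(n, reps=500))

x_g = global_phase(0, 100)
x_l = local_phase(x_g, 0, 100)
print("selected solution:", cleanup_phase({x_g, x_l, x_l - 1, x_l + 1}))
```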

155 citations


Journal ArticleDOI
TL;DR: This paper introduces the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters, and proposes a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz.
Abstract: Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie’s stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10 000 are demonstrated.
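To make the CRN idea concrete, here is a minimal sketch on a toy birth-death process simulated with Gillespie's direct method: reusing the same random seed for the nominal and perturbed parameter couples the two sample paths and reduces the variance of the finite-difference sensitivity estimate relative to independent streams. This illustrates CRN only, not the paper's CRP coupling via the random time change representation, and the rate constants and sample sizes are arbitrary.

```python
import numpy as np

def ssa_birth_death(k_birth, k_death, x0=10, t_end=20.0, seed=0):
    """Gillespie direct method for X -> X+1 (rate k_birth) and X -> X-1 (rate k_death*X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a = np.array([k_birth, k_death * x])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() < a[0] / a0 else -1

# finite-difference sensitivity of E[X(t_end)] with respect to k_birth
k, dk, n = 5.0, 0.1, 1000
crn = np.mean([(ssa_birth_death(k + dk, 1.0, seed=s) -
                ssa_birth_death(k, 1.0, seed=s)) / dk for s in range(n)])      # common seeds
irn = np.mean([(ssa_birth_death(k + dk, 1.0, seed=s) -
                ssa_birth_death(k, 1.0, seed=n + s)) / dk for s in range(n)])  # independent seeds
print("CRN estimate:", crn, " independent-streams estimate:", irn)
```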

128 citations


Journal ArticleDOI
TL;DR: The approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid self-assembly.
Abstract: We suggest a new type of modeling approach for the coarse grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Our simulations show that the obtained dynamics can be fundamentally different from those simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. We conclude that our approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid self-assembly.

108 citations


Journal ArticleDOI
TL;DR: In this paper, the identification of high-dimensional polynomial chaos expansions with random coefficients for non-Gaussian tensor-valued random fields using partial and limited experimental data is studied.

103 citations


Journal ArticleDOI
Raghu Pasupathy1
TL;DR: This paper presents an overview of the conditions that guarantee the correct convergence of RA's iterates, and characterizes a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense.
Abstract: The stochastic root-finding problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The simulation-optimization problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. Retrospective approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size, and solved to a specified error tolerance. Our primary focus, in this paper, is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class and provide a numerical example illustrating the key results.
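A minimal sketch of the RA outer loop for a one-dimensional stochastic root-finding problem, assuming geometrically growing sample sizes and geometrically shrinking error tolerances; the sample-path problem here is trivial (its root is the sample mean) and bisection merely stands in for a generic deterministic solver. All constants are illustrative, not the recommended sequences from the paper.

```python
import numpy as np

def sample_path_root(xi, tol, lo=-10.0, hi=10.0):
    """Solve the deterministic sample-path problem  mean(x - xi) = 0  by bisection
    to the requested error tolerance (the root is simply mean(xi); bisection stands
    in for a generic deterministic solver)."""
    h = lambda x: np.mean(x - xi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def retrospective_approximation(n_iters=10, m=20, eps=0.5, c_m=2.0, c_eps=0.5, seed=0):
    """RA outer loop: geometrically growing sample sizes m_k and shrinking tolerances eps_k."""
    rng = np.random.default_rng(seed)
    for k in range(n_iters):
        xi = rng.normal(2.0, 1.0, size=int(m))   # stochastic simulation output; true root is x* = 2
        x = sample_path_root(xi, eps)
        print(f"iteration {k}: m_k={int(m)}, eps_k={eps:.4f}, x_k={x:.4f}")
        m, eps = m * c_m, eps * c_eps
    return x

retrospective_approximation()
```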

102 citations


Journal ArticleDOI
TL;DR: A fuzzy chance-constrained program model is designed based on fuzzy credibility theory and an improved differential evolution algorithm is integrated so as to use a hybrid intelligent algorithm to solve the OVRPFD model.
Abstract: In the open vehicle routing problem (OVRP), a vehicle is not required to return to the distribution depot after servicing the last customer on its route. In this paper, the open vehicle routing problem with fuzzy demands (OVRPFD) is considered. A fuzzy chance-constrained program model is designed based on fuzzy credibility theory. Stochastic simulation and an improved differential evolution algorithm are integrated into a hybrid intelligent algorithm to solve the OVRPFD model. The influence of the decision-maker's preference on the final outcome of the problem is analyzed using stochastic simulation, and the range of possible preferences is calculated.
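A hedged sketch of the simulation ingredient only: estimating the credibility that the total fuzzy demand on a route stays within the vehicle capacity, assuming triangular fuzzy demands and a Liu-style fuzzy simulation (Cr = (Pos + Nec)/2). The paper's hybrid algorithm wraps this kind of check inside a differential evolution search, which is not shown, and the demand triples and capacity below are made up.

```python
import numpy as np

def tri_membership(u, a, b, c):
    """Membership of u in a triangular fuzzy number (a, b, c)."""
    return np.where(u <= b,
                    np.clip((u - a) / (b - a), 0, 1),
                    np.clip((c - u) / (c - b), 0, 1))

def credibility_route_feasible(demands, capacity, n_samples=20000, seed=0):
    """Estimate Cr{ total fuzzy demand on a route <= capacity } by fuzzy simulation:
    Cr = 0.5 * (sup membership of feasible points + 1 - sup membership of infeasible points)."""
    rng = np.random.default_rng(seed)
    a, b, c = (np.array(col, float) for col in zip(*demands))
    u = rng.uniform(a, c, size=(n_samples, len(demands)))      # sample from the supports
    mu = tri_membership(u, a, b, c).min(axis=1)                 # joint membership (min t-norm)
    feasible = u.sum(axis=1) <= capacity
    sup_feas = mu[feasible].max() if feasible.any() else 0.0
    sup_infeas = mu[~feasible].max() if (~feasible).any() else 0.0
    return 0.5 * (sup_feas + 1.0 - sup_infeas)

# three customers with triangular fuzzy demands (a, b, c); vehicle capacity 20
print(credibility_route_feasible([(4, 6, 8), (5, 7, 9), (3, 5, 7)], capacity=20))
```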

Journal ArticleDOI
TL;DR: This paper shows formally that for a large class of models, this fluid-flow analysis can be directly derived from the stochastic process algebra model as an approximation to the mean number of component types within the model.

Journal ArticleDOI
01 May 2010
TL;DR: The GPU implementation of Gillespie's stochastic simulation algorithm (SSA) is described, and computational experiments illustrating the power of this technology for this important and challenging class of problems are presented.
Abstract: The small number of some reactant molecules in biological systems formed by living cells can result in dynamical behavior which cannot be captured by traditional deterministic models. For such problems, a more accurate simulation can be obtained with discrete stochastic simulation using Gillespie's stochastic simulation algorithm (SSA). Many stochastic realizations are required to capture accurate statistical information about the solution. This carries a very high computational cost. The current generation of graphics processing units (GPUs) is well-suited to this task. In this paper we describe our implementation and present some computational experiments illustrating the power of this technology for this important and challenging class of problems.
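For reference, a plain CPU sketch of one realization of Gillespie's direct method for a general reaction network; the paper's contribution is running many such independent realizations in parallel on a GPU, which the loop at the end only emulates sequentially. The dimerization system and rate constants are illustrative, and no CUDA-specific code is shown.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensity, t_end, rng):
    """One realization of Gillespie's direct method (SSA).

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    propensity : function x -> propensities, shape (n_reactions,)
    """
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            return x                       # no reaction can fire any more
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x += stoich[j]

# reversible dimerization: 2A -> B (rate 0.005), B -> 2A (rate 0.2)
stoich = np.array([[-2, 1], [2, -1]])
prop = lambda x: np.array([0.005 * x[0] * (x[0] - 1), 0.2 * x[1]])

rng = np.random.default_rng(42)
finals = np.array([gillespie_direct([100, 0], stoich, prop, t_end=10.0, rng=rng)
                   for _ in range(200)])           # the GPU version runs these realizations in parallel
print("mean copy numbers at t=10:", finals.mean(axis=0))
```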

Journal ArticleDOI
TL;DR: The method is shown to generate realizations of complex spatial patterns, reproduce bimodal data distributions, data variograms, and high-order spatial cumulants of the data, and it is shown that the available hard data dominate the simulation process and have a definitive effect on the simulated realizations.
Abstract: Spatially distributed and varying natural phenomena encountered in geoscience and engineering problem solving are typically incompatible with Gaussian models, exhibiting nonlinear spatial patterns and complex, multiple-point connectivity of extreme values. Stochastic simulation of such phenomena is historically founded on second-order spatial statistical approaches, which are limited in their capacity to model complex spatial uncertainty. The newer multiple-point (MP) simulation framework addresses past limits by establishing the concept of a training image, and, arguably, has its own drawbacks. An alternative to current MP approaches is founded upon new high-order measures of spatial complexity, termed “high-order spatial cumulants.” These are combinations of moments of statistical parameters that characterize non-Gaussian random fields and can describe complex spatial information. Stochastic simulation of complex spatial processes is developed based on high-order spatial cumulants in the high-dimensional space of Legendre polynomials. Starting with discrete Legendre polynomials, a set of discrete orthogonal cumulants is introduced as a tool to characterize spatial shapes. Weighted orthonormal Legendre polynomials define the so-called Legendre cumulants that are high-order conditional spatial cumulants inferred from training images and are combined with available sparse data sets. Advantages of the high-order sequential simulation approach developed herein include the absence of any distribution-related assumptions and pre- or post-processing steps. The method is shown to generate realizations of complex spatial patterns, reproduce bimodal data distributions, data variograms, and high-order spatial cumulants of the data. In addition, it is shown that the available hard data dominate the simulation process and have a definitive effect on the simulated realizations, whereas the training images are only used to fill in high-order relations that cannot be inferred from data. Compared to the MP framework, the proposed approach is data-driven and consistently reconstructs the lower-order spatial complexity in the data used, in addition to high order.
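A small sketch of one ingredient only: estimating an experimental third-order spatial cumulant of an exhaustive 2-D field for a two-lag template (for zero-mean data this is the third moment E[Z(u)Z(u+h1)Z(u+h2)]). The Legendre-polynomial machinery and the sequential simulation itself are not shown, and the test fields, lags, and nonnegative-lag convention are assumptions for illustration.

```python
import numpy as np

def third_order_cumulant(field, h1, h2):
    """Experimental third-order spatial cumulant of a (centred) 2-D field for the
    two-lag template {0, h1, h2}; lags are assumed nonnegative (row, column) offsets."""
    z = field - field.mean()
    n, m = z.shape
    (i1, j1), (i2, j2) = h1, h2
    imax, jmax = n - max(0, i1, i2), m - max(0, j1, j2)
    a = z[:imax, :jmax]
    b = z[i1:i1 + imax, j1:j1 + jmax]
    c = z[i2:i2 + imax, j2:j2 + jmax]
    return float((a * b * c).mean())

rng = np.random.default_rng(3)
g = rng.normal(size=(300, 300))
z_gauss = g                                                    # i.i.d. Gaussian field: cumulant ~ 0
z_max = np.maximum(np.maximum(g, np.roll(g, 1, axis=0)),       # skewed, spatially correlated field
                   np.roll(g, 1, axis=1))
for name, z in [("gaussian", z_gauss), ("max-filtered", z_max)]:
    print(name, third_order_cumulant(z, (1, 0), (0, 1)))
```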


Journal ArticleDOI
TL;DR: A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport.
Abstract: We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.

Journal ArticleDOI
TL;DR: An adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes and based on estimates of the errors in the tau-leap method and the macroscopic diffusion is proposed.

Journal ArticleDOI
TL;DR: A software tool called RuleMonkey is presented, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method, and is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions.
Abstract: Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these “network-free” simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie’s method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions: RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models.

Journal ArticleDOI
TL;DR: Numerical experiments indicate that the computational efficiency of the CE method can be substantially improved if the ideas of computing budget allocation are applied; the proposed approach improves the updating of the sampling distribution by carrying out this computing budget allocation in an efficient manner.
Abstract: We propose to improve the efficiency of simulation optimization by integrating the notion of optimal computing budget allocation into the Cross-Entropy (CE) method, which is a global optimization search approach that iteratively updates a parameterized distribution from which candidate solutions are generated. This article focuses on continuous optimization problems. In the stochastic simulation setting where replications are expensive but noise in the objective function estimate could mislead the search process, the allocation of simulation replications can make a significant difference in the performance of such global optimization search algorithms. A new allocation scheme is developed based on the notion of optimal computing budget allocation. The proposed approach improves the updating of the sampling distribution by carrying out this computing budget allocation in an efficient manner, by minimizing the expected mean-squared error of the CE weight function. Numerical experiments indicate that the computational efficiency of the CE method can be substantially improved if the ideas of computing budget allocation are applied.
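A hedged sketch of the basic mechanism: a cross-entropy loop for a continuous problem in which replications are allocated unevenly, with a cheap screening pass over all candidates and most of the budget spent on the apparent elites that drive the parameter update. The two-tier allocation below is a crude stand-in for the paper's OCBA-style allocation (which minimizes the expected mean-squared error of the CE weight function); all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(x, n_reps):
    """Stochastic simulation stand-in: noisy quadratic, minimized at x = 1.5."""
    return np.mean((x - 1.5) ** 2 + rng.normal(0, 0.5, size=n_reps))

def cross_entropy_opt(n_iters=25, pop=50, elite_frac=0.2, screen_reps=2, elite_reps=20):
    mu, sigma = 0.0, 3.0                       # parameterized sampling distribution N(mu, sigma^2)
    n_elite = int(elite_frac * pop)
    for _ in range(n_iters):
        xs = rng.normal(mu, sigma, size=pop)
        # cheap screening pass, then spend most of the replication budget on likely elites
        rough = np.array([noisy_objective(x, screen_reps) for x in xs])
        cand = xs[np.argsort(rough)[:2 * n_elite]]
        refined = np.array([noisy_objective(x, elite_reps) for x in cand])
        elites = cand[np.argsort(refined)[:n_elite]]
        mu, sigma = elites.mean(), max(elites.std(), 1e-3)   # CE parameter update
    return mu

print("CE estimate of the minimizer:", cross_entropy_opt())
```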


Journal ArticleDOI
TL;DR: In this article, the authors studied generalized random fields which arise as rescaling limits of spatial configurations of uniformly scattered random balls as the mean radius of the balls tends to 0 or infinity and proved that the centered and renormalized random balls field admits a limit with self-similarity properties.
Abstract: We study generalized random fields which arise as rescaling limits of spatial configurations of uniformly scattered random balls as the mean radius of the balls tends to 0 or infinity. Assuming that the radius distribution has a power-law behavior, we prove that the centered and renormalized random balls field admits a limit with self-similarity properties. Our main result states that all self-similar, translation- and rotation-invariant Gaussian fields can be obtained through a unified zooming procedure starting from a random balls model. This approach has to be understood as a microscopic description of macroscopic properties. Under specific assumptions, we also get a Poisson-type asymptotic field. In addition to investigating stationarity and self-similarity properties, we give L2-representations of the asymptotic generalized random fields viewed as continuous random linear functionals.

Book ChapterDOI
TL;DR: Instead of modeling regulatory networks in terms of the deterministic dynamics of concentrations, this work models the dynamics of the probability of a given copy number of the reactants in single cells.
Abstract: The past decade has seen a revived interest in the unavoidable or intrinsic noise in biochemical and genetic networks arising from the finite copy number of the participating species. That is, rather than modeling regulatory networks in terms of the deterministic dynamics of concentrations, we model the dynamics of the probability of a given copy number of the reactants in single cells. Most of the modeling activity of the last decade has centered on stochastic simulation of individual realizations, i.e., Monte-Carlo methods for generating stochastic time series. Here we review the mathematical description in terms of probability distributions, introducing the relevant derivations and illustrating several cases for which analytic progress can be made either instead of or before turning to numerical computation.
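To illustrate the distribution-level description (as opposed to Monte Carlo sampling of realizations), here is a small sketch that integrates the chemical master equation for a birth-death gene-expression model on a truncated copy-number state space; the stationary distribution is Poisson with mean k/gamma. The truncation level, rates, and explicit Euler time stepping are illustrative choices, not taken from the chapter.

```python
import numpy as np

def birth_death_master_equation(k, gamma, N=60, t_end=10.0, dt=1e-3):
    """Integrate the chemical master equation dP/dt = A P for a birth-death
    process (production rate k, degradation rate gamma*n), truncated at N copies."""
    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k           # birth: n -> n+1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += gamma * n   # degradation: n -> n-1
            A[n, n] -= gamma * n
    P = np.zeros(N + 1)
    P[0] = 1.0                          # start with zero copies
    for _ in range(int(t_end / dt)):
        P += dt * (A @ P)               # explicit Euler step (dt small enough for stability here)
    return P

P = birth_death_master_equation(k=10.0, gamma=1.0)
n = np.arange(len(P))
print("mean copy number:", (n * P).sum(), " (stationary distribution is Poisson with mean k/gamma = 10)")
```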

Journal ArticleDOI
TL;DR: In this article, the authors consider stochastic growth models in the Kardar-Parisi-Zhang universality class in 1+1 dimension and discuss the large time distribution and processes and their dependence on the class on initial condition.
Abstract: In this contribution we consider stochastic growth models in the Kardar-Parisi-Zhang universality class in 1+1 dimension. We discuss the large time distribution and processes and their dependence on the class of initial condition. This means that the scaling exponents do not uniquely determine the large time surface statistics, but one has to further divide into subclasses. Some of the fluctuation laws were first discovered in random matrix models. Moreover, the limit process for the curved limit shape turned out to show up in a dynamical version of hermitian random matrices, but this analogy does not extend to the case of symmetric matrices. Therefore the connection between growth models and random matrices is only partial.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an extension of the partition of unity method to the spectral stochastic framework, which allows the enrichment of approximation spaces with suitable functions based on an a priori knowledge of the irregularities in the solution.
Abstract: An eXtended Stochastic Finite Element Method has been recently proposed for the numerical solution of partial differential equations defined on random domains. This method is based on a marriage between the eXtended Finite Element Method and spectral stochastic methods. In this article, we propose an extension of this method for the numerical simulation of random multi-phased materials. The random geometry of material interfaces is described implicitly by using random level set functions. A fixed deterministic finite element mesh, which is not conforming to the random interfaces, is then introduced in order to approximate the geometry and the solution. Classical spectral stochastic finite element approximation spaces are not able to capture the irregularities of the solution field with respect to spatial and stochastic variables, which leads to a deterioration of the accuracy and convergence properties of the approximate solution. In order to recover optimal convergence properties of the approximation, we propose an extension of the partition of unity method to the spectral stochastic framework. This technique allows the enrichment of approximation spaces with suitable functions based on an a priori knowledge of the irregularities in the solution. Numerical examples illustrate the efficiency of the proposed method and demonstrate the relevance of the enrichment procedure. Copyright © 2010 John Wiley & Sons, Ltd.

01 Jan 2010
TL;DR: This tutorial describes some of the research directions and results available for discrete-decision-variable OvS, and provides some guidance for using the OvS heuristics that are built into simulation modeling software.
Abstract: Both the simulation research and software communities have been interested in optimization via simulation (OvS), by which we mean maximizing or minimizing the expected value of some output of a stochastic simulation. Continuous-decision-variable OvS, and gradient estimation to support it, has been an active research area with significant advances. However, the decision variables in many operations research and management science simulations are more naturally discrete, even categorical. In this tutorial we describe some of the research directions and results available for discrete-decision-variable OvS, and provide some guidance for using the OvS heuristics that are built into simulation modeling software.

Journal ArticleDOI
TL;DR: In this paper, a comparison of bi-point geostatistical simulation methods for characterization of lithoclasses, making use of a carbonate reservoir as a case study, is presented.

Journal ArticleDOI
TL;DR: It is proved that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix.
Abstract: The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when simulating (bio)chemical kinetics. The CLE model is valid in the regime where molecular populations are abundant enough to assume their concentrations change continuously, but stochastic fluctuations still play a major role. The contribution of this work is that we observe and explore that the CLE is not a single equation, but a parametric family of equations, all of which give the same finite-dimensional distribution of the variables. On the theoretical side, we prove that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix. On the practical side, we show that in the case where there are m1 pairs of reversible reactions and m2 irreversible reactions t...
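For reference, a standard way of writing the CLE the abstract refers to, in commonly used notation (state-change vectors, propensities, independent Wiener processes); the paper's point is that equivalent formulations exist with fewer driving Wiener processes.

```latex
% A standard form of the CLE for a system with M reaction channels, state-change
% (stoichiometric) vectors \nu_j and propensity functions a_j:
\[
  \mathrm{d}X(t) \;=\; \sum_{j=1}^{M} \nu_j\, a_j\!\bigl(X(t)\bigr)\,\mathrm{d}t
  \;+\; \sum_{j=1}^{M} \nu_j \sqrt{a_j\!\bigl(X(t)\bigr)}\;\mathrm{d}W_j(t),
\]
% where the W_j are independent Wiener processes. Any driving noise with the same
% diffusion matrix D(X) = \sum_j a_j(X)\,\nu_j \nu_j^{\top} yields the same
% finite-dimensional distributions, which is why as few as rank(S) Wiener processes
% suffice, with S = [\nu_1 \cdots \nu_M] the stoichiometric matrix.
```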

Journal ArticleDOI
TL;DR: In this paper, a parallel subset simulation algorithm is proposed to estimate small failure probabilities of multiple limit states with only a single subset simulation analysis, where the principal variable is correlated with all performance functions.

Journal ArticleDOI
TL;DR: The approach is based on the fact that the time-dependent probability distribution associated to the Markov process is explicitly known for monomolecular, autocatalytic and certain catalytic reaction channels and increases the efficiency considerably at the cost of a small approximation error.
Abstract: Stochastic reaction systems with discrete particle numbers are usually described by a continuous-time Markov process. Realizations of this process can be generated with the stochastic simulation algorithm, but simulating highly reactive systems is computationally costly because the computational work scales with the number of reaction events. We present a new approach which avoids this drawback and increases the efficiency considerably at the cost of a small approximation error. The approach is based on the fact that the time-dependent probability distribution associated to the Markov process is explicitly known for monomolecular, autocatalytic and certain catalytic reaction channels. More complicated reaction systems can often be decomposed into several parts some of which can be treated analytically. These subsystems are propagated in an alternating fashion similar to a splitting method for ordinary differential equations. We illustrate this approach by numerical examples and prove an error bound for the splitting error.
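A minimal sketch of the alternating (splitting) idea on a toy system: a monomolecular degradation channel whose transition distribution over a time slice is known exactly (binomial thinning with survival probability exp(-c1*tau)), alternated with an SSA propagation of a bimolecular channel. The paper characterizes which channels admit explicit distributions and proves an error bound; this shows only the splitting skeleton, with arbitrary rates and slice length.

```python
import numpy as np

def ssa_dimerization(x, c2, tau, rng):
    """Exact SSA for the single channel 2A -> B over a time slice of length tau."""
    t = 0.0
    a_count, b_count = x
    while True:
        a = c2 * a_count * (a_count - 1) / 2.0
        if a <= 0.0:
            return a_count, b_count
        t += rng.exponential(1.0 / a)
        if t > tau:
            return a_count, b_count
        a_count -= 2
        b_count += 1

def split_step(x, c1, c2, tau, rng):
    """Lie splitting: propagate the monomolecular channel A -> 0 exactly
    (binomial thinning), then the bimolecular channel 2A -> B with the SSA."""
    a_count, b_count = x
    a_count = rng.binomial(a_count, np.exp(-c1 * tau))
    return ssa_dimerization((a_count, b_count), c2, tau, rng)

rng = np.random.default_rng(7)
tau, c1, c2 = 0.1, 0.5, 0.002
states = []
for _ in range(500):                       # ensemble of split-propagated realizations
    x = (200, 0)
    for _ in range(int(5.0 / tau)):        # propagate to t = 5 in slices of length tau
        x = split_step(x, c1, c2, tau, rng)
    states.append(x)
print("mean (A, B) at t=5:", np.mean(states, axis=0))
```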

Journal ArticleDOI
TL;DR: In this article, a stochastic modeling sequence is applied to a large, steady-state 3-D heat flow model of a reservoir in The Hague, Netherlands, where the spatial thermal conductivity distribution is simulated based on available logging data.
Abstract: Quantifying and minimizing uncertainty is vital for simulating technically and economically successful geothermal reservoirs. To this end, we apply a stochastic modelling sequence, a Monte Carlo study, based on (i) creating an ensemble of possible realizations of a reservoir model, (ii) forward simulation of fluid flow and heat transport, and (iii) constraining post-processing using observed state variables. To generate the ensemble, we use the stochastic algorithm of Sequential Gaussian Simulation and test its potential for fitting rock properties, such as thermal conductivity and permeability, of a synthetic reference model and, by performing a corresponding forward simulation, state variables such as temperature. The ensemble yields probability distributions of rock properties and state variables at any location inside the reservoir. In addition, we perform a constraining post-processing in order to minimize the uncertainty of the obtained distributions by conditioning the ensemble to observed state variables, in this case temperature. This constraining post-processing works particularly well on systems dominated by fluid flow. The stochastic modelling sequence is applied to a large, steady-state 3-D heat flow model of a reservoir in The Hague, Netherlands. The spatial thermal conductivity distribution is simulated stochastically based on available logging data. Errors of bottom-hole temperatures provide thresholds for the constraining technique performed afterwards. This reduces the temperature uncertainty for the proposed target location significantly, from 25 to 12 K (full distribution width) at a depth of 2300 m. Assuming a Gaussian shape of the temperature distribution, the standard deviation is 1.8 K. To allow a more comprehensive approach to quantifying uncertainty, we also implement the stochastic simulation of boundary conditions and demonstrate this for the basal specific heat flow in the reservoir of The Hague. As expected, this results in a larger distribution width and hence a larger, but more realistic, uncertainty estimate. However, applying the constraining post-processing, the uncertainty is again reduced to the level of the post-processing without stochastic boundary simulation. Thus, constraining post-processing is a suitable tool for reducing uncertainty estimates by means of observed state variables.
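A schematic sketch of the constraining post-processing step only: generate an ensemble of property realizations, run a forward model to obtain simulated temperatures, and keep the realizations whose misfit to an observed borehole temperature lies within the measurement-error threshold; the uncertainty at a target location is then taken from the accepted subset. The 1-D conductive forward model, the smoothing used as a stand-in for Sequential Gaussian Simulation, and all numbers are toy assumptions, not the paper's SHEMAT-type 3-D workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_realization(n_cells=50):
    """Stand-in for Sequential Gaussian Simulation: a spatially correlated conductivity profile."""
    white = rng.normal(size=n_cells + 10)
    return 2.0 + 0.4 * np.convolve(white, np.ones(11) / 11, mode="valid")   # W/(m K)

def forward_model(conductivity, q_basal=0.06, dz=50.0, t_surface=10.0):
    """Toy 1-D steady-state conductive geotherm: temperature rises by q*dz/k per cell."""
    return t_surface + np.cumsum(q_basal * dz / conductivity)

obs_depth_idx, obs_temp, obs_error = 30, 58.0, 2.0       # "borehole" temperature and its error bar
target_idx = 45                                          # proposed target depth cell

ensemble = [forward_model(generate_realization()) for _ in range(2000)]
temps_at_target = np.array([T[target_idx] for T in ensemble])
accepted = np.array([abs(T[obs_depth_idx] - obs_temp) <= obs_error for T in ensemble])

print("prior spread at target     :", temps_at_target.std())
print("constrained spread at target:", temps_at_target[accepted].std(),
      f"({accepted.sum()} of {len(ensemble)} realizations accepted)")
```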