
Showing papers on "Stochastic simulation published in 2017"


Journal ArticleDOI
TL;DR: The difficulties in SO as compared to algebraic model-based mathematical programming are emphasized, the different approaches used are examined, some of the diverse applications that have been tackled by these methods are reviewed, and future directions in the field are speculated upon.
Abstract: Simulation Optimization (SO) refers to the optimization of an objective function subject to constraints, both of which can be evaluated through a stochastic simulation. To address specific features of a particular simulation---discrete or continuous decisions, expensive or cheap simulations, single or multiple outputs, homogeneous or heterogeneous noise---various algorithms have been proposed in the literature. As one can imagine, there exist several competing algorithms for each of these classes of problems. This document emphasizes the difficulties in simulation optimization as compared to mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.

130 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm, and showed the importance of employing the correct cell cycle time distribution by recapitulating the results from two models incorporating cellular proliferation.

70 citations


Journal ArticleDOI
TL;DR: An analytical dynamic network model is formulated and used as part of the metamodel to address a time-dependent large-scale traffic signal control problem for the city of Lausanne.
Abstract: This paper addresses large-scale urban transportation optimization problems with time-dependent continuous decision variables, a stochastic simulation-based objective function, and general analytical differentiable constraints. We propose a metamodel approach to address, in a computationally efficient way, these large-scale dynamic simulation-based optimization problems. We formulate an analytical dynamic network model that is used as part of the metamodel. The network model formulation combines ideas from transient queueing theory and traffic flow theory. The model is formulated as a system of equations. The model complexity is linear in the number of road links and is independent of the link space capacities. This makes it a scalable model suitable for the analysis of large-scale problems. The proposed dynamic metamodel approach is used to address a time-dependent large-scale traffic signal control problem for the city of Lausanne. Its performance is compared to that of a stationary metamodel approach. ...

63 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments, and show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling.
Abstract: We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.

59 citations


Journal ArticleDOI
TL;DR: In this paper, a random-function-embedded Karhunen-Loève expansion method is proposed for the simulation of stochastic processes, where the random function is expressed as a single-elementary-random-variable orthogonal function in polynomial or trigonometric format (non-Gaussian and Gaussian variables).

58 citations


Posted Content
TL;DR: A method of modelling the cell cycle is suggested that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm, and can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions.
Abstract: The stochastic simulation algorithm commonly known as Gillespie's algorithm is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed and can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important should not be simulated naively using the Gillespie algorithm since the history-dependent nature of the cell cycle breaks the Markov process. The variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically. We demonstrate the importance of employing the correct cell cycle time distribution by considering two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to their outcomes. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular proliferation, whilst still being able to take advantage of the Gillespie algorithm.
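The multi-stage construction can be sketched directly: splitting the cycle into independent exponential stages keeps each step memoryless (so stage transitions can be fired as ordinary Gillespie reactions), while the total cycle time becomes Erlang-distributed, with variance reduced by a factor equal to the number of stages. A minimal sketch, with an illustrative mean and stage count rather than values from the paper:

```python
import random

def staged_cycle_time(mean, stages, rng):
    """Sample one cell-cycle time as the sum of `stages` independent
    exponential stages, each with rate stages/mean. Every stage is
    memoryless, so a Gillespie simulation can fire stage transitions
    as ordinary reactions; the total time is Erlang-distributed."""
    rate = stages / mean
    return sum(rng.expovariate(rate) for _ in range(stages))

rng = random.Random(1)
mean, k = 20.0, 10          # illustrative mean cycle time and stage count
samples = [staged_cycle_time(mean, k, rng) for _ in range(20000)]
m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / len(samples)
# Erlang(k) variance is mean**2 / k (here 40), versus mean**2 (400)
# for a single-exponential cycle time with the same mean.
```

The variance reduction is the point: with 10 stages the cycle-time distribution is far narrower than the exponential, matching the experimental observation quoted in the abstract.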

56 citations


Journal ArticleDOI
TL;DR: Through this approach an adaptive control for the accuracy of the metamodel is achieved, minimizing the number of simulations for the expensive system model.

54 citations


Journal ArticleDOI
TL;DR: In this article, a compositional framework for the construction of approximations of the interconnection of a class of stochastic hybrid systems is proposed, which can be used as a replacement of the original hybrid system in a controller design process.
Abstract: In this paper we propose a compositional framework for the construction of approximations of the interconnection of a class of stochastic hybrid systems. As special cases, this class of systems includes both jump linear stochastic systems and linear stochastic hybrid automata. In the proposed framework, an approximation is itself a stochastic hybrid system, which can be used as a replacement of the original stochastic hybrid system in a controller design process. We employ a notion of so-called stochastic simulation function to quantify the error between the approximation and the original system. In the first part of the paper, we derive sufficient conditions which facilitate the compositional quantification of the error between the interconnection of stochastic hybrid subsystems and that of their approximations using the quantified error between the stochastic hybrid subsystems and their corresponding approximations. In particular, we show how to construct stochastic simulation functions for approximations of interconnected stochastic hybrid systems using the stochastic simulation function for the approximation of each component. In the second part of the paper, we focus on a specific class of stochastic hybrid systems, namely, jump linear stochastic systems, and propose a constructive scheme to determine approximations together with their stochastic simulation functions for this class of systems. Finally, we illustrate the effectiveness of the proposed results by constructing an approximation of the interconnection of four jump linear stochastic subsystems in a compositional way.

49 citations


Journal ArticleDOI
Yu Huang1, Min Xiong1
TL;DR: In this article, the stochastic seismic response of a slope under random earthquake ground motion is analyzed using the probability density evolution method (PDEM), together with a dynamic reliability evaluation of slope stability.

48 citations


Journal ArticleDOI
TL;DR: In this paper, the authors develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD), which can be seen as a generalization of the reaction-diffusion master equation (RDME) model.
Abstract: We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion, and construct numerical methods that are more efficient than RDME methods without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium.
We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. By comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
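The tau-leaping ingredient of such schemes can be illustrated in its simplest explicit form on a one-species, well-mixed birth-death network; the rates, step size, and Knuth-style Poisson sampler below are illustrative choices, not the paper's predictor-corrector method:

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's multiplicative method; adequate for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, birth, death, tau, steps, rng):
    """Explicit tau-leaping for 0 -> X (rate `birth`) and X -> 0
    (rate `death` * x): per fixed step, fire Poisson-distributed
    numbers of each reaction instead of simulating every event."""
    x = x0
    for _ in range(steps):
        n_plus = sample_poisson(birth * tau, rng)
        n_minus = sample_poisson(death * x * tau, rng)
        x = max(x + n_plus - n_minus, 0)
    return x

rng = random.Random(3)
# the exact steady state of this network is Poisson with mean birth/death = 50
finals = [tau_leap(50, 50.0, 1.0, 0.05, 200, rng) for _ in range(2000)]
mean = sum(finals) / len(finals)
```

Because whole batches of reactions fire per step, tau-leaping trades the event-by-event exactness of the Gillespie algorithm for much larger time steps, which is the trade-off the paper's weakly second-order schemes refine.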

40 citations


Dissertation
13 Jul 2017
TL;DR: This thesis outlines how No-MASS has been extended to more comprehensively simulate the behaviours of agents occupying multiple buildings, including behaviours for which data is scarce, social interactions between agents, and a generalization of No-MASS to simulate electrical devices and their interactions.
Abstract: One of the principal causes of deviations between the predicted and simulated performance of buildings relates to the stochastic nature of their occupants: their presence, their activities whilst present, their activity-dependent behaviours, and the consequent implications for their perceived comfort. A growing research community is active in the development and validation of stochastic models addressing these issues, and considerable progress has been made, specifically with models in the areas of presence, activities while present, shading devices, window openings and lighting usage. One key outstanding challenge relates to the integration of these prototype models with building simulation in a coherent and generalizable way, meaning that emerging models can be integrated with a range of building simulation software. This thesis describes our proof-of-concept platform that integrates stochastic occupancy models within a multi-agent simulation platform, which communicates directly with building simulation software. The tool is called Nottingham Multi-Agent Stochastic Simulation (No-MASS). No-MASS is tested with a building performance simulation solver to demonstrate the effectiveness of the integrated stochastic models on a residential building and a non-residential building. To account for diversity between occupants, No-MASS makes use of archetypical behaviours within the stochastic models of windows, shades and activities, thus providing designers with a means to evaluate the performance of their designs in response to the range of expected behaviours and to evaluate the robustness of their design solutions; this is not possible using current simplistic deterministic representations. A methodology for including rule-based models is built into No-MASS; this allows for testing what-if scenarios with building performance simulation and provides a pragmatic basis for modelling the behaviours for which there is insufficient data to develop stochastic models.
A Belief-Desire-Intention model is used to develop a set of goals and plans that an agent must follow to influence the environment based on their beliefs about current environmental conditions. Recommendations for the future development of stochastic models are presented based on a sensitivity analysis of the plans. A social interactions framework is developed within No-MASS to resolve conflicts between competing agents. This framework resolves situations where agents have different desires, for example where one wishes to have a window open and another closed based on the outputs of the stochastic models: a vote-casting system determines the agents' choice, and the option with the most votes becomes the action acted upon. No-MASS employs machine-learning techniques that allow agents to learn how to respond to the processes taking place within a building and to choose a strategy without the need for context-specific rules. Employing these complementary techniques to support the comprehensive simulation of occupants' presence and behaviour, integrated within a single platform that can readily interface with a range of building (and urban) energy simulation programs, is the key contribution to knowledge of this thesis. Nevertheless, there is significant scope to extend this work to further reduce the performance gap between simulated and real-world buildings.

Journal ArticleDOI
TL;DR: This work develops image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort while at the same time conditioning them to a variety of data.

Journal ArticleDOI
TL;DR: It is shown that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision, demonstrating a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
Abstract: Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

Journal ArticleDOI
TL;DR: It is argued that incorporating context is essential in modeling movement, and a stochastic simulation model is proposed that incorporates contextual factors affecting local choices along the movement trajectory.
Abstract: Computational Movement Analysis focuses on the characterization of the trajectories of individuals across space and time. Various analytic techniques, including but not limited to random walks, Brownian motion models, and step selection functions, have been used for modeling movement. These fall under the rubric of signal models, which are divided into deterministic and stochastic models. The difficulty of applying these models to the movement of dynamic objects (e.g., animals, humans, vehicles) is that the spatiotemporal signal produced by their trajectories is a complex composite that is influenced by the geography through which they move (i.e., the network or the physiography of the terrain), their behavioral state (i.e., hungry, going to work, shopping, tourism, etc.), and their interactions with other individuals. This signal reflects multiple scales of behavior, from the local choices to the global objectives that drive movement. In this research, we propose a stochastic simulation model that incorporates contextual factors (i.e., environmental conditions) that affect local choices along the movement trajectory. We show how actual global positioning system observations can be used to parameterize movement and validate movement models, and argue that incorporating context is essential in modeling movement.
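A toy version of such a context-sensitive movement model is a lattice walk whose step probabilities are weighted by a contextual attractiveness surface; the exponential surface below is a hypothetical stand-in for the environmental covariates the authors parameterize from GPS observations:

```python
import math, random

def context_walk(steps, attractiveness, start, rng):
    """Stochastic movement sketch: at each step the agent chooses among
    the four grid neighbours with probability proportional to a
    contextual attractiveness surface (a stand-in for terrain, network,
    or behavioural-state effects on local choices)."""
    x, y = start
    path = [(x, y)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        weights = [attractiveness(x + dx, y + dy) for dx, dy in moves]
        r = rng.random() * sum(weights)
        for (dx, dy), w in zip(moves, weights):
            if r < w:
                break
            r -= w
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

rng = random.Random(7)
# hypothetical context: cells further east are exponentially more attractive,
# so trajectories drift east on average
path = context_walk(200, lambda x, y: math.exp(0.1 * x), (0, 0), rng)
```

Replacing the lambda with a surface fitted to observed trajectories is the step that, in the paper's terms, parameterizes the model from GPS data.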

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This paper derives sufficient small-gain type conditions for the compositional quantification of the distance in probability between the interconnection of stochastic control subsystems and that of their abstractions, and proposes a computational scheme to construct abstractions together with their corresponding stochastic simulation functions.
Abstract: This paper is concerned with a compositional approach for constructing abstractions of interconnected discrete-time stochastic control systems. The abstraction framework is based on new notions of so-called stochastic simulation functions, using which one can quantify the distance between original interconnected stochastic control systems and their abstractions in the probabilistic setting. Accordingly, one can leverage the proposed results to perform analysis and synthesis over abstract interconnected systems, and then carry the results over concrete ones. In the first part of the paper, we derive sufficient small-gain type conditions for the compositional quantification of the distance in probability between the interconnection of stochastic control subsystems and that of their abstractions. In the second part of the paper, we focus on the class of discrete-time linear stochastic control systems with independent noises in the abstract and concrete subsystems. For this class of systems, we propose a computational scheme to construct abstractions together with their corresponding stochastic simulation functions. We demonstrate the effectiveness of the proposed results by constructing an abstraction (totally 4 dimensions) of the interconnection of four discrete-time linear stochastic control subsystems (together 100 dimensions) in a compositional fashion.

Journal ArticleDOI
TL;DR: This work presents a new exact algorithm that employs the composition-rejection on the propensity bounds of reactions to select the next reaction firing and provides a favorable scaling for the computational complexity in simulating large reaction networks.
Abstract: Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. The algorithm therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
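The composition-rejection selection step itself is compact. The sketch below rebuilds the group structure from scratch on every call for clarity; the algorithm described above instead maintains groups keyed by propensity bounds incrementally as the state changes, which is where its favorable scaling comes from:

```python
import math, random

def select_reaction_cr(propensities, rng):
    """Composition-rejection selection of the next reaction: reactions
    are binned into groups whose propensities lie in [2**(g-1), 2**g);
    a group is drawn in proportion to its exact propensity sum
    (composition), then a member is drawn uniformly and accepted with
    probability a / 2**g (rejection)."""
    groups, sums = {}, {}
    for idx, a in enumerate(propensities):
        if a > 0.0:
            g = math.frexp(a)[1]          # a lies in [2**(g-1), 2**g)
            groups.setdefault(g, []).append((idx, a))
            sums[g] = sums.get(g, 0.0) + a
    # composition step: choose a group proportionally to its propensity sum
    r = rng.random() * sum(sums.values())
    for g, s in sums.items():
        if r < s:
            break
        r -= s
    # rejection step: uniform member, accepted with probability a / 2**g
    bound, members = 2.0 ** g, groups[g]
    while True:
        idx, a = members[rng.randrange(len(members))]
        if rng.random() * bound <= a:
            return idx

rng = random.Random(11)
counts = [0, 0, 0, 0]
for _ in range(20000):
    counts[select_reaction_cr([1.0, 2.0, 3.0, 0.5], rng)] += 1
# selection frequencies approach a_i / sum(a) = (0.154, 0.308, 0.462, 0.077)
```

Because each acceptance probability is at least 1/2, the rejection loop terminates after about two draws on average regardless of how many reactions the network contains.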

Posted Content
TL;DR: In this paper, a compositional approach for constructing abstractions of interconnected discrete-time stochastic control systems is proposed, where the abstraction framework is based on new notions of so-called simulation functions, using which one can quantify the distance between original interconnected control systems and their abstractions in the probabilistic setting.
Abstract: This paper is concerned with a compositional approach for constructing abstractions of interconnected discrete-time stochastic control systems. The abstraction framework is based on new notions of so-called stochastic simulation functions, using which one can quantify the distance between original interconnected stochastic control systems and their abstractions in the probabilistic setting. Accordingly, one can leverage the proposed results to perform analysis and synthesis over abstract interconnected systems, and then carry the results over concrete ones. In the first part of the paper, we derive sufficient small-gain type conditions for the compositional quantification of the distance in probability between the interconnection of stochastic control subsystems and that of their abstractions. In the second part of the paper, we focus on the class of discrete-time linear stochastic control systems with independent noises in the abstract and concrete subsystems. For this class of systems, we propose a computational scheme to construct abstractions together with their corresponding stochastic simulation functions. We demonstrate the effectiveness of the proposed results by constructing an abstraction (totally 4 dimensions) of the interconnection of four discrete-time linear stochastic control subsystems (together 100 dimensions) in a compositional fashion.

Journal ArticleDOI
TL;DR: This paper partitions the design space into several subspaces and estimates the density of failure samples in each subspace by binning and constructing regression functions, which guarantees the accuracy of the FPF approximation over each subspace and ultimately over the entire design space.

Journal ArticleDOI
TL;DR: It is shown that SK with the proposed strategies applied holds great promise for achieving high predictive accuracy by striking a good balance between exploration and exploitation.

Journal ArticleDOI
TL;DR: A stochastic simulation methodology where the uncertain experimental data are modelled by a probability distribution at each sample location prior to the simulation of the rest of the grid nodes is presented.
Abstract: Most geostatistical estimation and simulation methodologies assume the experimental data as hard measurements, meaning that the measures of a given property of interest are not associated with uncertainty. The challenge of integrating uncertain experimental data at the geostatistical estimation or simulation models is not new. Several attempts have been made, either considering the uncertain data as soft data or interpreting it as inequality constraints, based on the indicator formalism or decreasing the weight of soft data in kriging procedures. This paper presents a stochastic simulation methodology where the uncertain experimental data are modelled by a probability distribution at each sample location. Data values are firstly drawn, by stochastic simulation, at these locations prior to the simulation of the rest of the grid nodes. This method is also extended to the simulation of categorical uncertain data, as well as to the simulation with uncertain block support data. To illustrate the proposed methodology, an application to a real case study of pore pressure prediction of oil reservoirs is presented, as well as an upscaling problem.

Journal ArticleDOI
TL;DR: This paper investigates new efficient formulations of the stochastic simulation algorithm to improve its computational efficiency and presents a new method for computing the firing time of the next reaction, based on recycling of random numbers.
Abstract: The stochastic simulation algorithm has been used to generate exact trajectories of biochemical reaction networks. For each simulation step, the simulation selects a reaction and its firing time according to a probability that is proportional to the reaction propensity. We investigate in this paper new efficient formulations of the stochastic simulation algorithm to improve its computational efficiency. We examine the selection of the next reaction firing and reduce its computational cost by reusing the computation from the previous step. For biochemical reactions with delays, we present a new method for computing the firing time of the next reaction, based on the recycling of random numbers. Our new approach for generating the firing time of the next reaction is not only computationally efficient but also easy to implement. We further analyze and reduce the number of propensity updates when a delayed reaction occurs. We demonstrate the applicability of our improvements by experimenting with concrete biological models.

Journal ArticleDOI
TL;DR: A novel global optimization algorithm, combining the expected hypervolume improvement of the approximated Pareto front and the probability of feasibility of new points, is proposed to identify the Pareto front (set) with a minimal number of expensive simulations.

Journal ArticleDOI
TL;DR: A direct approach without utilizing the memoryless transformation is proposed to generate a beta random field. Though the marginal distribution is restricted to the beta distribution, the proposed approach is simple and efficient, which makes it attractive for simulating large-scale random fields for material properties.


Journal ArticleDOI
TL;DR: This finding is applied to extend the classical phase-portrait stability classification of the zero-equilibrium point to the random scenario, and to establish the potential of the theoretical results and their connection with their deterministic counterpart.

Journal ArticleDOI
28 Jul 2017
TL;DR: In this paper, the authors studied a class of random fractional linear differential equations where the initial condition and the forcing term are assumed to be second-order random variables and provided a sufficient condition to guarantee the existence of this operator.
Abstract: The aim of this paper is to study, in the mean square sense, a class of random fractional linear differential equations where the initial condition and the forcing term are assumed to be second-order random variables. The solution stochastic process of the associated Cauchy problem is constructed by combining the application of a mean square chain rule for differentiating second-order stochastic processes with the random Frobenius method. To conduct our study, the classical Caputo derivative is first extended to the random framework, in the mean square sense, and a sufficient condition to guarantee the existence of this operator is provided. Afterwards, the solution of a random fractional initial value problem is built under mild conditions. The main statistical functions of the solution stochastic process are also computed. Finally, several examples illustrate our theoretical findings.

Posted Content
TL;DR: This work shows that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics, and argues that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model.
Abstract: A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad-hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.

Journal ArticleDOI
TL;DR: Aiming to effectively model the uncertainties and avoid queue spillback in traffic networks, the authors develop a stochastic expected-value model with chance constraints for the objective function of the stochastic MPC model.
Abstract: This paper proposes a stochastic model predictive control (MPC) framework for traffic signal coordination and control in urban traffic networks. One of the important features of the proposed stochastic MPC model is that uncertain traffic demands and stochastic disturbances are taken into account. Aiming to effectively model the uncertainties and avoid queue spillback in traffic networks, we develop a stochastic expected value model with chance constraints for the objective function of the stochastic MPC model. The objective function is defined to minimize the queue length and the oscillation of green time between any two control steps. Furthermore, by embedding the stochastic simulation and neural networks into a genetic algorithm, we propose a hybrid intelligent algorithm to solve the stochastic MPC model. Finally, numerical results by means of simulation on a road network are presented, which illustrate the performance of the proposed approach.

Journal ArticleDOI
TL;DR: A direct non-translation approach is proposed to generate random fields with a marginal gamma distribution based on the additive reproductive property of the gamma distribution, which results in a conceptually simple algorithm that is straightforward to implement in Monte-Carlo simulations.

Journal ArticleDOI
TL;DR: This paper addresses the problem of simulating multivariate random fields with stationary Gaussian increments in a d-dimensional Euclidean space with a spectral turning-bands algorithm, in which the simulated field is a mixture of basic random fields made of weighted cosine waves associated with random frequencies and random phases.
Abstract: This paper addresses the problem of simulating multivariate random fields with stationary Gaussian increments in a d-dimensional Euclidean space. To this end, one considers a spectral turning-bands algorithm, in which the simulated field is a mixture of basic random fields made of weighted cosine waves associated with random frequencies and random phases. The weights depend on the spectral density of the direct and cross variogram matrices of the desired random field for the specified frequencies. The algorithm is applied to synthetic examples corresponding to different spatial correlation models. The properties of these models and of the algorithm are discussed, highlighting its computational efficiency, accuracy and versatility.
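A scalar, single-variable sketch of the cosine-wave construction, for a stationary field with Gaussian covariance exp(-|h|**2 / scale**2); the paper's algorithm targets the more general multivariate, stationary-increment case through direct and cross variogram spectra, and the covariance model and wave count here are illustrative:

```python
import math, random

def cosine_field(n_waves, scale, rng, dim=2):
    """Spectral simulation of a stationary Gaussian random field with
    covariance exp(-|h|**2 / scale**2): a weighted sum of cosine waves
    with Gaussian random frequencies (drawn from the spectral density)
    and uniform random phases."""
    amp = math.sqrt(2.0 / n_waves)
    waves = [([rng.gauss(0.0, math.sqrt(2.0) / scale) for _ in range(dim)],
              rng.uniform(0.0, 2.0 * math.pi))
             for _ in range(n_waves)]
    def z(x):
        return amp * sum(math.cos(sum(w * c for w, c in zip(omega, x)) + phi)
                         for omega, phi in waves)
    return z

rng = random.Random(5)
fields = [cosine_field(64, 1.0, rng) for _ in range(3000)]
pairs = [(f((0.0, 0.0)), f((0.5, 0.0))) for f in fields]
n = len(pairs)
m0 = sum(a for a, _ in pairs) / n
m1 = sum(b for _, b in pairs) / n
var0 = sum(a * a for a, _ in pairs) / n - m0 * m0
cov = sum(a * b for a, b in pairs) / n - m0 * m1
# target: unit variance at lag 0, covariance exp(-0.25) ~ 0.779 at lag 0.5
```

Each field evaluation costs O(n_waves) regardless of grid size, which mirrors the computational efficiency the paper highlights for the spectral approach.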