
Showing papers on "Stochastic simulation published in 2021"


Journal ArticleDOI
TL;DR: Results demonstrate that the posterior probability distributions of the unknown structural parameters can be successfully identified, and reliable probabilistic model updating and damage identification can be achieved.

37 citations


Journal ArticleDOI
TL;DR: A compositional framework for the construction of discrete-time finite abstractions from continuous-time stochastic hybrid systems by quantifying the distance between their outputs in a probabilistic setting is proposed.

32 citations


Journal ArticleDOI
TL;DR: This work considers stochastic control systems evolving over continuous spaces and, building on approximate abstractions, computes control strategies with lower and upper bounds on the probability of satisfying unbounded temporal logic specifications; the framework is able to handle infinite-horizon properties.
Abstract: Discrete-time stochastic systems are an essential modeling tool for many engineering systems. We consider stochastic control systems that are evolving over continuous spaces. For this class of models, methods for the formal verification and synthesis of control strategies are computationally hard and generally rely on the use of approximate abstractions. Building on approximate abstractions, we compute control strategies with lower and upper bounds for satisfying unbounded temporal logic specifications. First, robust dynamic programming mappings over the abstract system are introduced to solve the control synthesis and verification problem. These mappings yield a control strategy and a unique lower bound on the satisfaction probability for temporal logic specifications that is robust to the incurred approximation errors. Second, upper bounds on the satisfaction probability are quantified, and properties of the mappings are analyzed and discussed. Finally, we show the implications of these results for the continuous state spaces of linear stochastic dynamic systems. This abstraction-based synthesis framework is shown to be able to handle infinite-horizon properties. Approximation errors expressed as deviations in the outputs of the models and as deviations in the probabilistic transitions are allowed and are quantified using approximate stochastic simulation relations.
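In the spirit of these robust mappings, bounded value iteration on a finite abstraction can be sketched in a few lines. The three-state transition matrix, the goal state, and the per-step error delta below are all illustrative, not taken from the paper:

```python
import numpy as np

# Toy robust value iteration on a 3-state finite abstraction: state 2 is the
# absorbing goal. The per-step abstraction error delta is subtracted to get a
# lower bound on the satisfaction (reachability) probability, and added to
# get an upper bound. P and delta are illustrative.
P = np.array([[0.8, 0.1, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])
goal, delta = 2, 0.01

lo = np.zeros(3)
hi = np.zeros(3)
lo[goal] = hi[goal] = 1.0
for _ in range(500):
    lo = np.clip(P @ lo - delta, 0.0, 1.0)
    hi = np.clip(P @ hi + delta, 0.0, 1.0)
    lo[goal] = hi[goal] = 1.0

# lo[s] <= true reach probability from s <= hi[s] for every state s
```

Subtracting delta at every step is what makes the lower bound robust to the abstraction error; clipping keeps both bounds inside [0, 1].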

21 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic bi-objective two-stage open shop scheduling problem that models a vehicle maintenance process where tasks are appointed to be completed by multiple third-party companies with professional equipment is proposed.
Abstract: Nowadays, many manufacturing and service industries prefer to share resources such as facilities and workers to cooperatively perform tasks, which can efficiently improve resource utilization and customer satisfaction. Generally, the decision-makers need to pay more for resource usage, leading to an urgent demand to decrease operational costs. This article proposes a stochastic bi-objective two-stage open shop scheduling problem that models a vehicle maintenance process where tasks are appointed to be completed by multiple third-party companies with professional equipment. We formulate this optimization problem by minimizing the total tardiness and processing cost subject to various resource constraints. A hybrid multiobjective migrating birds optimization combined with a genetic operation and a discrete event system is designed by considering problem characteristics to solve the problem. In this method, the migrating birds optimization with some particular strategies aims at searching candidate solutions from the entire solution domain. Simultaneously, the discrete event system, by using stochastic simulation and discrete event-based simulation approaches, focuses on evaluating the performance of searched solutions. Simulation experiments are performed, and state-of-the-art algorithms are used as competitive approaches. The results confirm that this approach has an excellent performance in handling our considered problem.

16 citations


Journal ArticleDOI
TL;DR: A stochastic multiobjective hybrid flow shop scheduling problem is proposed that aims at maximizing processing quality and minimizing total tardiness, where the processing time of jobs obeys a known random distribution.
Abstract: Currently, manufacturing enterprises attach great importance to improving processing quality and customer satisfaction. Hybrid flow shops have widespread applications in real-world manufacturing systems such as steel production and chemical industry. In a practical production process, uncertainty commonly arises due to the difficulty of knowing exact information of facilities and jobs beforehand. In order to improve processing quality and customer satisfaction of manufacturing systems in uncertain environments, this article proposes a stochastic multiobjective hybrid flow shop scheduling problem aiming at maximizing processing quality and minimizing total tardiness, where the processing time of jobs obeys a known random distribution. To describe jobs’ processing quality mathematically, a quality-based cost function is presented, and further a chance-constrained programming approach is used to formulate this problem. Then, a multiobjective artificial bee colony algorithm incorporating a stochastic simulation approach is designed by considering its characteristics. Simulation experiments are performed on a set of instances and several state-of-the-art multiobjective optimization algorithms are chosen as peer approaches. Experiment results confirm that the proposed algorithm has an excellent performance in handling this problem.

15 citations


Journal ArticleDOI
TL;DR: This study was devoted to investigating stochastic model updating in a Bayesian inference framework based on a frequency response function (FRF) vector without any post-processing such as smoothing and windowing.

12 citations


Posted ContentDOI
26 May 2021
TL;DR: A new methodology for genuine simulation of stochastic processes with any dependence structure and any marginal distribution is outlined and tested; it approximates the marginal distribution of the process, irrespective of its type, using a number of its cumulants.
Abstract: We outline and test a new methodology for genuine simulation of stochastic processes with any dependence structure and any marginal distribution. We reproduce time dependence with a generalized, time symmetric or asymmetric, moving-average scheme. This implements linear filtering of non-Gaussian white noise, with the weights of the filter determined by analytical equations, in terms of the autocovariance of the process. We approximate the marginal distribution of the process, irrespective of its type, using a number of its cumulants, which in turn determine the cumulants of white noise, in a manner that can readily support the generation of random numbers from that approximation, so that it be applicable for stochastic simulation. The simulation method is genuine as it uses the process of interest directly, without any transformation (e.g., normalization). We illustrate the method in a number of synthetic and real-world applications, with either persistence or antipersistence, and with non-Gaussian marginal distributions that are bounded, thus making the problem more demanding. These include distributions bounded from both sides, such as uniform, and bounded from below, such as exponential and Pareto, possibly having a discontinuity at the origin (intermittence). All examples studied show the satisfactory performance of the method.
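The filtering step of such a moving-average scheme can be sketched as follows. This is a simplified illustration, not the paper's method: it assumes Gaussian white noise (the paper's point is precisely to admit non-Gaussian noise matched by cumulants) and a hypothetical exponential autocovariance, and it computes the symmetric weights from the standard spectral relation that the Fourier transform of the weight sequence equals the square root of the power spectrum:

```python
import numpy as np

rng = np.random.default_rng(42)

# Target autocovariance: exponential decay c_k = rho^|k| (hypothetical choice)
n, q = 100_000, 256
rho = 0.7
lags = np.arange(-q, q + 1)
c = rho ** np.abs(lags)

# Symmetric moving-average weights: DFT(weights) = sqrt(power spectrum),
# where the power spectrum is the DFT of the autocovariance sequence.
spectrum = np.fft.fft(np.fft.ifftshift(c)).real
spectrum = np.clip(spectrum, 0.0, None)        # guard tiny negative values
a = np.fft.fftshift(np.fft.ifft(np.sqrt(spectrum)).real)

# Filter white noise: x_i = sum_j a_j v_{i+j}
v = rng.standard_normal(n + 2 * q)
x = np.convolve(v, a, mode="valid")

var = x.var()                                  # should be close to c_0 = 1
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # should be close to rho = 0.7
```

The convolution reproduces the target autocovariance because the squared magnitude of the filter's transfer function equals the process spectrum; only the white-noise marginal would change in the paper's cumulant-based generalization.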

11 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic generation mathematical framework is applied based on a Generalized Hurst-Kolmogorov process embedded in a Symmetric-Moving-Average scheme, which is used for the simulation of a wind process while preserving explicitly the marginal moments, wind's intermittency and long-term persistence.

11 citations


Journal ArticleDOI
TL;DR: The concepts of survival signature, fuzzy probability theory and the two versions of non-intrusive stochastic simulation (NISS) methods are adapted and merged, providing an efficient approach to quantify the reliability of complex systems taking into account the whole uncertainty spectrum.

10 citations


Journal ArticleDOI
TL;DR: A new methodology for genuine simulation of stochastic processes with any dependence structure and any marginal distribution is outlined and tested; it approximates the marginal distribution of the process, irrespective of its type, using a number of its cumulants.
Abstract: We outline and test a new methodology for genuine simulation of stochastic processes with any dependence structure and any marginal distribution. We reproduce time dependence with a generalized, time symmetric or asymmetric, moving-average scheme. This implements linear filtering of non-Gaussian white noise, with the weights of the filter determined by analytical equations, in terms of the autocovariance of the process. We approximate the marginal distribution of the process, irrespective of its type, using a number of its cumulants, which in turn determine the cumulants of white noise, in a manner that can readily support the generation of random numbers from that approximation, so that it be applicable for stochastic simulation. The simulation method is genuine as it uses the process of interest directly, without any transformation (e.g., normalization). We illustrate the method in a number of synthetic and real-world applications, with either persistence or antipersistence, and with non-Gaussian marginal distributions that are bounded, thus making the problem more demanding. These include distributions bounded from both sides, such as uniform, and bounded from below, such as exponential and Pareto, possibly having a discontinuity at the origin (intermittence). All examples studied show the satisfactory performance of the method.

10 citations


Journal ArticleDOI
Liang Zheng, Youpeng Yang, Xinfeng Xue, Xiaoru Li, Chengcheng Xu
TL;DR: Numerical results show that only hundreds of simulations are required to obtain the ultimate non-dominated non-coordinated and coordinated signal timing plans, and validate a competing relationship between traffic conflicts and total delay.
Abstract: This study proposes a stochastic simulation-based network-wide signal timing optimization model that balances traffic safety and efficiency, and solves it with a Bi-objective Stochastic Simulation-based Optimization (BOSSO) algorithm. In the numerical experiments, an urban road network with 15 signalized and 5 non-signalized intersections in Changsha, China is modelled as the experimental scenario. The calibrated simulator VISSIM, the Surrogate Safety Assessment Model (SSAM) and MATLAB are integrated to construct the VISSIM-SSAM-MATLAB platform, based on which the network-wide signal timing optimization problems without and with coordination are solved by the BOSSO algorithm to trade off traffic conflicts and total delay. Numerical results show that only hundreds of simulations are required to obtain the ultimate non-dominated non-coordinated and coordinated signal timing plans, and validate a competing relationship between traffic conflicts and total delay. In a bi-objective comparison of the three signal plans from overall and local perspectives, the coordinated signal plan outperforms the non-coordinated one, which in turn outperforms the field-implemented one. This demonstrates the effectiveness of the BOSSO method.

Journal ArticleDOI
TL;DR: A novel geostatistical methodology is proposed, based on the integration into one approach of multi-source data fusion and stochastic simulation, to estimate the risk of extreme (shallow) water table depth, and a demonstrative example of application is illustrated to a case study in a Cerrado conservation area in Brazil.

Journal ArticleDOI
TL;DR: Results demonstrate the feasibility of using topology graphs as a summary statistic to restrict the generation of geomodel ensembles with known geological information and to obtain improved ensembles of probable geomodels which respect the known topology information and exhibit reduced uncertainty using stochastic simulation methods.
Abstract: Structural geomodeling is a key technology for the visualization and quantification of subsurface systems. Given the limited data and the resulting necessity for geological interpretation to construct these geomodels, uncertainty is pervasive and traditionally unquantified. Probabilistic geomodeling allows for the simulation of uncertainties by automatically constructing geomodel ensembles from perturbed input data sampled from probability distributions. But random sampling of input parameters can lead to construction of geomodels that are unrealistic, either due to modeling artifacts or by not matching known information about the regional geology of the modeled system. We present a method to incorporate geological information in the form of known geomodel topology into stochastic simulations to constrain resulting probabilistic geomodel ensembles, using the open-source geomodeling software GemPy. Simulated geomodel realizations are checked against topology information using an approximate Bayesian computation approach to avoid the specification of a likelihood function. We demonstrate how we can infer the posterior distributions of the model parameters using topology information in two experiments: (1) a synthetic geomodel using a rejection sampling scheme (ABC-REJ) to demonstrate the approach and (2) a geomodel of a subset of the Gullfaks field in the North Sea comparing both rejection sampling and a sequential Monte Carlo sampler (ABC-SMC). Possible improvements to processing speed of up to 10.1 times are discussed, focusing on the use of more advanced sampling techniques to avoid the simulation of unfeasible geomodels in the first place.
Results demonstrate the feasibility of using topology graphs as a summary statistic to restrict the generation of geomodel ensembles with known geological information and to obtain improved ensembles of probable geomodels which respect the known topology information and exhibit reduced uncertainty using stochastic simulation methods.
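A rejection-sampling ABC loop of the kind used in the first experiment can be sketched generically. The one-parameter "simulator" and its integer "topology" summary below are toy stand-ins for a GemPy geomodel and its topology graph:

```python
import numpy as np

rng = np.random.default_rng(7)

# ABC rejection sketch: accept prior draws whose simulated summary statistic
# (here a stand-in for a geomodel topology graph) matches the observed one.
def simulate_topology(depth):
    # toy rule: the discrete "topology" is just the integer part of the
    # uncertain depth parameter; a real study would build a geomodel here
    return int(depth // 1.0)

observed = simulate_topology(3.5)

accepted = []
for _ in range(20_000):
    depth = rng.uniform(0.0, 10.0)               # prior on the uncertain input
    if simulate_topology(depth) == observed:     # exact topology match (ABC-REJ)
        accepted.append(depth)

post = np.array(accepted)
# the posterior sample concentrates on the depths that reproduce the topology
```

Replacing the exact-match test with a graph distance and an acceptance threshold, or the loop with an SMC sampler, gives the paper's other variants.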

Journal ArticleDOI
TL;DR: In this paper, a high-order sequential simulation method is proposed, which is based on the efficient inference of highorder spatial statistics from the available sample data, using a statistical learning framework in kernel space.
Abstract: A training image free, high-order sequential simulation method is proposed herein, which is based on the efficient inference of high-order spatial statistics from the available sample data. A statistical learning framework in kernel space is adopted to develop the proposed simulation method. Specifically, a new concept of aggregated kernel statistics is proposed to enable sparse data learning. The conditioning data in the proposed high-order sequential simulation method appear as data events corresponding to the attribute values associated with the so-called spatial templates of various geometric configurations. The replicates of the data events act as the training data in the learning framework for inference of the conditional probability distribution and generation of simulated values. These replicates are mapped into spatial Legendre moment kernel spaces, and the kernel statistics are computed thereafter, encapsulating the high-order spatial statistics from the available data. To utilize the incomplete information from the replicates, which partially match the spatial template of a given data event, the aggregated kernel statistics combine the ensemble of the elements in different kernel subspaces for statistical inference, embedding the high-order spatial statistics of the replicates associated with various spatial templates into the same kernel subspace. The aggregated kernel statistics are incorporated into a learning algorithm to obtain the target probability distribution in the underlying random field, while preserving in the simulations the high-order spatial statistics from the available data. The proposed method is tested using a synthetic dataset, showing the reproduction of the high-order spatial statistics of the sample data. The comparison with the corresponding high-order simulation method using TIs emphasizes the generalization capacity of the proposed method for sparse data learning.

Journal ArticleDOI
TL;DR: In this article, a comparative study is performed on the most promising search methods, using the stochastic simulation algorithm and considering system size variations from 10² to 10⁶ employing 10⁷ randomly selected targets.

Journal ArticleDOI
TL;DR: A new algorithm for the approximation and simulation of twofold iterated stochastic integrals together with the corresponding Lévy areas driven by a multidimensional Brownian motion is proposed.
Abstract: A new algorithm for the approximation and simulation of twofold iterated stochastic integrals together with the corresponding Lévy areas driven by a multidimensional Brownian motion is proposed. Th...

Journal ArticleDOI
TL;DR: A new stochastic simulation-based methodology for pricing discretely-monitored double barrier options and estimating the corresponding probabilities of execution and it is demonstrated clearly that this treatment always outperforms the standard Monte Carlo approach and becomes substantially more efficient when the underlying asset has high volatility and the barriers are set close to the spot price.
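The standard Monte Carlo approach that the paper benchmarks against can be sketched for a discretely monitored double knock-out call under geometric Brownian motion; all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Plain Monte Carlo pricer for a discretely monitored double knock-out call:
# the option pays max(S_T - K, 0) only if the asset stays strictly between
# the two barriers at every monitoring date. Parameters are illustrative.
s0, k_strike, b_lo, b_hi = 100.0, 100.0, 80.0, 120.0
r, sigma, t_mat, n_mon, n_paths = 0.05, 0.2, 1.0, 12, 200_000

dt = t_mat / n_mon
z = rng.standard_normal((n_paths, n_mon))
log_paths = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_paths)

alive = ((s > b_lo) & (s < b_hi)).all(axis=1)   # survives every monitoring date
payoff = np.where(alive, np.maximum(s[:, -1] - k_strike, 0.0), 0.0)
price = np.exp(-r * t_mat) * payoff.mean()
p_exec = alive.mean()                           # probability of execution
```

Because most paths are knocked out and contribute zero payoff, the estimator's relative variance grows as the barriers tighten or volatility rises, which is exactly the regime where the paper reports its method becoming substantially more efficient.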

Journal ArticleDOI
TL;DR: In this article, a learning-based stochastic simulation method that incorporates high-order spatial statistics at multiple scales from sources with different resolutions is presented, where the high-order spatial information from different sources is encapsulated as aggregated kernel statistics in a spatial Legendre moment kernel space, and the probability distribution of the underlying random field model is derived by a statistical learning algorithm.

Journal ArticleDOI
TL;DR: In this article, the Gibbs-Laguerre tessellation model is used to construct an energy function of the Gibbs point process, such that the resulting tessellation matches some desired geometrical properties.
Abstract: Random tessellations are well suited for probabilistic modeling of three-dimensional (3D) grain microstructures of polycrystalline materials. The present paper is focused on so-called Gibbs-Laguerre tessellations, in which the generators of the Laguerre tessellation form a Gibbs point process. The goal is to construct an energy function of the Gibbs point process such that the resulting tessellation matches some desired geometrical properties. Since the model is analytically intractable, our main tool of analysis is stochastic simulation based on Markov chain Monte Carlo. Such simulations enable us to investigate the properties of the models, and, in the next step, to apply the knowledge gained to the statistical reconstruction of the 3D microstructure of an aluminum alloy extracted from 3D tomographic image data.
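A minimal Markov chain Monte Carlo sampler for a Gibbs point process can be sketched with birth-death Metropolis moves. The pairwise (Strauss-type) energy below is a simple stand-in for the paper's Gibbs-Laguerre energy functions, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Birth-death Metropolis sampler for a pairwise-interaction Gibbs point
# process on the unit square: density proportional to beta^n * gamma^(close
# pairs), i.e. points within distance r inhibit each other (gamma < 1).
beta, gamma, r = 100.0, 0.5, 0.05

def n_close_pairs(pts):
    if len(pts) < 2:
        return 0
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return int((d[np.triu_indices(len(pts), 1)] < r).sum())

pts = rng.random((10, 2))
for _ in range(3000):
    if rng.random() < 0.5 and len(pts) > 1:          # death proposal
        i = rng.integers(len(pts))
        new = np.delete(pts, i, axis=0)
        ratio = len(pts) / (beta * gamma ** (n_close_pairs(pts) - n_close_pairs(new)))
    else:                                            # birth proposal
        new = np.vstack([pts, rng.random((1, 2))])
        ratio = beta * gamma ** (n_close_pairs(new) - n_close_pairs(pts)) / len(new)
    if rng.random() < min(1.0, ratio):
        pts = new
```

In the paper's setting the accepted points would additionally carry weights and generate a Laguerre tessellation, with the energy expressed through the tessellation's geometrical properties rather than through pair distances.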

Journal ArticleDOI
TL;DR: An efficient stochastic method to produce fully nonstationary records having multiple peaks in power spectrum is introduced to resimulate 252 non-pulse-like, horizontal near-field records with rupture distance of less than 10 km and strike–slip mechanism.
Abstract: This paper introduces an efficient stochastic method to produce fully nonstationary records having multiple peaks in power spectrum. The zero-crossing characteristics of the acceleration, velocity,...

Proceedings ArticleDOI
09 Aug 2021
TL;DR: In this article, an adaptive noise generator circuit for on-chip simulations of stochastic chemical kinetics is described, which uses amplified BJT white noise and adaptive low-pass filtering to emulate the power spectrum and auto-correlation of random telegraph signals with Poisson-distributed level transitions.
Abstract: This paper describes an adaptive noise generator circuit suitable for on-chip simulations of stochastic chemical kinetics. The circuit uses amplified BJT white noise and adaptive low-pass filtering to emulate the power spectrum and auto-correlation of random telegraph signals (RTS) with Poisson-distributed level transitions. A current-mode implementation in the IHP 0.25 µm BiCMOS process shows excellent agreement with theoretical results from the Gillespie stochastic simulation algorithm over a 60 dB range in mean current levels (modeling molecule count numbers). The circuit has an estimated layout area of 0.01 mm² and typically consumes 100 µA, which are 10× and 8× better, respectively, than prior implementations.
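The Gillespie stochastic simulation algorithm used as the reference can be sketched for a simple birth-death process (constant production, first-order degradation), whose stationary copy-number distribution is Poissonian; the rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gillespie SSA for a birth-death process: molecules are produced at constant
# rate k and degraded at rate g per molecule. Rates are illustrative.
k, g = 50.0, 1.0
t, n, t_end = 0.0, 0, 200.0
samples = []
while t < t_end:
    a1, a2 = k, g * n              # propensities: birth, death
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0) # exponential waiting time to next event
    if rng.random() < a1 / a0:     # pick a reaction with probability a_i/a0
        n += 1
    else:
        n -= 1
    if t > 10.0:                   # discard the initial transient
        samples.append(n)

mean = np.mean(samples)            # stationary mean approaches k/g = 50
fano = np.var(samples) / mean      # birth-death is Poissonian: Fano ≈ 1
```

Note the samples are recorded at event times, which slightly over-weights high-propensity states; for the rough Fano-factor check here that bias is negligible.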

Journal ArticleDOI
22 Jun 2021-PLOS ONE
TL;DR: In this article, a multi-point geostatistical reservoir facies modeling algorithm based on the Deep Forward Neural Network (DFNN) is proposed to optimize the training data organization and repeated simulation of grid nodes.
Abstract: Reservoir facies modeling is an important way to express the sedimentary characteristics of a target area. Conventional deterministic modeling, target-based stochastic simulation, and two-point geostatistical stochastic modeling methods struggle to characterize complex sedimentary microfacies structures. The multi-point geostatistics (MPG) method can learn a priori geological models and realize multi-point correlation simulation in space, while a deep neural network can express nonlinear relationships well. This article combines the advantages of the two to optimize a multi-point geostatistical reservoir facies modeling algorithm based on a Deep Forward Neural Network (DFNN). By optimizing the organization of the multi-grid training data and repeatedly simulating grid nodes, the simulation results for diverse modeling algorithm parameters, data conditions and deposition types of sedimentary microfacies models were compared. The results show that with these optimizations it is easier to obtain a random simulation close to the real target, and that sedimentary microfacies of different scales and different sedimentary types can be simulated.

Book ChapterDOI
19 Mar 2021
TL;DR: In this paper, a random simulation evaluation model is proposed that evaluates the advantages of each evaluation object by calculating its winning degree; the results showed that each index of the experimental group was higher than that of the control group.
Abstract: In classical comprehensive evaluation theory, the evaluation conclusion usually takes an absolute form. In view of the absoluteness of judging advantages and disadvantages in traditional physical education (PE) teaching evaluation, and the inconsistency among multiple evaluation conclusions, this paper constructs an independent advantage evaluation method that highlights each object's own strengths and puts forward a comprehensive evaluation mode based on random simulation. In other words, by setting parameters, the traditional evaluation mode is transformed into a random mode, yielding a probability ranking of the schemes' relative advantages. Because of the independence of the stochastic simulation solution method, this paper applies it to the evaluation model of PE and constructs a new independent evaluation method that assesses each object by calculating its winning degree. Finally, the random simulation comprehensive evaluation model was taken as the experimental group and the traditional sports evaluation model as the control group. The results showed that each index of the experimental group was higher than that of the control group, and the comprehensive score was 0.83 higher. Thus, the random simulation evaluation model extends the traditional evaluation model, providing a structural framework for various information forms and evaluators' preferences, so that the evaluation process is no longer limited to a single data form and information structure, further broadening the practical application scope of comprehensive evaluation methods.

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a hybrid approach called a genetic algorithm-based fuzzy programming method is proposed to handle multi-objective stochastic transportation problems, which is a hybridization of the evolutionary algorithm called a GA and a classical mathematical programming technique called fuzzy programming.
Abstract: In real-life situations, multi-objective stochastic transportation problems are difficult to handle and cannot be solved directly using traditional mathematical programming approaches. In this paper, we propose a solution procedure for this problem: a hybridization of an evolutionary algorithm, the genetic algorithm, and a classical mathematical programming technique, the fuzzy programming method. This hybrid approach is called the genetic algorithm-based fuzzy programming method. The supply and demand parameters of the constraints follow a three-parameter Weibull distribution. The procedure consists of three steps. Initially, the probabilistic constraints are handled using stochastic simulation. Next, the feasibility of the probabilistic constraints is checked by stochastic programming with the genetic algorithm, without deriving the deterministic equivalents. Then, the genetic algorithm-based fuzzy programming method generates non-dominated solutions for the given problem. Finally, a numerical case study is presented to illustrate the methodology.
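The stochastic-simulation treatment of a probabilistic constraint in the first step can be sketched as a Monte Carlo feasibility check. The three-parameter Weibull values, the shipment quantities, and the confidence level below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of a chance constraint P(supply >= shipped) >= alpha,
# with supply following a three-parameter Weibull (shape k, scale lam,
# location loc). All values are illustrative.
k_shape, lam, loc = 2.0, 40.0, 10.0
alpha = 0.90                       # required satisfaction probability

def chance_feasible(shipped, n_sim=100_000):
    supply = loc + lam * rng.weibull(k_shape, n_sim)
    return (supply >= shipped).mean() >= alpha

feasible_20 = chance_feasible(20.0)   # small shipment: constraint holds
feasible_60 = chance_feasible(60.0)   # large shipment: constraint violated
```

Such a check lets a genetic algorithm score candidate transportation plans for feasibility without ever deriving the deterministic equivalent of the Weibull chance constraint.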

Journal ArticleDOI
TL;DR: In this paper, a new Gaussian process regression (GPR) method called physics information aided Kriging (PhIK) is proposed.
Abstract: In this work, we propose a new Gaussian process regression (GPR) method: physics information aided Kriging (PhIK). In the standard data-driven Kriging, the unknown function of interest is usually t...

Journal ArticleDOI
TL;DR: Analytical results for the noise, measured by the Fano factor, indicate that, both in delay and non-delay cases, the gene expression system follows sub-Poissonian processes when the parameter values are far from asymptotic values, and that it becomes Poissonian at asymptotic values of the system parameters.
Abstract: Noise can drive the dynamics of stochastic systems to different important states. Delay is another significant parameter that may impart non-Markovian behavior in the system dynamics. The interplay of noise and delay can exhibit interesting, complex behaviors in stochastic systems. In this work, we considered the stochastic gene expression model and studied this interplay of noise and delay in describing the functioning of a gene via transcription and translation processes. The calculated probability distributions of mRNA and protein, both in non-delay and delay, are found to obey certain universal classes, namely Poisson distribution at $$u,N\rightarrow large$$ limit, and Normal distribution at $$u,\langle u\rangle ,N\rightarrow large$$ limit. Analytical result of noise, measured by the Fano factor, indicates that, both in delay and non-delay cases, the gene expression system follows sub-Poissonian processes when the values of parameters are far from asymptotic values and that it becomes Poissonian at asymptotic values of the system parameters. We provided a detailed study of the noise using the Fano Factor with respect to different parameters such as mean, initial population, and time delay for the gene expression process. Again, the stochastic simulation results of the model indicate the transition of mRNA states (low and high transcription and translation) driven by the translation rate.

Journal ArticleDOI
TL;DR: StREEQ, as discussed by the authors, is a robust verification tool suitable for both stochastic and deterministic computational codes and generalizable to any number of discretization variables.

Journal ArticleDOI
TL;DR: In this paper, Park et al. proposed an improved version of PFM, namely IPFM, which can combine the PFM with any simulation budget allocation procedure that satisfies some conditions within a general DOvS framework.
Abstract: Penalty function with memory (PFM), proposed in Park and Kim [2015], addresses discrete optimization via simulation problems with multiple stochastic constraints, where the performance measures of both the objective and the constraints can be estimated only by stochastic simulation. The original PFM is shown to perform well, finding a true best feasible solution with a higher probability than other competitors even when constraints are tight or near-tight. However, PFM applies simple budget allocation rules (e.g., assigning an equal number of additional observations) to solutions sampled at each search iteration and uses a rather complicated penalty sequence with several user-specified parameters. In this article, we propose an improved version of PFM, namely IPFM, which can combine the PFM with any simulation budget allocation procedure that satisfies some conditions within a general DOvS framework. We present a version of a simulation budget allocation procedure useful for IPFM and introduce a new penalty sequence, namely PS2+, which is simpler than the original penalty sequence yet retains convergence properties within IPFM with better finite-sample performance. Asymptotic convergence properties of IPFM with PS2+ are proved. Our numerical results show that the proposed method greatly improves both efficiency and accuracy compared to the original PFM.

Journal ArticleDOI
TL;DR: A stochastic simulation-based comprehensive evaluation algorithm based on the idea of Monte Carlo simulation is proposed, and the corresponding ranking method is investigated; it generates evaluation conclusions with probabilistic information and thus offers better problem interpretability than the absolute conclusion form.
Abstract: The probability ranking conclusion is an extension of the absolute-form evaluation conclusion. Firstly, the random simulation evaluation model is introduced; then, the general idea of converting a traditional evaluation method into the random simulation evaluation model is analyzed; on this basis, following the rule of "further ensuring the stability of the ranking chain while increasing its likelihood," two methods for solving the probability ranking conclusion are given. This paper argues that the absolute form of conclusion hinders the approximation of the theory to the essence of the actual problem and is an important reason for the problem of "non-consistency of multi-evaluation conclusions." To address this, a stochastic simulation-based comprehensive evaluation algorithm based on the idea of Monte Carlo simulation is proposed, and the corresponding ranking method is investigated; it is characterized by generating evaluation conclusions with probability (reliability) information and thus has advantages over the absolute conclusion form in terms of problem interpretability. Because of the independence of the stochastic simulation solution method, it is applied to the "bottom-up" evaluation model as an example, and a novel autonomous evaluation method is constructed. Finally, the application of the stochastic simulation evaluation model is illustrated by an example and compared with the absolute-form evaluation.
The evaluation model is an extension of the traditional evaluation model, which can further broaden the practical application of comprehensive evaluation theory.
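The "winning degree" idea behind such probabilistic conclusions can be sketched with a direct Monte Carlo estimate: treat each alternative's score as a random variable and estimate the probability that it ranks first. The scores and uncertainty levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Monte Carlo "winning degree": instead of the absolute conclusion
# "scheme 1 is best", estimate the probability that each scheme ranks first
# when scores carry uncertainty. Scores and standard deviations are invented.
means = np.array([0.72, 0.70, 0.55])   # point evaluations of three schemes
stds = np.array([0.05, 0.05, 0.05])    # assumed evaluation uncertainty

draws = rng.normal(means, stds, size=(100_000, 3))
win_prob = np.bincount(draws.argmax(axis=1), minlength=3) / len(draws)
# schemes 1 and 2 are nearly tied, so neither wins with certainty, while
# scheme 3 almost never ranks first
```

The output is a probability vector rather than a single ranking, which is exactly the "conclusion with reliability information" the abstract advocates.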

Journal ArticleDOI
TL;DR: In this article, an approach is presented that permits the reconstruction of entire molecular trajectories, including bimolecular encounters, in retrospect, after a simulated time step, while avoiding inefficient draws from non-standard distributions.
Abstract: Computational models of reaction–diffusion systems involving low copy numbers or strongly heterogeneous molecular spatial distributions, such as those frequently found in cellular signaling pathways, require approaches that account for the stochastic dynamics of individual particles, as opposed to approaches representing them through their average concentrations. Efforts to remedy the high computational cost associated with particle-based stochastic approaches by taking advantage of Green’s functions are hampered by the need to draw random numbers from complicated, and therefore costly, non-standard probability distributions to update particle positions. Here, we introduce an approach that permits the reconstruction of entire molecular trajectories, including bimolecular encounters, in retrospect, after a simulated time step, while avoiding inefficient draws from non-standard distributions. This means that highly accurate stochastic simulations can be performed for system sizes that would be prohibitively costly to simulate with conventional Green’s function based methods. The algorithm applies equally well to one, two, and three dimensional systems and can be readily extended to include deterministic forces specified by an interaction potential, such as the Coulomb potential.