
Showing papers on "Monte Carlo method" published in 2020


Proceedings Article
01 Jan 2020
TL;DR: One of the earliest commonly-used packages is Spearmint, which implements a variety of modeling techniques such as MCMC hyperparameter sampling and input warping.
Abstract: Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques. BoTorch's modular design facilitates flexible specification and optimization of probabilistic models written in PyTorch, simplifying implementation of new acquisition functions. Our approach is backed by novel theoretical convergence results and made practical by a distinctive algorithmic foundation that leverages fast predictive distributions, hardware acceleration, and deterministic optimization. We also propose a novel "one-shot" formulation of the Knowledge Gradient, enabled by a combination of our theoretical and software contributions. In experiments, we demonstrate the improved sample efficiency of BoTorch relative to other popular libraries.
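
A minimal NumPy sketch of the Monte-Carlo acquisition idea described above (not BoTorch's implementation): the acquisition value of a candidate batch is estimated by averaging over samples drawn from the model's joint posterior at that batch. The posterior mean, covariance, and incumbent value below are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)

def qei_mc(post_mean, post_cov, best_f, n_samples=4096):
    """Monte-Carlo estimate of E[max(max_j f(x_j) - best_f, 0)] for a batch.

    post_mean: (q,) posterior mean at the q candidate points
    post_cov:  (q, q) posterior covariance at the candidate points
    best_f:    best objective value observed so far
    """
    samples = rng.multivariate_normal(post_mean, post_cov, size=n_samples)
    improvement = np.maximum(samples.max(axis=1) - best_f, 0.0)
    return improvement.mean()

# Toy posterior for a batch of q = 2 candidate points (illustrative numbers).
mu = np.array([0.2, 0.5])
cov = np.array([[0.30, 0.05],
                [0.05, 0.20]])
print("MC estimate of q-Expected-Improvement:", qei_mc(mu, cov, best_f=0.4))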

307 citations


Journal ArticleDOI
TL;DR: The Python interface allows users to combine HOOMD-blue with other packages in the Python ecosystem to create simulation and analysis workflows.

261 citations


Journal Article
TL;DR: A broad and accessible survey of the methods for Monte Carlo gradient estimation in machine learning and across the statistical sciences can be found in this article, where the authors explore three strategies: pathwise, score function, and measure-valued gradient estimators.
Abstract: This paper is a broad and accessible survey of the methods we have at our disposal for Monte Carlo gradient estimation in machine learning and across the statistical sciences: the problem of computing the gradient of an expectation of a function with respect to parameters defining the distribution that is integrated; the problem of sensitivity analysis. In machine learning research, this gradient problem lies at the core of many learning problems, in supervised, unsupervised and reinforcement learning. We will generally seek to rewrite such gradients in a form that allows for Monte Carlo estimation, allowing them to be easily and efficiently used and analysed. We explore three strategies--the pathwise, score function, and measure-valued gradient estimators--exploring their historical development, derivation, and underlying assumptions. We describe their use in other fields, show how they are related and can be combined, and expand on their possible generalisations. Wherever Monte Carlo gradient estimators have been derived and deployed in the past, important advances have followed. A deeper and more widely-held understanding of this problem will lead to further advances, and it is these advances that we wish to support.
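
A small, hedged illustration of two of the surveyed strategies, using a toy problem where the exact answer is known: the gradient of E_{x~N(mu, sigma^2)}[x^2] with respect to mu equals 2*mu, and it can be estimated either with the score-function (REINFORCE) estimator or with the pathwise (reparameterization) estimator.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 1.5, 2.0, 100_000

eps = rng.standard_normal(n)
x = mu + sigma * eps                       # x ~ N(mu, sigma^2)

# Score-function (REINFORCE) estimator: average of f(x) * d log p(x; mu) / d mu.
score_grad = np.mean(x**2 * (x - mu) / sigma**2)

# Pathwise (reparameterization) estimator: differentiate f(mu + sigma*eps) in mu.
pathwise_grad = np.mean(2.0 * x)

print("exact gradient    :", 2.0 * mu)
print("score-function MC :", score_grad)     # unbiased, higher variance here
print("pathwise MC       :", pathwise_grad)  # unbiased, lower variance here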

217 citations


Journal ArticleDOI
TL;DR: This work proposes a specialized neural-network architecture that supports efficient and exact sampling, completely circumventing the need for Markov-chain sampling, and demonstrates the ability to obtain accurate results on larger system sizes than those currently accessible to neural-network quantum states.
Abstract: Artificial neural networks were recently shown to be an efficient representation of highly entangled many-body quantum states. In practical applications, neural-network states inherit numerical schemes used in the variational Monte Carlo method, most notably the use of Markov-chain Monte Carlo (MCMC) sampling to estimate quantum expectations. The local stochastic sampling in MCMC caps the potential advantages of neural networks in two ways: (i) its intrinsic computational cost sets stringent practical limits on the width and depth of the networks, and therefore limits their expressive capacity; (ii) its difficulty in generating precise and uncorrelated samples can result in estimations of observables that are very far from their true value. Inspired by the state-of-the-art generative models used in machine learning, we propose a specialized neural-network architecture that supports efficient and exact sampling, completely circumventing the need for Markov-chain sampling. We demonstrate our approach for two-dimensional interacting spin models, showcasing the ability to obtain accurate results on larger system sizes than those currently accessible to neural-network quantum states.
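
The contrast drawn in this abstract can be illustrated with a toy autoregressive sampler (this is not the paper's architecture, and the weights below are untrained placeholders): because each spin is drawn from an explicit conditional given the spins already fixed, every configuration is an exact, independent sample, with none of the autocorrelation of a Markov chain.

import numpy as np

rng = np.random.default_rng(2)
n_spins = 16
W = 0.1 * rng.standard_normal((n_spins, n_spins))  # only entries W[i, :i] are used
b = 0.1 * rng.standard_normal(n_spins)

def sample_configuration():
    s = np.zeros(n_spins)
    for i in range(n_spins):
        logit = b[i] + W[i, :i] @ s[:i]            # conditional on earlier spins
        p_up = 1.0 / (1.0 + np.exp(-logit))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

batch = np.array([sample_configuration() for _ in range(1000)])
print("mean magnetization over 1000 exact, independent samples:", batch.mean())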

171 citations


Journal ArticleDOI
TL;DR: HEPfit as discussed by the authors is an open-source tool that allows users to fit the model parameters to a set of experimental observables and to obtain predictions for observables at a given point in the parameter space of the model, allowing it to be used in any statistical framework.
Abstract: HEPfit is a flexible open-source tool which, given the Standard Model or any of its extensions, allows one to (i) fit the model parameters to a given set of experimental observables; (ii) obtain predictions for observables. HEPfit can be used either in Monte Carlo mode, to perform a Bayesian Markov Chain Monte Carlo analysis of a given model, or as a library, to obtain predictions of observables for a given point in the parameter space of the model, allowing HEPfit to be used in any statistical framework. In the present version, around a thousand observables have been implemented in the Standard Model and in several new physics scenarios. In this paper, we describe the general structure of the code as well as models and observables implemented in the current release.

148 citations


Journal ArticleDOI
TL;DR: This study presents a probabilistic transmission expansion planning model incorporating distributed series reactors, which are aimed at improving network flexibility, and utilises the Monte Carlo simulation method to account for the uncertainty of wind generation and demand.
Abstract: This study presents a probabilistic transmission expansion planning model incorporating distributed series reactors, which are aimed at improving network flexibility. Although the whole problem is a mixed-integer non-linear programming problem, this study proposes an approximation method to linearise it in the structure of the Benders decomposition (BD) algorithm. In the first stage of the BD algorithm, the optimal numbers of new transmission lines and distributed series reactors are determined. In the second stage, the developed optimal power flow problem, as a linear sub-problem, is solved for different scenarios of uncertainties and a set of probable contingencies. The Benders cuts are iteratively added to the first stage problem to decrease the optimality gap below a given threshold. The proposed model utilises the Monte Carlo simulation method to take into account the uncertainty of wind generation and demand. Several case studies on three test systems are presented to validate the efficacy of the proposed approach.

123 citations


Journal ArticleDOI
TL;DR: Cobaya is a general-purpose Bayesian analysis code aimed at models with complex internal interdependencies, and includes interfaces to a set of cosmological Boltzmann codes and likelihoods, and automatic installers for external dependencies.
Abstract: We present Cobaya, a general-purpose Bayesian analysis code aimed at models with complex internal interdependencies. It allows exploration of arbitrary posteriors using a range of Monte Carlo samplers, and also has functions for maximization and importance-reweighting of Monte Carlo samples with new priors and likelihoods. Interdependencies of the different stages of a model pipeline and their individual computational costs are automatically exploited for sampling efficiency, caching intermediate results when possible and optimally grouping parameters in blocks, which are sorted so as to minimize the cost of their variation. Cobaya is written in Python in a modular way that allows for extendability, use of calculations provided by external packages, and dynamical reparameterization without modifying its source. It exploits hybrid OpenMP/MPI parallelization, and has sub-millisecond overhead per posterior evaluation. Though Cobaya is a general-purpose statistical framework, it includes interfaces to a set of cosmological Boltzmann codes and likelihoods (the latter being agnostic with respect to the choice of the former), and automatic installers for external dependencies.

121 citations


Journal ArticleDOI
TL;DR: In this article, a cutting surface algorithm for scanning OPE coefficients makes it possible to find islands in high-dimensional spaces, which enables bootstrap studies of much larger systems of correlation functions than was previously practical.
Abstract: We develop new tools for isolating CFTs using the numerical bootstrap. A “cutting surface” algorithm for scanning OPE coefficients makes it possible to find islands in high-dimensional spaces. Together with recent progress in large-scale semidefinite programming, this enables bootstrap studies of much larger systems of correlation functions than was previously practical. We apply these methods to correlation functions of charge-0, 1, and 2 scalars in the 3d O(2) model, computing new precise values for scaling dimensions and OPE coefficients in this theory. Our new determinations of scaling dimensions are consistent with and improve upon existing Monte Carlo simulations, sharpening the existing decades-old 8σ discrepancy between theory and experiment.

117 citations


Book ChapterDOI
TL;DR: A practical guide on how to obtain and use optimized weights that can be used to calculate other properties and distributions of the conformational ensemble of a biomolecular system, with a discussion of the method's shortcomings.
Abstract: We describe a Bayesian/Maximum entropy (BME) procedure and software to construct a conformational ensemble of a biomolecular system by integrating molecular simulations and experimental data. First, an initial conformational ensemble is constructed using, for example, Molecular Dynamics or Monte Carlo simulations. Due to potential inaccuracies in the model and finite sampling effects, properties predicted from simulations may not agree with experimental data. In BME we use the experimental data to refine the simulation so that the new conformational ensemble has the following properties: (1) the calculated averages are close to the experimental values taking uncertainty into account and (2) it maximizes the relative Shannon entropy with respect to the original simulation ensemble. The output of this procedure is a set of optimized weights that can be used to calculate other properties and distributions of these. Here, we provide a practical guide on how to obtain and use such weights, how to choose adjustable parameters and discuss shortcomings of the method.
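
A simplified, single-observable sketch of the reweighting idea (not the BME software, and with experimental uncertainty and the entropy-regularization trade-off omitted): weights of the form w_i proportional to w0_i * exp(-lambda * F_i) are tuned so the reweighted ensemble average matches the experimental value, and the same weights are then reused for any other property of the ensemble. All numbers are synthetic.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
n_frames = 5000
F_calc = rng.normal(2.0, 0.8, n_frames)                  # observable per frame
G_calc = 0.5 * F_calc + rng.normal(0.0, 0.3, n_frames)   # another property
F_exp = 2.4                                              # target value (synthetic)
w0 = np.full(n_frames, 1.0 / n_frames)                   # initial (uniform) weights

def weights(lam):
    w = w0 * np.exp(-lam * F_calc)
    return w / w.sum()

# Solve for lambda so that the reweighted average of F matches the target.
lam = brentq(lambda lam: weights(lam) @ F_calc - F_exp, -50.0, 50.0)
w = weights(lam)

print("reweighted <F>:", w @ F_calc)   # matches F_exp by construction
print("reweighted <G>:", w @ G_calc)   # other properties from the same weights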

117 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyzed the sensitivity and robustness of two Artificial Intelligence (AI) techniques, namely Gaussian Process Regression (GPR) with five different kernels (Matern32, Matern52, Exponential, Squared Exponential and Rational Quadratic) and an Artificial Neural Network (ANN) using a Monte Carlo simulation for prediction of high-performance concrete (HPC) compressive strength.
Abstract: This study aims to analyze the sensitivity and robustness of two Artificial Intelligence (AI) techniques, namely Gaussian Process Regression (GPR) with five different kernels (Matern32, Matern52, Exponential, Squared Exponential, and Rational Quadratic) and an Artificial Neural Network (ANN) using a Monte Carlo simulation for prediction of High-Performance Concrete (HPC) compressive strength. To this purpose, 1030 samples were collected, including eight input parameters (contents of cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregates, fine aggregates, and concrete age) and an output parameter (the compressive strength) to generate the training and testing datasets. The proposed AI models were validated using several standard criteria, namely coefficient of determination (R2), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). To analyze the sensitivity and robustness of the models, Monte Carlo simulations were performed with 500 runs. The results showed that the GPR using the Matern32 kernel function outperforms others. In addition, the sensitivity analysis showed that the content of cement and the testing age of the HPC were the most sensitive and important factors for the prediction of HPC compressive strength. In short, this study might help in selecting suitable AI models and appropriate input parameters for accurate and quick estimation of the HPC compressive strength.
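
A hedged sketch of the Monte Carlo robustness check described above, on synthetic data rather than the 1030-sample HPC dataset, and with 50 instead of 500 runs: the GPR (here scikit-learn's Matern kernel with nu=1.5, corresponding to Matern32) is refit on repeated random train/test splits and the spread of R2 is inspected.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(300, 8))                       # 8 synthetic inputs
y = X @ rng.uniform(5.0, 20.0, 8) + rng.normal(0.0, 2.0, 300)  # synthetic strength

scores = []
for run in range(50):                                          # 500 runs in the paper
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=run)
    gpr = GaussianProcessRegressor(kernel=Matern(nu=1.5),      # "Matern32" kernel
                                   alpha=1e-2, normalize_y=True)
    gpr.fit(X_tr, y_tr)
    scores.append(r2_score(y_te, gpr.predict(X_te)))

print("R2 over Monte Carlo runs: mean =", np.mean(scores), " std =", np.std(scores))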

105 citations


Journal ArticleDOI
TL;DR: The results indicate that SALK can locally approximate the limit-state surfaces around the final SRBDO solution and efficiently reduce the computational cost on the refinement of the region far from the final SRBDO solution.

Journal ArticleDOI
TL;DR: In this paper, a machine learning algorithm, based on deep artificial neural networks, was proposed to predict the underlying input-parameters-to-observable map from a few training samples (computed realizations of this map).

Proceedings Article
30 Apr 2020
TL;DR: This work proves the consistency of the method under general conditions, provides a detailed error analysis, and demonstrates strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.
Abstract: An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.
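
A minimal tabular illustration of the estimation target, not of the GenDICE algorithm itself: states in the logged data follow a behavior distribution rather than the stationary one, so a naive average is biased, while a plug-in estimate of the transition operator recovers the stationary expectation. GenDICE instead learns a distribution-correction ratio, which also applies when the state space is too large for tables. The chain and reward below are synthetic.

import numpy as np

rng = np.random.default_rng(11)
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])           # true (unknown) transition matrix
reward = np.array([0.0, 1.0, 5.0])

# Fixed off-line dataset: states logged under a behavior distribution that is
# not the stationary one; successor states follow the true dynamics.
behavior = np.array([0.6, 0.3, 0.1])
s_logged = rng.choice(3, size=20_000, p=behavior)
s_next = np.array([rng.choice(3, p=P[s]) for s in s_logged])

naive = reward[s_logged].mean()           # biased: reflects the behavior distribution

# Plug-in correction: empirical transition matrix -> stationary distribution.
counts = np.zeros((3, 3))
np.add.at(counts, (s_logged, s_next), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)
d_hat = np.full(3, 1.0 / 3.0)
for _ in range(1000):                     # power iteration for d = d P_hat
    d_hat = d_hat @ P_hat

d_true = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    d_true = d_true @ P

print("naive (behavior-weighted) estimate:", naive)
print("plug-in stationary estimate       :", d_hat @ reward)
print("true stationary value             :", d_true @ reward)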

Posted Content
TL;DR: It is demonstrated to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.
Abstract: In recent years, tremendous progress has been made on numerical algorithms for solving partial differential equations (PDEs) in a very high dimension, using ideas from either nonlinear (multilevel) Monte Carlo or deep learning. They are potentially free of the curse of dimensionality for many different applications and have been proven to be so in the case of some nonlinear Monte Carlo methods for nonlinear parabolic PDEs. In this paper, we review these numerical and theoretical advances. In addition to algorithms based on stochastic reformulations of the original problem, such as the multilevel Picard iteration and the Deep BSDE method, we also discuss algorithms based on the more traditional Ritz, Galerkin, and least square formulations. We hope to demonstrate to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.
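
As a reminder of the classical starting point of this line of work (this is the textbook Feynman-Kac Monte Carlo method for a linear parabolic PDE, not the deep-learning or multilevel Picard algorithms reviewed in the paper), the sketch below solves u_t = (1/2) Laplacian(u) with u(0, x) = exp(-|x|^2) in dimension 20, far beyond what a tensor-product grid could handle, and compares against the closed-form solution available for this particular initial condition.

import numpy as np

rng = np.random.default_rng(5)
d, t, n_paths = 20, 0.5, 400_000
x = np.full(d, 0.1)                       # evaluation point

# Feynman-Kac: u(t, x) = E[ g(x + W_t) ] with W_t ~ N(0, t I) and g(y) = exp(-|y|^2).
W = rng.normal(0.0, np.sqrt(t), size=(n_paths, d))
u_mc = np.mean(np.exp(-np.sum((x + W) ** 2, axis=1)))

# Closed form for this particular Gaussian initial condition, for comparison.
u_exact = (1.0 + 2.0 * t) ** (-d / 2) * np.exp(-np.sum(x**2) / (1.0 + 2.0 * t))
print("Monte Carlo estimate:", u_mc)
print("exact value         :", u_exact)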

Journal ArticleDOI
TL;DR: In this article, the linear attenuation coefficients (μ) of various marble concretes have been studied in gamma energies 662, 1173, and 1332 keV by using the GEANT4 Monte Carlo simulation.
Abstract: The linear attenuation coefficients (μ) of various marble concretes have been studied at gamma energies of 662, 1173, and 1332 keV by using the GEANT4 Monte Carlo simulation. The simulated results were then compared with experimental data, and a good agreement was observed for the concretes involved. The obtained results reveal that the concrete NM20 has the highest μ and the lowest transmission factor among the selected concretes.
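
A toy sketch relating the linear attenuation coefficient μ to the transmission factor through the Beer-Lambert law (GEANT4 performs full photon transport; here only narrow-beam, uncollided transmission is simulated, and the μ value is an illustrative placeholder rather than a result from the paper).

import numpy as np

rng = np.random.default_rng(6)
mu = 0.15            # linear attenuation coefficient in 1/cm (placeholder value)
thickness = 10.0     # slab thickness in cm
n_photons = 1_000_000

# The free path before the first interaction is exponential with rate mu.
free_path = rng.exponential(1.0 / mu, n_photons)
T_mc = np.mean(free_path > thickness)     # fraction transmitted without collision
T_beer_lambert = np.exp(-mu * thickness)

print("MC transmission factor     :", T_mc)
print("exp(-mu * x), Beer-Lambert :", T_beer_lambert)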

Journal ArticleDOI
15 Oct 2020 - Energy
TL;DR: The optimal sizing of the micro-grid's resources in two different modes, in the presence of electric vehicles, is investigated using the multi-objective particle swarm optimization algorithm, and it is demonstrated that the design of both systems is feasible.

Journal ArticleDOI
TL;DR: The results demonstrate that the GPDEM shows promise as an approach that can reliably analyze strongly nonlinear structures, such as earth-rockfill dams and other geotechnical engineering structures.

Journal ArticleDOI
TL;DR: Huggins et al. as discussed by the authors proposed an extension to the variational quantum eigensolver that approximates the ground state of a system by solving a generalized eigenvalue problem in a subspace spanned by a collection of parametrized quantum states.
Abstract: Variational algorithms for strongly correlated chemical and materials systems are one of the most promising applications of near-term quantum computers. We present an extension to the variational quantum eigensolver that approximates the ground state of a system by solving a generalized eigenvalue problem in a subspace spanned by a collection of parametrized quantum states. This allows for the systematic improvement of a logical wavefunction ansatz without a significant increase in circuit complexity. To minimize the circuit complexity of this approach, we propose a strategy for efficiently measuring the Hamiltonian and overlap matrix elements between states parametrized by circuits that commute with the total particle number operator. This strategy doubles the size of the state preparation circuits but not their depth, while adding a small number of additional two-qubit gates relative to the standard variational quantum eigensolver. We also propose a classical Monte Carlo scheme to estimate the uncertainty in the ground state energy caused by a finite number of measurements of the matrix elements. We explain how this Monte Carlo procedure can be extended to adaptively schedule the required measurements, reducing the number of circuit executions necessary for a given accuracy. We apply these ideas to two model strongly correlated systems, a square configuration of H4 and the π-system of hexatriene (C6H8).
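
A hedged sketch of the basic Monte Carlo error-propagation idea, not the authors' adaptive measurement-scheduling scheme: noisy Hamiltonian and overlap matrix elements are resampled many times, the generalized eigenvalue problem is solved for each draw, and the spread of the lowest eigenvalue estimates the statistical uncertainty on the ground-state energy. The matrices and noise level below are synthetic.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)

# Synthetic subspace matrices for a 3-state expansion (illustrative numbers).
H = np.array([[-1.00, 0.20, 0.05],
              [ 0.20, -0.40, 0.10],
              [ 0.05,  0.10, 0.30]])
S = np.array([[1.00, 0.30, 0.10],
              [0.30, 1.00, 0.25],
              [0.10, 0.25, 1.00]])
sigma = 0.01                      # assumed statistical error per matrix element

def resample(M, raw_noise):
    noise = sigma * raw_noise
    return M + (noise + noise.T) / 2.0         # keep the matrix symmetric

energies = []
for _ in range(5000):
    H_s = resample(H, rng.standard_normal(H.shape))
    S_s = resample(S, rng.standard_normal(S.shape))
    evals = eigh(H_s, S_s, eigvals_only=True)  # generalized eigenvalue problem
    energies.append(evals[0])

print("E0 estimate:", np.mean(energies), "+/-", np.std(energies))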

Journal ArticleDOI
TL;DR: In this paper, the critical exponents ν, η and ω of O(N) models for various values of N were computed by implementing the derivative expansion of the nonperturbative renormalization group up to next-to-next-to-leading order [usually denoted O(∂^{4})].
Abstract: We compute the critical exponents ν, η and ω of O(N) models for various values of N by implementing the derivative expansion of the nonperturbative renormalization group up to next-to-next-to-leading order [usually denoted O(∂^{4})]. We analyze the behavior of this approximation scheme at successive orders and observe an apparent convergence with a small parameter, typically between 1/9 and 1/4, compatible with previous studies in the Ising case. This allows us to give well-grounded error bars. We obtain a determination of critical exponents with a precision which is similar to or better than that obtained by most field-theoretical techniques. We also reach a better precision than Monte Carlo simulations in some physically relevant situations. In the O(2) case, where there is a long-standing controversy between Monte Carlo estimates and experiments for the specific heat exponent α, our results are compatible with those of Monte Carlo but clearly exclude experimental values.

Journal ArticleDOI
TL;DR: In this paper, a novel integrator based on normalizing flows is proposed to improve the unweighting efficiency of Monte Carlo event generators for collider physics simulations; in contrast to machine learning approaches based on surrogate models, the method generates the correct result even if the underlying neural networks are not optimally trained.
Abstract: We present a novel integrator based on normalizing flows which can be used to improve the unweighting efficiency of Monte Carlo event generators for collider physics simulations. In contrast to machine learning approaches based on surrogate models, our method generates the correct result even if the underlying neural networks are not optimally trained. We exemplify the new strategy using the example of Drell-Yan type processes at the LHC, both at leading and partially at next-to-leading order QCD.
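
A generic unweighting sketch, independent of the flow-based integrator itself: weighted events with w = f(x)/q(x) are turned into unit-weight events by accepting each with probability w/w_max, and the unweighting efficiency is the mean weight divided by the maximum weight. The target and proposal densities below are illustrative stand-ins; a better-trained proposal (such as a normalizing flow) raises the efficiency.

import numpy as np

rng = np.random.default_rng(8)

f = lambda x: np.exp(-0.5 * (x - 1.0) ** 2)     # unnormalized target density
x = rng.normal(0.0, 2.0, 200_000)               # proposal draws, q = N(0, 2^2)
q = np.exp(-0.5 * (x / 2.0) ** 2)               # unnormalized proposal density
w = f(x) / q                                    # event weights

accept = rng.random(x.size) < w / w.max()       # accept with probability w / w_max
unweighted = x[accept]

print("unweighting efficiency <w>/w_max:", w.mean() / w.max())
print("accepted events                 :", unweighted.size)
print("mean of unweighted sample       :", unweighted.mean())   # close to 1.0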

Journal ArticleDOI
TL;DR: In this article, the authors investigated the accelerating up transient vibrations of a rotor system under both the random and uncertain-but-bounded parameters, and used the Polynomial Chaos Expansion (PCE) coupled with the Chebyshev Surrogate Method (CSM) to analyze the propagations of the two categorizes of uncertainties.

Journal ArticleDOI
TL;DR: In this paper, statistical machine learning theories are proposed to quickly solve the optimal planning for capacitors; the approach is verified by comparison with the scenario reduction algorithm and the Monte Carlo method in a 33-bus distribution system.
Abstract: Distributed generation and reactive power resource allocation will affect the economy and security of distribution networks. Deterministic scenario planning cannot solve the problem of network uncertainties, which are introduced by intermittent renewable generators and a variable demand for electricity. However, stochastic programming becomes a problem of great complexity when there is a large number of scenarios to be analyzed and when the computational burden has an adverse effect on the programming solution. In this paper, statistical machine learning theories are proposed to quickly solve the optimal planning for capacitors. Various technologies are used: Markov chains and copula functions are formulated to capture the variability and correlation of weather; consumption behavior probability is involved in the weather-sensitive load model; nearest neighbor theory and nonnegative matrix decomposition are combined to reduce the dimensions and scenario scale of stochastic variables; the stochastic response surface is used to calculate the probabilistic power flow; and probabilistic inequality theory is introduced to directly estimate the objective and constraint functions of the stochastic programming model. The effectiveness and efficiency of the proposed method are verified by comparing the method with the scenario reduction algorithm and the Monte Carlo method in a 33-bus distribution system.

Journal ArticleDOI
TL;DR: It is demonstrated and illustrated that the Monte Carlo technique leads to overly precise conclusions on the values of estimated parameters, and to incorrect hypothesis tests, thus pointing out a fundamental flaw.
Abstract: The Monte Carlo technique is widely used and recommended for including uncertainties in LCA. Typically, 1000 or 10,000 runs are done, but a clear argument for that number is not available, and with the growing size of LCA databases, an excessively high number of runs may be time-consuming. We therefore investigate if a large number of runs is useful, or if it might be unnecessary or even harmful. We review the standard theory of probability distributions for describing stochastic variables, including the combination of different stochastic variables into a calculation. We also review the standard theory of inferential statistics for estimating a probability distribution, given a sample of values. For estimating the distribution of a function of probability distributions, two major techniques are available: analytical, applying probability theory, and numerical, using Monte Carlo simulation. Because the analytical technique is often unavailable, the obvious way out is Monte Carlo. However, we demonstrate and illustrate that it leads to overly precise conclusions on the values of estimated parameters, and to incorrect hypothesis tests. We demonstrate the effect for two simple cases: one system in a stand-alone analysis and a comparative analysis of two alternative systems. Both cases illustrate that statistical hypotheses that should not be rejected in fact are rejected in a highly convincing way, thus pointing out a fundamental flaw. Apart from the obvious recommendation to use larger samples for estimating input distributions, we suggest restricting the number of Monte Carlo runs to a number not greater than the sample sizes used for the input parameters. As a final note, when the input parameters are not estimated using samples, but through a procedure, such as the popular pedigree approach, the Monte Carlo approach should not be used at all.
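
A small numerical illustration of the warning above, with synthetic numbers: the apparent precision of a Monte Carlo comparison is set by the number of runs, not by the information content of the underlying data, so a two-sample test between two alternatives becomes "significant" simply by running more iterations.

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Two alternatives whose impact scores differ by far less than the spread of
# their (uncertain) input parameters.
def impact_A(n): return rng.normal(10.0, 2.0, n)
def impact_B(n): return rng.normal(10.1, 2.0, n)

for n_runs in (100, 1_000, 10_000, 1_000_000):
    p = stats.ttest_ind(impact_A(n_runs), impact_B(n_runs)).pvalue
    print(f"runs = {n_runs:>9}: two-sample t-test p-value = {p:.3g}")
# With enough runs the tiny 0.1 difference is always "detected", even though the
# input distributions were themselves estimated from only a handful of
# measurements; hence the recommendation to cap the number of runs.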

Journal ArticleDOI
TL;DR: For a long time it has been well-known that high-dimensional linear parabolic partial differential equations (PDEs) can be approximated by Monte Carlo methods with a computational effort which grows...
Abstract: For a long time it has been well-known that high-dimensional linear parabolic partial differential equations (PDEs) can be approximated by Monte Carlo methods with a computational effort which grows...

Journal ArticleDOI
TL;DR: Monte Carlo reference-based consensus clustering (M3C) is developed, which simulates null distributions of stability scores for a range of K values thus enabling a comparison with real data to remove bias and statistically test for the presence of structure.
Abstract: Genome-wide data is used to stratify patients into classes for precision medicine using clustering algorithms. A common problem in this area is selection of the number of clusters (K). The Monti consensus clustering algorithm is a widely used method which uses stability selection to estimate K. However, the method has bias towards higher values of K and yields high numbers of false positives. As a solution, we developed Monte Carlo reference-based consensus clustering (M3C), which is based on this algorithm. M3C simulates null distributions of stability scores for a range of K values thus enabling a comparison with real data to remove bias and statistically test for the presence of structure. M3C corrects the inherent bias of consensus clustering as demonstrated on simulated and real expression data from The Cancer Genome Atlas (TCGA). For testing M3C, we developed clusterlab, a new method for simulating multivariate Gaussian clusters.

Journal ArticleDOI
TL;DR: A thorough review of MC methods for the estimation of static parameters in signal processing applications is performed, describing many of the most relevant MCMC and IS algorithms, and their combined use.
Abstract: Statistical signal processing applications usually require the estimation of some parameters of interest given a set of observed data. These estimates are typically obtained either by solving a multi-variate optimization problem, as in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by performing a multi-dimensional integration, as in the minimum mean squared error (MMSE) estimators. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and the Monte Carlo (MC) methodology is one feasible approach. MC methods proceed by drawing random samples, either from the desired distribution or from a simpler one, and using them to compute consistent estimators. The most important families of MC algorithms are the Markov chain MC (MCMC) and importance sampling (IS). On the one hand, MCMC methods draw samples from a proposal density, then building an ergodic Markov chain whose stationary distribution is the desired distribution by accepting or rejecting those candidate samples as the new state of the chain. On the other hand, IS techniques draw samples from a simple proposal density and then assign them suitable weights that measure their quality in some appropriate way. In this paper, we perform a thorough review of MC methods for the estimation of static parameters in signal processing applications. A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm, as well as three sections describing many of the most relevant MCMC and IS algorithms, and their combined use. Finally, five numerical examples (including the estimation of the parameters of a chaotic system, a localization problem in wireless sensor networks and a spectral analysis application) are provided in order to demonstrate the performance of the described approaches.
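
A compact sketch of the two MC families reviewed here, applied to the same toy task of estimating the posterior mean of a one-dimensional parameter from an unnormalized target density (the target, proposals, and tuning constants are illustrative).

import numpy as np

rng = np.random.default_rng(10)
log_p = lambda th: -0.5 * ((th - 3.0) / 0.7) ** 2   # unnormalized log target

# Markov chain Monte Carlo: random-walk Metropolis-Hastings.
theta, chain = 0.0, []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 1.0)             # symmetric proposal
    if np.log(rng.random()) < log_p(prop) - log_p(theta):
        theta = prop                                # accept the candidate
    chain.append(theta)
mcmc_mean = np.mean(chain[5_000:])                  # discard burn-in

# Importance sampling: broad proposal plus self-normalized weights.
samples = rng.normal(0.0, 5.0, 50_000)              # proposal N(0, 5^2)
log_q = -0.5 * (samples / 5.0) ** 2                 # unnormalized log proposal
w = np.exp(log_p(samples) - log_q)
is_mean = np.sum(w * samples) / np.sum(w)

print("MCMC estimate of E[theta]:", mcmc_mean)      # both should be close to 3.0
print("IS estimate of E[theta]  :", is_mean)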

Journal ArticleDOI
TL;DR: By analyzing a highly nonlinear numerical case, a non-linear oscillator system, a simplified wing box structural model, an aero-engine turbine disk and a planar ten-bar structure, the effectiveness and the accuracy of the proposed AK-ARBIS method for estimating the small failure probability are verified.

Proceedings ArticleDOI
TL;DR: Using the statistical properties from Monte-Carlo Markov chains of images, it is shown how this code can place statistical limits on image features such as unseen binary companions.
Abstract: We present a flexible code created for imaging from the bispectrum and visibility-squared. By using a simulated annealing method, we limit the probability of converging to local chi-squared minima as can occur when traditional imaging methods are used on data sets with limited phase information. We present the results of our code used on a simulated data set utilizing a number of regularization schemes including maximum entropy. Using the statistical properties from Monte-Carlo Markov chains of images, we show how this code can place statistical limits on image features such as unseen binary companions.

Journal ArticleDOI
Abstract: We numerically study the single-flavor Schwinger model with a topological $\theta$-term, which is practically inaccessible by standard lattice Monte Carlo simulations due to the sign problem. By using numerical methods based on tensor networks, especially the one-dimensional matrix product states, we explore the non-trivial $\theta$-dependence of several lattice and continuum quantities in the Hamiltonian formulation. In particular, we compute the ground-state energy, the electric field, the chiral fermion condensate, and the topological vacuum susceptibility for positive, zero, and even negative fermion mass. In the chiral limit, we demonstrate that the continuum model becomes independent of the vacuum angle $\theta$, thus respecting CP invariance, while lattice artifacts still depend on $\theta$. We also confirm that negative masses can be mapped to positive masses by shifting $\theta\rightarrow \theta +\pi$ due to the axial anomaly in the continuum, while lattice artifacts non-trivially distort this mapping. This mass regime is particularly interesting for the (3+1)-dimensional QCD analog of the Schwinger model, the sign problem of which requires the development and testing of new numerical techniques beyond the conventional Monte Carlo approach.

Journal ArticleDOI
TL;DR: In this article, a two-parameter Fisher-Snedecor distribution was proposed to model atmospheric turbulence-induced fading in free space optical communication systems; the model is based on a doubly stochastic theory of turbulence-induced fading.
Abstract: In this article we propose the use of the so-called Fisher-Snedecor $\mathcal{F}$-distribution to model atmospheric turbulence-induced fading in free space optical communication systems. The proposed model is a two-parameter distribution, defined as the ratio of two independent gamma random variables. In this context, the proposed model is based on a doubly stochastic theory of turbulence-induced fading, assuming that small-scale irradiance variations of the propagating wave, modeled by a gamma distribution, are subject to large-scale irradiance fluctuations, modeled by an inverse gamma distribution. It is shown that the $\mathcal{F}$-distribution yields at least as good, or even a better fit to experimental and computer simulation data as compared to the well-known gamma-gamma distribution. Also, important statistical measures such as the cumulative distribution function (CDF) and moment generating function (MGF) are mathematically simpler than those of the gamma-gamma distribution. Motivated by these facts, the performance of single-input multiple-output (SIMO) and multiple-input multiple-output (MIMO) systems operating in the presence of $\mathcal{F}$ turbulence is assessed. The proposed analysis is substantiated by numerically evaluated results and Monte Carlo simulations.
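
A hedged Monte Carlo check of the ratio-of-two-gammas construction described above (the shape and scale parameters below are illustrative, not fitted to experimental turbulence data): the simulated mean irradiance is compared with the analytic value a*th1 / ((b-1)*th2), and an example outage-style probability is estimated by simple counting.

import numpy as np

rng = np.random.default_rng(12)
a, th1 = 4.0, 0.25     # shape and scale of the numerator gamma variate
b, th2 = 5.0, 0.25     # shape and scale of the denominator gamma variate
n = 2_000_000

I = rng.gamma(a, th1, n) / rng.gamma(b, th2, n)     # ratio of independent gammas

mean_mc = I.mean()
mean_analytic = (a * th1) / ((b - 1.0) * th2)       # E[X] * E[1/Y], valid for b > 1
outage_mc = np.mean(I < 0.5)                        # e.g. P(irradiance < threshold)

print("MC mean irradiance:", mean_mc)
print("analytic mean     :", mean_analytic)
print("MC P(I < 0.5)     :", outage_mc)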