Showing papers on "Monte Carlo method published in 1996"


Journal ArticleDOI
TL;DR: A new algorithm based on a Monte Carlo method is presented that can be applied to a broad class of nonlinear, non-Gaussian, higher-dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low.
Abstract: A new algorithm for the prediction, filtering, and smoothing of non-Gaussian nonlinear state space models is presented. The algorithm is based on a Monte Carlo method in which the successive prediction, filtering, and (subsequently) smoothing conditional probability density functions are approximated by many of their realizations. The particular contribution of this algorithm is that it can be applied to a broad class of nonlinear non-Gaussian higher dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low. Several numerical examples are given.
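The core idea — representing each successive conditional density by many realizations ("particles") that are propagated, weighted by the observation density, and resampled — can be sketched as follows. This is a generic Monte Carlo filtering sketch rather than the paper's exact algorithm; the prior sampler, transition sampler, and observation likelihood are hypothetical user-supplied callables.

```python
import numpy as np

def monte_carlo_filter(observations, n_particles, sample_prior,
                       sample_transition, obs_likelihood, rng):
    """Generic Monte Carlo (particle) filter sketch.

    The filtering density at each step is represented by n_particles
    realizations: particles are propagated through the system model,
    weighted by the observation density, and resampled.
    """
    particles = sample_prior(n_particles)
    filtered_means = []
    for y_t in observations:
        particles = sample_transition(particles)       # prediction step
        weights = obs_likelihood(y_t, particles)        # filtering step
        weights = weights / weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                      # resample the realizations
        filtered_means.append(particles.mean())
    return np.array(filtered_means)
```

Smoothing can be handled in the same spirit by retaining and resampling whole particle trajectories rather than only the current states.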

2,406 citations


Journal ArticleDOI
TL;DR: In this article, the FIGARCH (Fractionally Integrated Generalized AutoRegressive Conditionally Heteroskedastic) process is introduced and the conditional variance of the process implies a slow hyperbolic rate of decay for the influence of lagged squared innovations.

2,274 citations


Journal ArticleDOI
TL;DR: In this article, an efficient Monte Carlo algorithm is proposed for simulating a "hardly-relaxing" system, in which many replicas with different temperatures are simulated simultaneously and a virtual process exchanging configurations of these replicas is introduced.
Abstract: We propose an efficient Monte Carlo algorithm for simulating a "hardly-relaxing" system, in which many replicas with different temperatures are simultaneously simulated and a virtual process exchanging configurations of these replicas is introduced. This exchange process is expected to let the system at low temperatures escape from a local minimum. By using this algorithm the three-dimensional ±J Ising spin glass model is studied. The ergodicity time in this method is found to be much smaller than that of the multi-canonical method. In particular, the time correlation function almost follows an exponential decay whose relaxation time is comparable to the ergodicity time at low temperatures. This suggests that the system relaxes very rapidly through the exchange process even in the low-temperature phase.
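The exchange process has a simple closed form: configurations at neighbouring temperatures are swapped with probability min(1, exp(Δβ ΔE)). A minimal sketch of one exchange round is shown below; the ordinary single-temperature Metropolis sweeps that run between exchange attempts are assumed to be supplied elsewhere, and the energies/inverse temperatures here are illustrative inputs.

```python
import numpy as np

def replica_exchange_step(energies, betas, rng):
    """One round of the virtual exchange process between neighbouring replicas.

    energies : current energy of the configuration held by each replica
    betas    : inverse temperatures, one per replica slot (assumed sorted)
    Swaps neighbouring configurations with probability min(1, exp(dbeta * dE)).
    Returns the permutation mapping temperature slots to configurations.
    """
    perm = np.arange(len(betas))
    for k in range(len(betas) - 1):
        d_beta = betas[k] - betas[k + 1]
        d_energy = energies[perm[k]] - energies[perm[k + 1]]
        if rng.random() < min(1.0, np.exp(d_beta * d_energy)):
            perm[k], perm[k + 1] = perm[k + 1], perm[k]   # accept the swap
    return perm
```

In practice one alternates several Metropolis sweeps of every replica at its own temperature with one such exchange round, so that low-temperature replicas can escape local minima via the high-temperature ones.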

2,197 citations


Journal ArticleDOI
TL;DR: All of the methods in this work can fail to detect the sorts of convergence failure that they were designed to identify, so a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence is recommended.
Abstract: A critical issue for users of Markov chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but to date has yielded relatively little of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of 13 convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all of the methods can fail to detect the sorts of convergence failure that they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including ap...
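As a concrete illustration of the kind of output-based diagnostic reviewed here, below is a minimal sketch of the Gelman-Rubin potential scale reduction factor, one widely used diagnostic of this general type. It assumes several independently started chains for a single scalar quantity; it is an illustration of the idea, not a summary of the paper's comparison.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one scalar parameter.

    chains : array of shape (m_chains, n_samples), assumed to come from
             m independent MCMC runs started at dispersed initial values.
    Values close to 1 are consistent with (but do not prove) convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                    # mean within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)
```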

1,860 citations


Journal ArticleDOI
TL;DR: In this article, an entropy criterion is proposed to estimate the number of clusters arising from a mixture model, which is derived from a relation linking the likelihood and the classification likelihood of a mixture.
Abstract: In this paper, we consider an entropy criterion to estimate the number of clusters arising from a mixture model. This criterion is derived from a relation linking the likelihood and the classification likelihood of a mixture. Its performance is investigated through Monte Carlo experiments, and it shows favorable results compared to other classical criteria.
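A sketch of an entropy-based criterion of the kind described is given below, assuming posterior membership probabilities t_ik from a fitted K-component mixture; the precise normalisation used in the paper should be checked against the original, so treat this form as an assumption.

```python
import numpy as np

def entropy_criterion(log_lik_K, log_lik_1, posterior_probs):
    """Entropy-based criterion for choosing the number of mixture components.

    log_lik_K       : maximised log-likelihood of the K-component mixture
    log_lik_1       : maximised log-likelihood of the single-component model
    posterior_probs : (n_obs, K) posterior membership probabilities t_ik
    Smaller values indicate better-separated clusters; compare across K >= 2.
    """
    t = np.clip(posterior_probs, 1e-12, 1.0)
    entropy = -np.sum(t * np.log(t))             # classification entropy E(K)
    return entropy / (log_lik_K - log_lik_1)     # links likelihood and classification likelihood
```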

1,689 citations


Journal ArticleDOI
TL;DR: In this paper, a new method of global sensitivity analysis of nonlinear models is proposed, based on a measure of importance that quantifies the fractional contribution of each input parameter to the variance of the model prediction.
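The "measure of importance" — the fractional contribution of an input to the output variance — corresponds to a first-order variance-based sensitivity index. The sketch below uses a common two-matrix Monte Carlo estimator to illustrate the quantity being estimated; it is not necessarily the specific estimator used in the paper, and the model and input sampler are hypothetical callables.

```python
import numpy as np

def first_order_indices(model, sampler, n, rng):
    """Monte Carlo estimate of first-order variance-based sensitivity indices.

    model   : vectorised function mapping an (n, d) input matrix to n outputs
    sampler : draws an (n, d) matrix of independent inputs
    Uses the two-matrix trick: column i of A is replaced by column i of B.
    """
    A, B = sampler(n, rng), sampler(n, rng)
    f_A, f_B = model(A), model(B)
    var_y = np.var(np.concatenate([f_A, f_B]), ddof=1)
    indices = []
    for i in range(A.shape[1]):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]
        f_ABi = model(AB_i)
        # fractional contribution of input i to the output variance
        indices.append(np.mean(f_B * (f_ABi - f_A)) / var_y)
    return np.array(indices)
```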

1,662 citations


Journal ArticleDOI
TL;DR: A method (HOLE) that allows the analysis of the dimensions of the pore running through a structural model of an ion channel is presented and can be used to predict the conductance of channels using a simple empirically corrected ohmic model.

1,390 citations


Journal ArticleDOI
TL;DR: In this paper, the turbulent exchanges of CO2 and water vapour between an aggrading deciduous forest in the north-eastern United States (Harvard Forest) and the atmosphere were measured from 1990 to 1994 using the eddy covariance technique.
Abstract: The turbulent exchanges of CO2 and water vapour between an aggrading deciduous forest in the north-eastern United States (Harvard Forest) and the atmosphere were measured from 1990 to 1994 using the eddy covariance technique. We present a detailed description of the methods used and a rigorous evaluation of the precision and accuracy of these measurements. We partition the sources of error into three categories: (1) uniform systematic errors are constant and independent of measurement conditions, (2) selective systematic errors result when the accuracy of the exchange measurement varies as a function of the physical environment, and (3) sampling uncertainty results when summing an incomplete data set to calculate long-term exchange. Analysis of the surface energy budget indicates a uniform systematic error in the turbulent exchange measurements of -20 to 0%. A comparison of nocturnal eddy flux with chamber measurements indicates a selective systematic underestimation during calm (friction velocity < 0.17 m s⁻¹) nocturnal periods. We describe an approach to correct for this error. The integrated carbon sequestration in 1994 was 2.1 t C ha⁻¹ y⁻¹ with a 90% confidence interval due to sampling uncertainty of ±0.3 t C ha⁻¹ y⁻¹ determined by Monte Carlo simulation. Sampling uncertainty may be reduced by estimating the flux as a function of the physical environment during periods when direct observations are unavailable, and by minimizing the length of intervals without flux data. These analyses lead us to place an overall uncertainty on the annual carbon sequestration in 1994 of -0.3 to +0.8 t C ha⁻¹ y⁻¹.

1,390 citations


Journal ArticleDOI
TL;DR: The motivation for this work comes from a desire to preserve the dependence structure of the time series while bootstrapping (resampling it with replacement), and the method is data driven and is preferred where the investigator is uncomfortable with prior assumptions.
Abstract: A nonparametric method for resampling scalar or vector-valued time series is introduced. Multivariate nearest neighbor probability density estimation provides the basis for the resampling scheme developed. The motivation for this work comes from a desire to preserve the dependence structure of the time series while bootstrapping (resampling it with replacement). The method is data driven and is preferred where the investigator is uncomfortable with prior assumptions as to the form (e.g., linear or nonlinear) of dependence and the form of the probability density function (e.g., Gaussian). Such prior assumptions are often made in an ad hoc manner for analyzing hydrologic data. Connections of the nearest neighbor bootstrap to Markov processes as well as its utility in a general Monte Carlo setting are discussed. Applications to resampling monthly streamflow and some synthetic data are presented. The method is shown to be effective with time series generated by linear and nonlinear autoregressive models. The utility of the method for resampling monthly streamflow sequences with asymmetric and bimodal marginal probability densities is also demonstrated.
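A minimal sketch of a nearest-neighbour bootstrap of a scalar series in the spirit described: at each step, the successor of one of the k nearest historical neighbours of the current value is resampled, which preserves (Markovian) dependence without assuming a parametric model. The 1/j weights for the j-th closest neighbour are one common choice, not necessarily the paper's.

```python
import numpy as np

def knn_bootstrap(series, length, k, rng):
    """Nearest-neighbour bootstrap of a scalar time series (sketch)."""
    series = np.asarray(series, dtype=float)
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    out = [series[rng.integers(len(series) - 1)]]
    for _ in range(length - 1):
        # distances from the current value to every point that has a successor
        d = np.abs(series[:-1] - out[-1])
        neighbours = np.argsort(d)[:k]            # k nearest historical states
        pick = rng.choice(neighbours, p=weights)
        out.append(series[pick + 1])              # resample that neighbour's successor
    return np.array(out)
```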

713 citations


Proceedings Article
01 Jan 1996
TL;DR: A new latent variable modeling approach is provided that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships.
Abstract: The ability to detect and accurately estimate the strength of interaction effects is a critical issue that is fundamental to social science research in general and IS research in particular. Within the IS discipline, a large percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (McKeen, Guimaraes, and Wetherbe 1994; Weill and Olson 1989). In our survey of studies where such moderating variables are explored, a majority fail to either detect and/or provide an estimate of the effect size. In cases where effect sizes are estimated, the numbers are generally small. These results have, in turn, led some to question the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of such interaction effects by accounting for the measurement error that attenuates the estimated relationships. The feasibility of this approach at recovering the true effects is demonstrated in two studies: a simulated data set where the underlying true effects are known, and a Voice Mail adoption data set where the emotion of enjoyment is shown to have both a substantial direct and interaction effect on adoption intention.

Journal ArticleDOI
15 Jan 1996
TL;DR: In this paper, the site-site pair correlation functions for a fluid of molecules can be used to derive a set of empirical site-site potential energy functions, which reproduce the fluid structure accurately but at the present time do not reproduce thermodynamic information on the fluid, such as the internal energy or pressure.
Abstract: It is shown that data on the site-site pair correlation functions for a fluid of molecules can be used to derive a set of empirical site-site potential energy functions. These potential functions reproduce the fluid structure accurately but at the present time do not reproduce thermodynamic information on the fluid, such as the internal energy or pressure. The method works in an iterative manner, starting from a reference fluid in which only Lennard-Jones interactions are included, and generates, by Monte Carlo simulation, successive corrections to those potentials which eventually lead to the correct site-site pair correlation functions. Using this approach, the structure of water as determined from neutron scattering experiments is compared to the structure of water obtained from the simple point charge extended (SPCE) model of water interactions. The empirical potentials derived from both experiment and SPCE water show qualitative similarities with the true SPCE potential, although there are quantitative differences. The simulation is driven by a set of potential energy functions, with equilibration of the energy of the distribution, and not, as in the reverse Monte Carlo method, by equilibrating the value of χ2, which measures how closely the simulated site-site pair correlation functions fit a set of diffraction data. As a result the simulation proceeds on a true random walk and samples a wide range of possible molecular configurations.
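The iterative correction can be sketched as follows: the potential is nudged by the difference between the simulated and target potentials of mean force implied by the site-site pair correlation functions. This captures only the flavour of the procedure; the paper's exact perturbation scheme is not reproduced here, and the unit choice and damping factor are assumptions.

```python
import numpy as np

KB = 0.0083145  # Boltzmann constant in kJ/(mol*K); an assumed unit convention

def update_potential(u_current, g_simulated, g_target, temperature, damping=1.0):
    """One iteration of an empirical-potential correction (sketch).

    Compares the simulated site-site pair correlation function g_simulated(r)
    with the target g_target(r) (e.g. derived from diffraction data) and adds
    the difference of the corresponding potentials of mean force. A damping
    factor < 1 is often used to stabilise such iterations.
    """
    ratio = np.clip(g_simulated, 1e-8, None) / np.clip(g_target, 1e-8, None)
    return u_current + damping * KB * temperature * np.log(ratio)
```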

Journal ArticleDOI
TL;DR: A post-simulation improvement for two common Monte Carlo methods, the Accept-Reject and Metropolis algorithms, is proposed, based on a Rao-Blackwellisation method that integrates over the uniform random variables involved in the algorithms, and thus post-processes the standard estimators.
Abstract: SUMMARY This paper proposes a post-simulation improvement for two common Monte Carlo methods, the Accept-Reject and Metropolis algorithms. The improvement is based on a Rao-Blackwellisation method that integrates over the uniform random variables involved in the algorithms, and thus post-processes the standard estimators. We show how the Rao-Blackwellised versions of these algorithms can be implemented and, through examples, illustrate the improvement in variance brought by these new procedures. We also compare the improved version of the Metropolis algorithm with ordinary and Rao-Blackwellised importance sampling procedures for independent and general Metropolis set-ups.

Journal ArticleDOI
TL;DR: A new kind of eligibility trace, the replacing trace, is introduced and analyzed theoretically, and it is shown that it results in faster, more reliable learning than the conventional trace and that replacing traces significantly improve performance and reduce parameter sensitivity on the "Mountain-Car" task.
Abstract: The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the "Mountain-Car" task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator.
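The difference between the two traces is a one-line change in the eligibility update. The sketch below is an online tabular TD(lambda) illustration (the paper's analysis concerns the offline TD(1) case), with a hypothetical state-indexed value array and transition list.

```python
import numpy as np

def td_lambda_episode(transitions, V, alpha, gamma, lam, replacing=True):
    """Tabular TD(lambda) with accumulating or replacing eligibility traces.

    transitions : list of (state, reward, next_state), next_state=None at the end
    V           : float array of state values, updated in place and returned
    replacing   : if True, the visited state's trace is reset to 1 (replacing
                  trace); otherwise 1 is added (conventional accumulating trace).
    """
    e = np.zeros_like(V)
    for s, r, s_next in transitions:
        target = r + (gamma * V[s_next] if s_next is not None else 0.0)
        delta = target - V[s]
        e *= gamma * lam                 # decay all traces
        if replacing:
            e[s] = 1.0                   # replacing trace
        else:
            e[s] += 1.0                  # conventional trace credits repeats more
        V += alpha * delta * e           # credit prior states by recency
    return V
```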

Journal ArticleDOI
TL;DR: In this paper, the authors examined alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods and provided guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.
Abstract: We examine alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods. We document the existence of a tradeoff between the number of moments, or information, included in estimation and the quality, or precision, of the objective function used for estimation. Furthermore, an approximation to the optimal weighting matrix is used to explore the impact of the weighting matrix for estimation, specification testing, and inference procedures. The results provide guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.

Book
01 Dec 1996
TL;DR: An introduction to Monte Carlo simulation modelling, a review of probability and statistics theory, and a guide to probability distributions.
Abstract: Introduction to Monte Carlo Simulation Modelling; Probability and Statistics Theory Review; A Guide to Probability Distributions; Building a Risk Analysis Model; Determining Input Distributions from Expert Opinion; Determining Input Distributions from Available Data; Modelling Dependencies Between Distributions; Project Risk Analysis; Adding Uncertainty to Forecasts; Presenting and Interpreting Risk Analysis Results; A Selection of Worked Problems.
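A toy example of the kind of risk-analysis model such a text builds up: hypothetical cost items with triangular input distributions are propagated by simulation to a total-cost distribution whose percentiles summarise the risk. All item names and numbers below are purely illustrative.

```python
import numpy as np

def project_cost_risk(n_trials=100_000, seed=0):
    """Toy Monte Carlo risk-analysis model (illustrative inputs only)."""
    rng = np.random.default_rng(seed)
    # triangular(min, most likely, max) inputs for three hypothetical cost items
    design  = rng.triangular(80, 100, 150, n_trials)
    build   = rng.triangular(200, 250, 400, n_trials)
    testing = rng.triangular(30, 50, 90, n_trials)
    total = design + build + testing
    return {
        "mean": total.mean(),
        "p10": np.percentile(total, 10),
        "p90": np.percentile(total, 90),
    }
```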

Journal ArticleDOI
TL;DR: In this article, conditions under which the bootstrap provides asymptotic refinements to the critical values of t tests and the test of over-identifying restrictions are given, with particular attention given to the case of dependent data.
Abstract: Monte Carlo experiments have shown that tests based on generalized-method-ofmoments estimators often have true levels that differ greatly from their nominal levels when asymptotic critical values are used. This paper gives conditions under which the bootstrap provides asymptotic refinements to the critical values of t tests and the test of overidentifying restrictions. Particular attention is given to the case of dependent data. It is shown that with such data, the bootstrap must sample blocks of data and that the formulae for the bootstrap versions of test statistics differ from the formulae that apply with the original data. The results of Monte Carlo experiments on the numerical performance of the bootstrap show that it usually reduces the errors in level that occur when critical values based on first-order asymptotic theory are used. The bootstrap also provides an indication of the accuracy of critical values obtained from first-order asymptotic theory.
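The key point about dependent data — that the bootstrap must resample blocks rather than individual observations — can be sketched as a moving-block resampler. The block length and the treatment of the final partial block are choices left open here, and the paper's adjusted formulae for the bootstrap versions of the test statistics are not shown.

```python
import numpy as np

def moving_block_bootstrap(data, block_length, rng):
    """Resample a dependent series by concatenating randomly chosen blocks.

    Sampling contiguous blocks preserves the short-range dependence within
    each block, which ordinary i.i.d. resampling would destroy.
    """
    data = np.asarray(data)
    n = len(data)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [data[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]   # trim to the original length
```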

Journal ArticleDOI
TL;DR: In this article, the authors used a combination of molecular dynamics based upon the Landau-Lifshitz-Gilbert equation of motion and a Monte Carlo method for dealing with magnetic viscosity to identify the thermal stability limits on data storage density in longitudinal recording on thin film media.
Abstract: Simulations have been carried out with the purpose of identifying the thermal stability limits on data storage density in longitudinal recording on thin film media. The simulations use a combination of molecular dynamics based upon the Landau-Lifshitz-Gilbert equation of motion and a Monte Carlo method for dealing with magnetic viscosity. Based upon the limits on media coercivity imposed by available heads and SNR considerations, but assuming that sufficient head resolution can be achieved, an upper bound of about 36 Gbit/in² is projected.

Journal ArticleDOI
TL;DR: This work presents a new approach for clustering, based on the physical properties of an inhomogeneous ferromagnetic model, which outperforms other algorithms for toy problems as well as for real data.
Abstract: We present a new approach for clustering, based on the physical properties of an inhomogeneous ferromagnetic model. We do not assume any structure of the underlying distribution of the data. A Potts spin is assigned to each data point and short range interactions between neighboring points are introduced. Spin-spin correlations, measured (by a Monte Carlo procedure) in a superparamagnetic regime in which aligned domains appear, serve to partition the data points into clusters. Our method outperforms other algorithms for toy problems as well as for real data. [S0031-9007(96)00104-4] Many natural phenomena can be viewed as optimization processes, and the drive to understand and analyze them yielded powerful mathematical methods. Thus when wishing to solve a hard optimization problem, it may be advantageous to identify a related physical problem, for

Journal ArticleDOI
TL;DR: In this paper, the Monte Carlo Singular Systems Analysis (SSA) algorithm is used to identify intermittent or modulated oscillations in geophysical and climatic time series, and the results show that the strength of the evidence provided by SSA for interannual and interdecadal climate oscillations has been considerably overestimated.
Abstract: Singular systems (or singular spectrum) analysis (SSA) was originally proposed for noise reduction in the analysis of experimental data and is now becoming widely used to identify intermittent or modulated oscillations in geophysical and climatic time series. Progress has been hindered by a lack of effective statistical tests to discriminate between potential oscillations and anything but the simplest form of noise, that is, “white” (independent, identically distributed) noise, in which power is independent of frequency. The authors show how the basic formalism of SSA provides a natural test for modulated oscillations against an arbitrary “colored noise” null hypothesis. This test, Monte Carlo SSA, is illustrated using synthetic data in three situations: (i) where there is prior knowledge of the power-spectral characteristics of the noise, a situation expected in some laboratory and engineering applications, or when the “noise” against which the data is being tested consists of the output of an independently specified model, such as a climate model; (ii) where a simple hypothetical noise model is tested, namely, that the data consists only of white or colored noise; and (iii) where a composite hypothetical noise model is tested, assuming some deterministic components have already been found in the data, such as a trend or annual cycle, and it needs to be established whether the remainder may be attributed to noise. The authors examine two historical temperature records and show that the strength of the evidence provided by SSA for interannual and interdecadal climate oscillations in such data has been considerably overestimated. In contrast, multiple inter- and subannual oscillatory components are identified in an extended Southern Oscillation index at a high significance level. The authors explore a number of variations on the Monte Carlo SSA algorithm and note that it is readily applicable to multivariate series, covering standard empirical orthogonal functions and multichannel SSA.
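A simplified sketch of the Monte Carlo SSA idea: eigen-decompose the lag-covariance matrix of the data, then compare the data eigenvalues with the variance that AR(1) ("red noise") surrogates project onto the same eigenvectors. The AR(1) fit by lag-one autocorrelation and the 97.5% threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mc_ssa(x, window, n_surrogates, rng):
    """Simplified Monte Carlo SSA test against an AR(1) red-noise null (sketch)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    emb = np.array([x[i:i + window] for i in range(n - window + 1)])
    C = emb.T @ emb / emb.shape[0]                 # lag-covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    # fit an AR(1) null model to the data
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    sigma = np.std(x) * np.sqrt(1 - phi ** 2)
    surrogate_vars = np.empty((n_surrogates, window))
    for s in range(n_surrogates):
        z = np.zeros(n)
        for t in range(1, n):
            z[t] = phi * z[t - 1] + sigma * rng.normal()
        emb_z = np.array([z[i:i + window] for i in range(n - window + 1)])
        C_z = emb_z.T @ emb_z / emb_z.shape[0]
        # variance each surrogate carries along the data eigenvectors
        surrogate_vars[s] = np.diag(eigvecs.T @ C_z @ eigvecs)
    upper = np.percentile(surrogate_vars, 97.5, axis=0)
    return eigvals, upper   # data eigenvalues exceeding `upper` are candidates
```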

Journal ArticleDOI
TL;DR: In this paper, a series of specification tests of Markov-switching time-series models are proposed, including omitted autocorrelation, omitted ARCH, misspecification of the Markovian dynamics, and omitted explanatory variables.

Journal ArticleDOI
TL;DR: In this article, the authors consider two phase accretion disk-corona models for active galactic nuclei and some X-ray binaries and describe how to exactly solve the polarized radiative transfer and Comptonization using the iterative scattering method, while simultaneously solving the energy and pair balance equation for both the cold and hot phases.
Abstract: We consider two phase accretion disk-corona models for active galactic nuclei and some X-ray binaries. We describe in detail how one can exactly solve the polarized radiative transfer and Comptonization using the iterative scattering method, while simultaneously solving the energy and pair balance equation for both the cold and hot phases. We take into account Compton scattering, photon-photon pair production, pair annihilation, bremsstrahlung, and double Compton scattering, as well as exact reflection from the cold disk. We consider coronae having slab geometry as well as coronae consisting of one or more well separated active regions of cylinder or hemisphere geometry. The method is useful for determining the spectral intensity and the polarization emerging in different directions from disk-corona systems. The code is tested against a Monte-Carlo code. We also compare with earlier, less accurate, work. The method is more than an order of magnitude faster than applying Monte Carlo methods to the same problem and has the potential of being used in spectral fitting software such as XSPEC.

Journal ArticleDOI
TL;DR: In this article, a large number of different pseudo-R2 measures for some common limited dependent variable models are surveyed, including those based solely on the maximized likelihoods with and without the restriction that slope coefficients are zero, those which require further calculations based on parameter estimates of the coefficients and variances, and those that are based on whether the qualitative predictions of the model are correct or not.
Abstract: A large number of different Pseudo-R2 measures for some common limited dependent variable models are surveyed. Measures include those based solely on the maximized likelihoods with and without the restriction that slope coefficients are zero, those which require further calculations based on parameter estimates of the coefficients and variances, and those that are based solely on whether the qualitative predictions of the model are correct or not. The theme of the survey is that while there is no obvious criterion for choosing which Pseudo-R2 to use, if the estimation is in the context of an underlying latent dependent variable model, a case can be made for basing the choice on the strength of the numerical relationship to the OLS-R2 in the latent dependent variable. As such an OLS-R2 can be known in a Monte Carlo simulation, we summarize Monte Carlo results for some important latent dependent variable models (binary probit, ordinal probit and Tobit) and find that a Pseudo-R2 measure due to McKelvey and Zavoina scores consistently well under our criterion. We also very briefly discuss Pseudo-R2 measures for count data, for duration models and for prediction-realization tables.
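The McKelvey-Zavoina measure singled out here is easy to state for a latent-variable model y* = Xb + e: it is the share of latent variance explained by the fitted index. A minimal sketch, assuming the usual probit/logit error-variance normalisations (1 and pi^2/3 respectively):

```python
import numpy as np

def mckelvey_zavoina_r2(X, beta_hat, error_variance=1.0):
    """McKelvey-Zavoina pseudo-R^2 for a latent dependent variable model (sketch).

    Ratio of the variance of the fitted latent index X @ beta_hat to the total
    latent variance; error_variance = 1 for probit, pi**2 / 3 for logit.
    """
    latent_fit = np.asarray(X) @ np.asarray(beta_hat)
    explained = np.var(latent_fit, ddof=1)
    return explained / (explained + error_variance)
```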

Journal ArticleDOI
Qi Li
TL;DR: Based on the kernel integrated square difference and applying a central limit theorem for degenerate V-statistics proposed by Hall (1984), the authors propose a consistent nonparametric test of closeness between two unknown density functions under quite mild conditions.
Abstract: Based on the kernel integrated square difference and applying a central limit theorem for degenerate V-statistic proposed by Hall (1984), this paper proposes a consistent nonparametric test of closeness between two unknown density functions under quite mild conditions. We only require the unknown density functions to be bounded and continuous. Monte Carlo simulations show that the proposed tests perform well for moderate sample sizes.
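The quantity driving the test is a kernel estimate of the integrated squared difference between the two densities. The sketch below computes that raw distance with a Gaussian kernel and leave-one-out own-sample terms; the studentisation that makes the statistic asymptotically N(0, 1) under the null is omitted, and the choice of Gaussian kernel is an assumption.

```python
import numpy as np

def integrated_squared_difference(x, y, bandwidth):
    """Kernel estimate of the integrated squared difference between two densities.

    Values near zero are consistent with the two samples sharing one density.
    """
    def pair_mean(a, b, leave_one_out):
        diff = (a[:, None] - b[None, :]) / bandwidth
        K = np.exp(-0.5 * diff ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
        if leave_one_out:
            np.fill_diagonal(K, 0.0)                  # drop i == j terms
            return K.sum() / (len(a) * (len(a) - 1))
        return K.mean()

    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return pair_mean(x, x, True) + pair_mean(y, y, True) - 2.0 * pair_mean(x, y, False)
```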

Journal ArticleDOI
TL;DR: Evaluation studies in phantoms with large scatter fractions show that the method yields images with quantitative accuracy equivalent to that of slice-collimated PET in clinically useful times.
Abstract: A method is presented that directly calculates the mean number of scattered coincidences in data acquired with fully 3D positron emission tomography (PET). This method uses a transmission scan, an emission scan, the physics of Compton scatter, and a mathematical model of the scanner in a forward calculation of the number of events for which one photon has undergone a single Compton interaction. The distribution of events for which multiple Compton interactions have occurred is modelled as a linear transformation of the single-scatter distribution. Computational efficiency is achieved by sampling at rates no higher than those required by the scatter distribution and by implementing the algorithm using look-up tables. Evaluation studies in phantoms with large scatter fractions show that the method yields images with quantitative accuracy equivalent to that of slice-collimated PET in clinically useful times.

Journal ArticleDOI
TL;DR: A new model for calculating electron beam dose, the Voxel Monte Carlo model (VMC), based on a two- or three-dimensional geometry defined by computerized tomography images, was tested in comparison to calculations by EGS4 and the "Hogstrom algorithm" (MDAH) using several fictive phantoms.
Abstract: A new model for calculating electron beam dose has been developed. The algorithm is based on a two- or three-dimensional geometry defined by computerized tomography (CT) images. The Monte Carlo technique was used to solve the electron transport equation. However, in contrast to conventional Monte Carlo models (EGS4), several approximations and simplifications in the description of elementary electron processes were introduced, reducing in this manner the computational time by a factor of about 35 without significant loss in accuracy. The Monte Carlo computer program does not need any precalculated data. The random access memory required is about 16 Mbytes for a 128² × 50 matrix, depending on the resolution of the CT cube. The Voxel Monte Carlo model (VMC) was tested in comparison to calculations by EGS4 and the "Hogstrom algorithm" (MDAH) using several fictive phantoms. In all cases a good coincidence has been found between EGS4 and VMC, especially near tissue inhomogeneities, whereas the MDAH algorithm has produced dose underestimations of up to 40%.


Journal ArticleDOI
TL;DR: A model for light interaction with forest canopies is presented, based on Monte Carlo simulation of photon transport, which shows close agreement between model predictions and field measurements of bidirectional reflectance, high-resolution spectra and hemispherical albedo.
Abstract: A model for light interaction with forest canopies is presented, based on Monte Carlo simulation of photon transport. A hybrid representation is used to model the discontinuous nature of the forest canopy. Large scale structure is represented by geometric primitives defining shapes and positions of the tree crowns and trunks. Foliage is represented within crowns by volume-averaged parameters describing the structural and optical properties of the scattering elements. Simulation of three-dimensional photon trajectories allows accurate evaluation of multiple scattering within crowns, and between distinct crowns, trunks and the ground surface. The sky radiance field is treated as anisotropic and decoupled from bidirectional reflectance calculation. Validation has been performed on an example of dense spruce forest. Results show close agreement between model predictions and field measurements of bidirectional reflectance, high-resolution spectra and hemispherical albedo.

Journal ArticleDOI
TL;DR: This paper describes a new concept for the implementation of the direct simulation Monte Carlo (DSMC) method that uses a localized data structure based on a computational cell to achieve high performance, especially on workstation processors, which can also be used in parallel.

Journal ArticleDOI
TL;DR: An approximate equation is derived, which predicts the effect on variability at a neutral locus of background selection due to a set of partly linked deleterious mutations, and it is shown that background selection can produce a considerable overall reduction in variation in organisms with small numbers of chromosomes and short maps, such as Drosophila.
Abstract: An approximate equation is derived, which predicts the effect on variability at a neutral locus of background selection due to a set of partly linked deleterious mutations. Random mating, multiplicative fitnesses, and sufficiently large population size that the selected loci are in mutation/selection equilibrium are assumed. Given these assumptions, the equation is valid for an arbitrary genetic map, and for an arbitrary distribution of selection coefficients across loci. Monte Carlo computer simulations show that the formula performs well for small population sizes under a wide range of conditions, and even seems to apply when there are epistatic fitness interactions among the selected loci. Failure occurred only with very weak selection and tight linkage. The formula is shown to imply that weakly selected mutations are more likely than strongly selected mutations to produce regional patterning of variability along a chromosome in response to local variation in recombination rates. Loci at the extreme tip of a chromosome experience a smaller effect of background selection than loci closer to the centre. It is shown that background selection can produce a considerable overall reduction in variation in organisms with small numbers of chromosomes and short maps, such as Drosophila. Large overall effects are less likely in species with higher levels of genetic recombination, such as mammals, although local reductions in regions of reduced recombination might be detectable.
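A widely quoted approximate form of this kind of prediction multiplies, over the selected loci, factors that depend on the deleterious mutation rate, the (heterozygous) selection coefficient, and the recombination distance to the neutral site. The expression coded below is an assumption about the general form of such an approximation and may differ in detail from the equation actually derived in the paper.

```python
import numpy as np

def background_selection_factor(u, sh, r):
    """Approximate reduction in neutral diversity from background selection.

    Assumed form (may differ from the paper's exact result):
        pi / pi0 = exp(-sum_i u_i * sh_i / (sh_i + r_i)**2)
    where, for selected locus i, u_i is the deleterious mutation rate, sh_i the
    heterozygous selection coefficient, and r_i the recombination frequency
    between the neutral site and that locus.
    """
    u, sh, r = (np.asarray(a, dtype=float) for a in (u, sh, r))
    return np.exp(-np.sum(u * sh / (sh + r) ** 2))
```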