# Particle number control for direct simulation Monte-Carlo methodology using kernel estimates

TL;DR: This study focuses on particle management based on a kernel density approach and devises a bandwidth with a bounded bias error for density inference and sampling, resulting in a convenient and efficient implementation of the proposed scheme.

Abstract: The efficiency of stochastic particle schemes for large scale simulations relies on the ability to preserve a uniform distribution of particles in the whole physical domain. While simple particle split and merge algorithms have been considered previously, this study focuses on particle management based on a kernel density approach. The idea is to estimate the probability density of particles and subsequently draw independent samples from the estimated density. To cope with that, novel methods are devised in this work leading to efficient algorithms for density estimation and sampling. For the density inference, we devise a bandwidth with a bounded bias error. Furthermore, the sampling problem is reduced to drawing realizations from a normal distribution, augmented by stratified sampling. Thus, a convenient and efficient implementation of the proposed scheme is realized. Numerical studies using the devised method for direct simulation Monte-Carlo show encouraging performance.

## Summary (2 min read)

Jump to: [Introduction] – [II. REVIEW OF KERNEL DENSITY ESTIMATION] – [A. Univariate distributions] – [B. Multivariate distributions] – [III. PARTICLE MANAGEMENT] – [A. Sampling from uniform weight KDE] – [B. Sampling from non-uniform weight KDE] – [C. Summary] – [IV. SIMULATION STUDIES] and [V. CONCLUSION]

### Introduction

- The idea is that if the governing distribution is known or can be inferred, then new independent samples can be drawn.
- Kernel based density estimators have been employed for reconstructing the probability density of particles in low-variance DSMC or Fokker-Planck Monte-Carlo schemes [20,21].
- Once those two steps of the probability density function construction and sampling are taken, the scheme can be employed for generating new particles.

### II. REVIEW OF KERNEL DENSITY ESTIMATION

- Density estimation is the problem of finding the underlying probability density from a set of scattered data points.
- Two nonparametric density estimation approaches rely on histograms and kernels.
- Typically, kernel estimates provide a faster convergence rate [24].
- Also, as shown later, the sampling problem is quite straightforward.

### A. Univariate distributions

- There exist different approaches for estimating f (.).
- The introduced noise which reflects the uncertainty of original samples results in a smoother probability density estimate compared to the histogram approach.
- By inserting Eq. (12) into Eq. (10), the authors get a criterion to choose ε based on the normalized error in thermal energy, ε^2 N^(−2/5) (in the one-dimensional setting).
- In practice, provided an error tolerance, ε can then be computed for a given number of particles per cell (ppc).

### B. Multivariate distributions

- More relevant for gas kinetic simulations are multivariate density estimations.
- Even though the discussion here is focused only on monatomics, generalization of the proposed approach to polyatomic molecules with continuous degrees of freedom is possible.
- The sample space of the KDE probability density should be augmented to include all internal degrees of freedom of the considered species.
- Considering a diatomic molecule without active vibrational degrees of freedom, the sample space of f̂N,H becomes five dimensional, i.e., three velocities and two rotational degrees of freedom.

### III. PARTICLE MANAGEMENT

- Furthermore, each particle i has a weight w(i), velocity M(i), and position X(i).
- The idea of particle management is to adjust the number of particles in each cell such that better statistics can be obtained from Monte-Carlo simulations.
- Based on mass conservation, this results in the required number of samples, i.e., N_j = W_j/w_j.

### A. Sampling from uniform weight KDE

- One of the main advantages of a kernel estimate of the type given by Eq. (13) with the Gaussian kernel is that sampling from the estimate does not require an acceptance-rejection method.
- Suppose the authors have a density f̂_{N,H} given by Eq. (13); let M_s denote the velocity of a newly generated particle and X_s the corresponding position.
- If all original particles have similar weights, then generating new samples from f̂N,H is straightforward.
- Here, U{a, b} is the discrete uniform density between a and b. Furthermore, N(a, b) denotes the normal distribution with mean a and variance b.

### B. Sampling from non-uniform weight KDE

- Since the authors are interested in cases where particles ending up in a cell may have different weights, they need to account for different weights.
- In fact, the data of particles with larger weights should have a stronger influence on the samples.
- Then, the authors determine the interval in which a uniformly distributed random number between 0 and 1 falls.
- Once the authors find that interval, they can simply draw a normally distributed random number with mean being the velocity of the particle whose interval has been chosen and the covariance H.

### C. Summary

- Initially, each cell j gets populated by N^(α_j) particles with total mass W_j.
- The following summary describes each step of the algorithm for a given time step.
- If the particles have similar weights, run Algorithm 1, otherwise Algorithm 2, to generate N_j new particles.
- This is because the devised approach does not alter the governing equation, yet it targets an improved particle distribution in the physical domain.

### IV. SIMULATION STUDIES

- While improvements resulting from particle number control schemes can be highlighted in scenarios with sharp density variations, in the following simulation study, the authors pursue another objective.
- For each case, two simulations were conducted, i.e., one without particle number control and one with the algorithm summarized in Sec. III C.
- This further confirms that the bias introduced through Eq. (12) is negligible.
- A more detailed quantitative comparison is performed by extracting the flow Mach number at a distance of two mean free paths upstream of the leading edge.

### V. CONCLUSION

- In contrast to deterministic solvers, many aspects of particle Monte-Carlo methods have not been fully explored so far.
- This study tackles the issue of particle number control for DSMC, with an emphasis on the kernel based bootstrapping.
- The issue of resampling from an estimated kernel was analyzed in terms of both bias and variance.
- Furthermore, an efficient sampling technique was introduced which does not rely on the acceptance-rejection scheme.
- A straightforward simulation study was presented for the supersonic flow around an inclined flat plate.


Phys. Fluids 31, 062008 (2019); https://doi.org/10.1063/1.5097902

© 2019 Author(s). Published under license by AIP Publishing.

Submitted: 29 March 2019 • Accepted: 27 May 2019 • Published Online: 14 June 2019


Hossein Gorji,1,a),b) Stephan Küchlin,2,a),c) and Patrick Jenny2,d)

AFFILIATIONS

1 Computational Mathematics and Simulation Science, EPFL, MA C2 642, CH-1015 Lausanne, Switzerland
2 Institute of Fluid Dynamics, ETH Zurich, Sonneggstrasse 3, CH-8092 Zurich, Switzerland

Note: This paper is part of the special issue on Direct Simulation Monte Carlo—The Legacy of Graeme A. Bird.

a) Contributions: H. Gorji and S. Küchlin contributed equally to this work.
b) Electronic mail: mohammadhossein.gorji@epfl.ch
c) Electronic mail: kuechlin@ifd.mavt.ethz.ch
d) Electronic mail: jenny@ifd.mavt.ethz.ch


I. INTRODUCTION

The kinetic theory of rarefied gases and corresponding numerical methods comprise an essential framework to accurately account for thermo-fluid phenomena far from equilibrium. Direct Simulation Monte-Carlo (DSMC) pioneered by Bird [1-4] is the most successful and widely used rarefied gas dynamics solution method, which allows for stochastic simulations of gas flows via a particle Monte-Carlo algorithm. At the level of the distribution, DSMC has been shown to be consistent with solutions of the Boltzmann equation [5,6]. Beyond that, DSMC can be employed to extract spatiotemporal autocorrelations along particle trajectories besides thermal fluctuations, both consistent with statistical thermodynamics [7]. Alternative particle Monte-Carlo schemes have been introduced in order to improve the efficiency of DSMC at low Knudsen numbers [8-12] or low Mach numbers [13,14].

One of the main issues that may hinder efficient particle based simulations arises due to large variations of the particle density throughout the physical domain. The problem lies in the fact that the variance associated with estimating an observable scales inversely with the number of samples. Thus, the particle density variations can lead to poorly resolved field values, and hence particle number control (pnc) measures become a crucial step toward efficient particle Monte-Carlo simulations, especially for situations where large variations of the density are expected. This issue can be encountered in various rarefied gas flow examples, including near-vacuum, hypersonic, and axisymmetric flows [2,15,16]. It is important to bear in mind, however, that particle number control schemes aim at reproducing the original solution only at the (Eulerian) distribution level. Therefore, they are not relevant for situations where Lagrangian statistics (e.g., statistics along a particle trajectory) are of interest.

The issue of particle split and merge has been addressed in particle Monte-Carlo methods arising in different contexts. The most basic approach is to simply copy the particles to increase and randomly remove some to reduce their number. This naive approach can actually be shown to be statistically consistent, provided appropriate scaling of particle weights. This algorithm has been applied for DSMC in Ref. 15, as well as in other contexts including, e.g., turbulent flows [17].

Although simple and straightforward to implement, the mentioned split and elimination algorithm is prone to numerical errors. In the split phase, the newly generated samples are not independent from the original ones. Therefore, strictly speaking, a Monte-Carlo approach should not be employed on the new samples. While over some spatiotemporal scales the newly generated samples become independent ones, the associated length and time scales may become huge and thus a constraint if, e.g., high Knudsen number flows are of interest.

An alternative particle splitting scheme can be constructed based on probability density estimators. The idea is that if the governing distribution is known or can be inferred, then new independent samples can be drawn. This idea has been employed in Ref. 18 in the context of Particle-In-Cell (PIC) methods. Yet, the mentioned study utilized a quite expensive scheme for generating new particles. Furthermore, the optimality of their considered kernel bandwidth was not taken into account, which further limits the efficiency. A particle number control scheme for the stochastic weighted particle method has been derived in Ref. 19 by introducing a weight transfer function. Kernel based density estimators have been employed for reconstructing the probability density of particles in low-variance DSMC or Fokker-Planck Monte-Carlo schemes [20,21]. On the reduction side, elaborate schemes have been introduced in numerous works [18,22,23]. For example, in Ref. 23, an octree structure is employed for merging nearest neighbor particles in the sample space.

This paper addresses the issue of split and merge as a sampling problem with emphasis on two aspects. First, we try to construct an accurate kernel density estimate (KDE), which can be practically applied in DSMC type simulations. Second, an efficient sampling scheme from the KDE is devised, which allows us to efficiently integrate the algorithm into existing particle Monte-Carlo codes. Once those two steps of the probability density function construction and sampling are taken, the scheme can be employed for generating new particles. Since the generated particles replace the old ones, the scheme can be used for increasing as well as reducing the number of employed particles.

Along the above-mentioned scope, the remaining article is structured as follows. Section II addresses the estimation of an accurate probability density function from scattered data points. We rely on a KDE approach with Gaussian kernels. The reason for choosing this scheme is its physical relevance to kinetic models besides the fact that the sampling is not much involved. Section III derives the corresponding algorithms for efficient sampling from the devised kernel estimate. Validations are provided in Sec. IV, and in Sec. V, the paper concludes with a summary of the highlights of this contribution and some suggestions of possible future extensions.

II. REVIEW OF KERNEL DENSITY ESTIMATION

Density estimation is the problem of finding the underlying probability density from a set of scattered data points. Two nonparametric density estimation approaches rely on histograms and kernels. Typically, kernel estimates provide a faster convergence rate [24]. Also, as shown later, the sampling problem is quite straightforward. Therefore, we focus on the KDE approach in the following.

A. Univariate distributions

Suppose a set of N particles with velocities $\{M^{(i)}\}_{i=1}^{N}$ is given. In the univariate density estimation, we wish to infer the probability density f(v), where v is a scalar. Hence, the set $\{M^{(i)}\}_{i=1}^{N}$ is comprised of N independent realizations in the probability space over which f(v) is defined. There exist different approaches for estimating f(.). In the KDE approach [25,26], the ansatz

$$\hat{f}_{N,h}(v) = \frac{1}{Nh} \sum_{i=1}^{N} K\!\left(\frac{v - M^{(i)}}{h}\right) \quad (1)$$

is considered, where h is the smoothing parameter. It turns out that the accuracy of $\hat{f}_{N,h}$ as an approximation of f crucially depends on h rather than on the choice of K(.). For convenience of generating new realizations from $\hat{f}_{N,h}$, we choose

$$K(v) = \frac{1}{\sqrt{2\pi}}\, e^{-v^2/2} \quad (2)$$

to be the (univariate) Gaussian kernel.

A common approach to compute h is to minimize the expected $L^2$ distance between $\hat{f}_{N,h}$ and f. Consider the mean integrated squared error

$$\mathrm{MISE} = \mathbb{E}\left[\int_{\mathbb{R}} \left(f(v) - \hat{f}_{N,h}(v)\right)^2 dv\right]. \quad (3)$$

By minimizing MISE [27,28], we obtain the optimal h as

$$h_{\mathrm{opt}} = \frac{R(K)^{1/5}}{m_2(K)^{2/5}\, R(f'')^{1/5}\, N^{1/5}} \quad (4)$$

in the asymptotic limit, where

$$R(K) = \int_{\mathbb{R}} K^2(v)\, dv \quad (5)$$

and

$$m_2(K) = \int_{\mathbb{R}} v^2 K(v)\, dv. \quad (6)$$

Note that f'' is the second derivative of f. The terms R(K) and $m_2(K)$ can be computed easily for K given by Eq. (2). They read $R(K) = 1/(2\sqrt{\pi})$ and $m_2(K) = 1$. Since f is unknown, the bandwidth Eq. (4) is not closed. However, one can consider, e.g., a truncated orthogonal expansion of f'' in order to close the $h_{\mathrm{opt}}$ formula (4). There exists an immense amount of work on the KDE approach, where its accuracy and convergence are established for a large class of probability densities (see the review [29]).

The physical intuition behind this approach becomes clear by considering that new samples from a KDE approximation derive their states from original samples subject to a Gaussian noise. The introduced noise, which reflects the uncertainty of original samples, results in a smoother probability density estimate compared to the histogram approach.
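To make Eq. (4) concrete: with the Gaussian kernel, $R(K) = 1/(2\sqrt{\pi})$ and $m_2(K) = 1$, and if f'' is closed with a Gaussian reference density of standard deviation σ (an illustrative assumption, one common alternative to the truncated-expansion closure mentioned in the text), then $R(f'') = 3/(8\sqrt{\pi}\sigma^5)$ and Eq. (4) collapses to Silverman's rule of thumb:

```python
import numpy as np

# Optimal MISE bandwidth of Eq. (4), closed with a Gaussian reference
# density for f'' (an illustrative assumption, not the paper's choice).
# With R(K) = 1/(2*sqrt(pi)), m2(K) = 1 and R(f'') = 3/(8*sqrt(pi)*sigma**5),
# Eq. (4) reduces to h_opt = (4/(3*N))**(1/5) * sigma (Silverman's rule).
def h_opt_gaussian(sigma, n):
    r_k = 1.0 / (2.0 * np.sqrt(np.pi))               # R(K) for the Gaussian kernel
    m2_k = 1.0                                       # m2(K) for the Gaussian kernel
    r_fpp = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)  # R(f'') for a N(0, sigma^2) reference
    return (r_k / (m2_k**2 * r_fpp * n))**0.2

sigma, n = 2.0, 1000
h = h_opt_gaussian(sigma, n)
# Cross-check against the closed form (4/(3N))**(1/5) * sigma.
assert abs(h - (4.0 / (3.0 * n))**0.2 * sigma) < 1e-12
```

The $N^{-1/5}$ decay of this bandwidth is exactly the scaling reused in the ansatz (9) below; only the prefactor is chosen differently there.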


While $h_{\mathrm{opt}}$ provides an optimal bandwidth in terms of MISE, its application for a particle number control routine may not be beneficial. Note that $h_{\mathrm{opt}}$ provides a trade-off between the bias $\mathrm{Bias}[\hat{f}_{N,h}]$ and the variance $\mathrm{Var}[\hat{f}_{N,h}]$ of the estimator by minimizing $\mathrm{MISE} = \mathrm{Bias}^2[\hat{f}_{N,h}] + \mathrm{Var}[\hat{f}_{N,h}]$. To illustrate this point, consider the asymptotic case of h → 0, in which the estimator collapses to the empirical density

$$f_{\mathrm{emp}} = \frac{1}{N} \sum_{i=1}^{N} \delta\!\left(v - M^{(i)}\right), \quad (7)$$

where δ(.) is the Dirac delta. Here, since $\mathbb{E}[f_{\mathrm{emp}}] = f$, the bias error vanishes. However, the variance is infinitely large since

$$\mathbb{E}\left[f_{\mathrm{emp}}^2\right] \to \infty \quad (8)$$

for finite N. In other words, the noise in samples drawn from $f_{\mathrm{emp}}$ is maximal since there is no smoothing effect present in the estimator. On the other hand, $\hat{f}_{N,h}$ has a bias error of $O(h^2)$, while the variance is of $O(h^{-1})$ for a nonvanishing h [30].

In order to come up with a bias-variance trade-off suitable for DSMC type simulations, we consider the bias in second order moments (proportional to temperature). According to asymptotic analysis, every choice of h proportional to $N^{-1/5}$ recovers the correct density as N → ∞. Hence, we make the ansatz

$$h \propto \frac{1}{N^{1/5}} \quad (9)$$

for the bandwidth. Furthermore, note that

$$\int_{\mathbb{R}} \hat{f}_{N,h}\, v^2\, dv = \int_{\mathbb{R}} f\, v^2\, dv + h^2. \quad (10)$$

Now, let us choose an upper bound for the error in the second moment to be a fraction of thermal energy. Thus, we require that

$$h^2 \leq \epsilon^2 \frac{kT}{m} \quad (11)$$

with small ε, e.g., ε = 0.01 as chosen in this study. Here, k is the Boltzmann constant, T is the temperature, and m is the molecular mass. Therefore, to minimize the variance and to honor the inequality (11), a closure for h is made as a function of ε, N, and thermal speed,

$$h = \frac{\epsilon}{N^{1/5}} \sqrt{\frac{kT}{m}}. \quad (12)$$

By inserting Eq. (12) into Eq. (10), we get a criterion to choose ε based on the normalized error in thermal energy $N^{-2/5}\epsilon^2$ (in the one-dimensional setting). In practice, provided an error tolerance, ε can then be computed for a given number of particles per cell (ppc). For example, ε = 0.01 gives the thermal energy relative error of $10^{-4}$ for 100 particles per cell.
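The closure (12) is cheap to evaluate per cell. A minimal sketch (the function name and the argon parameters are illustrative assumptions):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def bandwidth(eps, n, temperature, mass):
    """Bias-bounded bandwidth of Eq. (12): h = eps * N**(-1/5) * sqrt(kT/m)."""
    return eps / n**0.2 * np.sqrt(K_B * temperature / mass)

# Example: argon (m ~ 6.63e-26 kg) at 300 K with 100 particles per cell.
h = bandwidth(0.01, 100, 300.0, 6.63e-26)

# By Eq. (10) the KDE inflates the second moment by exactly h**2, so the
# normalized thermal-energy error is eps**2 * N**(-2/5) <= eps**2,
# which honors inequality (11) for any N.
rel_err = h**2 / (K_B * 300.0 / 6.63e-26)
assert rel_err <= 0.01**2
```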

B. Multivariate distributions

More relevant for gas kinetic simulations are multivariate density estimations. The previous ansatz can easily be generalized to

$$\hat{f}_{N,H}(v) = \frac{1}{N \det(H)} \sum_{i=1}^{N} K\!\left(H^{-1}\left(v - M^{(i)}\right)\right), \quad (13)$$

where H is a positive definite bandwidth characterizing both size and orientation of the kernel. Note that for any d × d positive-definite H that scales with $N^{-1/(d+4)}$, the correct density is recovered as N → ∞. Following the same analysis as in the univariate case, we use the ansatz

$$H_{ij} = \frac{\epsilon}{N^{1/7}} \sqrt{\frac{kT}{m}}\, \delta_{ij}. \quad (14)$$

Even though the discussion here is focused only on monatomics, generalization of the proposed approach to polyatomic molecules with continuous degrees of freedom is possible. The sample space of the KDE probability density should be augmented to include all internal degrees of freedom of the considered species. For example, considering a diatomic molecule without active vibrational degrees of freedom, the sample space of $\hat{f}_{N,H}$ becomes five dimensional, i.e., three velocities and two rotational degrees of freedom.
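Equation (14) differs from the univariate closure only through the exponent $N^{-1/(d+4)}$; a sketch (the function name and defaulting d = 3 for a monatomic velocity space are our assumptions):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def bandwidth_matrix(eps, n, temperature, mass, d=3):
    """Isotropic bandwidth matrix of Eq. (14):
    H_ij = eps * N**(-1/(d+4)) * sqrt(kT/m) * delta_ij."""
    h = eps / n**(1.0 / (d + 4)) * np.sqrt(K_B * temperature / mass)
    return h * np.eye(d)

# 3x3 diagonal matrix with the N**(-1/7) scaling of Eq. (14).
H = bandwidth_matrix(0.01, 100, 300.0, 6.63e-26)
```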

III. PARTICLE MANAGEMENT

In a typical particle Monte-Carlo scheme, the physical domain $\Omega \subset \mathbb{R}^3$ is discretized into a set of $N_c$ computational cells $\delta\Omega_j$ such that $\Omega = \cup_{j=1}^{N_c} \delta\Omega_j$. Furthermore, each particle i has a weight $w^{(i)}$, velocity $M^{(i)}$, and position $X^{(i)}$. Their states determine the probability density through

$$f(x, v, t) = \frac{1}{\rho(x, t)} \sum_{i} w^{(i)}\, \delta\!\left(v - M^{(i)}\right) \delta\!\left(x - X^{(i)}\right), \quad (15)$$

where ρ is the gas density. By assuming that the probability density is uniform in each cell and by using a finite number of particles, the particle representation provides the approximation

$$\tilde{f}(v, t \mid x) = \frac{1}{\lvert\delta\Omega_j\rvert\, \rho(x, t)} \sum_{i=1}^{N^{(\alpha_j)}} w^{(i)}\, \delta\!\left(v - M^{(i)}\right) \quad (16)$$

of f for all $x \in \delta\Omega_j$. Here, the index set $\alpha_j$ denotes the set of particle indices whose positions belong to the cell $\delta\Omega_j$, and $N^{(\alpha_j)}$ is the number of these particles. Furthermore, for simplicity of the derivations, let here and henceforth $\{1, \ldots, N^{(\alpha_j)}\}$ be the set of particle indices that are in the cell j. Therefore, the total mass in the cell reads $W_j = \sum_{i=1}^{N^{(\alpha_j)}} w^{(i)}$.

The idea of particle management is to adjust the number of particles in each cell such that better statistics can be obtained from Monte-Carlo simulations. Therefore, particle weights have to vary in space and time. In order to keep the methodology applicable for conventional DSMC, we devise the algorithm such that all particles in each cell have the same particle weight. Therefore, we require that $\forall i \in \alpha_j : w^{(i)} = w_j$, where $w_j$ with $j \in \{1, \ldots, N_c\}$ is the input. Based on mass conservation, this results in the required number of samples, i.e., $N_j = W_j / w_j$.

Once the particles move for a time step, particles with different weights may cross boundaries of their cells. Therefore, we end up with a situation where we have particles of different weights in a computational cell j. The particle management algorithm presented in Secs. III A and III B introduces efficient sampling schemes based on uniform/nonuniform weight KDEs.
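The bookkeeping behind $N_j = W_j/w_j$ is a one-liner per cell; a minimal sketch (the function name and the rounding of non-integer ratios are our assumptions, as the text leaves that unspecified):

```python
def target_count(weights, w_cell):
    """Target number of equal-weight particles in a cell, N_j = W_j / w_j,
    computed from the weights of the particles currently inside it."""
    w_total = sum(weights)                  # W_j: total mass in the cell
    return max(1, round(w_total / w_cell))  # rounding choice is our assumption

# A cell holding particles of mixed weights, resampled at w_j = 1.0:
assert target_count([1.0, 1.0, 2.0, 0.25], 1.0) == 4  # W_j = 4.25
```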

ALGORITHM 1. Sampling from $\hat{f}_{N,H}$ assuming identical weights.

1: for $l \in \{1, \ldots, N_j\}$ do
2:   Draw $i \sim U\{1, N_j\}$
3:   Draw $M_s^{(l)} \sim N(M^{(i)}, H)$
4:   $X_s^{(l)} \leftarrow X^{(i)}$
5: end for

A. Sampling from uniform weight KDE

One of the main advantages of a kernel estimate of the type given by Eq. (13) with the Gaussian kernel is that sampling from the estimate does not require an acceptance-rejection method. Suppose we have a density $\hat{f}_{N,H}$ given by Eq. (13), and let $M_s$ denote the velocity of a newly generated particle and $X_s$ the corresponding position.

If all original particles have similar weights, then generating new samples from $\hat{f}_{N,H}$ is straightforward. Algorithm 1 provides $N_j$ samples. Here, $U\{a, b\}$ is the discrete uniform density between a and b. Furthermore, $N(a, b)$ denotes the normal distribution with mean a and variance b.
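Algorithm 1 reduces to two vectorized draws. A NumPy sketch, treating the isotropic case $H = h^2 I$ as the covariance, so h is the per-component standard deviation (the function name and array layout are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_kde(velocities, positions, h, n_new):
    """Algorithm 1 sketch: generate n_new particles from the Gaussian KDE
    over `velocities` when all weights are identical. Pick a parent index
    uniformly (step 2), perturb its velocity with isotropic Gaussian noise
    of standard deviation h (step 3), and copy its position (step 4)."""
    n, d = velocities.shape
    parents = rng.integers(0, n, size=n_new)      # i ~ U{1, N_j}
    noise = rng.normal(0.0, h, size=(n_new, d))   # M_s ~ N(M^(i), h^2 I)
    return velocities[parents] + noise, positions[parents]

# Usage: split 50 particles into 200.
vel = rng.normal(0.0, 300.0, size=(50, 3))
pos = rng.random((50, 3))
new_vel, new_pos = sample_uniform_kde(vel, pos, h=3.0, n_new=200)
```

Because every new particle inherits a uniformly chosen parent plus Gaussian noise, the mixture (13) is sampled exactly, with no acceptance-rejection step.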

B. Sampling from non-uniform weight KDE

Since we are interested in cases where particles ending up in a cell may have different weights, we need to account for different weights. In fact, the data of particles with larger weights should have a stronger influence on the samples. To formulate that, instead of $\hat{f}_{N,H}$ given by Eq. (13), we need to sample from

$$\hat{f}_{N,H}(v) = \frac{1}{W_j \det(H)} \sum_{i=1}^{N^{(\alpha_j)}} w^{(i)} K\!\left(H^{-1}\left(v - M^{(i)}\right)\right). \quad (17)$$

Intuitively, first, we need to assign to each particle i a partition of [0, 1], proportional to its weight. Then, we determine the interval in which a uniformly distributed random number between 0 and 1 falls. Once we find that interval, we can simply draw a normally distributed random number with mean being the velocity of the particle whose interval has been chosen and covariance H. An efficient formulation of this idea is known as stratified sampling, which is reviewed in the following [31].

Let us define the partitions by setting $\hat{S}_0 = 0$ and

$$\hat{S}_k = W_j^{-1} \sum_{i=1}^{k} w^{(i)} \quad (18)$$

with corresponding intervals

$$I_k = [\hat{S}_{k-1}, \hat{S}_k] \quad (19)$$

for $k \in \{1, \ldots, N^{(\alpha_j)}\}$. The idea is then to generate a set of ordered uniformly distributed random numbers to choose from which interval we should sample. Algorithm 2 shows how to generate $N_j$ particles from $\hat{f}_{N,H}$ [Eq. (17)].

ALGORITHM 2. Sampling from $\hat{f}_{N,H}$ composed of different weights.

1: for $l \in \{1, \ldots, N_j\}$ do
2:   Draw $R_f \sim U[0, 1]$
3:   $R_f \leftarrow \dfrac{(l - 1) + R_f}{N_j}$
4:   Find k such that $R_f \in I_k$
5:   Draw $M_s^{(l)} \sim N(M^{(k)}, H)$
6: end for

Here, U[a, b] is the continuous counterpart of U{a, b}. The fact that there is no need to employ an acceptance-rejection algorithm makes Algorithm 2 very attractive.
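Algorithm 2 maps onto a cumulative sum and a binary search. A NumPy sketch (the function name, array layout, and using a scalar h as the per-component noise standard deviation are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_weighted_kde(velocities, weights, h, n_new):
    """Algorithm 2 sketch: stratified sampling from the weighted KDE of
    Eq. (17). The unit interval is partitioned in proportion to the
    particle weights [Eq. (18)]; one stratified uniform per new particle,
    R_f = (l - 1 + U[0,1]) / N_j, selects the parent interval I_k, and
    the parent velocity is perturbed by Gaussian noise of std h."""
    n, d = velocities.shape
    s = np.cumsum(weights) / np.sum(weights)          # partition boundaries S_k
    r = (np.arange(n_new) + rng.random(n_new)) / n_new
    parents = np.searchsorted(s, r)                   # k such that R_f is in I_k
    noise = rng.normal(0.0, h, size=(n_new, d))
    return velocities[parents] + noise

# A particle carrying 90% of the mass should parent ~90% of the samples.
vel = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
out = sample_weighted_kde(vel, weights=[9.0, 1.0], h=0.1, n_new=1000)
```

The stratified uniforms are ordered by construction, so `searchsorted` resolves step 4 in O(log N) per sample; a heavier particle owns a wider interval $I_k$ and therefore parents proportionally more of the new samples.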

C. Summary

In this subsection, we provide a brief summary of the particle management algorithm. Suppose the physical domain is discretized into $N_c$ computational cells and that weights $w_j$ are provided for $j \in \{1, \ldots, N_c\}$. Initially, each cell j gets populated by $N^{(\alpha_j)}$ particles with total mass $W_j = \sum_{i \in \alpha_j} w^{(i)}$. The following summary describes each step of the algorithm for a given time step.

1. Move particles in the physical space.
2. Apply the boundary conditions.
3. For each cell j:
   (a) Compute the target number of particles $N_j = W_j / w_j$.
   (b) If $N_j = N^{(\alpha_j)}$ and if particles have similar weights, go to 3f.
   (c) Compute the bandwidth from Eqs. (12) and (14).
   (d) If the particles have similar weights, run Algorithm 1, otherwise Algorithm 2, to generate $N_j$ new particles.
   (e) Remove the original particles.
   (f) Update particle velocities according to the governing equations.
4. Estimate the observables.

Note that treatment of boundary conditions remains identical with or without the particle number control scheme. This is because the devised approach does not alter the governing equation, yet it targets an improved particle distribution in the physical domain.
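Steps 3(a)-3(e) can be collected into one per-cell routine. A sketch (the helper name, the similar-weights test, and the rounding of $N_j$ are our assumptions; the two branches stand in for Algorithms 1 and 2):

```python
import numpy as np

rng = np.random.default_rng(2)

def control_cell(vel, pos, weights, w_cell, h):
    """Per-cell particle number control, steps 3(a)-3(e) of Sec. III C.
    Returns resampled (velocities, positions, weights); every new
    particle carries the uniform cell weight w_cell."""
    n = len(weights)
    n_target = max(1, int(round(float(np.sum(weights)) / w_cell)))  # 3(a)
    uniform = np.allclose(weights, weights[0])
    if n_target == n and uniform:                        # 3(b): nothing to do
        return vel, pos, np.full(n, w_cell)
    if uniform:                                          # 3(d): Algorithm 1
        parents = rng.integers(0, n, size=n_target)
    else:                                                # 3(d): Algorithm 2
        s = np.cumsum(weights) / np.sum(weights)
        r = (np.arange(n_target) + rng.random(n_target)) / n_target
        parents = np.searchsorted(s, r)
    noise = rng.normal(0.0, h, size=(n_target, vel.shape[1]))
    # 3(e): the resampled set replaces the originals.
    return vel[parents] + noise, pos[parents], np.full(n_target, w_cell)
```

Step 3(c) would supply h from Eq. (12) or the matrix (14); it is passed in here to keep the sketch self-contained.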

IV. SIMULATION STUDIES

While improvements resulting from particle number control schemes can be highlighted in scenarios with sharp density variations, in the following simulation study, we pursue another objective. In particular, we evaluate whether the proposed algorithm achieves the target of a uniformly distributed set of particles while the observables remain unbiased. Even though the results discussed here serve as a preliminary evaluation of the algorithm, we anticipate that more challenging test cases (omitted for brevity) can be treated by the introduced scheme in a straightforward manner. Nevertheless, we postpone those more demanding simulation settings to future studies.

The test case is a two-dimensional supersonic flow around an inclined plate as studied in Ref. 2. Flow of argon at Mach 5 is considered around a flat plate with an incidence angle of 30°.

##### Citations

More filters

••

TL;DR: In this paper, a modified particle based Ellipsoidal Statistical Bhatnagar-Gross-Krook (ESBGK) solver is used to simulate micro-nozzles.

Abstract: This paper demonstrates the efficiency of a modified particle based Ellipsoidal Statistical Bhatnagar–Gross–Krook (ESBGK) solver to simulate micro-nozzles. For this, the common particle ESBGK algorithm is adapted to handle variable particle weights including the creation of additional particles in regions with low statistical samples and merging of particles in dense regions. After the description of the methods and their implementation, the simulation results of a micro-nozzle geometry using the Direct Simulation Monte Carlo, the common particle ESBGK, and the proposed modified ESBGK method are compared concerning accuracy and efficiency. All three methods show good agreement; however, the modified ESBGK method has the highest efficiency, saving a factor of around 500 of computational time to produce a comparable statistical sample size in the rarefied expansion region.

8 citations

••

TL;DR: In this paper , a new scheme for simulation of collisions with multiple possible outcomes in variable-weight DSMC computations is proposed, which offers a significant improvement in the level of stochastic noise over the usual acceptance-rejection algorithm, even when controlling for the slight additional computational costs.

1 citations

••

TL;DR: In this paper , the accuracy and efficiency of particle-based Fokker-Planck (FP) method has been extensively studied to reduce the computational costs of the direct simulation Monte Carlo method for near-continuum flows.

Abstract: In the past decade, the particle-based Fokker–Planck (FP) method has been extensively studied to reduce the computational costs of the direct simulation Monte Carlo method for near-continuum flows. The FP equation describes a continuous stochastic process through the combined effects of systematic forces and random fluctuations. A few different FP models have been proposed to fulfill consistency with the Boltzmann equation, but a comprehensive comparative study is needed to assess their performance. The present paper investigates the accuracy and efficiency of four different FP models—Cubic-FP, ellipsoidal-statistical FP (ES-FP), and quadratic entropic FP (Quad-EFP)—under rarefied conditions. The numerical test cases include one-dimensional Couette and Fourier flows and an argon flow past a cylinder at supersonic and hypersonic velocities. It is found that the Quad-EFP model gives the best accuracy in low-Mach internal flows, whereas the ES-FP model performs best at predicting shock waves. In terms of numerical efficiency, the Linear-FP and ES-FP models run faster than the Cubic-FP and Quad-EFP models due to their simple algebraic nature. However, it is observed that the computational advantages of the FP models diminish as the spatiotemporal resolution becomes smaller than the collisional scales. In order to take advantage of their numerical efficiency, high-order joint velocity-position integration schemes need to be devised to ensure the accuracy of FP models with very coarse resolution.


TL;DR: This article proposes a partition-coupled Eulerian–Lagrangian method (PCELM) for tracking a free interface and a contact discontinuity of a compressible fluid with large deformation.

Abstract: We present a partition-coupled Eulerian–Lagrangian method (PCELM) for accurately tracking a free interface and a contact discontinuity of the compressible fluid with large deformation. This method tracks the interface by arranging splittable Lagrangian particles on an Eulerian grid and adopts a partition-weighted bidirectional mapping between particles and grids using a cubic B-spline as interpolation function. PCELM suppresses oscillation of the discontinuous surface by this partition-weighted remapping method and solves the problem of numerical fracture by a particle splitting method. A virtual particle method is also proposed to deal with discontinuity of particle flow at the boundary and to maintain interpolation accuracy at the boundary. The conservation of mass, momentum, and energy of PCELM is proved by conservation analysis. Accuracy tests and simulations of discontinuous surfaces and free interfaces are performed to verify the accuracy and stability of PCELM. The results show that PCELM has strong energy conservation and low energy dissipation and that it is not only better at suppressing oscillations than the original method, but can also simulate a compressible fluid with large deformation more accurately than weighted essentially nonoscillatory schemes.
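The particle–grid mapping described above uses a cubic B-spline as the interpolation function. As a rough illustration (not the paper's implementation; the function name is ours), the standard one-dimensional cubic B-spline weight can be sketched as:

```python
import numpy as np

def cubic_bspline_weight(r):
    """Cubic B-spline weight as a function of the normalized particle-node
    distance r = |x_p - x_g| / dx; compact support |r| < 2, and the weights
    over neighboring grid nodes sum to one (partition of unity)."""
    r = np.abs(np.asarray(r, dtype=float))
    w = np.zeros_like(r)
    inner = r < 1.0
    outer = (r >= 1.0) & (r < 2.0)
    w[inner] = 2.0 / 3.0 - r[inner]**2 + 0.5 * r[inner]**3
    w[outer] = (2.0 - r[outer])**3 / 6.0
    return w
```

The partition-of-unity property is what makes such a weight suitable for conservative particle–grid remapping: for any particle position, the weights over the surrounding grid nodes sum to one.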

##### References



TL;DR: This paper discusses the estimation of a probability density function and the determination of its mode from i.i.d. samples; only estimates which are consistent and asymptotically normal are constructed.

Abstract: Given a sequence of independent identically distributed random variables with a common probability density function, the problems of estimating the probability density function and of determining its mode are discussed. Only estimates which are consistent and asymptotically normal are constructed.

10,114 citations
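The fixed-bandwidth (Parzen-window) estimator discussed in this reference admits a compact sketch. Assuming a Gaussian kernel and a user-chosen bandwidth `h` (both illustrative choices, not prescribed by the paper), a minimal one-dimensional version might look like:

```python
import numpy as np

def parzen_kde(samples, x, h):
    """Fixed-bandwidth Parzen-window density estimate with a Gaussian
    kernel: f_hat(x) = (1 / (n*h)) * sum_i K((x - X_i) / h)."""
    samples = np.asarray(samples, dtype=float)
    x = np.asarray(x, dtype=float)
    u = (x[:, None] - samples[None, :]) / h          # pairwise scaled distances
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # standard normal kernel
    return k.sum(axis=1) / (samples.size * h)
```

The estimate is nonnegative by construction and integrates to one, which is exactly what makes KDE usable as a resampling density in particle schemes.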



TL;DR: A new adaptive kernel density estimator based on linear diffusion processes is proposed; it builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate, together with a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods.

Abstract: We present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods. We present simulation examples in which the proposed approach outperforms existing methods in terms of accuracy and reliability.

1,410 citations
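The pilot-estimate idea underlying adaptive smoothing can be illustrated with a simple sample-point adaptive estimator in the spirit of Abramson's square-root law. Note this is only a sketch of the pilot-density mechanism, not the diffusion-based estimator of the cited paper, and the pilot bandwidth `h0` is a free parameter:

```python
import numpy as np

def adaptive_kde(samples, x, h0):
    """Sample-point adaptive KDE: a fixed-bandwidth pilot estimate sets
    per-sample bandwidths h_i = h0 * sqrt(g / pilot_i), shrinking the
    kernels where the pilot density is high (Abramson's square-root law)."""
    s = np.asarray(samples, dtype=float)
    n = s.size
    # 1) pilot: fixed-bandwidth Gaussian KDE evaluated at the samples
    u = (s[:, None] - s[None, :]) / h0
    pilot = (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).sum(axis=1) / (n * h0)
    # 2) local bandwidths, normalized by the geometric mean of the pilot
    g = np.exp(np.mean(np.log(pilot)))
    h = h0 * np.sqrt(g / pilot)
    # 3) final estimate with one bandwidth per sample point
    v = (np.asarray(x, dtype=float)[:, None] - s[None, :]) / h[None, :]
    k = np.exp(-0.5 * v**2) / (np.sqrt(2.0 * np.pi) * h[None, :])
    return k.sum(axis=1) / n
```

Compared with a fixed bandwidth, the local bandwidths widen the kernels in sparsely sampled tails and narrow them near the mode, which is the behavior the pilot estimate is meant to provide.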



TL;DR: In this article, the limit of the random empirical measures associated with the Bird algorithm is shown to be a deterministic measure-valued function satisfying an equation close (in a certain sense) to the Boltzmann equation.

Abstract: Bird's direct simulation Monte Carlo method for the Boltzmann equation is considered. The limit (as the number of particles tends to infinity) of the random empirical measures associated with the Bird algorithm is shown to be a deterministic measure-valued function satisfying an equation close (in a certain sense) to the Boltzmann equation. A Markov jump process is introduced, which is related to Bird's collision simulation procedure via a random time transformation. Convergence is established for the Markov process and the random time transformation. These results, together with some general properties concerning the convergence of random measures, make it possible to characterize the limiting behavior of the Bird algorithm.

476 citations
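Bird's collision simulation procedure, referenced above, is at its core an acceptance-rejection step over candidate particle pairs in a cell. A minimal hard-sphere, equal-mass sketch (the function name, candidate count, and the `vr_max` majorant are illustrative assumptions, not the paper's notation) could read:

```python
import numpy as np

def collide_pairs(v, n_candidates, vr_max, rng):
    """One Bird-style collision step in a cell (hard spheres, equal masses):
    each candidate pair is accepted with probability |v_i - v_j| / vr_max,
    and accepted pairs scatter isotropically, conserving momentum and energy."""
    n = len(v)
    for _ in range(n_candidates):
        i, j = rng.choice(n, size=2, replace=False)
        g = np.linalg.norm(v[i] - v[j])              # relative speed
        if rng.random() * vr_max < g:                # acceptance-rejection
            # post-collision relative velocity: uniform direction on the sphere
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            vr_new = g * np.array([sin_t * np.cos(phi),
                                   sin_t * np.sin(phi),
                                   cos_t])
            vcm = 0.5 * (v[i] + v[j])                # center-of-mass velocity
            v[i] = vcm + 0.5 * vr_new
            v[j] = vcm - 0.5 * vr_new
    return v
```

Because each accepted collision preserves the center-of-mass velocity and the relative speed, total momentum and kinetic energy are conserved exactly, which is the invariance the convergence analysis relies on.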


##### Frequently Asked Questions (2)

###### Q2. What have the authors stated for future works in "Particle number control for direct simulation monte-carlo methodology using kernel estimates" ?

Future studies will focus on assessing the devised particle-number control scheme for flows with sharper density gradients.