
Numerical strategy for unbiased homogenization of random materials

06 Jul 2013-International Journal for Numerical Methods in Engineering (John Wiley & Sons, Ltd)-Vol. 95, Iss: 1, pp 71-90
TL;DR: In this paper, the authors propose a self-consistent approach to predicting the value of homogenized tensors in elliptic problems, solving a coupled problem in which the complex microstructure is confined to a small region and surrounded by a tentative homogenized medium.
Abstract: SUMMARY This paper presents a numerical strategy that allows one to lower the costs associated with the prediction of the value of homogenized tensors in elliptic problems. This is performed by solving a coupled problem, in which the complex microstructure is confined to a small region and surrounded by a tentative homogenized medium. The characteristics of this homogenized medium are updated using a self-consistent approach and are shown to converge to the actual solution. The main feature of the coupling strategy is that it really couples the random microstructure with the deterministic homogenized model, and not one (deterministic) realization of the random medium with a homogenized model. The advantages of doing so are twofold: (a) the influence of the boundary conditions is significantly mitigated, and (b) the ergodicity of the random medium can be used in full through appropriate definition of the coupling operator. Both of these advantages imply that the resulting coupled problem is less expensive to solve, for a given bias, than the computation of the homogenized tensor using classical approaches. Examples of 1-D and 2-D problems with continuous properties, as well as a 2-D matrix-inclusion problem, illustrate the effectiveness and potential of the method. Copyright © 2013 John Wiley & Sons, Ltd.

Summary

1. INTRODUCTION

  • There exist to date fewer theoretical results on the homogenization of random media than of periodic media.
  • It has been proved [8, 15] that, whatever the choice of boundary conditions, the limit of the estimated tensors was indeed the effective tensor.
  • It has been observed that, even though these schemes converge, they do so to biased values.
  • The main feature of the coupling strategy is that it really couples the random microstructure with the homogenized model, and not each realization of the random medium with a homogenized model, in a fully independent manner.
  • Concluding remarks are provided in Section 6.

2. HOMOGENIZATION OF A RANDOM MICROSTRUCTURE

  • The authors describe the random medium for which they intend to find the homogenized effective properties.
  • The authors also recall some definitions related to the homogenization of the heat equation.

2.1. Definition of the model and hypotheses on the random field

  • Let us introduce a domain D ⊂ ℝᵈ, with a typical length scale L, a loading field f(x) and a field u(x) governed by the heat equation −∇ · (k(x) ∇u(x)) = f(x), for a random field k(x) fluctuating over a length scale ℓ_c (usually defined through the correlation length), and with appropriate boundary conditions.
  • In order to obtain a homogenized material, the random parameter field k(x) is required to verify certain hypotheses.

2.2. Definition of homogenization

  • The following sequence of problems is therefore considered: −∇ · (k_ε(x) ∇u_ε(x)) = f(x), where k_ε(x) = k(x/ε), and with appropriate boundary conditions, for instance u_ε(x) = 0, ∀x ∈ ∂D (see Subsection 2.3 for the definition of Dirichlet and Neumann approximations of the homogenized coefficients).
  • Under suitable hypotheses, in particular on the random field k_ε(x) (described in the previous Subsection 2.1), each of these problems admits a unique solution.
  • Using different sets of hypotheses and with different methods, many authors (see the references provided in the introduction) have shown that, independently of the load f(x), the sequence of solutions u_ε(x) converges when ε → 0 to the solution u*(x) of the deterministic problem −∇ · (K* ∇u*(x)) = f(x), with corresponding boundary conditions.
  • A priori, the effective coefficient K * is a full second-order tensor, meaning that the homogenized material potentially exhibits anisotropy.
  • The constructive definition of the effective tensor requires the solution of the so-called corrector problem, which states: find w_ε(x) such that, ∀x ∈ D, almost surely: ∇ · (k_ε(x)(I + ∇w_ε(x))) = 0.

2.3. Numerical estimation of homogenized tensor

  • A very efficient alternative to these two techniques consists in using periodic boundary conditions (see for instance [28] for mathematical details, or [29] for an efficient FFT implementation of this technique).
  • Note that the tensors Ǩ_ε^N and K̂_ε^N (as well as any other obtained through a similar approach with other boundary conditions) obviously depend on both the number N of Monte Carlo samples that are used to approximate the mathematical expectation and on the value of ε.
  • They also depend on the boundary conditions that were used to approximate the corrector problems and are therefore a priori different from one another.
  • For elliptic equations, the influence of these boundary conditions disappears for ε → 0 (see the proof for the KUBC, SUBC, and periodic boundary conditions in [15]), but may become extremely important for small domains (see the examples in Section 5).

2.4. Particular case in 1D

  • The 1D case is very particular, in the sense that the corrector problem can be solved analytically, whatever the choice of probability law for k_ε.
  • The homogenized tensor (in that case a scalar) is then K* = E[k⁻¹]⁻¹.
  • It is interesting to note that the KUBC and SUBC approximates can also be computed for any ratio ε.
  • Indeed, simple algebraic manipulation yields Ǩ_ε = E[K_D] and K̂_ε = E[K_D⁻¹]⁻¹ = K*, where K_D = |D| / ∫_D (k_ε(x))⁻¹ dx. Note, however, that this property (the SUBC estimate being unbiased) is very specific to the one-dimensional case, and is not true in higher dimensions.

2.5. Particular case in 2D

  • As noted in [30, chapter 3] (see a proof in the book in a more general setting, and references therein for original contributions to that result), the homogenized coefficient of this random medium is then necessarily equal to K* = √c I₂, where I₂ is the two-dimensional identity tensor.
  • Note that, in the two-dimensional case, there is no analytic result for the value of the KUBC and SUBC homogenized approximates at finite ε.
  • The following bounds always hold true (see [27] for example): K̂_ε ≤ K* ≤ Ǩ_ε.
  • Further, as will be illustrated in the examples at the end of this paper, both the KUBC and SUBC estimates are biased for finite ε (see Section 5).

3. COUPLING OF A RANDOM MICROSTRUCTURE WITH AN EFFECTIVE MODEL

  • In the previous section, the authors have introduced classical numerical techniques to obtain estimates of the homogenized tensors.
  • These estimates are widely developed and used in the literature, but are unfortunately biased in the general case.
  • These weight functions serve mainly to split the total energy appropriately between the two models.
  • Hence the mediator space W c can be seen as composed of functions with a spatially-varying ensemble average and perfectly spatially-correlated randomness.
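The energy-splitting idea can be sketched with explicit weight functions: a weight α(x) for the stochastic model and β(x) = 1 − α(x) for the homogenized one, constant in each pure zone and ramping linearly across the coupling zone. A minimal sketch, in which the domain layout, the linear ramp, and the small offset `eps0` are illustrative assumptions rather than the paper's actual choices:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 501)

def alpha(x, c0=0.4, c1=0.6, eps0=0.01):
    """Weight of the stochastic model: ~1 inside its domain, ~0 outside,
    with a linear ramp across the coupling zone [c0, c1]; the offset eps0
    keeps both weights strictly positive (illustrative choice)."""
    w = np.clip((c1 - x) / (c1 - c0), 0.0, 1.0)
    return eps0 + (1.0 - 2.0 * eps0) * w

a = alpha(x)        # weight of the stochastic model
b = 1.0 - a         # weight of the homogenized model: the two sum to one
```

By construction the two weights partition the total energy pointwise, which is the role the text assigns to them.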

4. A NEW METHOD FOR THE DETERMINATION OF THE HOMOGENIZED TENSOR

  • In the previous two sections, the authors have described classical approaches to the numerical homogenization of random structures (Section 2) and a new coupling method between stochastic and deterministic models (Section 3).
  • The authors will show here how the latter method can be used for the design of a novel numerical homogenization technique for random media.

4.1. Principle of the method

  • The general motivation for the design of this technique lies in the observation that the biases observed in the SUBC and KUBC estimates of the homogenized coefficients originate from the boundary conditions chosen for each realization of the random corrector problems.
  • If the tentative medium is indeed the homogenized medium corresponding to the heterogeneous fine scale model, then it is expected that the influence of the boundary conditions will be reduced.
  • In particular, on one side of the interface, the properties fluctuate, while they are constant on the other side.
  • The approach that the authors propose here builds on this initial idea.
  • Apart from this idea of computing a coupled problem, the authors also introduce an optimization scheme to recover the value of the homogenized tensor, which is indeed the objective of their work.

4.2. Description of the algorithm

  • Note that the iterative loop can be efficiently implemented through classical general-purpose optimization schemes.
  • In particular, the authors have used the Nelder-Mead algorithm (see [36] for details), but others could be considered.
  • Similarly, the authors chose as initial value K 0 = E[k ]I, but other choices are equally reasonable.
  • The authors have chosen here to drive the iterative scheme with the minimization of the potential ∫_D ‖∇u − I‖ dx, consistent with the intuitive idea developed above (Section 4.1).
  • The results obtained were exactly the same.
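The shape of such an iterative loop can be sketched on a 1D toy configuration. The paper drives the loop with Nelder-Mead on the coupled Arlequin problem; the plain fixed-point update below is a simplified stand-in (the text notes that other schemes could be considered), and `apparent_conductivity`, the segment lengths, and the uniform law are hypothetical illustration choices:

```python
import numpy as np

def apparent_conductivity(k_sample, K_tent, l_micro, l_homo):
    """Apparent conductivity of a 1D bar made of a heterogeneous segment
    (summarized by its harmonic mean k_sample) in series with a tentative
    homogenized segment of conductivity K_tent."""
    L = l_micro + l_homo
    return L / (l_micro / k_sample + l_homo / K_tent)

rng = np.random.default_rng(0)
# Harmonic means of N Monte Carlo realizations of the microstructure segment
k_samples = rng.uniform(0.5, 2.0, size=200)

K = k_samples.mean()    # initial tentative value, K_0 = E[k]
for it in range(200):
    # self-consistent update: the tentative medium takes the ensemble-average
    # apparent conductivity computed with the current tentative medium
    K_new = apparent_conductivity(k_samples, K, l_micro=1.0, l_homo=4.0).mean()
    if abs(K_new - K) < 1e-12:
        break
    K = K_new
```

The update is a contraction here, so the loop converges to the unique self-consistent value; the realizations `k_samples` never change across iterations, only the tentative coefficient does.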

4.3. Evaluation of numerical costs

  • Let us finally discuss the comparative numerical costs between the standard numerical homogenization schemes described in Section 2.3 and the proposal made here.
  • Basically, their approach becomes interesting when the gain from the last item overcomes the costs induced by the first three.
  • In that respect, it should be noted that the discretization of the functional space V ⊂ H 1 (D) is very coarse compared to the discretization of H 1 (D) because the mechanical properties are constant in the homogenized model while they are heterogeneous over the stochastic model.
  • Finally, concerning the iterative scheme, it should be noted that as the realizations of the random model do not change between two iterations (only the homogenized model evolves), the assembly of the Monte Carlo samples of stiffness matrices does not have to be repeated.
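The matrix-caching argument can be sketched with a toy 1D finite-element assembly; `assemble_sample_matrix` and all sizes are hypothetical, and the homogenized block is reduced to a rescaling of a unit-coefficient template:

```python
import numpy as np

def assemble_sample_matrix(k_elem):
    """Stiffness matrix of a 1D linear finite-element bar (unit elements),
    for one realization of the element-wise conductivities k_elem."""
    n = len(k_elem)
    A = np.zeros((n + 1, n + 1))
    for e, k in enumerate(k_elem):
        A[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A

rng = np.random.default_rng(1)
samples = [rng.uniform(0.5, 2.0, size=10) for _ in range(5)]

# Monte Carlo sample matrices: assembled once, before the iterations start
sample_matrices = [assemble_sample_matrix(k) for k in samples]

template = assemble_sample_matrix(np.ones(10))   # unit-coefficient template
K_homog = 1.0
for iteration in range(3):
    # ... solve the coupled systems with sample_matrices and the homogenized
    # block below, then update K_homog with the optimization step ...
    A_homog = K_homog * template   # only this cheap rescaling is redone
```

The expensive per-sample assemblies live outside the loop; each iteration only touches the (coarse, constant-coefficient) homogenized block.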

5. APPLICATIONS

  • The authors consider the implementation of their homogenization approach on two problems for which analytical solutions are available and one classical problem in periodic homogenization.
  • The software used for the solution of the coupled Arlequin systems is freely available at https://github.com/cottereau/CArl.
  • The realizations of the continuous random fields k (x) have been generated using the spectral representation method [37] , and its Fast Fourier Transform implementation.
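A minimal sketch of the spectral representation method with a triangular power spectrum, in the direct cosine-superposition form rather than the paper's FFT implementation; the squashing of the Gaussian field into the bounded interval [0.5, 2.0] is an illustrative choice made so that the coercivity hypothesis of Section 2.1 holds:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)

# Triangular power spectrum on [0, kappa_max], normalized to unit variance;
# its correlation function is a squared cardinal sine
kappa_max, n_modes = 20.0, 256
kappa = (np.arange(n_modes) + 0.5) * kappa_max / n_modes
S = (1.0 - kappa / kappa_max) / (kappa_max / 2.0)
dk = kappa_max / n_modes

# Spectral representation: superposition of cosines with random phases
x = np.linspace(0.0, 50.0, 2000)
phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
amp = np.sqrt(2.0 * S * dk)
g = (amp[:, None] * np.cos(np.outer(kappa, x) + phases[:, None])).sum(axis=0)

# Squash the (approximately Gaussian) field into [0.5, 2.0] through its CDF,
# so that the resulting parameter field is bounded and uniformly coercive
u = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in g]))
k_field = 0.5 + 1.5 * u
```

The FFT implementation mentioned in the text evaluates the same cosine sum at once over a regular grid; the direct sum is kept here for readability.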

5.1.1. Description of the model. We consider a two-dimensional problem, within a domain

  • The power spectrum is considered triangular (which corresponds to a squared cardinal sine correlation), with correlation length ℓ_c.
  • 5.1.2. Computation of KUBC and SUBC estimates.
  • First, the authors consider the KUBC and SUBC estimates of the homogenized coefficient, as described in Section 2.3.
  • In the next section, the authors present a 1D example, for which they still know analytically the homogenized tensor, but for which the random medium is not locally invariant by inversion.
  • Arlequin estimate K N obtained for different values of the initial coefficient K 0 initializing the optimization in algorithm 1, and corresponding number of iterations for convergence.

5.2. 1D bar with random properties

  • This second example is very similar to the previous one, except that it is one-dimensional.
  • As in the 2D case (Section 5.1.2), the authors first consider the KUBC and SUBC estimates of the homogenized coefficient.
  • Note also that, if the authors had used Neumann boundary conditions for their Arlequin estimate (results not shown), they would have obtained the same results as the SUBC.

5.3. 2D periodized bi-phasic material with spherical inclusions

  • This last example aims, on the one hand, to present an example with discontinuous properties and, on the other hand, to compare the behavior of the method proposed here to the classical method of periodic homogenization; the inclusions have an average concentration of c = 0.3.
  • The computational domain is then periodized, that is to say the centers inside the computational cell D are repeated outside of it before the spheres are constructed.
  • Note that the discretization of the spheres is exaggerated in the smaller cells in order not to keep the shape of the inclusions exactly the same (up to homothety) in all the cases considered.
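The periodization step described above can be sketched as follows; the number of inclusions, their radius, and the pixel grid are illustrative choices (in particular, radius < 0.5 so that only the 8 neighbouring copies of the unit cell matter):

```python
import numpy as np

rng = np.random.default_rng(3)
n_inclusions, radius = 12, 0.12                  # radius < 0.5 (cell size 1)
centers = rng.uniform(0.0, 1.0, size=(n_inclusions, 2))

# Periodization: the centers inside the computational cell are repeated in
# the 8 neighbouring copies before the spheres are constructed
shifts = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
periodized = (centers[None, :, :] + shifts[:, None, :]).reshape(-1, 2)

# Indicator of the inclusion phase on a pixel grid of the computational cell
n_pix = 64
grid = (np.arange(n_pix) + 0.5) / n_pix
X, Y = np.meshgrid(grid, grid, indexing="ij")
pix = np.stack([X.ravel(), Y.ravel()], axis=1)
d2 = ((pix[:, None, :] - periodized[None, :, :]) ** 2).sum(axis=-1)
phase = (d2.min(axis=1) <= radius ** 2).reshape(n_pix, n_pix)

concentration = phase.mean()                     # realized volume fraction
```

Replicating the centers is equivalent to measuring minimum-image (periodic) distances, which is what makes the resulting microstructure exactly periodic.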

5.3.2. Computation of KUBC, SUBC and periodic estimates.

  • The KUBC, SUBC and periodic estimates are computed for different values of the number N of realizations of the random medium over which averages are taken and, each time, for n = 5 different ensembles of N realizations.
  • The second observation concerns the first case.
  • As the realizations are all homogeneous, the periodic boundary conditions provide exactly the same estimates as the KUBC, which are very poor.
  • On the other hand, the Arlequin estimate provides a reasonable value of the homogenized coefficient.

6. CONCLUSIONS AND PROSPECTS

  • The authors have introduced a new computational method for the homogenization of random media.
  • It is based on two major ingredients: (1) a stochastic-deterministic coupling method that limits the influence of the boundary conditions in the homogenization experiments, and (2) an iterative technique for updating the value of the tentative deterministic model.
  • The results obtained for the chosen 2D example are spectacular.
  • In that case, the bias observed in the KUBC and SUBC estimates totally disappears, even for a very large correlation length ratio ε = ℓ_c/L.
  • Other promising examples include the coupling of wave propagation models with kinetic models (where the variable of interest is not a displacement field but a phase-space energy density) [38, 39] .


INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING
Int. J. Numer. Meth. Engng 2013; 00:1–20
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nme
Numerical strategy for unbiased homogenization of random materials
R. Cottereau
Laboratoire MSSMat UMR 8579, École Centrale Paris, CNRS, grande voie des vignes, F-92295 Châtenay-Malabry, France
SUMMARY
This paper presents a numerical strategy that allows one to lower the costs associated with the prediction of the value of homogenized tensors in elliptic problems. This is done by solving a coupled problem, in which the complex microstructure is confined to a small region and surrounded by a tentative homogenized medium. The characteristics of this homogenized medium are updated using a self-consistent approach and are shown to converge to the actual solution. The main feature of the coupling strategy is that it really couples the random microstructure with the deterministic homogenized model, and not one (deterministic) realization of the random medium with a homogenized model. The advantages of doing so are twofold: (a) the influence of the boundary conditions is significantly mitigated, and (b) the ergodicity of the random medium can be used in full through appropriate definition of the coupling operator. Both of these advantages imply that the resulting coupled problem is less expensive to solve, for a given bias, than the computation of the homogenized tensor using classical approaches. Examples of 1D and 2D problems with continuous properties, as well as a 2D matrix-inclusion problem, illustrate the effectiveness and potential of the method. Copyright © 2013 John Wiley & Sons, Ltd.
KEY WORDS: homogenization; random material; Arlequin method; self-consistent model; numerical mesoscope; representative volume element
1. INTRODUCTION
There exist to date fewer theoretical results on the homogenization of random media than of
periodic media. Nevertheless, some results, in the case of linear elliptic partial differential equations
for example, have shown that one can find a uniform deterministic tensor that produces an
accurate approximation of the original solution obtained with the fluctuating stochastic tensor.
Such convergence results have been made possible by using the energy method [1] of Tartar [2],
by considering the direct construction of the so-called correctors [3], by resorting to strong G-convergence of operators in a general stochastic setting [4], or by using Γ-convergence [5].
Convergence was obtained either in a mean-square sense (for example, in [6, 7] or [1]) or in an
almost-sure sense (for example in [3]). Later on, more complex equations were also treated, and
weaker hypotheses on the random fields introduced (see for instance [8, 9, 10, 11, 12]).
However, the actual computation of the value of this effective tensor is not always a simple task, apart from some particular cases for which analytical (1D problems in particular) or specific numerical solutions are available (see for instance [13, 14] in a random quasi-periodic setting).
Correspondence to: regis.cottereau@ecp.fr
Contract/grant sponsor: ANR project TYCHE (Advanced methods using stochastic modeling in high dimension for uncertainty modeling, quantification and propagation in computational mechanics of solids and fluids) and DIGITEO Région Île-de-France; contract/grant numbers: ANR-2010-BLAN-0904 and 2009-26D
Copyright © 2013 John Wiley & Sons, Ltd.
Prepared using nmeauth.cls [Version: 2010/05/13 v3.00]

Indeed, the prediction of the effective tensor involves the solution of a corrector problem which
is a priori posed on a domain of infinite size. In order to approximate the effective tensor through
numerical simulations, the domain therefore has to be truncated at some finite distance and boundary
conditions to be introduced. For these bounded domains, the estimated tensor is then a random
variable, the variance of which goes to zero when the size of the domain is increased. It has been
proved [8, 15] that, whatever the choice of boundary conditions, the limit of the estimated tensors
was indeed the effective tensor. However, convergence with respect to the size of the domain may be
very slow. Alternatively, it is also possible (see [8, 16]) to use a smaller domain and perform averages
over several realizations of the random medium. Several authors have followed this path (see for
instance [17, 18, 19, 20]), even putting up schemes to accelerate convergence (through angular
averaging in [21] among others, or through the use of antithetic variables in [22]). However, it has
been observed that, even though these schemes converge, they do so to biased values. Further, these
biases only cancel when the size of the domain becomes very large (with respect to the correlation
length).
This paper presents a numerical strategy to identify the homogenized tensor of a random medium. It makes it possible to extend the size of the domain in a cost-effective manner, and to adjust simultaneously the size of the domain and the discretization along the random dimension (the number of Monte Carlo samples) to yield the effective tensor. This is achieved through the coupling of the
random microstructure with a homogenized macrostructure, the characteristics of which are updated
iteratively using a self-consistent approach. Using this coupled approach, the size of the complex
microstructure is limited, while the boundary conditions are pushed away and their influence limited
through the tentative homogenized medium. The main feature of the coupling strategy is that it
really couples the random microstructure with the deterministic homogenized model, and not each
(deterministic) realization of the random medium with a homogenized model, in a fully independent
manner. Hence, the ergodicity of the random medium can be used in full to accelerate convergence
and minimize the bias introduced by the finite size of the domain.
The idea of coupling the microstructure to a homogenized medium to limit the influence of the
boundary conditions was already developed in [23] and [24], but with three major differences:
(1) the microstructure is here random, while it was deterministic (and heterogeneous) in the previous
papers, (2) the coupling is here made over a volume rather than along a surface, and (3) the approach
is coupled to an iterative scheme in order to identify the value of the effective tensor, while it was
previously only used to perform direct computations, for a given value of the homogenized tensor.
In Section 2 of the paper, the random medium and model equation that we consider are described
in detail, and the classical Dirichlet and Neumann homogenization schemes are presented. In
Section 3, we briefly recall the main ingredient of our approach, which is the deterministic-stochastic
coupling scheme, previously described in [25, 26]. Section 4 concentrates on the main novelty of
this paper, which is the iterative technique to derive the homogenized tensor. Finally, the last section
presents a series of 1D and 2D experiments to demonstrate the effectiveness and potential of the
proposed approach. Concluding remarks are provided in Section 6.
Throughout the paper, we will use bold characters for random quantities, lowercase characters for
scalars and vectors, and uppercase characters for matrices and tensors.
2. HOMOGENIZATION OF A RANDOM MICROSTRUCTURE
In this section, we describe the random medium for which we intend to find the homogenized
effective properties. We also recall some definitions related to the homogenization of the heat
equation.
2.1. Definition of the model and hypotheses on the random field
Let us introduce a domain D ⊂ ℝᵈ, with a typical length scale L, a (deterministic) loading field f(x) and a field u(x) governed by the heat equation: find u(x) ∈ L²(Θ, H¹(D)) such that, ∀x ∈ D, almost surely:

−∇ · (k(x) ∇u(x)) = f(x),   (1)
for a random field k(x) fluctuating over a length scale ℓ_c (usually defined through the correlation length), and with appropriate boundary conditions. Here, (Θ, F, P) is a complete probability space, with Θ a set of outcomes, F a σ-algebra of events in Θ, and P : F → [0, 1] a probability measure.
In order to obtain a homogenized material, the random parameter field k(x) is required to verify certain hypotheses. In particular, it is assumed to be bounded and uniformly coercive, that is to say ∃ κ_m, κ_M ∈ (0, +∞) such that

0 < κ_m ≤ k(x) ≤ κ_M < ∞, ∀x ∈ D, almost surely.   (2)
Also, it is required to be stationary and ergodic.
2.2. Definition of homogenization
Homogenization deals with cases when the ratio ε = ℓ_c/L is small. We then scale the fluctuations of the microstructure by 1/ε, and look at the fluctuations of the solution u(x) at the original scale. The following sequence of problems is therefore considered: find u_ε(x) ∈ L²(Θ, L²(D)) such that, ∀x ∈ D, almost surely:

−∇ · (k_ε(x) ∇u_ε(x)) = f(x),   (3)

where k_ε(x) = k(x/ε), and with appropriate boundary conditions, for instance u_ε(x) = 0, ∀x ∈ ∂D (see Subsection 2.3 for the definition of Dirichlet and Neumann approximations of the homogenized coefficients). Under suitable hypotheses, in particular on the random field k_ε(x) (described in the previous Subsection 2.1), each of these problems admits a unique solution.
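The role of ε can be illustrated numerically in 1D, where the apparent coefficient of a realization is a spatial harmonic mean: as ε decreases, ergodicity makes it concentrate around a deterministic effective value. The piecewise-constant field and the uniform law below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def K_D(eps, rng):
    """Spatial harmonic mean over D = (0, 1) of k_eps(x) = k(x / eps), for an
    illustrative stationary ergodic field k that is piecewise constant on unit
    cells, with i.i.d. values uniform in [0.5, 2.0]."""
    n_cells = int(round(1.0 / eps))          # cells of size eps after rescaling
    k = rng.uniform(0.5, 2.0, size=n_cells)
    return 1.0 / np.mean(1.0 / k)

K_star = 1.5 / np.log(4.0)   # exact E[k^-1]^-1 for k ~ U(0.5, 2.0)
coarse = K_D(0.25, rng)      # few cells: the apparent coefficient is scattered
fine = K_D(1e-5, rng)        # many cells: it concentrates near K_star
```

With only a few cells per domain the apparent coefficient fluctuates from one realization to the next; with many cells the law of large numbers pins it near the effective value.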
Figure 1. Description of one realization of the random medium (left), with fluctuating coefficient k_ε(x), and corresponding effective medium (right), with constant deterministic effective tensor K*.
Using different sets of hypotheses and with different methods, many authors (see the references provided in the introduction) have shown that, independently of the load f(x), the sequence of solutions u_ε(x) converges when ε → 0 to the solution u*(x) of the following deterministic problem: find u*(x) such that, ∀x ∈ D:

−∇ · (K* ∇u*(x)) = f(x),   (4)

with corresponding boundary conditions. A priori, the effective coefficient K* is a full second-order tensor, meaning that the homogenized material potentially exhibits anisotropy.
The constructive definition of the effective tensor requires the solution of the so-called corrector problem, which states: find w_ε(x) such that, ∀x ∈ D, almost surely:

∇ · (k_ε(x) (I + ∇w_ε(x))) = 0.   (5)

As w_ε is a vector, ∇w_ε(x) is a tensor, and this equation is a d-dimensional equation. The tensor I is the identity tensor in ℝᵈ × ℝᵈ. The homogenized tensor is then defined as:

K* = lim_{ε→0} E[(I + ∇w_ε(x))ᵀ k_ε(x) (I + ∇w_ε(x))].   (6)

Note that, in the limit when ε → 0, the tensor K* does not depend on the position.
2.3. Numerical estimation of homogenized tensor
For ε → 0, the corrector equation (5) is either set in a domain of infinite size or with infinitely small details. Also, the mathematical expectation E[·] in equation (6) is an integral operator over an infinite-dimensional space. Approximations of K* are then constructed by performing at the same time a truncation of D (hence bounding ε to a finite value), introducing particular boundary conditions at the boundary ∂D of the domain, and replacing the mathematical expectation in the evaluation of the homogenized tensor by a sum over a finite number of Monte Carlo samples.
The Kinematic Uniform Boundary Conditions (KUBC) approach consists in using homogeneous Dirichlet boundary conditions at the boundary (w_ε = 0, ∀x ∈ ∂D, almost surely), and approximating the homogenized tensor, hereafter denoted Ǩ_ε^N, by

Ǩ_ε^N = 1/(N|D|) Σ_{i=1}^{N} ∫_D (I + ∇w_ε^i(x))ᵀ k_ε^i(x) (I + ∇w_ε^i(x)) dx,   (7)

where |D| = ∫_D dx, the k_ε^i(x) are realizations of the stochastic field k_ε(x) and the w_ε^i(x) are the solutions of the corresponding (deterministic) corrector problems, posed over the truncated domain D with finite ε, and with the chosen set of boundary conditions. More details on the derivation of this formula can be found in [16, Eq. (16)] or [27, Eq. (5.8)].
The Static Uniform Boundary Conditions (SUBC) approach consists in using Neumann boundary conditions (k_ε(x)(I + ∇w_ε) · n = I · n, ∀x ∈ ∂D, almost surely), and approximating the homogenized tensor, hereafter denoted K̂_ε^N, by

K̂_ε^N = [ 1/(N|D|) Σ_{i=1}^{N} ∫_D (I + ∇w_ε^i(x))ᵀ k_ε^i(x) (I + ∇w_ε^i(x)) dx ]⁻¹.   (8)

More details on the derivation of this formula can be found in [16, Eq. (16)] or [27, Eq. (5.9)].
A very efficient alternative to these two techniques consists in using periodic boundary conditions
(see for instance [28] for mathematical details, or [29] for an efficient FFT implementation of this
technique). This method works very well, but it requires, on the other hand, that the microstructure
be itself periodic. Its application in the context of random media therefore requires some hypotheses
on the correlation structure, or a modification of the distribution for periodization. Comparison of
periodic, SUBC and KUBC estimates to the method that we propose in this paper will be made in
the applications (see in particular Section 5.3).
Note that the tensors Ǩ_ε^N and K̂_ε^N (as well as any other obtained through a similar approach with other boundary conditions) obviously depend on both the number N of Monte Carlo samples that are used to approximate the mathematical expectation and on the value of ε. They also depend on the boundary conditions that were used to approximate the corrector problems, and are therefore a priori different from one another. For elliptic equations, the influence of these boundary conditions disappears for ε → 0 (see the proof for the KUBC, SUBC, and periodic boundary conditions in [15]), but may become extremely important for small domains (see the examples in Section 5).
2.4. Particular case in 1D
The 1D case is very particular, in the sense that the corrector problem can be solved analytically, whatever the choice of probability law for k_ε. The homogenized tensor (in that case a scalar) is then:

K* = E[k⁻¹]⁻¹ = lim_{ε→0} [ (1/|D|) ∫_D (k_ε(x))⁻¹ dx ]⁻¹.   (9)

It is interesting to note that the KUBC and SUBC approximates can also be computed for any ratio ε. Indeed, simple algebraic manipulation yields

Ǩ_ε = E[K_D],   (10)
where K_D = |D| / ∫_D (k_ε(x))⁻¹ dx, and

K̂_ε = E[K_D⁻¹]⁻¹ = K*.   (11)

When ε → ∞, the random field k_ε(x) becomes, at the scale of the domain D, a random variable (with no fluctuation in space). Hence, we have K_D = k_ε, and Ǩ_ε = E[k_ε], and this quantity does not depend on the position x thanks to the hypothesis of stationarity of the field. When ε → 0, the ergodicity hypothesis on the random field and the definition of K_D yield the expected result of equation (9), Ǩ_{ε→0} = K*. In general, when ε ≠ 0, we have Ǩ_ε ≠ K*. This means that the KUBC estimate is different from the homogenized coefficient, even though an infinite number of Monte Carlo trials is considered. In this paper, we will refer to this misfit by the word "bias".
It is interesting to note that in 1D, whatever the value of ε, the SUBC approach yields the exact value of the homogenized tensor (when N → ∞). In the sense defined above, the SUBC estimate in 1D is therefore unbiased. Note, however, that this property is very specific to the one-dimensional case, and is not true in higher dimensions.
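The 1D results (9)-(11) can be checked with a short Monte Carlo experiment in which each realization of the truncated domain is a series of i.i.d. cells (the uniform law and the sizes are illustrative assumptions): averaging inverse apparent coefficients as in (11) recovers K*, while averaging the apparent coefficients themselves as in (10) is biased upward for finite ε.

```python
import numpy as np

rng = np.random.default_rng(5)

# N realizations of a 1D microstructure: m i.i.d. cells per truncated domain D,
# with conductivities uniform in [0.5, 2.0] (illustrative probability law)
N, m = 20000, 8
k = rng.uniform(0.5, 2.0, size=(N, m))

# Per-realization apparent coefficient: the harmonic mean K_D
K_D = 1.0 / np.mean(1.0 / k, axis=1)

K_kubc = K_D.mean()                    # eq. (10): E[K_D], biased upward
K_subc = 1.0 / np.mean(1.0 / K_D)      # eq. (11): E[K_D^-1]^-1, unbiased
K_star = 1.5 / np.log(4.0)             # eq. (9): E[k^-1]^-1 for k ~ U(0.5, 2)
```

By Jensen's inequality E[K_D] exceeds K* whenever K_D fluctuates, which is exactly the finite-ε bias discussed in the text; the SUBC combination cancels it.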
2.5. Particular case in 2D
We discuss in this section a very particular type of 2D medium that has a specific kind of duality property: the random field k_ε(x) is statistically equivalent to the random field c/k_ε(x), where c is a scalar constant. As noted in [30, chapter 3] (see a proof in the book in a more general setting, and references therein for original contributions to that result), the homogenized coefficient of this random medium is then necessarily equal to

K* = √c I₂,   (12)

where I₂ is the two-dimensional identity tensor.
Note that, in the two-dimensional case, there is no analytic result for the value of the KUBC and SUBC homogenized approximates at finite ε. However, the following bounds always hold true (see [27] for example):

K̂_ε ≤ K* ≤ Ǩ_ε.   (13)

Further, as will be illustrated in the examples at the end of this paper, both the KUBC and SUBC estimates are biased for finite ε (see Section 5).
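The duality value and the ordering of the bounds can be illustrated on the simplest statistically dual medium, a two-phase field taking the values a and c/a in equal proportions. The elementary Voigt and Reuss bounds below are coarser than the actual KUBC and SUBC estimates (which require corrector solves), but they bracket K* in the same way as eq. (13); the numbers are illustrative:

```python
import numpy as np

# Two-phase medium with k in {a, c/a}, each phase with probability 1/2: the
# field is statistically equivalent to c/k, so eq. (12) gives K* = sqrt(c)
a, c = 4.0, 9.0
phases = np.array([a, c / a])            # {4.0, 2.25}: a dual pair, 4.0 * 2.25 = 9

K_star = np.sqrt(c)                      # = 3.0 by the duality result
voigt = phases.mean()                    # arithmetic (Voigt) upper bound
reuss = 1.0 / (1.0 / phases).mean()      # harmonic (Reuss) lower bound
```

Here voigt = 3.125 and reuss = 2.88, so the exact duality value sits strictly inside the bounds, mirroring the strict bias of the finite-ε estimates.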
3. COUPLING OF A RANDOM MICROSTRUCTURE WITH AN EFFECTIVE MODEL
In the previous section, we have introduced classical numerical techniques to obtain estimates of
the homogenized tensors. These estimates are widely developed and used in the literature, but are
unfortunately biased in the general case. In this paper, we propose a novel technique for obtaining
such estimates. This technique will be presented in Section 4 and relies heavily on a stochastic-
deterministic coupling approach originally introduced in [25, 26]. The objective of this section is to
recall the main features of this coupling method, without too much emphasis on technical details
(those can be found in particular in [26]).
It is important to stress from the start that this method is very different from an approach where
a sequence of realizations of the random medium would be coupled each to an exterior effective
model. In such an approach, the displacement fields in the effective model would be different for
each realization of the random medium. On the contrary, in our approach, the coupling is really posed
in a stochastic setting and couples the entire set of realizations of the random medium to a single
effective model.
This coupling strategy is based on the introduction and superposition of two models and three domains (see figure 2): the (stochastic) microstructure is defined over a domain D_s with a stochastic parameter field k_ε(x), and the (deterministic) effective model is defined over a domain D, with a constant parameter K. The supports of the two models are such that D_s ⊂ D, and the two models communicate through a coupling volume D_c, with D_c ⊂ D and D_c ⊂ D_s. These definitions mean
Copyright © 2013 John Wiley & Sons, Ltd. Int. J. Numer. Meth. Engng (2013). Prepared using nmeauth.cls. DOI: 10.1002/nme
Citations
Journal ArticleDOI
TL;DR: In this paper, stochastic homogenization analysis of heterogeneous materials is addressed in the context of elasticity under finite deformations, and random effective quantities such as tangent tensor, first Piola-Kirchhoff stress, and strain energy along with their numerical characteristics are tackled under different boundary conditions by a multiscale finite element strategy combined with the Monte Carlo method.
Abstract: In this work, stochastic homogenization analysis of heterogeneous materials is addressed in the context of elasticity under finite deformations. The randomness of the morphology and of the material properties of the constituents as well as the correlation among these random properties are fully accounted for, and random effective quantities such as tangent tensor, first Piola-Kirchhoff stress, and strain energy along with their numerical characteristics are tackled under different boundary conditions by a multiscale finite element strategy combined with the Monte Carlo method. The size of the representative volume element (RVE) with randomly distributed particles for different particle volume fractions is first identified by a numerical convergence scheme. Then, different types of displacement-controlled boundary conditions are applied to the RVE while fully considering the uncertainty in the microstructure. The influence of different random cases including correlation on the random effective quantities is finally analyzed.

38 citations


Cites background from "Numerical strategy for unbiased hom..."

  • ...The uncertainty existing in the input and material parameters [16] recently motivated an increasing attention to random heterogeneous materials [17–24]....


  • ...Cottereau [18] presented a numerical coupling strategy to lower the costs associated to the prediction of the value of homogenized tensors in elliptic problems, and coupled a random microstructure with a deterministic homogenized model....


  • ...Generally, prior work on numerical homogenization mainly focused on the heterogeneous materials under small deformations and tended to reduce the number of random variables related to the microstructure, and can be broadly classified into two categories, namely: homogenization only considering the uncertainty in morphology [18–20,22–26], and homogenization directly accounting for the uncertainty of some material properties of the constituents such as Young’s modulus or Poisson ration [21]....


Posted Content
TL;DR: In this paper, the authors considered the problem of approximating homogenized coefficients of second order divergence form elliptic operators with random statistically homogeneous coefficients, by means of periodization and other cut-off procedures.
Abstract: This Note deals with approximations of homogenized coefficients of second order divergence form elliptic operators with random statistically homogeneous coefficients, by means of "periodization" and other "cut-off" procedures. For instance, in the case of periodic approximation, we consider a cubic sample (0, ) of the random medium, extend it periodically and use the effective coefficients of the obtained periodic operators as an approximation of the effective coefficients of the original random operator. It is shown that these approximations converge a.s. and give back the effective coefficients of the original random operator. Moreover, under additional mixing conditions on the coefficients, the rate of convergence can be estimated by some negative power of the sample size which only depends on the dimension, the ellipticity constant and the rate of decay of the mixing coefficients. Similar results are established for approximations in terms of appropriate Dirichlet and Neumann problems localized in a cubic sample (0, ).

33 citations

Journal ArticleDOI
TL;DR: A broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure-property relationships and study various physical phenomena and mechanisms is provided in this paper.
Abstract: With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure–property relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. This manuscript ends with a discussion on some problems and open scientific questions that are being explored in order to advance this relatively new field of research.

25 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a broad and comprehensive overview of recent trends where predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure-properties relationships and study various physical phenomena and mechanisms.
Abstract: With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends where predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure-properties relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to "simply" support experimental work. This is illustrated by examples from several application areas on structural materials. This manuscript ends with a discussion on some problems and open scientific questions that are being explored in order to advance this relatively new field of research.

23 citations


Cites background from "Numerical strategy for unbiased hom..."

  • ...exist in order to circumvent this drawback [164]....


Journal ArticleDOI
TL;DR: In this article, a numerical approach is presented to determine the load bearing capacity of structural elements made of heterogeneous materials subjected to variable loads using the lower bound shakedown theorem applied to representative volume elements.

22 citations

References
Journal ArticleDOI
TL;DR: This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2, and proves convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2.
Abstract: The Nelder--Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder--Mead algorithm. This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2. We prove convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2. A counterexample of McKinnon gives a family of strictly convex functions in two dimensions and a set of initial conditions for which the Nelder--Mead algorithm converges to a nonminimizer. It is not yet known whether the Nelder--Mead method can be proved to converge to a minimizer for a more specialized class of convex functions in two dimensions.

7,141 citations


"Numerical strategy for unbiased hom..." refers methods in this paper

  • ...In particular, we have used the Nelder-Mead algorithm (see [36] for details), but others could be considered....


MonographDOI
06 May 2002
TL;DR: Some of the greatest scientists, including Poisson, Faraday, Maxwell, Rayleigh, and Einstein, have contributed to the theory of composite materials. Mathematically, it is the study of partial differential equations with rapid oscillations in their coefficients. Although extensively studied for more than a hundred years, an explosion of ideas in the last five decades has dramatically increased our understanding of the relationship between the properties of the constituent materials, the underlying microstructure of a composite, and the overall effective moduli which govern the macroscopic behavior, as mentioned in this paper.
Abstract: Some of the greatest scientists, including Poisson, Faraday, Maxwell, Rayleigh, and Einstein, have contributed to the theory of composite materials. Mathematically, it is the study of partial differential equations with rapid oscillations in their coefficients. Although extensively studied for more than a hundred years, an explosion of ideas in the last five decades (and particularly in the last three decades) has dramatically increased our understanding of the relationship between the properties of the constituent materials, the underlying microstructure of a composite, and the overall effective (electrical, thermal, elastic) moduli which govern the macroscopic behavior. This renaissance has been fueled by the technological need for improving our knowledge base of composites, by the advance of the underlying mathematical theory of homogenization, by the discovery of new variational principles, by the recognition of how important the subject is to solving structural optimization problems, and by the realization of the connection with the mathematical problem of quasiconvexification. This 2002 book surveys these exciting developments at the frontier of mathematics.

2,455 citations

Journal ArticleDOI
TL;DR: In this article, a quantitative definition of the representative volume element (RVE) size is proposed, which can be associated with a given precision of the estimation of the overall property and the number of realizations of a given volume V of microstructure that one is able to consider.

1,772 citations


"Numerical strategy for unbiased hom..." refers background in this paper

  • ...Alternatively, it is also possible (see [8, 16]) to use a smaller domain and perform averages over several realizations of the random medium....


Journal ArticleDOI
TL;DR: An alternate method based on Fourier series which avoids meshing and which makes direct use of microstructure images is proposed, based on the exact expression of the Green function of a linear elastic and homogeneous comparison material.

1,170 citations

Journal ArticleDOI

1,069 citations


"Numerical strategy for unbiased hom..." refers methods in this paper

  • ...The realizations of the continuous random fields k (x) have been generated using the spectral representation method [37], and its Fast Fourier Transform implementation....


Frequently Asked Questions (15)
Q1. What are the contributions in "Numerical strategy for unbiased homogenization of random materials" ?

This paper presents a numerical strategy that makes it possible to lower the costs associated with the prediction of the value of homogenized tensors in elliptic problems. Examples of 1D and 2D problems with continuous properties, as well as a 2D matrix-inclusion problem, illustrate the effectiveness and potential of the method.

The random field k is indeed locally invariant by inversion, and the homogenized tensor does not depend on the correlation structure. 

In the implementation of the loop in Algorithm 1, a relative tolerance of ε_criterion = 10⁻² was selected for both the value and the argument of the potential function.

The linear finite element method was used to compute the corrector problems, with 800, 1600, and 10000 triangular elements, respectively, for the cases ε = ℓc/L = 10, ε = 1, and ε = 0.1.

The coupling problem is set in the general Arlequin framework (see in particular [31, 32, 33, 34] for details on the Arlequin framework in a deterministic setting and [25, 26] for the stochastic case), and reads: find (u, ū, Φ) ∈ V × W × Wc such that

a(u, v) + C(Φ, v) = ℓ(v), ∀v ∈ V
A(ū, v) − C(Φ, v) = L(v), ∀v ∈ W
C(Ψ, u − ū) = 0, ∀Ψ ∈ Wc, (14)

where the forms a and ℓ, on the one hand, and A and L, on the other hand, are the forms appearing in the weak formulations corresponding to equations (4) and (1), respectively, weighted by a function that enforces the conservation of the global energy, by appropriate partitioning among the two available models.
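The three-field structure of this formulation (two primal fields plus a Lagrange multiplier on the coupling volume) leads, after discretization, to a saddle-point system. A deliberately tiny numerical illustration, with single-dof stand-ins for the two models (all operators and numbers below are hypothetical, not the paper's discretization):

```python
import numpy as np

# Toy saddle-point system with the same three-field structure as Eq. (14):
# two single-dof "models" (stiffnesses a, A; loads f, F) glued by one
# Lagrange multiplier phi enforcing equality of the two solutions.
a, A = 2.0, 3.0
f, F = 1.0, 4.0

M = np.array([[a,   0.0,  1.0],    # a*u     + phi = f
              [0.0, A,   -1.0],    #     A*U - phi = F
              [1.0, -1.0, 0.0]])   # u - U         = 0
rhs = np.array([f, F, 0.0])

u, U, phi = np.linalg.solve(M, rhs)
print(u, U, phi)
```

The last row is the discrete counterpart of the constraint equation: it forces the two solutions to coincide on the coupling degrees of freedom, with the multiplier carrying the exchanged generalized force.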

The realizations of the continuous random fields k (x) have been generated using the spectral representation method [37], and its Fast Fourier Transform implementation. 
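The spectral representation idea can be sketched in a few lines: filter white noise in Fourier space by the square root of the target power spectral density, then transform back. A sketch for a 1-D Gaussian field with a squared-exponential covariance (grid, domain, and correlation length are illustrative choices, not the paper's settings):

```python
import numpy as np

# FFT-based spectral generation of a 1-D stationary Gaussian field with a
# squared-exponential covariance exp(-x^2/lc^2); all parameters below are
# illustrative choices.
rng = np.random.default_rng(1)
n, L, lc = 2048, 10.0, 0.5
dx = L / n
freqs = np.fft.fftfreq(n, d=dx)

# PSD of exp(-x^2/lc^2): S(f) = lc * sqrt(pi) * exp(-(pi * lc * f)^2),
# normalized so that the field has unit variance (integral of S).
S = lc * np.sqrt(np.pi) * np.exp(-(np.pi * lc * freqs) ** 2)

# Filter white noise by sqrt(S) in Fourier space and transform back.
noise = np.fft.fft(rng.standard_normal(n))
field = np.real(np.fft.ifft(noise * np.sqrt(S / dx)))
print(field.std())  # on the order of 1, the target standard deviation
```

Longer domains relative to the correlation length make the sample statistics of one realization approach the target covariance, which is the ergodicity property exploited by the coupling operator.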

For ε = 0.1, the number of additional degrees of freedom (in space) is around 1800 for the discretization of the effective domain and around 200 for the discretization of Wc, to be compared to the 10000 degrees of freedom (in space) defined over the random domain D. Smaller ε would yield even smaller relative numbers.

The authors then scale the fluctuations of the microstructure by 1/ε, and look at the fluctuations of the solution u(x) at the original scale.

that there is part of the domain where only the effective model is defined, part of the domain where both models are defined and over which they are coupled, and part of the domain where both models are defined but over which they do not communicate.

In the next section, the authors present a 1D example, for which the authors still know analytically the homogenized tensor, but for which the random medium is not locally invariant by inversion. 
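The 1-D analytical reference invoked here is the classical result that the homogenized coefficient of −(k(x)u′)′ = 0 is the harmonic mean of k. A short finite-difference check (the coefficient field below is an arbitrary smooth choice, not the paper's random medium):

```python
import numpy as np

# Finite-difference check that the 1-D apparent coefficient equals the
# (discrete) harmonic mean of k(x); the coefficient field is an arbitrary
# smooth positive choice, not the paper's random medium.
n = 400
x = (np.arange(n) + 0.5) / n            # cell centers on [0, 1]
k = 1.5 + np.sin(7 * x) ** 2            # cell coefficients
h = 1.0 / n

# Harmonic averaging at interfaces; Dirichlet data u(0)=0, u(1)=1 enter
# through half-cell conductances at the two boundaries.
kf = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])
A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n - 1):
    c = kf[i] / h**2
    A[i, i] += c
    A[i + 1, i + 1] += c
    A[i, i + 1] -= c
    A[i + 1, i] -= c
A[0, 0] += 2 * k[0] / h**2              # left boundary, u(0) = 0
A[-1, -1] += 2 * k[-1] / h**2           # right boundary, u(1) = 1
rhs[-1] = 2 * k[-1] / h**2

u = np.linalg.solve(A, rhs)
k_app = 2 * k[0] * u[0] / h             # boundary flux = apparent coefficient
k_harm = n / np.sum(1.0 / k)            # discrete harmonic mean
print(k_app, k_harm)
```

With harmonic averaging at the cell interfaces, the discrete problem is exactly a chain of resistances in series, so the apparent coefficient matches the discrete harmonic mean to machine precision.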

The software used for the solution of the coupled Arlequin systems is freely available at https://github.com/cottereau/CArl. In all the simulations presented in this section, the authors have used κ₀ = 1 and κ₁ = 10⁻³ for the definition of the coupling operator (see Eq. (19)).

Hence, for an imposed unit strain at the boundary of the macro-scale domain (the Dirichlet approach, which the authors consider in the following), the strain tensor should be the identity, whether the macro-model alone is solved for, or the coupled micro-macro model.

To refine these observations, the authors present in Figure 9 the Arlequin estimates obtained for N = 10³ Monte Carlo trials as a function of the correlation length ε = ℓc/L. Again, each experiment is repeated for n = 10 different ensembles of realizations of the random medium.

It has been proved [8, 15] that, whatever the choice of boundary conditions, the limit of the estimated tensors was indeed the effective tensor. 

As in the previous case, the Arlequin estimate depends not only on the number of Monte Carlo trials but also on the realizations themselves, so each value of K_N is computed for n = 10 different ensembles of realizations of the random medium.