
Showing papers in "Multiscale Modeling & Simulation in 2007"


Journal ArticleDOI
TL;DR: The steepest descent for minimizing the functional is interpreted as a nonlocal diffusion process, which allows a convenient framework for nonlocal variational minimizations, including variational denoising, Bregman iterations, and the recently proposed inverse scale space.
Abstract: A nonlocal quadratic functional of weighted differences is examined. The weights are based on image features and represent the affinity between different pixels in the image. By prescribing different formulas for the weights, one can generalize many local and nonlocal linear denoising algorithms, including the nonlocal means filter and the bilateral filter. In this framework one can easily show that continuous iterations of the generalized filter obey certain global characteristics and converge to a constant solution. The linear operator associated with the Euler–Lagrange equation of the functional is closely related to the graph Laplacian. We can thus interpret the steepest descent for minimizing the functional as a nonlocal diffusion process. This formulation allows a convenient framework for nonlocal variational minimizations, including variational denoising, Bregman iterations, and the recently proposed inverse scale space. It is also demonstrated how the steepest descent flow can be used for segmenta...
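The graph-Laplacian reading of the steepest descent admits a compact numerical sketch. The following is illustrative only, not the authors' implementation; the test signal, the similarity scale `h`, the step size, and the iteration count are all assumptions made here:

```python
import numpy as np

# Illustrative sketch (not the authors' code): bilateral-type weights built
# from value differences, and explicit steepest descent on the nonlocal
# quadratic functional, which is exactly a graph-Laplacian diffusion step
#   u <- u - dt * (D - W) u.
rng = np.random.default_rng(0)
u = np.repeat([0.0, 1.0], 50) + 0.1 * rng.standard_normal(100)  # noisy edge

h = 0.2                                  # similarity scale (assumed value)
W = np.exp(-((u[:, None] - u[None, :]) / h) ** 2)
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian of the weight graph

dt = 1.0 / (2.0 * W.sum(axis=1).max())   # stable explicit step size
for _ in range(200):
    u = u - dt * (L @ u)
```

With the weights frozen, the iteration averages out the noise within each similarity cluster while leaving the jump between the two plateaus nearly intact; iterated indefinitely it converges to a constant, as the abstract states.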

503 citations


Journal ArticleDOI
TL;DR: A priori error estimates are derived and show, with appropriate choice of the mortar space, optimal order convergence and some superconvergence on the fine scale for both the solution and its flux.
Abstract: We develop multiscale mortar mixed finite element discretizations for second order elliptic equations. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. The polynomial degree of the mortar and subdomain approximation spaces may differ; in fact, the mortar space achieves approximation comparable to the fine scale on its coarse grid by using higher order polynomials. Our formulation is related to, but more flexible than, existing multiscale finite element and variational multiscale methods. We derive a priori error estimates and show, with appropriate choice of the mortar space, optimal order convergence and some superconvergence on the fine scale for both the solution and its flux. We also derive efficient and reliable a posteriori error estimators, which are used in an adaptive mesh refinement algorithm to obtain appropriate subdomain and mortar grids. Numerical experi...

320 citations


Journal ArticleDOI
TL;DR: The main purpose of this paper is to prove that the jump discontinuity set of the solution of the total variation based denoising problem is contained in the jump set of the datum to be denoised.
Abstract: The main purpose of this paper is to prove that the jump discontinuity set of the solution of the total variation based denoising problem is contained in the jump set of the datum to be denoised. We also prove some extensions of this result for the total variation minimization flow, for anisotropic norms, and for some more general convex functionals, which include the minimal surface equation case and its anisotropic extensions.

145 citations


Journal ArticleDOI
TL;DR: It is shown that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry sub-problems, and that the TV-L1 model is able to separate image features according to their scales.
Abstract: This paper studies the total variation regularization with an $L^1$ fidelity term (TV‐$L^1$) model for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems. Using this result we show that the TV‐$L^1$ model is able to separate image features according to their scales, where the scale is analytically defined by the G‐value. A number of other properties including the geometric and morphological invariance of the TV‐$L^1$ model are also proved and their applications discussed.

109 citations


Journal ArticleDOI
TL;DR: It is shown via simulations that a dynamic two-step method involving the diffuse interface scale allows regions to be connected across larger inpainting domains, and it is proved for the steady state problem that the isophote directions are matched at the boundary of inpainting regions.
Abstract: Image inpainting is the process of filling in missing parts of damaged images based on information gleaned from surrounding areas. We consider a model for inpainting binary images using a modified Cahn–Hilliard equation. We prove for the steady state problem that the isophote directions are matched at the boundary of inpainting regions. Our model has two scales, the diffuse interface scale, $\varepsilon$, on which it can accomplish topological transitions, and the feature scale of the image. We show via simulations that a dynamic two-step method involving the diffuse interface scale allows us to connect regions across larger inpainting domains. For the model problem of stripe inpainting, we show that this issue is related to a bifurcation structure with respect to the scale $\varepsilon$.

108 citations


Journal ArticleDOI
TL;DR: A convergence analysis of explicit tau-leaping schemes for simulating chemical reactions is built from the viewpoint of stochastic differential equations; the novelty is handling the non-Lipschitz property of the coefficients and jumps on the integer lattice.
Abstract: This paper builds a convergence analysis of explicit tau-leaping schemes for simulating chemical reactions from the viewpoint of stochastic differential equations. Mathematically, the chemical reaction process is a pure jump process on a lattice with state-dependent intensity. The stochastic differential equation form of the chemical master equation can be given via Poisson random measures. Based on this form, different types of tau-leaping schemes can be proposed. In order to make the problem well-posed, a modified explicit tau-leaping scheme is considered. It is shown that the mean square strong convergence is of order $1/2$ and the weak convergence is of order 1 for this modified scheme. The novelty of the analysis is to handle the non-Lipschitz property of the coefficients and jumps on the integer lattice.
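The basic explicit tau-leaping step analyzed here can be sketched on a toy birth-death system. This is the generic textbook scheme, not the paper's modified variant; the rate constants, the leap size `tau`, and the nonnegativity guard are assumptions made for illustration:

```python
import numpy as np

# Textbook explicit tau-leaping for the birth-death system
#   0 -> X  (rate k1),   X -> 0  (rate k2 * x).
# Each leap fires each reaction channel a Poisson number of times with
# mean (propensity * tau). Not the paper's modified scheme.
rng = np.random.default_rng(1)
k1, k2 = 10.0, 0.1                   # assumed rate constants
tau, n_steps = 0.05, 2000

x = 0                                # molecule count on the integer lattice
for _ in range(n_steps):
    births = rng.poisson(k1 * tau)
    deaths = rng.poisson(k2 * x * tau)
    x = max(x + births - deaths, 0)  # crude guard: Poisson leaps can
                                     # overshoot, one motivation for
                                     # modified, well-posed schemes
```

The trajectory fluctuates around the stationary mean k1/k2 = 100; the guard is needed precisely because the Poisson increments, unlike the true jump process, can push the state off the nonnegative part of the lattice.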

102 citations


Journal ArticleDOI
TL;DR: A two-time-scale system of jump-diffusion stochastic differential equations is studied; the main goal is to establish the convergence rate of the slow components to the effective dynamics.
Abstract: We study a two-time-scale system of jump-diffusion stochastic differential equations. The main goal is to study the convergence rate of the slow components to the effective dynamics. The convergenc...

89 citations


Journal ArticleDOI
TL;DR: The existence of global-in-time weak solutions to the model for a general class of spring-force-potentials including, in particular, the widely used finitely extensible nonlinear elastic (FENE) potential is established.
Abstract: We study the existence of global-in-time weak solutions to a coupled microscopic-macroscopic bead-spring model which arises from the kinetic theory of dilute solutions of polymeric liquids with noninteracting polymer chains. The model consists of the unsteady incompressible Navier-Stokes equations in a bounded domain $\Omega \subset \mathbb{R}^d$, $d = 2$ or 3, for the velocity and the pressure of the fluid, with an elastic extra-stress tensor as the right-hand side in the momentum equation. The extra-stress tensor stems from the random movement of the polymer chains and is defined through the associated probability density function which satisfies a Fokker-Planck-type parabolic equation, a crucial feature of which is the presence of a center-of-mass diffusion term. The anisotropic Friedrichs mollifiers, which naturally arise in the course of the derivation of the model in the Kramers expression for the extra-stress tensor and in the drag term in the Fokker-Planck equation, are replaced by isotropic Friedrichs mollifiers. We establish the existence of global-in-time weak solutions to the model for a general class of spring-force-potentials including, in particular, the widely used finitely extensible nonlinear elastic (FENE) potential. We justify also, through a rigorous limiting process, certain classical reductions of this model appearing in the literature which exclude the center-of-mass diffusion term from the Fokker-Planck equation on the grounds that the diffusion coefficient is small relative to other coefficients featuring in the equation. In the case of a corotational drag term we perform a rigorous passage to the limit as the Friedrichs mollifiers in the Kramers expression and the drag term converge to identity operators.

82 citations


Journal ArticleDOI
TL;DR: This paper presents a method for computing the stationary in time wave field that results from steady air flow over topography as a superposition of Gaussian beams and shows that the approximate Gaussian beam stationary solution is close to a true time-dependent solution of the linearized system.
Abstract: Gaussian beams are approximate solutions to hyperbolic partial differential equations that are concentrated on a curve in space-time. In this paper, we present a method for computing the stationary in time wave field that results from steady air flow over topography as a superposition of Gaussian beams. We derive the system of equations that governs these mountain waves as a linearization of the basic equations of fluid dynamics and show that this system is well-posed. Furthermore, we show that the approximate Gaussian beam stationary solution is close to a true time-dependent solution of the linearized system.

67 citations


Journal ArticleDOI
TL;DR: A new analysis apparatus, the multiscale window transform (MWT), is developed to generalize the classical mean-eddy decomposition in fluid mechanics to include three or more ranges of scales, and to ensure a faithful representation of localized energy processes.
Abstract: A new analysis apparatus, the multiscale window transform (MWT), is developed to generalize the classical mean-eddy decomposition (MED) in fluid mechanics to include three or more ranges of scales, and to ensure a faithful representation of localized energy processes. The development begins with the introduction of a sequence of finite-dimensional subspaces of $L_2[0,1]$, $\{V_{\varrho,j}\}_{0\le j\le j_2}$, based on a multiresolution analysis of $L_2(\mathbb{R})$. All the $V_{\varrho,j}$ are sampling spaces, i.e., spaces spanned by some translation invariant basis. The upper bound of the index of scale level, $j_2$, is prescribed in accordance with the signal of concern. Within $V_{\varrho,j_2}$, the concepts of large-scale, mesoscale, and submesoscale windows are introduced, each being a subspace of $V_{\varrho,j_2}$ and containing exclusively a range of scales. A transform-reconstruction pair is constructed on each window, representing the features for the corresponding range of scales. The resulting M...

66 citations


Journal ArticleDOI
TL;DR: In this paper, iterative regularization with the Bregman distance of the total variation seminorm is analyzed in a functional analytical setting using methods from convex analysis and existence of a solution of the corresponding flow equation is proved.
Abstract: In this paper we analyze iterative regularization with the Bregman distance of the total variation seminorm. Moreover, we prove existence of a solution of the corresponding flow equation as introduced in [M. Burger, G. Gilboa, S. Osher, and J. Xu, Commun. Math. Sci., 4 (2006), pp. 179–212] in a functional analytical setting using methods from convex analysis. The results are generalized to variational denoising methods with ${\rm L}^p$-norm fit-to-data terms and Bregman distance regularization terms. For the associated flow equations well-posedness is derived using recent results on metric gradient flows from [L. Ambrosio, N. Gigli, and G. Savare, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics ETH Zurich, Birkhauser Verlag, Basel, 2005]. In contrast to previous work the results of this paper apply for the analysis of variational denoising methods with the Bregman distance under adequate noise assumptions. Aside from the theoretical results we introduce a ...

Journal ArticleDOI
TL;DR: It is demonstrated that the loss of fine-scale detail in multiscale finite-volume solutions for highly anisotropic problems is caused by the appearance of unphysical "circulation cells," and damped-shear or linear boundary conditions for the conservative-velocity problems are shown to remedy it.
Abstract: The multiscale finite‐volume (MSFV) method has been designed to solve flow problems on large domains efficiently. First, a set of basis functions, which are local numerical solutions, is employed to construct a fine‐scale pressure approximation; then a conservative fine‐scale velocity approximation is constructed by solving local problems with boundary conditions obtained from the pressure approximation; finally, transport is solved at the fine scale. The method proved very robust and accurate for multiphase flow simulations in highly heterogeneous isotropic reservoirs with complex correlation structures. However, it has recently been pointed out that the fine‐scale details of the MSFV solutions may be lost in the case of high anisotropy or large grid aspect ratios. This shortcoming is analyzed in this paper, and it is demonstrated that it is caused by the appearance of unphysical “circulation cells.” We show that damped‐shear boundary conditions for the conservative‐velocity problems or linear boundaryco...

Journal ArticleDOI
TL;DR: The Wigner transform of the wave field is shown to become deterministic in the large-diversity limit when integrated against test functions, and the limit remains deterministic when the support of the test functions tends to zero while staying large compared to the correlation length.
Abstract: We consider the random Schrodinger equation as it arises in the paraxial regime for wave propagation in random media. In the white noise limit it becomes the Ito–Schrodinger stochastic partial differential equation which we analyze here in the high frequency regime. We also consider the large lateral diversity limit where the typical width of the propagating beam is large compared to the correlation length of the random medium. We use the Wigner transform of the wave field and show that it becomes deterministic in the large diversity limit when integrated against test functions. This is the self-averaging property of the Wigner transform. It follows easily when the support of the test functions is of the order of the beam width. We also show with a more detailed analysis that the limit is deterministic when the support of the test functions tends to zero but is large compared to the correlation length.

Journal ArticleDOI
TL;DR: A specialized (nonsmooth) objective function is designed that selectively restores these coefficients, using regularization in the domain of the restored function, without modifying the other, nearly faithful coefficients.
Abstract: We consider the denoising of a function (an image or a signal) containing smooth regions and edges. Classical ways to solve this problem are variational methods and shrinkage of a representation of the data in a basis or a frame. We propose a method which combines the advantages of both approaches. Following the wavelet shrinkage method of Donoho and Johnstone, we set to zero all frame coefficients with respect to a reasonable threshold. The shrunk frame representation involves both large coefficients corresponding to noise (outliers) and some coefficients, erroneously set to zero, leading to Gibbs-like oscillations in the estimate. We design a specialized (nonsmooth) objective function allowing all these coefficients to be selectively restored, without modifying the other coefficients which are nearly faithful, using regularization in the domain of the restored function. We analyze the well-posedness and the main properties of this objective function. We also propose an approximation of this method which...

Journal ArticleDOI
TL;DR: This article constructs a hybrid model by spatially coupling a lattice Boltzmann model (LBM) to a finite difference discretization of the partial differential equation (PDE) for reaction-diffusion systems and shows that the global spatial discretized error of the hybrid model is one order less accurate than the local error made at the interface.
Abstract: In this article we construct a hybrid model by spatially coupling a lattice Boltzmann model (LBM) to a finite difference discretization of the partial differential equation (PDE) for reaction-diffusion systems. Because the LBM has more variables (the particle distribution functions) than the PDE (only the particle density), we have a one-to-many mapping problem from the PDE to the LBM domain at the interface. We perform this mapping using either results from the Chapman–Enskog expansion or a pointwise iterative scheme that approximates these analytical relations numerically. Most importantly, we show that the global spatial discretization error of the hybrid model is one order less accurate than the local error made at the interface. We derive closed expressions for the spatial discretization error at steady state and verify them numerically for several examples on the one-dimensional domain.
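A minimal sketch of the two descriptions being coupled, side by side on a periodic 1D domain (no interface coupling is performed here, and the D1Q2 lattice and all parameters are assumptions made for illustration):

```python
import numpy as np

# The LBM carries two particle distributions f_plus, f_minus, while the
# PDE carries only the density rho -- the "one-to-many" mapping issue the
# paper addresses at a hybrid interface. With omega = 1 the two schemes
# below coincide exactly, so the comparison is a sanity check.
N, steps = 200, 400
omega = 1.0                          # BGK relaxation rate
D = 1.0 / omega - 0.5                # lattice diffusivity (dx = dt = 1)

x = np.arange(N)
rho0 = np.exp(-0.5 * ((x - N / 2) / 5.0) ** 2)

# lattice Boltzmann (D1Q2): collide toward equilibrium rho/2, then stream
f_plus, f_minus = 0.5 * rho0.copy(), 0.5 * rho0.copy()
for _ in range(steps):
    rho = f_plus + f_minus
    f_plus += omega * (0.5 * rho - f_plus)
    f_minus += omega * (0.5 * rho - f_minus)
    f_plus, f_minus = np.roll(f_plus, 1), np.roll(f_minus, -1)
rho_lbm = f_plus + f_minus

# explicit finite differences for rho_t = D * rho_xx with the same dx = dt = 1
rho_fd = rho0.copy()
for _ in range(steps):
    rho_fd = rho_fd + D * (np.roll(rho_fd, 1) - 2 * rho_fd + np.roll(rho_fd, -1))
```

Initializing the LBM with the equilibrium f = rho/2 is the leading-order Chapman-Enskog mapping mentioned in the abstract; away from omega = 1 the higher-order terms of that expansion are what a hybrid interface must supply.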

Journal ArticleDOI
TL;DR: This study extends the Karhunen–Loeve moment equation (KLME) approach, which is based on KL decomposition, to efficiently and accurately quantify uncertainty for flow in nonstationary heterogeneous porous media comprising a number of zones with different statistics of the hydraulic conductivity.
Abstract: In this study, we extend the Karhunen–Loeve moment equation (KLME) approach, an approach based on KL decomposition, to efficiently and accurately quantify uncertainty for flow in nonstationary heterogeneous porous media that include a number of zones with different statistics of the hydraulic conductivity. We first decompose the log hydraulic conductivity $Y = {\rm ln}\, K_s$ for each zone by the KL decomposition, which is related to a set of eigenvalues and their corresponding orthogonal deterministic eigenfunctions. Based on the decomposition for all individual zones, we develop an algorithm to find the eigenvalues and eigenfunctions for the entire domain. Following the methodology proposed by Zhang and Lu [J. Comput. Phys., 194 (2004), pp. 773–794], we solve the head variability up to second order in terms of $\sigma_Y^2$ and compare the results with those obtained from Monte Carlo (MC) simulations. It is evident that the results from the KLME approach with higher‐order corrections are close to those f...
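The single-zone building block, a discrete Karhunen-Loeve decomposition, can be sketched as follows. This is the textbook construction, not the paper's multi-zone algorithm; the exponential covariance model and the parameters `sigma2`, `ell` are assumptions:

```python
import numpy as np

# Discrete KL decomposition of a stationary log-conductivity field
# Y = ln K on [0, 1] with assumed covariance
#   C(x1, x2) = sigma2 * exp(-|x1 - x2| / ell).
rng = np.random.default_rng(2)
n, sigma2, ell = 200, 1.0, 0.2

x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

lam, phi = np.linalg.eigh(C)         # eigh returns ascending eigenvalues
lam, phi = lam[::-1], phi[:, ::-1]   # reorder to descending

# truncated KL realization:  Y ~ sum_k sqrt(lam_k) * xi_k * phi_k
m = 20                               # retained modes
xi = rng.standard_normal(m)
Y = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
```

For this correlation length the first 20 modes already carry over 90% of the variance, which is what makes truncated KL expansions an efficient basis for the moment equations.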

Journal ArticleDOI
TL;DR: This paper analyzes convergence and smoothness of such subdivision processes and shows that the nonlinear schemes essentially have the same properties regarding$C^1$ and $C^2$ smoothness as the linear schemes they are derived from.
Abstract: Linear stationary subdivision rules take a sequence of input data and produce ever denser sequences of subdivided data from it. They are employed in multiresolution modeling and have intimate connections with wavelet and more general pyramid transforms. Data which naturally do not live in a vector space, but in a nonlinear geometry like a surface, symmetric space, or a Lie group (e.g., motion capture data), require different handling. One way to deal with Lie group valued data has been proposed by Donoho [talk at the IMI Approximation and Computation Meeting, Charleston, SC, 2001]: It is to employ a log-exponential analogue of a linear subdivision rule. While a comprehensive discussion of applications is given by Ur Rahman et al. [Multiscale Model. Simul., 4 (2005), pp. 1201–1232], this paper analyzes convergence and smoothness of such subdivision processes and shows that the nonlinear schemes essentially have the same properties regarding $C^1$ and $C^2$ smoothness as the linear schemes they are derived from.
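The log-exponential construction can be illustrated in the simplest Lie group available. The choices below, Chaikin's corner-cutting rule as the linear scheme and the circle group (unit complex numbers) as the nonlinear geometry, are made here for illustration and are not from the paper:

```python
import numpy as np

# Log-exponential analogue of the linear Chaikin rule: each affine average
# sum_i w_i p_i is replaced by exp_b(sum_i w_i log_b(p_i)) with a base
# point b near the data (here: the first point of each edge).

def log_b(b, p):
    """Logarithm of p at base b on the unit circle: a real angle."""
    return np.angle(p / b)

def exp_b(b, t):
    """Exponential at base b: rotate b by the angle t."""
    return b * np.exp(1j * t)

def chaikin_circle(points):
    """One round of log-exp Chaikin subdivision on closed circle data."""
    out = []
    for p, q in zip(points, np.roll(points, -1)):
        v = log_b(p, q)                    # tangent vector from p to q
        out.append(exp_b(p, 0.25 * v))     # 3/4 p + 1/4 q, geodesically
        out.append(exp_b(p, 0.75 * v))     # 1/4 p + 3/4 q, geodesically
    return np.array(out)

pts = np.exp(1j * np.array([0.0, 1.5, 3.0, 4.5]))   # 4 samples on the circle
for _ in range(4):
    pts = chaikin_circle(pts)
```

The refined data stay exactly on the circle (|p| = 1), which a plain linear average of unit complex numbers would not; the paper's result is that such schemes inherit the C^1/C^2 smoothness of the linear rule they mimic.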

Journal ArticleDOI
TL;DR: A collection of models for the Euler equations, based also on the Mori–Zwanzig formalism, is presented and used to compute the rate of energy decay for the Taylor–Green vortex problem.
Abstract: In a recent paper [O. H. Hald and P. Stinis, Proc. Natl. Acad. Sci. USA, 104 (2007), pp. 6527–6532], an infinitely long memory model (the t-model) for the Euler equations was presented and analyzed...

Journal ArticleDOI
TL;DR: The dead leaves model consists of the superposition of random closed sets (the objects) and enables one to model the occlusion phenomena and a random field is obtained that contains homogeneous regions, satisfies scaling properties, and is statistically relevant for modeling natural images.
Abstract: The dead leaves model, introduced by the mathematical morphology school, consists of the superposition of random closed sets (the objects) and enables one to model the occlusion phenomena. When combined with specific size distributions for objects, one obtains random fields providing adequate models for natural images. However, this framework imposes bounds on the sizes of objects. We consider the limits of these random fields when letting the cutoff sizes tend to zero and infinity. As a result we obtain a random field that contains homogeneous regions, satisfies scaling properties, and is statistically relevant for modeling natural images. We then investigate the combined effect of these features on the regularity of images in the framework of Besov spaces.
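A toy simulation of the model before the cutoff limits are taken (the disk shape, size bounds, and counts below are hypothetical parameters chosen for illustration):

```python
import numpy as np

# Dead leaves sketch: random disks ("leaves") with bounded sizes fall one
# after another, each occluding whatever lies below. By time reversal, the
# visible leaf at a pixel is the first one to cover it when leaves are
# generated top-down, so we paint only still-empty pixels.
rng = np.random.default_rng(3)
N = 128
img = np.full((N, N), np.nan)                 # nan = not yet covered

yy, xx = np.mgrid[0:N, 0:N]
for _ in range(500):
    cx, cy = rng.uniform(0, N, size=2)
    r = rng.uniform(3, 15)                    # bounded sizes, as in the model
    gray = rng.uniform(0, 1)
    disk = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    paint = disk & np.isnan(img)              # occlusion: earlier leaves win
    img[paint] = gray

coverage = 1.0 - np.isnan(img).mean()
```

The result is a piecewise-constant field of homogeneous regions with occlusion edges; the paper studies what happens to such fields when the size cutoffs (here 3 and 15) are sent to zero and infinity.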

Journal ArticleDOI
TL;DR: The Cauchy problems associated to strong initial magnetic fields are investigated and the convergence towards the so-called "guiding center approximation" is justified when the dynamics is observed on a slower time scale than the plasma frequency.
Abstract: In this paper we study the asymptotic behavior of the Vlasov-Maxwell equations with strong magnetic field. More precisely we investigate the Cauchy problems associated to strong initial magnetic fields. We justify the convergence towards the so-called "guiding center approximation" when the dynamics is observed on a slower time scale than the plasma frequency. Our proofs rely on the modulated energy method.

Journal ArticleDOI
TL;DR: This article is devoted to the reformulation of an isothermal version of the quantum hydrodynamic model derived by Degond and Ringhofer in [J. Statist. Phys., 112 (2003), pp. 587–628] (which will be referred to as the quantum Euler system).
Abstract: This article is devoted to the reformulation of an isothermal version of the quantum hydrodynamic model derived by Degond and Ringhofer in [J. Statist. Phys., 112 (2003), pp. 587–628] (which will be referred to as the quantum Euler system). We write the model under a simpler (differential) form. The derivation is based on an appropriate use of commutators. Starting from the quantum Liouville equation, the system of moments is closed by a density operator which minimizes the quantum free energy. Some properties of the model are then exhibited, and most of them rely on a gauge invariance property of the system. Several simplifications of the model are also written for the special case of irrotational flows. The second part of the paper is devoted to a formal analysis of the asymptotic behavior of the quantum Euler system in three situations: at the semiclassical limit, at the zero‐temperature limit, and at a diffusive limit. The remarkable fact is that in each case we recover a known model: respectively, th...

Journal ArticleDOI
TL;DR: In this article, the authors examined an application of the optimal prediction framework to the truncated Fourier-Galerkin approximation of Burgers's equation and showed that it restores qualitative features of the solution in the case where the number of small wavelength unresolved modes is insufficient to resolve the resulting shocks.
Abstract: We examine an application of the optimal prediction framework to the truncated Fourier–Galerkin approximation of Burgers’s equation. Under particular conditions on the density of the modes and the length of the memory kernel, optimal prediction introduces an additional term to the Fourier–Galerkin approximation which represents the influence of an arbitrary number of small wavelength unresolved modes on the long wavelength resolved modes. The modified system, called the t‐model by previous authors, takes the form of a time‐dependent cubic term added to the original quadratic system. Numerical experiments show that this additional term restores qualitative features of the solution in the case where the number of modes is insufficient to resolve the resulting shocks (i.e., zero or very small viscosity) and for which the original Fourier–Galerkin approximation is very poor. In particular, numerical examples are shown in which the kinetic energy decays in the same manner as in the exact solution, i.e., as $t^...

Journal ArticleDOI
TL;DR: It is shown, both theoretically and numerically, that a solitary wave is more robust than a linear wave in the early steps of the propagation, but it eventually decays much faster after a critical distance corresponding to the loss of about half of its initial amplitude.
Abstract: The deformation of a nonlinear pulse traveling in a dispersive random medium can be studied with asymptotic analysis based on separation of scales when the propagation distance is large compared to the correlation length of the random medium. We consider shallow water waves with a spatially random depth. We use a formulation in terms of a terrain-following Boussinesq system. We compute the effective evolution equation for the front pulse which can be written as a dissipative Korteweg-de Vries equation. We study the soliton dynamics driven by this system. We show, both theoretically and numerically, that a solitary wave is more robust than a linear wave in the early steps of the propagation. However, it eventually decays much faster after a critical distance corresponding to the loss of about half of its initial amplitude. We also perform an asymptotic analysis for a class of random bottom topographies. A universal behavior is captured through the asymptotic analysis of the metric term for the corresponding change to terrain-following coordinates. Within this class we characterize the effective height for highly disordered topographies. The probabilistic asymptotic results are illustrated by performing Monte Carlo simulations with a Schwarz-Christoffel Toolbox.

Journal ArticleDOI
TL;DR: This work suggests applying an aggregation/disaggregation method which addresses only well-conditioned subproblems and thus results in a stable algorithm.
Abstract: Whenever the invariant stationary density of metastable dynamical systems decomposes into almost invariant partial densities, its computation as eigenvector of some transition probability matrix is an ill-conditioned problem. In order to avoid this computational difficulty, we suggest applying an aggregation/disaggregation method which addresses only well-conditioned subproblems and thus results in a stable algorithm. In contrast to existing methods, the aggregation step is done via a sampling algorithm which covers only small patches of the sampling space. Finally, the theoretical analysis is illustrated by two biomolecular examples.
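The classical iterative aggregation/disaggregation (IAD) idea behind the method can be sketched on a tiny metastable chain (this is the standard scheme, not the paper's sampling-based variant; the matrix and block structure are assumptions made for illustration):

```python
import numpy as np

# The chain has two metastable blocks {0, 1} and {2, 3} coupled through a
# tiny rate eps; direct eigenvector computation becomes ill-conditioned as
# eps -> 0, while the 2x2 coarse chain between blocks and the within-block
# densities are each well conditioned.
eps = 1e-8
P = np.array([
    [0.5 - eps, 0.5,       eps,           0.0],
    [0.5,       0.5 - eps, 0.0,           eps],
    [3 * eps,   0.0,       0.3 - 3 * eps, 0.7],
    [0.0,       3 * eps,   0.7,           0.3 - 3 * eps],
])
blocks = [[0, 1], [2, 3]]

pi = np.full(4, 0.25)                        # initial guess
for _ in range(50):
    # aggregation: coarse transition probabilities between the blocks
    C = np.zeros((2, 2))
    for I, bi in enumerate(blocks):
        w = pi[bi] / pi[bi].sum()            # conditional density in block I
        for J, bj in enumerate(blocks):
            C[I, J] = w @ P[np.ix_(bi, bj)].sum(axis=1)
    # stationary density of the 2-state coarse chain, in closed form
    a, b = C[0, 1], C[1, 0]
    c = np.array([b, a]) / (a + b)
    # disaggregation plus one fine-level smoothing sweep
    for I, bi in enumerate(blocks):
        pi[bi] = c[I] * pi[bi] / pi[bi].sum()
    pi = pi @ P
    pi /= pi.sum()
```

The flux balance between the blocks fixes the exact stationary density (0.375, 0.375, 0.125, 0.125), which the iteration reaches without ever forming the nearly degenerate full eigenproblem.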

Journal ArticleDOI
TL;DR: A rigorous formulation of concurrent multiscale computing based on relaxation is presented; the connection between concurrent multISCale computing and enhanced-strain elements is established; and the approach in an important area of application, namely, single-crystal plasticity, is illustrated.
Abstract: This paper is concerned with the effective modeling of deformation microstructures within a concurrent multiscale computing framework. We present a rigorous formulation of concurrent multiscale computing based on relaxation; we establish the connection between concurrent multiscale computing and enhanced-strain elements; and we illustrate the approach in an important area of application, namely, single-crystal plasticity, for which the explicit relaxation of the problem is derived analytically. This example demonstrates the vast effect of microstructure formation on the macroscopic behavior of the sample, e.g., on the force/travel curve of a rigid indentor. Thus, whereas the unrelaxed model results in an overly stiff response, the relaxed model exhibits a proper limit load, as expected. Our numerical examples additionally illustrate that ad hoc element enhancements, e.g., based on polynomial, trigonometric, or similar representations, are unlikely to result in any significant relaxation in general.

Journal ArticleDOI
TL;DR: A new sparse spectral method is developed, in which the fast Fourier transform is replaced by RA$\mathcal{\ell}$SFA (randomized algorithm of sparse Fourier analysis), a sublinear randomized algorithm that takes time $O(B \log N)$ to recover a B-term Fourier representation for a signal of length N.
Abstract: We develop a new sparse spectral method, in which the fast Fourier transform (FFT) is replaced by RA$\mathcal{\ell}$SFA (randomized algorithm of sparse Fourier analysis); this is a sublinear randomized algorithm that takes time $O(B \log N)$ to recover a B-term Fourier representation for a signal of length N, where we assume $B \ll N$. To illustrate its potential, we consider the parabolic homogenization problem with a characteristic fine scale size $\varepsilon$. For fixed tolerance the sparse method has a computational cost of $O(|{\log\varepsilon}|)$ per time step, whereas standard methods cost at least $O(\varepsilon^{-1})$. We present a theoretical analysis as well as numerical results; they show the advantage of the new method in speed over the traditional spectral methods when $\varepsilon$ is very small. We also show some ways to extend the methods to hyperbolic and elliptic problems.
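What a "B-term Fourier representation" means can be shown concretely. The sketch below recovers the terms with a full FFT plus thresholding, which costs O(N log N); the point of RAℓSFA, not reproduced here, is to find the same B terms in sublinear O(B log N)-type time by randomized sampling. Frequencies and amplitudes are arbitrary choices:

```python
import numpy as np

# Build a length-N signal that is exactly B-sparse in Fourier and recover
# its B-term representation by keeping the B largest FFT coefficients.
N, B = 1024, 3
k_true = [7, 130, 512]                       # assumed active frequencies
c_true = [2.0, -1.0, 0.5]                    # assumed amplitudes

n = np.arange(N)
signal = sum(c * np.exp(2j * np.pi * k * n / N) for k, c in zip(k_true, c_true))

coeffs = np.fft.fft(signal) / N              # normalized Fourier coefficients
top = np.argsort(np.abs(coeffs))[-B:]        # keep the B largest terms

approx = sum(coeffs[k] * np.exp(2j * np.pi * k * n / N) for k in top)
```

For an exactly B-sparse signal the B-term representation reproduces it to rounding error, which is the regime the sparse spectral method exploits when the homogenized solution has few active modes.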

Journal ArticleDOI
TL;DR: The problem of simulating the slow observable of a multiscale diffusion process is studied and an algorithm is improved, using the past simulations as control variates, in order to reduce the variance of the subsequent simulations.
Abstract: We study the problem of simulating the slow observable of a multiscale diffusion process. In particular, we extend previous algorithms to the case where the simulation of the different scales cannot be uncoupled and we have no explicit knowledge of the drift or the variance of the multiscale diffusion. This is the case when the simulation data come from a black box “legacy code,” or possibly from a fine scale simulator (e.g., MD, kMC) which we want to effectively model as a diffusion process. We improve the algorithm, using the past simulations as control variates, in order to reduce the variance of the subsequent simulations.
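The variance-reduction mechanism, control variates, can be sketched generically. The paper builds its control from past multiscale simulations; here a toy integrand stands in, and everything below is an illustration rather than the paper's algorithm:

```python
import numpy as np

# Estimate E[exp(Z)] for Z ~ N(0, 1), using Z itself (whose mean is known
# to be 0) as the control variate.
rng = np.random.default_rng(4)
z = rng.standard_normal(100_000)
f = np.exp(z)

beta = np.cov(f, z)[0, 1] / np.var(z)     # estimated optimal coefficient
plain = f.mean()                          # plain Monte Carlo estimator
cv = (f - beta * z).mean()                # control-variate estimator
ratio = np.var(f - beta * z) / np.var(f)  # variance left after correction
```

Both estimators target E[exp(Z)] = e^{1/2} ≈ 1.649, but the corrected samples retain well under half the variance; reusing past simulations as the control is what makes subsequent simulations of the slow observable cheaper.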

Journal ArticleDOI
TL;DR: This work quantifies the influence of small objects on (i) the energy density measured at an array of detectors and (ii) the correlation between the wave field measured in the absence of the object and theWave field measuredIn the presence of the objects.
Abstract: We derive kinetic models for the correlations and the energy densities of wave fields propagating in random media. These models take the form of radiative transfer and diffusion equations. We use these macroscopic models to address the detection and imaging of small objects buried in highly heterogeneous media. More specifically, we quantify the influence of small objects on (i) the energy density measured at an array of detectors and (ii) the correlation between the wave field measured in the absence of the object and the wave field measured in the presence of the object. We analyze the advantages and disadvantages of such measurements as a function of the level of disorder in the random media. Numerical simulations verify the theoretical predictions.

Journal ArticleDOI
TL;DR: In the present paper a network model for supply chains with policy attributes is introduced and numerical results are presented for several different examples.
Abstract: In the present paper a network model for supply chains with policy attributes is introduced. The proposed network model is an extension of the single lane model with policy attributes presented in [D. Armbruster, P. Degond, and C. Ringhofer, Transport Theory Statist. Phys., submitted]. The single lane model is extended to the network case using the ideas developed in [S. Gottlich, M. Herty, and A. Klar, Commun. Math. Sci., 3 (2005), pp. 545–559; S. Gottlich, M. Herty, and A. Klar, Commun. Math. Sci., 4 (2006), pp. 315–330]. Numerical results are presented for several different examples.

Journal ArticleDOI
TL;DR: This paper designs a multi-scale mathematical model where ovulation and atresia result from a hormonally controlled selection process, and describes the set of microscopic initial conditions leading to the macroscopic phenomenon of either ovulation or atresia, in the framework of backwards reachable sets theory.
Abstract: During each ovarian cycle, only a definite number of follicles ovulate, while the others undergo a degeneration process called atresia. We have designed a multi-scale mathematical model where ovulation and atresia result from a hormonally controlled selection process. A 2D-conservation law describes the age and maturity structuration of the follicular cell population. In this paper, we focus on the operating mode of the control, through the study of the characteristics of the conservation law. We describe in particular the set of microscopic initial conditions leading to the macroscopic phenomenon of either ovulation or atresia, in the framework of backwards reachable sets theory.