Author

C. William Gear

Bio: C. William Gear is an academic researcher from Princeton University. He has contributed to research on topics including state variables and nonlinear dimensionality reduction, has an h-index of 16, and has co-authored 28 publications receiving 2,192 citations.

Papers
Journal ArticleDOI
TL;DR: A framework for computer-aided multiscale analysis is presented that enables models at a fine (microscopic/stochastic) level of description to perform modeling tasks at a coarse (macroscopic, systems) level, and that can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form.
Abstract: We present and discuss a framework for computer-aided multiscale analysis, which enables models at a fine (microscopic/stochastic) level of description to perform modeling tasks at a coarse (macroscopic, systems) level. These macroscopic modeling tasks, yielding information over long time and large space scales, are accomplished through appropriately initialized calls to the microscopic simulator for only short times and small spatial domains. Traditional modeling approaches first involve the derivation of macroscopic evolution equations (balances closed through constitutive relations). An arsenal of analytical and numerical techniques for the efficient solution of such evolution equations (usually Partial Differential Equations, PDEs) is then brought to bear on the problem. Our equation-free (EF) approach, introduced in (1), when successful, can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form. We discuss how the mathematics-assisted development of a computational superstructure may enable alternative descriptions of the problem physics (e.g. Lattice Boltzmann (LB), kinetic Monte Carlo (KMC) or Molecular Dynamics (MD) microscopic simulators, executed over relatively short time and space scales) to perform systems-level tasks (integration over relatively large time and space scales, "coarse" bifurcation analysis, optimization, and control) directly. In effect, the procedure constitutes a system-identification-based, "closure-on-demand" computational toolkit, bridging microscopic/stochastic simulation with traditional continuum scientific computation and numerical analysis. We will briefly survey the application of these "numerical enabling technology" ideas through examples including the computation of coarsely self-similar solutions, and discuss various features, limitations and potential extensions of the approach.

852 citations
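The mechanics behind "appropriately initialized calls to the microscopic simulator for only short times" can be sketched as coarse projective integration: lift a coarse state to a microscopic ensemble, run a short burst, restrict back, estimate the coarse time derivative, and take a large extrapolative step. A minimal sketch, in which the "microscopic simulator" (noisy relaxation toward zero), the lift/restrict maps, and all parameters are illustrative stand-ins, not the paper's actual models:

```python
# Minimal sketch of coarse projective integration (equation-free framework).
# The microscopic simulator and lift/restrict maps are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def lift(macro_mean, n_particles=10_000):
    """Lift: create a microscopic ensemble consistent with the coarse state."""
    return macro_mean + 0.1 * rng.standard_normal(n_particles)

def restrict(particles):
    """Restrict: extract the coarse observable (here, the ensemble mean)."""
    return particles.mean()

def micro_step(particles, dt):
    """Stand-in microscopic simulator: noisy relaxation toward zero."""
    return particles - particles * dt + np.sqrt(dt) * 0.05 * rng.standard_normal(particles.size)

def coarse_projective_step(u, dt_micro, n_micro, dt_projective):
    """One lift -> short micro burst -> restrict -> extrapolate cycle."""
    particles = lift(u)
    for _ in range(n_micro):
        particles = micro_step(particles, dt_micro)
    u_short = restrict(particles)
    # Estimate du/dt from the short burst, then take a large projective step.
    dudt = (u_short - u) / (n_micro * dt_micro)
    return u_short + dt_projective * dudt

u = 1.0
for _ in range(20):
    u = coarse_projective_step(u, dt_micro=1e-3, n_micro=10, dt_projective=0.05)
print(u)  # decays roughly like exp(-t), the "unavailable" macroscopic equation
```

The point is structural: the outer loop never touches a macroscopic equation; it only designs and harvests short computational experiments.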

Journal ArticleDOI
TL;DR: A mathematically inspired computational enabling technology, developed and validated with several collaborators over the last few years, allows the modeler to perform macroscopic tasks by acting on microscopic models directly, and can lead to experimental protocols for the equation-free exploration of complex system dynamics.
Abstract: The best available descriptions of systems often come at a fine level (atomistic, stochastic, microscopic, agent based), whereas the questions asked and the tasks required by the modeler (prediction, parametric analysis, optimization, and control) are at a much coarser, macroscopic level. Traditional modeling approaches start by deriving macroscopic evolution equations from microscopic models, and then bringing an arsenal of computational tools to bear on these macroscopic descriptions. Over the last few years with several collaborators, we have developed and validated a mathematically inspired, computational enabling technology that allows the modeler to perform macroscopic tasks acting on the microscopic models directly. We call this the "equation-free" approach, since it circumvents the step of obtaining accurate macroscopic descriptions. The backbone of this approach is the design of computational "experiments". In traditional numerical analysis, the main code "pings" a subroutine containing the model, and uses the returned information (time derivatives, etc.) to perform computer-assisted analysis. In our approach the same main code "pings" a subroutine that runs an ensemble of appropriately initialized computational experiments from which the same quantities are estimated. Traditional continuum numerical algorithms can, thus, be viewed as protocols for experimental design (where "experiment" means a computational experiment set up and performed with a model at a different level of description). Ultimately, what makes it all possible is the ability to initialize computational experiments at will. Short bursts of appropriately initialized computational experimentation, through matrix-free numerical analysis and systems-theory tools like estimation, bridge microscopic simulation with macroscopic modeling. If enough control authority exists to initialize laboratory experiments "at will", this computational enabling technology can lead to experimental protocols for the equation-free exploration of complex system dynamics.

391 citations
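The "numerical analysis as experimental design" idea can be made concrete: wrap the microscopic ensemble in a coarse time-stepper Phi_T and hand the fixed-point residual u - Phi_T(u) to an off-the-shelf matrix-free solver. A minimal sketch, assuming a toy bistable stochastic simulator and using common random numbers so the residual is deterministic:

```python
# Sketch: coarse steady states via matrix-free Newton-Krylov wrapped around a
# coarse time-stepper. The microscopic model (an ensemble with bistable drift)
# is a toy stand-in; the solver only ever calls Phi_T, never a macro equation.
import numpy as np
from scipy.optimize import newton_krylov

def coarse_timestepper(u, T=0.1, n_particles=20_000, seed=1):
    """Lift the coarse state u, run a short microscopic burst, restrict back."""
    # Common random numbers (fixed seed) make Phi_T deterministic, so the
    # solver's finite-difference directional derivatives stay clean.
    rng = np.random.default_rng(seed)
    x = u + 0.05 * rng.standard_normal(n_particles)          # lift
    n_steps = 50
    dt = T / n_steps
    for _ in range(n_steps):                                 # short micro burst
        x += dt * (x - x**3) + np.sqrt(dt) * 0.02 * rng.standard_normal(n_particles)
    return x.mean()                                          # restrict

def residual(u):
    # Coarse steady states satisfy u = Phi_T(u).
    return u - np.array([coarse_timestepper(float(u[0]))])

u_star = newton_krylov(residual, np.array([0.7]))
print(u_star)  # converges near +1, a stable coarse equilibrium of the toy model
```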

Journal ArticleDOI
TL;DR: In this paper, the authors define new "maximum indices", which are the maxima of earlier indices over a set of perturbations in a neighborhood of the solution, and show that these indices are simply related to each other.
Abstract: In the last few years there has been considerable research on differential algebraic equations (DAEs) $F(t, y, y') = 0$ where $F_{y'}$ is identically singular. Much of the mathematical effort has focused on computing a solution that is assumed to exist. More recently there has been some discussion of solvability of DAEs. There has historically been some imprecision in the use of the two key concepts of solvability and index for DAEs. The index is also important in control and systems theory but with different terminology. The consideration of increasingly complex nonlinear DAEs makes a clear and correct development necessary. This paper will try to clarify several points concerning the index. After establishing some new and more precise terminology that we need, some inaccuracies in the literature will be corrected. The two types of indices most frequently used, the differentiation index and the perturbation index, are defined with respect to solutions of unperturbed problems. Examples are given to show that these indices can be very different for the same problem. We define new "maximum indices," which are the maxima of earlier indices in a neighborhood of the solution over a set of perturbations and show that these indices are simply related to each other. These indices are also related to an index defined in terms of Jacobians.

259 citations
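A standard textbook illustration (not drawn from the paper itself) makes the differentiation index concrete: the semi-explicit DAE $y' = z$, $0 = y - g(t)$ has differentiation index 2. Differentiating the constraint once gives $0 = y' - g'(t)$, i.e. $z = g'(t)$, which still determines no derivative of $z$; differentiating once more gives $z' = g''(t)$, so two differentiations are needed before the system yields an explicit ODE for $(y, z)$. A perturbed constraint $0 = y - g(t) + \delta(t)$ propagates as $z = g'(t) + \delta'(t)$, showing why bounds for the perturbation index involve derivatives of the perturbation.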

Posted Content
TL;DR: A framework for computer-aided multiscale analysis is presented that enables models at a "fine" (microscopic/stochastic) level of description to perform modeling tasks at a "coarse" (macroscopic, systems) level, and that can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form.
Abstract: We present and discuss a framework for computer-aided multiscale analysis, which enables models at a "fine" (microscopic/stochastic) level of description to perform modeling tasks at a "coarse" (macroscopic, systems) level. These macroscopic modeling tasks, yielding information over long time and large space scales, are accomplished through appropriately initialized calls to the microscopic simulator for only short times and small spatial domains. Our equation-free (EF) approach, when successful, can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form. We discuss how the mathematics-assisted development of a computational superstructure may enable alternative descriptions of the problem physics (e.g. Lattice Boltzmann (LB), kinetic Monte Carlo (KMC) or Molecular Dynamics (MD) microscopic simulators, executed over relatively short time and space scales) to perform systems-level tasks (integration over relatively large time and space scales, "coarse" bifurcation analysis, optimization, and control) directly. In effect, the procedure constitutes a systems-identification-based, "closure-on-demand" computational toolkit, bridging microscopic/stochastic simulation with traditional continuum scientific computation and numerical analysis. We illustrate these ideas through examples from chemical kinetics (LB, KMC), rheology (Brownian Dynamics), homogenization and the computation of "coarsely self-similar" solutions, and discuss various features, limitations and potential extensions of the approach.

134 citations

Journal ArticleDOI
TL;DR: In this paper, the gap-tooth scheme is used for multiscale modeling of systems represented by microscopic physics-based simulators when coarse-grained evolution equations are not available in closed form.

130 citations
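The gap-tooth scheme itself is easy to caricature: run the fine-scale simulator only in small "teeth" scattered over the domain, and bridge the "gaps" by interpolating the coarse field to supply tooth boundary conditions. A minimal sketch for 1D diffusion, with all parameters and the inner fine-grid solver chosen purely for illustration, not the paper's algorithm verbatim:

```python
# Sketch of a gap-tooth step for 1D diffusion: a fine solver runs only inside
# small "teeth"; boundary conditions are interpolated across the gaps.
import numpy as np

L, n_teeth, pts = 1.0, 10, 11                  # domain, teeth, fine points/tooth
cell = L / n_teeth
h_tooth = 0.2 * cell                           # each tooth covers 20% of its cell
centers = (np.arange(n_teeth) + 0.5) * cell
dx = h_tooth / (pts - 1)
D = 1.0
dt = 0.2 * dx**2 / D                           # stable explicit fine step

def gap_tooth_step(U, n_fine=20):
    """Advance the coarse field U (tooth averages) by one reporting horizon."""
    U_new = np.empty_like(U)
    slopes = np.gradient(U, centers)           # linear lifting inside each tooth
    for i in range(n_teeth):
        x = centers[i] + np.linspace(-h_tooth / 2, h_tooth / 2, pts)
        u = U[i] + slopes[i] * (x - centers[i])
        left = np.interp(x[0], centers, U)     # gap-bridging boundary conditions
        right = np.interp(x[-1], centers, U)
        for _ in range(n_fine):                # short fine burst in the tooth
            u[0], u[-1] = left, right
            u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        U_new[i] = u.mean()                    # restrict: tooth average
    return U_new

U = np.sin(np.pi * centers)                    # initial coarse profile
for _ in range(100):
    U = gap_tooth_step(U)
print(U.round(3))                              # diffusively smoothed profile
```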


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Van Kampen provides an extensive graduate-level introduction which is clear, cautious, interesting and readable, and the book could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes.
Abstract: N. G. van Kampen 1981 Amsterdam: North-Holland, xiv + 419 pp, price Dfl 180. This is a book which, at a lower price, could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes, as well as those who just enjoy a beautifully written book. It provides an extensive graduate-level introduction which is clear, cautious, interesting and readable.

3,647 citations

Journal ArticleDOI
TL;DR: This work develops a novel framework to discover the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning, and uses sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
Abstract: Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.

2,784 citations
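The sparse-regression core of this method (sequentially thresholded least squares over a library of candidate terms) fits in a few lines. A minimal sketch on Lorenz data; the library, threshold, and use of exact derivatives in place of numerically differentiated measurements are simplifying assumptions:

```python
# Sketch of sparse identification of governing equations: build a library of
# candidate terms, then alternate least squares with hard thresholding.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 10, 5000)
sol = solve_ivp(lorenz, (0, 10), [1.0, 1.0, 1.0], t_eval=t)
X = sol.y.T
dX = np.array([lorenz(0, s) for s in X])     # exact derivatives, for simplicity

def library(X):
    """Candidate terms: constant, linear, and quadratic monomials in (x, y, z)."""
    x, y, z = X.T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * x, x * y, x * z, y * y, y * z, z * z])

Theta = library(X)

def stlsq(Theta, dX, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: zero small coefficients, refit."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dX.shape[1]):         # refit each equation on active terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, dX)
print(np.round(Xi, 2))  # nonzero rows recover the Lorenz right-hand sides
```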

Journal ArticleDOI
TL;DR: It is shown how efficient high-dimensional proposal distributions can be built by using sequential Monte Carlo methods, which makes it possible not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so.
Abstract: Summary. Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.

1,869 citations
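The construction at the heart of this line of work can be sketched compactly: a bootstrap particle filter supplies a likelihood estimate for a parameter, and a Metropolis-Hastings chain accepts or rejects with it (particle marginal Metropolis-Hastings). The toy AR(1)-plus-noise state-space model, particle count, proposal scale, and flat prior below are all illustrative assumptions, not the paper's examples:

```python
# Sketch of particle marginal Metropolis-Hastings: a bootstrap particle filter
# estimates log p(y | phi); a Metropolis-Hastings chain samples phi with it.
import numpy as np

rng = np.random.default_rng(2)

# Simulate data: x_t = phi * x_{t-1} + v_t,  y_t = x_t + w_t,  w_t ~ N(0, 0.5^2)
phi_true, T, obs_std = 0.8, 100, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()
y = x + obs_std * rng.standard_normal(T)

def log_likelihood(phi, n_particles=200):
    """Bootstrap particle filter estimate of log p(y | phi)."""
    particles = rng.standard_normal(n_particles)
    ll = 0.0
    for t in range(T):
        particles = phi * particles + rng.standard_normal(n_particles)  # propagate
        logw = -0.5 * ((y[t] - particles) / obs_std) ** 2
        m = logw.max()
        ll += m + np.log(np.exp(logw - m).mean()) - np.log(obs_std * np.sqrt(2 * np.pi))
        w = np.exp(logw - m)
        particles = rng.choice(particles, n_particles, p=w / w.sum())    # resample
    return ll

# Metropolis-Hastings over phi (flat prior, symmetric random-walk proposal).
phi, ll = 0.5, log_likelihood(0.5)
samples = []
for _ in range(500):
    phi_prop = phi + 0.05 * rng.standard_normal()
    ll_prop = log_likelihood(phi_prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        phi, ll = phi_prop, ll_prop
    samples.append(phi)
print(np.mean(samples[200:]))  # posterior mean, typically near phi_true = 0.8
```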

Journal ArticleDOI
TL;DR: The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration.
Abstract: We provide a framework for structural multiscale geometric organization of graphs and subsets of $\mathbb{R}^n$. We use diffusion semigroups to generate multiscale geometries in order to organize and represent complex structures. We show that appropriately selected eigenfunctions or scaling functions of Markov matrices, which describe local transitions, lead to macroscopic descriptions at different scales. The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration. We provide a unified view of ideas from data analysis, machine learning, and numerical analysis.

1,654 citations
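The diffusion-map recipe in the abstract translates almost line by line into code: form a kernel on the data, row-normalize it into a Markov matrix of local transitions, and read multiscale coordinates off its leading eigenfunctions. A minimal sketch on a toy data set; the kernel choice and bandwidth are illustrative assumptions:

```python
# Sketch of the diffusion-map construction: Gaussian kernel -> row-normalized
# Markov matrix of local transitions -> leading eigenfunctions as coordinates.
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a noisy circle embedded in the plane.
theta = rng.uniform(0, 2 * np.pi, 400)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.05 * rng.standard_normal(X.shape)

eps = 0.1
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
K = np.exp(-D2 / eps)                                # Gaussian kernel
P = K / K.sum(axis=1, keepdims=True)                 # Markov matrix (local transitions)

# Eigenfunctions of P give diffusion coordinates; taking powers of P
# ("iterating or diffusing the Markov matrix") probes coarser and coarser scales.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
lam, psi = vals.real[order], vecs.real[:, order]
coords = lam[1:3, None] * psi[:, 1:3].T              # skip the trivial psi_0
print(coords.shape)  # (2, 400): the leading pair parametrizes the circle's angle
```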