Author

Constantinos Theodoropoulos

Bio: Constantinos Theodoropoulos is an academic researcher from the University of Manchester. The author has contributed to research in topics including Nonlinear system and Model predictive control. The author has an h-index of 27 and has co-authored 122 publications receiving 3397 citations. Previous affiliations of Constantinos Theodoropoulos include the State University of New York System and the University at Buffalo.


Papers
Journal ArticleDOI
TL;DR: A framework for computer-aided multiscale analysis is presented, which enables models at a fine (microscopic/stochastic) level of description to perform modeling tasks at a coarse (macroscopic, systems) level, and which can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form.
Abstract: We present and discuss a framework for computer-aided multiscale analysis, which enables models at a fine (microscopic/stochastic) level of description to perform modeling tasks at a coarse (macroscopic, systems) level. These macroscopic modeling tasks, yielding information over long time and large space scales, are accomplished through appropriately initialized calls to the microscopic simulator for only short times and small spatial domains. Traditional modeling approaches first involve the derivation of macroscopic evolution equations (balances closed through constitutive relations). An arsenal of analytical and numerical techniques for the efficient solution of such evolution equations (usually Partial Differential Equations, PDEs) is then brought to bear on the problem. Our equation-free (EF) approach, introduced in (1), when successful, can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form. We discuss how the mathematics-assisted development of a computational superstructure may enable alternative descriptions of the problem physics (e.g. Lattice Boltzmann (LB), kinetic Monte Carlo (KMC) or Molecular Dynamics (MD) microscopic simulators, executed over relatively short time and space scales) to perform systems level tasks (integration over relatively large time and space scales, "coarse" bifurcation analysis, optimization, and control) directly. In effect, the procedure constitutes a system identification based, "closure-on-demand" computational toolkit, bridging microscopic/stochastic simulation with traditional continuum scientific computation and numerical analysis. We will briefly survey the application of these "numerical enabling technology" ideas through examples including the computation of coarsely self-similar solutions, and discuss various features, limitations and potential extensions of the approach.

852 citations
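For readers unfamiliar with the coarse time-stepper at the heart of this equation-free framework, the following minimal Python sketch illustrates the lift-evolve-restrict cycle: a coarse density field is lifted to an ensemble of particles, a fine-scale simulator (here a toy ensemble of Brownian walkers standing in for an LB, KMC or MD code) is run for a short burst, and the result is restricted back to the coarse field. The particle model, grid and parameter values are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of an equation-free coarse time-stepper (lift -> evolve -> restrict).
# The microscopic model (Brownian walkers) and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x_grid = np.linspace(0.0, 1.0, 51)          # coarse grid for the density field
dx = x_grid[1] - x_grid[0]

def lift(density, n_particles=20000):
    """Sample particle positions consistent with a coarse density profile."""
    p = np.maximum(density, 0.0)
    p = p / p.sum()
    cells = rng.choice(x_grid.size, size=n_particles, p=p)
    jitter = dx * (rng.random(n_particles) - 0.5)
    return np.clip(x_grid[cells] + jitter, 0.0, 1.0)

def micro_evolve(particles, t_burst, dt=1e-4, diffusivity=0.1):
    """Short burst of the fine-scale simulator: Brownian walkers in [0, 1]."""
    for _ in range(int(round(t_burst / dt))):
        particles = particles + np.sqrt(2.0 * diffusivity * dt) * rng.standard_normal(particles.size)
        particles = np.clip(particles, 0.0, 1.0)   # crude boundary treatment
    return particles

def restrict(particles):
    """Project the particle ensemble back to the coarse density field."""
    hist, _ = np.histogram(particles, bins=x_grid.size, range=(-dx / 2, 1.0 + dx / 2))
    return hist / (particles.size * dx)

def coarse_timestepper(density, t_burst):
    """One coarse step: lift, run the microscopic simulator briefly, restrict."""
    return restrict(micro_evolve(lift(density), t_burst))

rho0 = np.exp(-((x_grid - 0.5) ** 2) / 0.01)     # initial coarse profile
rho1 = coarse_timestepper(rho0, t_burst=0.01)    # coarse field after one short burst
```

Systems-level tasks (coarse integration, bifurcation analysis, optimization, control) are then performed on coarse_timestepper alone, without ever writing down a macroscopic PDE.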

Journal ArticleDOI
TL;DR: An adaptation of time-stepper-based approaches is demonstrated that allows for a direct, effective ("coarse") bifurcation analysis of microscopic, kinetic-based models; this is illustrated through a comparative study of the FitzHugh-Nagumo PDE and a corresponding Lattice-Boltzmann model.
Abstract: Evolutionary, pattern forming partial differential equations (PDEs) are often derived as limiting descriptions of microscopic, kinetic theory-based models of molecular processes (e.g., reaction and diffusion). The PDE dynamic behavior can be probed through direct simulation (time integration) or, more systematically, through stability/bifurcation calculations; time-stepper-based approaches, like the Recursive Projection Method [Shroff, G. M. & Keller, H. B. (1993) SIAM J. Numer. Anal. 30, 1099–1120] provide an attractive framework for the latter. We demonstrate an adaptation of this approach that allows for a direct, effective (“coarse”) bifurcation analysis of microscopic, kinetic-based models; this is illustrated through a comparative study of the FitzHugh-Nagumo PDE and of a corresponding Lattice–Boltzmann model.

266 citations
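The coarse bifurcation idea can be sketched as follows: wrap whatever coarse time-stepper is available (for example one built around a Lattice-Boltzmann simulator, as in the paper) as a map u -> Phi_T(u, lam) and locate coarse steady states as fixed points of that map. The Recursive Projection Method cited above does this far more efficiently for large systems; the dense finite-difference Jacobian and plain Newton iteration below, and the function names, are an illustrative simplification.

```python
# Hedged sketch: coarse steady states as fixed points of a coarse time-stepper map.
import numpy as np

def coarse_steady_state(phi_T, u0, lam, tol=1e-8, max_iter=20, eps=1e-6):
    """Solve G(u) = u - phi_T(u, lam) = 0 by Newton's method with an FD Jacobian."""
    u = u0.copy()
    for _ in range(max_iter):
        G = u - phi_T(u, lam)
        if np.linalg.norm(G) < tol:
            break
        J = np.empty((u.size, u.size))
        for j in range(u.size):                      # finite-difference Jacobian of G
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = ((u + du) - phi_T(u + du, lam) - G) / eps
        u = u - np.linalg.solve(J, G)
    return u

def continue_branch(phi_T, u0, lams):
    """Naive parameter continuation: reuse the previous steady state as the next guess.
    Coarse stability follows from the eigenvalues of d(phi_T)/du at each converged point
    (magnitudes below one indicate coarse stability)."""
    branch, u = [], u0
    for lam in lams:
        u = coarse_steady_state(phi_T, u, lam)
        branch.append(u.copy())
    return branch
```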

Journal ArticleDOI
TL;DR: A time-stepper-based approach is presented for the 'coarse' integration and stability/bifurcation analysis of distributed reacting system models, which can circumvent the derivation of accurate, closed-form, macroscopic PDE descriptions of the system.

231 citations
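One common way to organise such 'coarse' integration is coarse projective integration: two short bursts of the microscopic simulator, taken through a coarse time-stepper, provide an estimate of the coarse time derivative, which is then used to extrapolate the coarse field over a much larger projective step. The sketch below is schematic; the step sizes and the coarse_timestepper callable are illustrative assumptions.

```python
# Hedged sketch of one coarse projective (projective-Euler) step.
import numpy as np

def coarse_projective_step(coarse_timestepper, u, t_burst=0.01, t_project=0.1):
    """Advance a coarse field by 2 * t_burst + t_project using short microscopic bursts."""
    u1 = coarse_timestepper(u, t_burst)      # first short microscopic burst
    u2 = coarse_timestepper(u1, t_burst)     # second burst, to estimate du/dt
    dudt = (u2 - u1) / t_burst               # coarse time-derivative estimate
    return u2 + t_project * dudt             # large extrapolative "projective" step
```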

Journal ArticleDOI
01 Aug 2011 - Energy
TL;DR: It is found that succinic acid co-production can enhance the profit of the overall biorefinery by 60% over a 20-year plant lifetime, indicating the importance of glycerol when it is utilised as a key renewable building block for the production of commodity chemicals.

205 citations

Posted Content
TL;DR: A framework for computer-aided multiscale analysis is presented, which enables models at a "fine" (microscopic/stochastic) level of description to perform modeling tasks at a "coarse" (macroscopic, systems) level, and which can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form.
Abstract: We present and discuss a framework for computer-aided multiscale analysis, which enables models at a "fine" (microscopic/stochastic) level of description to perform modeling tasks at a "coarse" (macroscopic, systems) level. These macroscopic modeling tasks, yielding information over long time and large space scales, are accomplished through appropriately initialized calls to the microscopic simulator for only short times and small spatial domains. Our equation-free (EF) approach, when successful, can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form. We discuss how the mathematics-assisted development of a computational superstructure may enable alternative descriptions of the problem physics (e.g. Lattice Boltzmann (LB), kinetic Monte Carlo (KMC) or Molecular Dynamics (MD) microscopic simulators, executed over relatively short time and space scales) to perform systems level tasks (integration over relatively large time and space scales, "coarse" bifurcation analysis, optimization, and control) directly. In effect, the procedure constitutes a systems identification based, "closure on demand" computational toolkit, bridging microscopic/stochastic simulation with traditional continuum scientific computation and numerical analysis. We illustrate these ideas through examples from chemical kinetics (LB, KMC), rheology (Brownian Dynamics), homogenization and the computation of "coarsely self-similar" solutions, and discuss various features, limitations and potential extensions of the approach.

134 citations


Cited by
Journal ArticleDOI
TL;DR: Van Kampen, as discussed in this review, provides an extensive graduate-level introduction which is clear, cautious, interesting and readable, and the book could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes.
Abstract: N G van Kampen 1981 Amsterdam: North-Holland, xiv + 419 pp, price Dfl 180. This is a book which, at a lower price, could be expected to become an essential part of the library of every physical scientist concerned with problems involving fluctuations and stochastic processes, as well as those who just enjoy a beautifully written book. It provides an extensive graduate-level introduction which is clear, cautious, interesting and readable.

3,647 citations

Journal ArticleDOI
TL;DR: This work develops a novel framework to discover governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning and using sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
Abstract: Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.

2,784 citations
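The sparse regression step described in the abstract can be illustrated with a sequentially thresholded least-squares sketch over a small library of candidate terms; the library, the threshold and the toy damped-oscillator data below are illustrative choices, not the authors' code.

```python
# Hedged sketch: discover sparse governing equations by thresholded least squares.
import numpy as np

def library(X):
    """Candidate terms for a 2-state system: constant, x, y, x^2, x*y, y^2."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def sparse_regression(Theta, dXdt, threshold=0.05, n_iter=10):
    """Least squares followed by repeated hard thresholding of small coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):        # refit the surviving terms, column by column
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Synthetic trajectory of dx/dt = -0.1*x + 2*y, dy/dt = -2*x - 0.1*y
t = np.linspace(0.0, 10.0, 2000)
X = np.column_stack([np.exp(-0.1 * t) * np.cos(2 * t),
                     -np.exp(-0.1 * t) * np.sin(2 * t)])
dXdt = np.gradient(X, t, axis=0)              # numerical derivative estimates
Xi = sparse_regression(library(X), dXdt)
print(Xi)  # nonzero coefficients should appear (approximately) only in the x and y rows
```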

Journal ArticleDOI
TL;DR: A method is presented for determining the free-energy dependence on a selected number of collective variables using an adaptive bias; the formalism provides a unified description which has metadynamics and canonical sampling as limiting cases.
Abstract: We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.

2,174 citations
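The following is a schematic sketch of one adaptive-bias scheme of this kind: during overdamped Langevin sampling of a double-well collective variable, Gaussian bias hills are deposited periodically with heights that shrink as bias accumulates at the current value of the variable, so that a large bias factor approaches plain metadynamics and a bias factor near one adds almost no bias (canonical sampling). The model potential and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of adaptive Gaussian biasing on a 1D collective variable s.
import numpy as np

rng = np.random.default_rng(1)
kT, dt, bias_factor = 1.0, 1e-3, 10.0
hill_h0, hill_w, deposit_every = 0.5, 0.1, 500

# Double-well free-energy landscape F(s) = (s^2 - 1)^2 and its (negative-gradient) force
force = lambda s: -4.0 * s * (s**2 - 1.0)

centers, heights = [], []

def bias(s):
    """Total deposited bias V_b(s) = sum of Gaussian hills."""
    if not centers:
        return 0.0
    c, h = np.array(centers), np.array(heights)
    return float(np.sum(h * np.exp(-0.5 * ((s - c) / hill_w) ** 2)))

def bias_force(s):
    """-dV_b/ds, pushing the system away from already-visited values of s."""
    if not centers:
        return 0.0
    c, h = np.array(centers), np.array(heights)
    return float(np.sum(h * ((s - c) / hill_w**2) * np.exp(-0.5 * ((s - c) / hill_w) ** 2)))

s = -1.0
for step in range(50000):                     # overdamped Langevin dynamics on s
    s += dt * (force(s) + bias_force(s)) + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    if step % deposit_every == 0:
        # hill height decays with the bias already deposited here (well-tempered-style rule)
        heights.append(hill_h0 * np.exp(-bias(s) / ((bias_factor - 1.0) * kT)))
        centers.append(s)

# The accumulated bias approximates -(1 - 1/bias_factor) * F(s) up to an additive constant.
```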

Journal ArticleDOI
TL;DR: The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration.
Abstract: We provide a framework for structural multiscale geometric organization of graphs and subsets of R(n). We use diffusion semigroups to generate multiscale geometries in order to organize and represent complex structures. We show that appropriately selected eigenfunctions or scaling functions of Markov matrices, which describe local transitions, lead to macroscopic descriptions at different scales. The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration. We provide a unified view of ideas from data analysis, machine learning, and numerical analysis.

1,654 citations
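The construction summarised above can be sketched in a few lines: build a Gaussian affinity kernel on the data, normalise its rows into a Markov transition matrix, and use the leading non-trivial eigenvectors, scaled by powers of their eigenvalues, as multiscale coordinates. The kernel scale and the toy data set below are illustrative assumptions.

```python
# Hedged sketch of a diffusion-map embedding from a Markov matrix of local transitions.
import numpy as np

def diffusion_map(X, epsilon, n_coords=2, t=1):
    """Return n_coords diffusion coordinates for the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    K = np.exp(-d2 / epsilon)                                    # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                         # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)                              # sort by decreasing eigenvalue
    evals, evecs = evals.real[order], evecs.real[:, order]
    # Skip the trivial constant eigenvector; scaling by evals**t corresponds to
    # iterating ("diffusing") the Markov chain t steps.
    return (evals[1:n_coords + 1] ** t) * evecs[:, 1:n_coords + 1]

# Toy usage: points on a noisy circle; the first two diffusion coordinates
# recover the angular organisation of the data.
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.standard_normal((300, 2))
coords = diffusion_map(X, epsilon=0.5)
```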