Journal ArticleDOI

A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems

05 Nov 2015, SIAM Review (Society for Industrial and Applied Mathematics), Vol. 57, Iss. 4, pp. 483–531
TL;DR: Model reduction aims to reduce the computational burden of large-scale simulation by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. While model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books, parametric model reduction has emerged more recently as an important research area.
Abstract: Numerical simulation of large-scale dynamical systems plays a fundamental role in studying a wide range of complex physical phenomena; however, the inherent large-scale nature of the models often leads to unmanageable demands on computational resources. Model reduction aims to reduce this computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books. However, parametric model reduction has emerged only more recently as an important and vibrant research area, with several recent advances making a survey paper timely. Thus, this paper aims to provide a resource that draws together recent contributions in different communities to survey the state of the art in parametric model reduction methods. Parametric model reduction targets the broad class of problems for which ...
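
As a concrete illustration of the offline/online idea described in the abstract, here is a minimal sketch of projection-based reduction using POD snapshots and a Galerkin projection on a toy linear system; the system, dimensions, and training input below are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 400, 10            # full-order and reduced-order state dimensions
dt, steps = 1e-2, 500     # explicit Euler time step and horizon

# Toy stable linear full-order model  x' = A x + B u  (values hypothetical).
A = -np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, 1))

# Offline, step 1: snapshot collection by simulating the full model once.
x, snapshots = np.zeros(n), []
for k in range(steps):
    u = np.sin(0.1 * k)                        # training input signal
    x = x + dt * (A @ x + B[:, 0] * u)         # explicit Euler step
    snapshots.append(x.copy())
X = np.array(snapshots).T                      # n x steps snapshot matrix

# Offline, step 2: POD basis = leading left singular vectors of snapshots.
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]   # n x r, orthonormal

# Offline, step 3: Galerkin projection gives small reduced operators.
Ar, Br = V.T @ A @ V, V.T @ B                  # r x r and r x 1

# Online: integrate r ODEs instead of n, then lift back with V.
xr = np.zeros(r)
for k in range(steps):
    u = np.sin(0.1 * k)
    xr = xr + dt * (Ar @ xr + Br[:, 0] * u)

print("relative error:", np.linalg.norm(V @ xr - x) / np.linalg.norm(x))
```

The reduced operators Ar and Br are r x r and r x 1, so once they are precomputed the online simulation cost no longer depends on the full dimension n.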


Citations
Journal ArticleDOI
TL;DR: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest, and these different models have varying evaluation costs.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs...

678 citations

Journal ArticleDOI
TL;DR: This work reviews the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.
Abstract: Digital twin can be defined as a virtual representation of a physical asset enabled through data and simulators for real-time prediction, optimization, monitoring, controlling, and improved decision making. Recent advances in computational pipelines, multiphysics solvers, artificial intelligence, big data cybernetics, data processing and management tools bring the promise of digital twins and their impact on society closer to reality. Digital twinning is now an important and emerging trend in many applications. Also referred to as a computational megamodel, device shadow, mirrored system, avatar or a synchronized virtual prototype, there can be no doubt that a digital twin plays a transformative role not only in how we design and operate cyber-physical intelligent systems, but also in how we advance the modularity of multi-disciplinary systems to tackle fundamental barriers not addressed by the current, evolutionary modeling practices. In this work, we review the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective. Our aim is to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.

660 citations

Journal ArticleDOI
TL;DR: Although neural networks have been applied previously to complex fluid flows, the article featured here is the first to apply a true DNN architecture, specifically to Reynolds-averaged Navier-Stokes turbulence models, suggesting that DNNs may play a critically enabling role in the future of modelling complex flows.
Abstract: It was only a matter of time before deep neural networks (DNNs) – deep learning – made their mark in turbulence modelling, or more broadly, in the general area of high-dimensional, complex dynamical systems. In the last decade, DNNs have become a dominant data mining tool for big data applications. Although neural networks have been applied previously to complex fluid flows, the article featured here (Ling et al., J. Fluid Mech., vol. 807, 2016, pp. 155–166) is the first to apply a true DNN architecture, specifically to Reynolds-averaged Navier-Stokes turbulence models. As one often expects with modern DNNs, performance gains are achieved over competing state-of-the-art methods, suggesting that DNNs may play a critically enabling role in the future of modelling complex flows.

609 citations

Journal ArticleDOI
TL;DR: In this paper, a review of techniques for analyzing fluid flow data is presented, with the aim of extracting simplified models that capture the essential features of these flows, in order to gain insight into the flow physics, and potentially identify mechanisms for controlling these flows.
Abstract: Advances in experimental techniques and the ever-increasing fidelity of numerical simulations have led to an abundance of data describing fluid flows. This review discusses a range of techniques for analyzing such data, with the aim of extracting simplified models that capture the essential features of these flows, in order to gain insight into the flow physics, and potentially identify mechanisms for controlling these flows. We review well-developed techniques, such as proper orthogonal decomposition and Galerkin projection, and discuss more recent techniques developed for linear systems, such as balanced truncation and dynamic mode decomposition (DMD). We then discuss some of the methods available for nonlinear systems, with particular attention to the Koopman operator, an infinite-dimensional linear operator that completely characterizes the dynamics of a nonlinear system and provides an extension of DMD to nonlinear systems.
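
As a toy illustration of one of the surveyed methods, the sketch below applies exact DMD to synthetic traveling-wave snapshots; the data, rank, and time step are hypothetical and chosen so the recovered frequencies are easy to check.

```python
import numpy as np

n, m, r, dt = 128, 100, 4, 0.1   # grid size, snapshots, DMD rank, time step

# Synthetic snapshots: two traveling waves with frequencies 2.3 and 5.1
# (all values invented for the example); the data matrix has rank 4.
x = np.linspace(0, 2 * np.pi, n)
t = np.arange(m) * dt
data = np.stack([np.sin(x - 2.3 * tk) + 0.5 * np.sin(3 * x - 5.1 * tk)
                 for tk in t], axis=1)        # n x m snapshot matrix

X, Y = data[:, :-1], data[:, 1:]              # snapshot pairs x_k -> x_{k+1}

# Exact DMD: rank-r SVD of X, then the small projected linear operator.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = U.conj().T @ Y @ V / s               # r x r one-step model

eigvals, W = np.linalg.eig(Atilde)
modes = (Y @ V / s) @ W                       # DMD modes, n x r (not used further)

# Continuous-time frequencies recovered from the discrete eigenvalues.
print(np.sort(np.abs(np.imag(np.log(eigvals)) / dt)))   # ~ [2.3 2.3 5.1 5.1]
```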

567 citations

Book
28 Feb 2019
TL;DR: In this paper, the authors bring together machine learning, engineering mathematics, and mathematical physics to integrate modeling and control of dynamical systems with modern methods in data science, and highlight many of the recent advances in scientific computing that enable data-driven methods to be applied to a diverse range of complex systems, such as turbulence, the brain, climate, epidemiology, finance, robotics, and autonomy.
Abstract: Data-driven discovery is revolutionizing the modeling, prediction, and control of complex systems. This textbook brings together machine learning, engineering mathematics, and mathematical physics to integrate modeling and control of dynamical systems with modern methods in data science. It highlights many of the recent advances in scientific computing that enable data-driven methods to be applied to a diverse range of complex systems, such as turbulence, the brain, climate, epidemiology, finance, robotics, and autonomy. Aimed at advanced undergraduate and beginning graduate students in the engineering and physical sciences, the text presents a range of topics and methods from introductory to state of the art.

563 citations

References
Book
01 Jan 1983

34,729 citations

Reference EntryDOI
15 Oct 2005
TL;DR: Principal component analysis (PCA) replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables.
Abstract: When large multivariate datasets are analyzed, it is often desirable to reduce their dimensionality. Principal component analysis is one technique for doing this. It replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables. Often, it is possible to retain most of the variability in the original variables with q very much smaller than p. Despite its apparent simplicity, principal component analysis has a number of subtleties, and it has many uses and extensions. A number of choices associated with the technique are briefly discussed, namely, covariance or correlation, how many components, and different normalization constraints, as well as confusion with factor analysis. Various uses and extensions are outlined. Keywords: dimension reduction; factor analysis; multivariate analysis; variance maximization
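
A minimal sketch of covariance-based PCA as described above: p observed variables are replaced by q principal components obtained from the SVD of the centered data matrix. The data and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 300, 10, 2          # samples, original variables, components kept

# Toy data: p observed variables driven by q latent factors plus noise.
latent = rng.standard_normal((n, q))
X = latent @ rng.standard_normal((q, p)) + 0.05 * rng.standard_normal((n, p))

Xc = X - X.mean(axis=0)       # center the columns (covariance-based PCA)

# Principal components from the SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:q].T        # the q derived variables, n x q
loadings = Vt[:q]             # linear combinations of the original variables

explained = s**2 / np.sum(s**2)
print(f"variance retained by {q} components: {explained[:q].sum():.3f}")
```

Because the toy data really are driven by q latent factors, the first q components retain nearly all of the variance, matching the point made in the abstract.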

14,773 citations

Book
17 Aug 1995
TL;DR: This paper briefly reviews the history of the relationship between modern optimal control and robust control, concluding that once-controversial notions of robust control have become thoroughly mainstream and that optimal control methods now permeate robust control theory, especially H-infinity theory.
Abstract: This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper.

6,945 citations

Book
01 Jan 1963
TL;DR: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, the inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers.
Abstract: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, the inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers. 1. Sample spaces and events. To treat probability rigorously, we define a sample space S whose elements are the possible outcomes of some process or experiment. For example, the sample space might be the outcomes of the roll of a die, or flips of a coin. To each element x of the sample space, we assign a probability, which will be a non-negative number between 0 and 1, denoted by p(x). We require that ∑_{x∈S} p(x) = 1, so the total probability of the elements of our sample space is 1. Intuitively, this means that when we perform our process, exactly one of the things in our sample space will happen. Example: the sample space could be S = {a, b, c}, with probabilities p(a) = 1/2, p(b) = 1/3, p(c) = 1/6. If all elements of our sample space have equal probabilities, we call this the uniform probability distribution on our sample space. For example, if our sample space were the outcomes of a die roll, it could be denoted S = {x_1, x_2, ..., x_6}, where the outcome x_i corresponds to rolling i. The uniform distribution, in which every outcome x_i has probability 1/6, describes the situation for a fair die. Similarly, if we consider tossing a fair coin, the outcomes would be H (heads) and T (tails), each with probability 1/2; this is the uniform probability distribution on the sample space S = {H, T}. We define an event A to be a subset of the sample space. For example, in the roll of a die, if the event A is rolling an even number, then A = {x_2, x_4, x_6}. The probability of an event A, denoted by P(A), is the sum of the probabilities of the corresponding elements of the sample space. For rolling an even number, we have P(A) = p(x_2) + p(x_4) + p(x_6) = 1/2. Given an event A of our sample space, there is a complementary event which consists of all points in our sample space that are not in A.
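
The fair-die example above is easy to check numerically; this snippet just re-derives P(A) = 1/2 for the event of rolling an even number under the uniform distribution.

```python
from fractions import Fraction

# Fair die: the uniform distribution on the sample space {1, ..., 6}.
p = {outcome: Fraction(1, 6) for outcome in range(1, 7)}
assert sum(p.values()) == 1            # total probability must be 1

A = {2, 4, 6}                          # event: rolling an even number
P_A = sum(p[x] for x in A)
print(P_A)                             # 1/2, matching the notes above
```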

6,236 citations