Book

The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer

TL;DR: The present text is the first available book describing the Proper Generalized Decomposition (PGD), and provides a very readable and practical introduction that allows the reader to quickly grasp the main features of the method.
Abstract: Many problems in scientific computing are intractable with classical numerical techniques. These fail, for example, in the solution of high-dimensional models due to the exponential increase of the number of degrees of freedom. Recently, the authors of this book and their collaborators have developed a novel technique, called Proper Generalized Decomposition (PGD), that has proven to be a significant step forward. By means of a successive enrichment strategy, the PGD builds a numerical approximation of the unknown fields in separated form. Although first introduced and successfully demonstrated in the context of high-dimensional problems, the PGD allows for a completely new approach to more standard problems in science and engineering. Indeed, many challenging problems can be efficiently cast into a multidimensional framework, thus opening entirely new solution strategies. For instance, the material parameters and boundary conditions appearing in a particular mathematical model can be regarded as extra coordinates of the problem, in addition to the usual coordinates such as space and time. In the PGD framework, this enriched model is solved only once to yield a parametric solution that includes all particular solutions for specific values of the parameters. The PGD has now attracted the attention of a large number of research groups worldwide. The present text is the first available book describing the PGD. It provides a very readable and practical introduction that allows the reader to quickly grasp the main features of the method. Throughout the book, the PGD is applied to problems of increasing complexity, and the methodology is illustrated by means of carefully selected numerical examples. Moreover, the reader has free access to the Matlab software used to generate these examples.
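The successive-enrichment, separated-form construction described in the abstract can be illustrated on a discretized bivariate field. The following Python sketch is an illustrative stand-in, not the book's code (the book's own examples are in Matlab): each rank-one term is found by alternating fixed-point sweeps on the current residual, then subtracted before the next enrichment.

```python
import numpy as np

def enrich(U, n_terms=5, n_sweeps=20, tol=1e-10):
    """Greedily build a separated approximation U ~ sum_i outer(F_i, G_i),
    one rank-one term at a time, via alternating fixed-point sweeps."""
    rng = np.random.default_rng(0)
    R = U.copy()                              # current residual
    terms = []
    for _ in range(n_terms):
        if np.linalg.norm(R) < tol * np.linalg.norm(U):
            break                             # enrichment has converged
        F = rng.random(U.shape[0])            # initial guess for the new term
        for _ in range(n_sweeps):
            G = R.T @ F / (F @ F)             # fix F, update G
            F = R @ G / (G @ G)               # fix G, update F
        terms.append((F, G))
        R = R - np.outer(F, G)                # subtract the new term
    return terms, R

# A bivariate field with a dominant separable part plus a weaker coupling
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 60)
U = np.outer(np.sin(np.pi * x), np.cos(np.pi * y)) + 0.1 * np.outer(x**2, y)
terms, R = enrich(U, n_terms=5)
print(len(terms), np.linalg.norm(R) / np.linalg.norm(U))  # few terms suffice
```

The same alternating-update pattern is what the book develops rigorously, where each update solves a low-dimensional problem instead of a least-squares fit to data.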
Citations
Monograph
TL;DR: The problem of best approximation in subsets of low-rank tensors is analyzed and its connection with the problem of optimal model reduction in low-dimensional reduced spaces is discussed.
Abstract: Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. These methods exploit the tensor structure of function spaces and apply to many problems in computational science that are formulated in tensor spaces, such as problems arising in stochastic calculus, uncertainty quantification or parametric analyses. Here, we present complexity reduction methods based on low-rank approximation methods. We analyze the problem of best approximation in subsets of low-rank tensors and discuss its connection with the problem of optimal model reduction in low-dimensional reduced spaces. We present different algorithms for computing approximations of a function in low-rank formats. In particular, we present constructive algorithms based either on a greedy construction of an approximation (with successive corrections in subsets of low-rank tensors) or on the greedy construction of tensor subspaces (for subspace-based low-rank formats). These algorithms can be applied for tensor compression, tensor completion, or the numerical solution of equations in low-rank tensor formats. A special emphasis is given to the solution of stochastic or parameter-dependent models. Different approaches are presented for the approximation of vector-valued or multivariate functions (identified with tensors), based either on samples of the functions (black-box approaches) or on the model equations satisfied by the functions.
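For order-two tensors (matrices), the best rank-r approximation is given by the truncated SVD (Eckart-Young), and the greedy construction by successive rank-one corrections recovers it exactly; for higher-order tensors this equivalence is lost, which is part of what makes the analysis above nontrivial. A small Python illustration of the matrix case (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 30))

# Best rank-r approximation via truncated SVD (Eckart-Young)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 5
A_svd = (U[:, :r] * s[:r]) @ Vt[:r]

# Greedy: repeatedly subtract the best rank-one term of the residual
R, A_greedy = A.copy(), np.zeros_like(A)
for _ in range(r):
    u, sig, vt = np.linalg.svd(R, full_matrices=False)
    term = sig[0] * np.outer(u[:, 0], vt[0])
    A_greedy += term
    R -= term

print(np.allclose(A_svd, A_greedy))  # greedy equals truncated SVD for matrices
```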

264 citations


Cites methods from "The Proper Generalized Decompositio..."

  • ...This algorithm constitutes the most prominent version of so-called Proper Generalized Decomposition and it has been used in many applications (see the review [22] and the monograph [21])....


Journal Article
TL;DR: A comprehensive, state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analysis, constraint handling, and infill and stopping criteria.

174 citations

Journal Article
TL;DR: Data serve not only to enrich physically-based models but also to inform the modeling and simulation viewpoints themselves, which could enable a tremendous leap forward by replacing big-data-based habits with the incipient smart-data paradigm.
Abstract: Engineering is evolving in the same way as society. Nowadays, data is acquiring a prominence never imagined. In the past, in the domain of materials, processes and structures, testing machines allowed extracting data that served, in turn, to calibrate state-of-the-art models. Some calibration procedures were even integrated within these testing machines. Thus, once the model has been calibrated, computer simulation takes place. However, data can offer much more than a simple calibration of state-of-the-art models, and not only through simple statistical analysis but also from the modeling and simulation viewpoints. This gives rise to the family of so-called twins: the virtual, the digital and the hybrid twins. Moreover, as discussed in the present paper, data serve not only to enrich physically-based models; they could allow us to perform a tremendous leap forward by replacing big-data-based habits with the incipient smart-data paradigm.

154 citations


Cites methods from "The Proper Generalized Decompositio..."

  • ...Such a solution has been demonstrated on many applications where the Proper Generalized Decomposition (PGD) method is used [2, 3]....


Journal Article
TL;DR: Data-driven simulation constitutes a potential change of paradigm in SBES: a data-driven inverse approach is used to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.
Abstract: The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, complexity keeps increasing due to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second consists of models that scientists have extracted from collected data, either natural or synthetic. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations employ laws universally recognized as epistemic, while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of required data, some of it inaccessible to today's testing facilities. Such difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.

118 citations


Cites background or methods from "The Proper Generalized Decompositio..."

  • ...As in the case of the PGD constructor, we consider a greedy algorithm that computes sequentially these functions [3]....


  • ...As is the case when applying the PGD solver, the solution procedure consists of using an alternated direction fixed point strategy, that proceeds as follows [3]: 1....


  • ...An alternative approximation makes use of a separated representation (usually considered within the proper generalized decomposition (PGD) framework [2,3]) that reads...

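The "alternated direction fixed point strategy" quoted in these excerpts can be made concrete on a model problem. The sketch below (illustrative sizes and load, not the paper's code) applies it to the discretized 2D Poisson equation written in the Sylvester form Ax U + U Ay = B: each enrichment seeks a rank-one pair F, G, alternately fixing one factor and solving a small 1D system for the other.

```python
import numpy as np

def lap1d(n):
    """1D finite-difference Laplacian with Dirichlet conditions, scaled by 1/h^2."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

nx, ny = 30, 30
Ax, Ay = lap1d(nx), lap1d(ny)
x = np.linspace(0, 1, nx + 2)[1:-1]           # interior nodes
y = np.linspace(0, 1, ny + 2)[1:-1]
B = np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * y))  # separable load

rng = np.random.default_rng(0)
U = np.zeros((nx, ny))
for _ in range(8):                            # greedy enrichment loop
    R = B - (Ax @ U + U @ Ay.T)               # residual of Ax U + U Ay = B
    if np.linalg.norm(R) < 1e-10 * np.linalg.norm(B):
        break
    G = rng.standard_normal(ny)               # start the new pair F, G
    for _ in range(10):                       # alternated-direction fixed point
        F = np.linalg.solve((G @ G) * Ax + (G @ Ay @ G) * np.eye(nx), R @ G)
        G = np.linalg.solve((F @ F) * Ay + (F @ Ax @ F) * np.eye(ny), R.T @ F)
    U = U + np.outer(F, G)

err = np.linalg.norm(Ax @ U + U @ Ay.T - B) / np.linalg.norm(B)
print(err)  # small: each 2D enrichment costs only 1D solves
```

The point of the construction is cost: every step solves systems of size nx or ny, never nx*ny, which is what makes the approach scale to many coordinates.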

Journal Article
Abstract: United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Award DE-SC0007099)

65 citations

References
Journal Article
TL;DR: A dimension reduction method called discrete empirical interpolation is proposed and shown to dramatically reduce the computational complexity of the popular proper orthogonal decomposition (POD) method for constructing reduced-order models for time dependent and/or parametrized nonlinear partial differential equations (PDEs).
Abstract: A dimension reduction method called discrete empirical interpolation is proposed and shown to dramatically reduce the computational complexity of the popular proper orthogonal decomposition (POD) method for constructing reduced-order models for time dependent and/or parametrized nonlinear partial differential equations (PDEs). In the presence of a general nonlinearity, the standard POD-Galerkin technique reduces dimension in the sense that far fewer variables are present, but the complexity of evaluating the nonlinear term remains that of the original problem. The original empirical interpolation method (EIM) is a modification of POD that reduces the complexity of evaluating the nonlinear term of the reduced model to a cost proportional to the number of reduced variables obtained by POD. We propose a discrete empirical interpolation method (DEIM), a variant that is suitable for reducing the dimension of systems of ordinary differential equations (ODEs) of a certain type. As presented here, it is applicable to ODEs arising from finite difference discretization of time dependent PDEs and/or parametrically dependent steady state problems. However, the approach extends to arbitrary systems of nonlinear ODEs with minor modification. Our contribution is a greatly simplified description of the EIM in a finite-dimensional setting that possesses an error bound on the quality of approximation. An application of DEIM to a finite difference discretization of the one-dimensional FitzHugh-Nagumo equations is shown to reduce the dimension from 1024 to order 5 variables with negligible error over a long-time integration that fully captures nonlinear limit cycle behavior. We also demonstrate applicability in higher spatial dimensions with similar state space dimension reduction and accuracy results.
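The DEIM construction summarized above can be sketched in a few lines: build a POD basis from snapshots of the nonlinear term, select interpolation indices greedily, then approximate new evaluations from only a handful of sampled entries. The snapshot function below is an illustrative choice, not the paper's FitzHugh-Nagumo example.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection from a basis U (m x k), column by column."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c            # interpolation residual of u_l
        p.append(int(np.argmax(np.abs(r))))   # next index: largest residual
    return np.array(p)

# Snapshots of a parametrized nonlinearity f(x; mu) on a grid
x = np.linspace(0, 1, 200)
mus = np.linspace(1, 3, 25)
S = np.array([np.exp(-mu * x) * np.sin(mu * np.pi * x) for mu in mus]).T

U, s, _ = np.linalg.svd(S, full_matrices=False)
k = 8
Uk = U[:, :k]                                 # POD basis of the nonlinear term
p = deim_indices(Uk)

# Approximate a new evaluation from only k sampled entries
f_new = np.exp(-2.2 * x) * np.sin(2.2 * np.pi * x)
coef = np.linalg.solve(Uk[p], f_new[p])
f_deim = Uk @ coef
print(np.max(np.abs(f_deim - f_new)))  # small for smooth parametric dependence
```

The payoff matches the abstract: the nonlinear term is evaluated at k points instead of the full grid, so the reduced model's cost is proportional to the number of reduced variables.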

1,695 citations


"The Proper Generalized Decompositio..." refers background in this paper

  • ...A final example is the chemical modelling of systems so dilute that the concept of concentration cannot be used, yielding the so-called chemical master equation that governs cell signalling and other phenomena in molecular biology [7]....


  • ...The multidimensional chemical master equation was efficiently solved in [7] and [44]....


Journal Article
TL;DR: In this article, a general treatment of the variational multiscale method in the context of an abstract Dirichlet problem is presented, showing how the exact theory represents a paradigm for subgrid-scale models and a posteriori error estimation.

1,578 citations


"The Proper Generalized Decompositio..." refers background in this paper

  • ...The main objective of the POD is to obtain the most typical or characteristic structure φ(x) among these u_m(x), ∀m [11]....


Book
02 Jun 2003
TL;DR: A finite element textbook for fluid dynamics, covering steady and unsteady transport, stabilization techniques (including monotonicity-preserving schemes and new trends), compressible flow, and the main issues in incompressible flow problems.
Abstract: Preface.
1. Introduction and preliminaries. Finite elements in fluid dynamics. Subjects covered. Kinematical descriptions of the flow field. The basic conservation equations. Basic ingredients of the finite element method.
2. Steady transport problems. Problem statement. Galerkin approximation. Early Petrov-Galerkin methods. Stabilization techniques. Other stabilization techniques and new trends. Applications and solved exercises.
3. Unsteady convective transport. Introduction. Problem statement. The method of characteristics. Classical time and space discretization techniques. Stability and accuracy analysis. Taylor-Galerkin methods. An introduction to monotonicity-preserving schemes. Least-squares-based spatial discretization. The discontinuous Galerkin method. Space-time formulations. Applications and solved exercises.
4. Compressible flow problems. Introduction. Nonlinear hyperbolic equations. The Euler equations. Spatial discretization techniques. Numerical treatment of shocks. Nearly incompressible flows. Fluid-structure interaction. Solved exercises.
5. Unsteady convection-diffusion problems. Introduction. Problem statement. Time discretization procedures. Spatial discretization procedures. Stabilized space-time formulations. Solved exercises.
6. Viscous incompressible flows. Introduction. Basic concepts. Main issues in incompressible flow problems. Trial solutions and weighting functions. Stationary Stokes problem. Steady Navier-Stokes problem. Unsteady Navier-Stokes equations. Applications and solved exercises.
References. Index.
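Why the stabilization techniques of chapters 2 and 3 are needed can be seen on a minimal 1D convection-diffusion model. The Python sketch below (illustrative, not from the book) contrasts plain Galerkin, which on a uniform mesh of linear elements reduces to central differences, with SUPG's optimal added diffusion, which makes the scheme nodally exact for this problem.

```python
import numpy as np

# Solve a u' - nu u'' = 0 on (0,1), u(0)=0, u(1)=1, central differences.
a, nu, n = 1.0, 0.005, 20
h = 1.0 / n                        # mesh Peclet number Pe = a*h/(2*nu) = 5 > 1

def solve(nu_eff):
    """Central-difference solution with effective diffusion nu_eff."""
    m = n - 1                      # interior nodes
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):
        A[i, i] = 2 * nu_eff / h**2
        if i > 0:
            A[i, i - 1] = -a / (2 * h) - nu_eff / h**2
        if i < m - 1:
            A[i, i + 1] = a / (2 * h) - nu_eff / h**2
    b[-1] = -(a / (2 * h) - nu_eff / h**2)   # boundary value u(1)=1 to the RHS
    return np.linalg.solve(A, b)

Pe = a * h / (2 * nu)
u_gal = solve(nu)                                            # oscillatory
u_supg = solve(nu + a * h / 2 * (1 / np.tanh(Pe) - 1 / Pe))  # stabilized
print(np.any(np.diff(u_gal) < 0), np.any(np.diff(u_supg) < -1e-10))
```

The Galerkin solution oscillates because the mesh Peclet number exceeds 1, while the SUPG-type added diffusion (a h/2)(coth Pe - 1/Pe) yields a monotone, nodally exact profile.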

1,035 citations

Journal Article
TL;DR: A new paradigm in the field of simulation-based engineering sciences (SBES) to face the challenges posed by current ICT technologies is addressed, by combining an off-line stage in which the general PGD solution, the vademecum, is computed, and an on-line phase in which real-time response is obtained as a result of the queries.
Abstract: In this paper we address a new paradigm in the field of simulation-based engineering sciences (SBES) to face the challenges posed by current ICT technologies. Despite the impressive progress attained by simulation capabilities and techniques, some challenging problems remain intractable today. These problems, common to many branches of science and engineering, are of different natures. Among them, we can cite those related to high-dimensional problems, which do not admit mesh-based approaches due to the exponential increase of degrees of freedom. We developed in recent years a novel technique, called Proper Generalized Decomposition (PGD). It is based on the assumption of a separated form of the unknown field, and it has demonstrated its capabilities in dealing with high-dimensional problems, overcoming the strong limitations of classical approaches. But the main opportunity given by this technique is that it allows for a completely new approach to classic problems, not necessarily high-dimensional. Many challenging problems can be efficiently cast into a multidimensional framework, and this opens new possibilities for solving old and new problems with strategies not envisioned until now. For instance, parameters in a model can be set as additional extra coordinates of the model. In a PGD framework, the resulting model is solved once for life, in order to obtain a general solution that includes all the solutions for every possible value of the parameters: a sort of computational vademecum. Under this rationale, optimization of complex problems, uncertainty quantification, simulation-based control and real-time simulation are now at hand, even in highly complex scenarios, by combining an off-line stage in which the general PGD solution, the vademecum, is computed, and an on-line stage in which, even on deployed handheld platforms such as smartphones or tablets, real-time response is obtained as a result of our queries.

265 citations