Book

Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-dependent Problems

06 Sep 2007
TL;DR: This book discusses finite difference approximations, iterative methods for sparse linear systems, and zero-stability and convergence for initial value problems for ordinary differential equations.
Abstract: Finite difference approximations -- Steady states and boundary value problems -- Elliptic equations -- Iterative methods for sparse linear systems -- The initial value problem for ordinary differential equations -- Zero-stability and convergence for initial value problems -- Absolute stability for ordinary differential equations -- Stiff ordinary differential equations -- Diffusion equations and parabolic problems -- Advection equations and hyperbolic systems -- Mixed equations -- Appendixes: A. Measuring errors -- B. Polynomial interpolation and orthogonal polynomials -- C. Eigenvalues and inner-product norms -- D. Matrix powers and exponentials -- E. Partial differential equations.
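The book's opening topic, finite difference approximations, amounts to replacing derivatives with difference quotients. A minimal sketch (function names are illustrative, not from the book) showing the first-order forward difference against the second-order centered difference:

```python
import math

def forward_diff(f, x, h):
    # One-sided difference: truncation error is O(h)
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h):
    # Centered difference: truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-3
exact = math.cos(1.0)  # derivative of sin at x = 1
err_fwd = abs(forward_diff(math.sin, x, h) - exact)
err_ctr = abs(centered_diff(math.sin, x, h) - exact)
print(err_fwd, err_ctr)  # the centered error is far smaller
```

Halving h should roughly halve the forward-difference error but quarter the centered-difference error, which is the convergence-order behavior the book's first chapter analyzes.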


Citations
Journal ArticleDOI
TL;DR: Face recognition is a problem that scarcely clamoured for attention before the computer age but, having surfaced, has attracted the attention of some fine minds; the paper discusses this application of mathematical modelling to a messy applied problem of obvious utility and importance.
Abstract: to be done in this area. Face recognition is a problem that scarcely clamoured for attention before the computer age but, having surfaced, has involved a wide range of techniques and has attracted the attention of some fine minds (David Mumford was a Fields Medallist in 1974). This singular application of mathematical modelling to a messy applied problem of obvious utility and importance but with no unique solution is a pretty one to share with students: perhaps, returning to the source of our opening quotation, we may invert Duncan's earlier observation, 'There is an art to find the mind's construction in the face!'.

3,015 citations

01 Mar 1987
TL;DR: The variable-order Adams method package (SIVA/DIVA) is a collection of subroutines for the solution of nonstiff ordinary differential equations.
Abstract: SIVA/DIVA is a package of subroutines for the solution of nonstiff ordinary differential equations by a variable-order Adams method, with versions for single-precision and double-precision arithmetic. It requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods, and an option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
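The predictor/corrector idea behind such packages can be sketched with a fixed-order, fixed-step variant: an Adams-Bashforth step predicts, a trapezoidal (Adams-Moulton) step corrects. This is an illustration of the Adams approach only, not the SIVA/DIVA code, which is variable-order and variable-step:

```python
import math

def adams_pc2(f, t0, y0, h, n):
    """Second-order Adams-Bashforth predictor / Adams-Moulton corrector.
    A fixed-step sketch of the predictor/corrector idea."""
    ts = [t0, t0 + h]
    # Bootstrap the second starting value with one Heun (RK2) step
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    ys = [y0, y0 + h * (k1 + k2) / 2]
    for i in range(1, n):
        t, y = ts[i], ys[i]
        fp, fc = f(ts[i - 1], ys[i - 1]), f(t, y)
        y_pred = y + h * (3 * fc - fp) / 2             # AB2 predictor
        y_corr = y + h * (f(t + h, y_pred) + fc) / 2   # AM2 (trapezoid) corrector
        ts.append(t + h)
        ys.append(y_corr)
    return ts, ys

# Test problem: y' = -y, y(0) = 1, exact solution y(t) = exp(-t)
ts, ys = adams_pc2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(abs(ys[-1] - math.exp(-1)))  # small second-order global error at t = 1
```

Each step costs two derivative evaluations regardless of order, which is the economy (relative to Runge-Kutta methods of comparable order) the abstract alludes to.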

1,955 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose a sparse regression method for discovering the governing partial differential equation(s) of a given system from time series measurements in the spatial domain. The method relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models.
Abstract: We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg–de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
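The sparsity-promoting selection step can be sketched with sequentially thresholded least squares: fit all candidate terms, zero out small coefficients, and refit on the survivors. The toy data and names below are illustrative only, not the paper's code or data; here the candidate library is [u, u_x, u_xx] and the synthetic target is u_t = 0.5 u_xx:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def stlsq(theta, ut, threshold=0.1, iters=5):
    """Sequentially thresholded least squares: a toy stand-in for the
    paper's sparsity-promoting regression."""
    n_terms = len(theta[0])
    active = list(range(n_terms))
    coeffs = [0.0] * n_terms
    for _ in range(iters):
        # Normal equations restricted to the active candidate terms
        A = [[sum(row[i] * row[j] for row in theta) for j in active] for i in active]
        b = [sum(row[i] * y for row, y in zip(theta, ut)) for i in active]
        sol = solve(A, b)
        coeffs = [0.0] * n_terms
        for i, c in zip(active, sol):
            coeffs[i] = c
        # Hard-threshold: small coefficients are eliminated before refitting
        active = [i for i in active if abs(coeffs[i]) >= threshold]
    return coeffs

# Rows are samples of the candidate terms [u, u_x, u_xx] at space-time points
data = [(1.0, 0.3, 2.0), (0.5, -1.0, 4.0), (2.0, 0.7, -1.0), (1.5, 2.0, 0.5)]
ut = [0.5 * row[2] for row in data]  # synthetic u_t = 0.5 * u_xx
print(stlsq(data, ut))  # only the u_xx term survives, with coefficient 0.5
```

The actual method additionally normalizes the library, tunes the threshold via Pareto analysis, and builds the library from numerically differentiated measurements; this sketch shows only the core selection loop.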

1,069 citations

Book
14 Oct 2010
TL;DR: The author provides a firm grounding in fundamental NLP properties and algorithms, and relates them to real-world problem classes in process optimization, thus making the material understandable and useful to chemical engineers and experts in mathematical optimization.
Abstract: This book addresses modern nonlinear programming (NLP) concepts and algorithms, especially as they apply to challenging applications in chemical process engineering. The author provides a firm grounding in fundamental NLP properties and algorithms, and relates them to real-world problem classes in process optimization, thus making the material understandable and useful to chemical engineers and experts in mathematical optimization. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes shows readers which NLP methods are best suited for specific applications, how large-scale problems should be formulated and what features of these problems should be emphasized, and how existing NLP methods can be extended to exploit specific structures of large-scale optimization models. Audience: The book is intended for chemical engineers interested in using NLP algorithms for specific applications, experts in mathematical optimization who want to understand process engineering problems and develop better approaches to solving them, and researchers from both fields interested in developing better methods and problem formulations for challenging engineering problems. Contents: Preface; Chapter 1: Introduction to Process Optimization; Chapter 2: Concepts of Unconstrained Optimization; Chapter 3: Newton-Type Methods for Unconstrained Optimization; Chapter 4: Concepts of Constrained Optimization; Chapter 5: Newton Methods for Equality Constrained Optimization; Chapter 6: Numerical Algorithms for Constrained Optimization; Chapter 7: Steady State Process Optimization; Chapter 8: Introduction to Dynamic Process Optimization; Chapter 9: Dynamic Optimization Methods with Embedded DAE Solvers; Chapter 10: Simultaneous Methods for Dynamic Optimization; Chapter 11: Process Optimization with Complementarity Constraints; Bibliography; Index

789 citations

References
Book
01 Jan 1983

34,729 citations


"Finite Difference Methods for Ordin..." refers background in this paper

  • ..., [35], [82], [91]) that for a general N × N dense matrix (one with few elements equal to zero), performing Gaussian elimination requires O....


  • ...See, for example, [35] for introductory discussions of such algorithms....


Journal ArticleDOI
TL;DR: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns and it is shown that this method is a special case of a very general method which also includes Gaussian elimination.
Abstract: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns. The solution is given in n steps. It is shown that this method is a special case of a very general method which also includes Gaussian elimination. These general algorithms are essentially algorithms for finding an n dimensional ellipsoid. Connections are made with the theory of orthogonal polynomials and continued fractions.

7,598 citations


"Finite Difference Methods for Ordin..." refers methods in this paper

  • ...The CG method was first proposed in 1952 by Hestenes and Stiefel [46], but it took some time for this and related methods to be fully understood and widely used....

    [...]

Book
01 Jan 1965
TL;DR: Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography.
Abstract: Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography Index.

7,422 citations

Book
01 Jan 1955
TL;DR: The prerequisite for the study of this book is a knowledge of matrices and the essentials of functions of a complex variable as discussed by the authors, which is a useful text in the application of differential equations as well as for the pure mathematician.
Abstract: The prerequisite for the study of this book is a knowledge of matrices and the essentials of functions of a complex variable. It has been developed from courses given by the authors and probably contains more material than will ordinarily be covered in a one-year course. It is hoped that the book will be a useful text in the application of differential equations as well as for the pure mathematician.

7,071 citations

Journal ArticleDOI
TL;DR: In this article, the authors present finite-difference schemes for the evaluation of first-order, second-order and higher-order derivatives yield improved representation of a range of scales and may be used on nonuniform meshes.

5,832 citations


"Finite Difference Methods for Ordin..." refers methods in this paper

  • ...Higher order methods of this form also exist; see Lele [63] for an in-depth discussion....

    [...]