Topic
Ode
About: Ode is a research topic. Over its lifetime, 6,352 publications have been published within this topic, receiving 92,837 citations.
Papers published on a yearly basis
Papers
01 Jan 1965
TL;DR: This book covers ordinary differential equations (ODEs), partial differential equations (PDEs), Fourier series, integrals, and transforms, numerics for ODEs and PDEs, numerical analysis, potential theory, and more.
Abstract:
PART A: ORDINARY DIFFERENTIAL EQUATIONS (ODEs). Chapter 1. First-Order ODEs. Chapter 2. Second Order Linear ODEs. Chapter 3. Higher Order Linear ODEs. Chapter 4. Systems of ODEs. Phase Plane, Qualitative Methods. Chapter 5. Series Solutions of ODEs. Special Functions. Chapter 6. Laplace Transforms.
PART B: LINEAR ALGEBRA, VECTOR CALCULUS. Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems. Chapter 8. Linear Algebra: Matrix Eigenvalue Problems. Chapter 9. Vector Differential Calculus: Grad, Div, Curl. Chapter 10. Vector Integral Calculus: Integral Theorems.
PART C: FOURIER ANALYSIS, PARTIAL DIFFERENTIAL EQUATIONS. Chapter 11. Fourier Series, Integrals, and Transforms. Chapter 12. Partial Differential Equations (PDEs).
PART D: COMPLEX ANALYSIS. Chapter 13. Complex Numbers and Functions. Chapter 14. Complex Integration. Chapter 15. Power Series, Taylor Series. Chapter 16. Laurent Series: Residue Integration. Chapter 17. Conformal Mapping. Chapter 18. Complex Analysis and Potential Theory.
PART E: NUMERICAL ANALYSIS, SOFTWARE. Chapter 19. Numerics in General. Chapter 20. Numerical Linear Algebra. Chapter 21. Numerics for ODEs and PDEs.
PART F: OPTIMIZATION, GRAPHS. Chapter 22. Unconstrained Optimization: Linear Programming. Chapter 23. Graphs, Combinatorial Optimization.
PART G: PROBABILITY, STATISTICS. Chapter 24. Data Analysis: Probability Theory. Chapter 25. Mathematical Statistics.
Appendix 1: References. Appendix 2: Answers to Odd-Numbered Problems. Appendix 3: Auxiliary Material. Appendix 4: Additional Proofs. Appendix 5: Tables. Index.
3,643 citations
[...]
TL;DR: This paper describes mathematical and software developments for a suite of programs for solving ordinary differential equations in MATLAB.
3,330 citations
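The MATLAB ODE suite described above has a close analogue in SciPy's scipy.integrate.solve_ivp, which likewise wraps several Runge-Kutta and stiff solvers behind one interface. A minimal sketch (in Python rather than MATLAB; the logistic-growth test problem is illustrative, not from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth: dy/dt = y * (1 - y), y(0) = 0.1.
def f(t, y):
    return y * (1.0 - y)

# RK45 is an adaptive explicit Runge-Kutta pair, analogous to ode45.
sol = solve_ivp(f, t_span=(0.0, 10.0), y0=[0.1],
                method="RK45", rtol=1e-8, atol=1e-10)

# Compare against the closed form y(t) = 1 / (1 + (1/y0 - 1) e^{-t}).
exact = 1.0 / (1.0 + (1.0 / 0.1 - 1.0) * np.exp(-sol.t[-1]))
err = abs(sol.y[0, -1] - exact)
```

Switching `method` to "BDF" or "Radau" handles stiff problems, mirroring the suite's split between nonstiff and stiff integrators.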
TL;DR: This paper presents a novel algorithm to accelerate differential evolution (DE): opposition-based DE (ODE) employs opposition-based learning (OBL) for population initialization and for generation jumping. Results confirm that ODE outperforms the original DE and FADE in convergence speed and solution accuracy.
Abstract: Evolutionary algorithms (EAs) are well-known optimization approaches to deal with nonlinear and complex problems. However, these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. This paper presents a novel algorithm to accelerate the differential evolution (DE). The proposed opposition-based DE (ODE) employs opposition-based learning (OBL) for population initialization and also for generation jumping. In this work, opposite numbers have been utilized to improve the convergence rate of DE. A comprehensive set of 58 complex benchmark functions including a wide range of dimensions is employed for experimental verification. The influence of dimensionality, population size, jumping rate, and various mutation strategies is also investigated. Additionally, the contribution of opposite numbers is empirically verified. We also provide a comparison of ODE to fuzzy adaptive DE (FADE). Experimental results confirm that the ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy.
1,419 citations
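The opposition-based initialization step described above (generate a random population, form each candidate's opposite as lo + hi - x, keep the fittest half of the union) can be sketched as follows. Function names and the minimization convention are illustrative assumptions, not the paper's API:

```python
import numpy as np

def opposition_init(pop_size, dim, lo, hi, fitness, seed=None):
    """Opposition-based learning (OBL) initialization:
    sample a uniform population, compute opposite points as
    lo + hi - x, then keep the pop_size fittest candidates
    from the union (assuming minimization)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    opp = lo + hi - pop                         # opposite points
    union = np.vstack([pop, opp])               # 2 * pop_size candidates
    scores = np.apply_along_axis(fitness, 1, union)
    best = np.argsort(scores)[:pop_size]        # fittest half
    return union[best]

# Usage: sphere function on [-5, 5]^3 (illustrative benchmark).
sphere = lambda x: float(np.sum(x ** 2))
init = opposition_init(10, 3, -5.0, 5.0, sphere, seed=0)
```

Generation jumping applies the same opposite-point construction periodically during evolution, using the current population's per-dimension bounds instead of the search-space bounds.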
03 Dec 2018
TL;DR: In this paper, the authors introduce a new family of deep neural network models called continuous normalizing flows, which parameterize the derivative of the hidden state using a neural network, and the output of the network is computed using a black-box differential equation solver.
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
1,082 citations
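The core idea above — parameterize dh/dt with a neural network and hand it to an ODE solver — can be sketched with a fixed-step Euler loop standing in for the paper's black-box adaptive solver. The tiny tanh MLP and all names here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def mlp_dynamics(params, h, t):
    """Hidden-state derivative dh/dt = f(h, t; params),
    here a small tanh MLP that also sees the time t."""
    W1, b1, W2, b2 = params
    inp = np.concatenate([h, [t]])
    return W2 @ np.tanh(W1 @ inp + b1) + b2

def odeint_euler(f, params, h0, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler integration of dh/dt = f (a stand-in
    for the adaptive black-box solver used in the paper)."""
    h, t = np.array(h0, dtype=float), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(params, h, t)
        t += dt
    return h

# Usage: integrate a 4-dimensional hidden state from t=0 to t=1.
rng = np.random.default_rng(0)
d, hid = 4, 8
params = (rng.normal(0, 0.1, (hid, d + 1)), np.zeros(hid),
          rng.normal(0, 0.1, (d, hid)), np.zeros(d))
h1 = odeint_euler(mlp_dynamics, params, np.ones(d))
```

In the paper, gradients are obtained with the adjoint sensitivity method rather than by backpropagating through the solver's steps, which is what gives the constant memory cost.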