# Backward Euler method

About: The backward Euler method is a research topic. Over its lifetime, 5396 publications have been published within this topic, receiving 108711 citations. The topic is also known as: Euler backward method & backward Euler's method.
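
As a minimal illustration of the method itself: for an ODE y' = f(t, y), backward Euler takes y_{n+1} = y_n + h·f(t_{n+1}, y_{n+1}), an implicit equation that must be solved at every step. The sketch below (function names and the stiff test problem are illustrative assumptions, not taken from any paper listed here) solves that equation with Newton's method:

```python
import numpy as np

def backward_euler(f, dfdy, t0, y0, h, n_steps):
    """Backward Euler for the scalar ODE y' = f(t, y):

        y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1})

    The implicit equation for y_{n+1} is solved with Newton's method.
    """
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        t_next = t + h
        y_next = y  # initial Newton guess: carry the previous value forward
        for _ in range(50):
            # Root-find g(z) = z - y - h*f(t_next, z) = 0
            g = y_next - y - h * f(t_next, y_next)
            gp = 1.0 - h * dfdy(t_next, y_next)
            step = g / gp
            y_next -= step
            if abs(step) < 1e-12:
                break
        t, y = t_next, y_next
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Stiff test problem y' = -50*y, y(0) = 1: explicit Euler with h = 0.1
# would oscillate and blow up (|1 - 50*0.1| = 4 > 1), while backward
# Euler damps the solution monotonically (each step divides y by 6).
ts, ys = backward_euler(lambda t, y: -50.0 * y,
                        lambda t, y: -50.0,
                        t0=0.0, y0=1.0, h=0.1, n_steps=10)
```

The implicit solve is the price paid for unconditional stability on this linear test problem: the step size is chosen for accuracy, not dictated by stiffness.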

##### Papers

Duke University

TL;DR: This book gives an introduction to vortex dynamics for incompressible fluid flows, together with vortex sheets, weak solutions, and approximate-solution sequences for the Euler equation.

Abstract (table of contents):

- Preface
- 1. An introduction to vortex dynamics for incompressible fluid flows
- 2. The vorticity-stream formulation of the Euler and the Navier-Stokes equations
- 3. Energy methods for the Euler and the Navier-Stokes equations
- 4. The particle-trajectory method for existence and uniqueness of solutions to the Euler equation
- 5. The search for singular solutions to the 3D Euler equations
- 6. Computational vortex methods
- 7. Simplified asymptotic equations for slender vortex filaments
- 8. Weak solutions to the 2D Euler equations with initial vorticity in L^∞
- 9. Introduction to vortex sheets, weak solutions and approximate-solution sequences for the Euler equation
- 10. Weak solutions and solution sequences in two dimensions
- 11. The 2D Euler equation: concentrations and weak solutions with vortex-sheet initial data
- 12. Reduced Hausdorff dimension, oscillations and measure-valued solutions of the Euler equations in two and three dimensions
- 13. The Vlasov-Poisson equations as an analogy to the Euler equations for the study of weak solutions
- Index

1,863 citations

01 Jan 1982

TL;DR: When approximating a hyperbolic system of conservation laws w_t + f(w)_x = 0 with so-called upwind differences, one must determine in which direction each of a variety of signals moves through the computational grid.

Abstract: When approximating a hyperbolic system of conservation laws w_t + f(w)_x = 0 with so-called upwind differences, we must, in the first place, establish which way the wind blows. More precisely, we must determine in which direction each of a variety of signals moves through the computational grid. For this purpose, a physical model of the interaction between computational cells is needed; at present two such models are in use.

1,648 citations
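
The signal-direction idea in the abstract above can be sketched for the simplest case, linear advection f(w) = a·w with a > 0, where the upwind difference takes its information from the neighbor the signal comes from. The grid size, CFL number, and square-pulse test below are illustrative assumptions:

```python
import numpy as np

def upwind_step(w, a, dt, dx):
    """One explicit first-order upwind step for w_t + a*w_x = 0 with
    a > 0 and periodic boundaries: each cell differences against the
    neighbor on its left, the side the signal arrives from."""
    return w - a * dt / dx * (w - np.roll(w, 1))

# Advect a square pulse once around a periodic unit domain.
n, a = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a  # CFL number 0.5 keeps the explicit scheme stable
x = (np.arange(n) + 0.5) * dx
w = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
w0 = w.copy()
for _ in range(int(round(1.0 / (a * dt)))):
    w = upwind_step(w, a, dt, dx)
```

Differencing against the wrong neighbor (the right one, for a > 0) is unconditionally unstable, which is why such schemes must first establish which way the wind blows; the price of first-order upwinding is numerical diffusion that smears the pulse.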

TL;DR: The authors prove that the maximum norm of the vorticity controls the breakdown of smooth solutions of the 3-D Euler equations: if a solution is initially smooth and loses its regularity at some later time, then the maximum vorticity necessarily grows without bound as the critical time approaches.

Abstract: The authors prove that the maximum norm of the vorticity controls the breakdown of smooth solutions of the 3-D Euler equations. In other words, if a solution of the Euler equations is initially smooth and loses its regularity at some later time, then the maximum vorticity necessarily grows without bound as the critical time approaches; equivalently, if the vorticity remains bounded, a smooth solution persists.

1,595 citations
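
The result summarized above is commonly stated as a blow-up criterion: the smooth solution persists up to time T exactly when the time integral of the maximum vorticity stays finite. With ω denoting the vorticity, one standard way to write it is:

```latex
\[
  \int_0^{T} \bigl\| \omega(\cdot, t) \bigr\|_{L^\infty} \, dt < \infty
  \quad \Longrightarrow \quad
  \text{the solution remains smooth up to } t = T .
\]
```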

TL;DR: A textbook treatment covering the Euler method and its generalizations, the analysis of Runge-Kutta methods, and general linear methods.

Abstract (table of contents):

- Mathematical and Computational Introduction
- The Euler Method and its Generalizations
- Analysis of Runge-Kutta Methods
- General Linear Methods
- Bibliography

1,313 citations

TL;DR: A reinforcement learning framework for continuous-time dynamical systems, without a priori discretization of time, state, and action, is presented. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, algorithms are derived for estimating value functions and improving policies with the use of function approximators.

Abstract: This article presents a reinforcement learning framework for continuous-time dynamical systems without a priori discretization of time, state, and action. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, we derive algorithms for estimating value functions and improving policies with the use of function approximators. The process of value function estimation is formulated as the minimization of a continuous-time form of the temporal difference (TD) error. Update methods based on backward Euler approximation and exponential eligibility traces are derived, and their correspondences with the conventional residual gradient, TD(0), and TD(lambda) algorithms are shown. For policy improvement, two methods, a continuous actor-critic method and a value-gradient-based greedy policy, are formulated. As a special case of the latter, a nonlinear feedback control law using the value gradient and the model of the input gain is derived. The advantage updating, a model-free algorithm derived previously, is also formulated in the HJB-based framework.

The performance of the proposed algorithms is first tested in a nonlinear control task of swinging a pendulum up with limited torque. It is shown in the simulations that (1) the task is accomplished by the continuous actor-critic method in a number of trials several times fewer than by the conventional discrete actor-critic method; (2) among the continuous policy update methods, the value-gradient-based policy with a known or learned dynamic model performs several times better than the actor-critic method; and (3) a value function update using exponential eligibility traces is more efficient and stable than that based on Euler approximation. The algorithms are then tested in a higher-dimensional task: cart-pole swing-up. This task is accomplished in several hundred trials using the value-gradient-based policy with a learned dynamic model.

974 citations
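
The correspondence mentioned above between the backward-Euler update and conventional TD(0) can be checked numerically. A minimal sketch, assuming one common form of the continuous-time TD error, δ(t) = r(t) − V(t)/τ + dV/dt; the specific numbers below are made up for illustration:

```python
# Approximating dV/dt by a backward Euler difference (v_now - v_prev)/dt
# and scaling the continuous TD error by dt recovers the conventional
# discrete TD(0) error with discount gamma = 1 - dt/tau.

def td_error_continuous(r, v_now, v_prev, dt, tau):
    """Continuous-time TD error with a backward Euler derivative."""
    return r - v_now / tau + (v_now - v_prev) / dt

def td_error_discrete(r_step, v_next, v_curr, gamma):
    """Conventional one-step TD(0) error."""
    return r_step + gamma * v_next - v_curr

dt, tau = 0.1, 1.0
gamma = 1.0 - dt / tau  # discount implied by the discretization
r, v_prev, v_now = 0.5, 2.0, 1.8  # illustrative reward and values

lhs = dt * td_error_continuous(r, v_now, v_prev, dt, tau)
rhs = td_error_discrete(dt * r, v_now, v_prev, gamma)
```

Expanding the algebra: dt·δ = dt·r + (1 − dt/τ)·V_now − V_prev, which is exactly the discrete TD(0) error with step reward dt·r and discount γ = 1 − dt/τ.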