Curse of dimensionality reduction in max-plus based approximation methods: Theoretical estimates and improved pruning algorithms
Stéphane Gaubert, William M. McEneaney, Zheng Qu +2 more
pp. 1054–1061
TL;DR
This work derives a refinement of the curse-of-dimensionality-free method previously introduced by McEneaney, with higher accuracy for a comparable computational cost.
Abstract
Max-plus based methods have recently been developed to approximate the value function of possibly high-dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus "basis functions") taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and k-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates, quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse-of-dimensionality-free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.
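As a rough illustration of the approximation step described in the abstract, the sketch below greedily selects a few quadratic max-plus "basis functions" from a candidate dictionary and approximates a target function by their pointwise maximum. The quadratic shape, the parameter values, and the greedy selection rule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact algorithm): approximate f on a
# grid by the pointwise maximum of a few quadratic "basis functions"
# g_c(x) = f(c) - a*(x - c)**2, with centers c chosen greedily from a
# candidate dictionary so as to reduce the sup-norm error at each step.

def greedy_maxplus_approx(f, xs, centers, a=1.0, k=4):
    """Greedily pick k centers for the max-plus approximation
    x -> max_c [ f(c) - a*(x - c)**2 ], minimizing sup-norm error."""
    fx = f(xs)
    chosen, approx, err = [], np.full_like(xs, -np.inf), np.inf
    for _ in range(k):
        best = None
        for c in centers:
            if c in chosen:
                continue
            # Candidate approximation if center c were added.
            cand = np.maximum(approx, f(c) - a * (xs - c) ** 2)
            cand_err = np.max(np.abs(fx - cand))
            if best is None or cand_err < best[0]:
                best = (cand_err, c, cand)
        err, c, approx = best
        chosen.append(c)
    return chosen, approx, err

f = np.sin
xs = np.linspace(0.0, np.pi, 201)       # evaluation grid
centers = np.linspace(0.0, np.pi, 21)   # candidate dictionary of centers
chosen, approx, err = greedy_maxplus_approx(f, xs, centers)
print(f"{len(chosen)} centers, sup-norm error {err:.3f}")
```

The greedy rule mirrors the facility-location flavor of the problem: each step opens the "facility" (basis function) that most reduces the worst-case connection cost.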
Citations
Journal Article (DOI)
Overcoming the curse of dimensionality for some Hamilton–Jacobi partial differential equations via neural network architectures
TL;DR: This article showed that some classes of neural networks correspond to representation formulas of HJ PDE solutions whose Hamiltonians and initial data are obtained from the parameters of the neural networks, which naturally encode the physics contained in some HJ partial differential equations.
Journal Article (DOI)
On some neural network architectures that can represent viscosity solutions of certain high dimensional Hamilton–Jacobi partial differential equations
Jérôme Darbon, Tingwei Meng +1 more
TL;DR: It is proved that under certain assumptions, the two neural network architectures proposed represent viscosity solutions to two sets of HJ PDEs with zero error.
Journal Article (DOI)
Perspectives on Characteristics Based Curse-of-Dimensionality-Free Numerical Approaches for Solving Hamilton–Jacobi Equations
Ivan Yegorov, Peter M. Dower +1 more
TL;DR: It is pointed out that, despite the indicated advantages, the related approaches still have a limited range of applicability, and their extensions to Hamilton–Jacobi–Isaacs equations in zero-sum two-player differential games are currently developed only for sufficiently narrow classes of control systems.
Journal Article (DOI)
Dual Dynamic Programming with cut selection: Convergence proof and numerical experiments
TL;DR: A limited-memory variant of Level 1 is proposed, and DDP combined with the Territory algorithm, Level 1, or its variant is shown to converge for nonlinear optimization problems, with convergence in a finite number of iterations.
References
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe +1 more
TL;DR: This book gives a comprehensive introduction to convex optimization, focusing not on the theory of the problems themselves but on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book
Approximation Algorithms
TL;DR: Covering the basic techniques used in the latest research work, the author consolidates progress made so far, including some very recent and promising results, and conveys the beauty and excitement of work in the field.
Journal Article (DOI)
An analysis of approximations for maximizing submodular set functions--I
TL;DR: It is shown that a "greedy" heuristic always produces a solution whose value is at least 1 − [(K − 1)/K]^K times the optimal value; this bound can be achieved for each K and has a limiting value of (e − 1)/e, where e is the base of the natural logarithm.
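The classical greedy guarantee for monotone submodular maximization under a cardinality constraint can be checked on a toy maximum-coverage instance (set coverage is monotone submodular; the set family and K below are made-up illustrative choices):

```python
import itertools

# Toy illustration of the greedy guarantee: for monotone submodular
# maximization with a cardinality constraint K, greedy attains at least
# 1 - ((K-1)/K)**K  (>= 1 - 1/e)  of the optimum. The instance is invented.

def coverage(sets, chosen):
    """Number of elements covered by the chosen sets (monotone submodular)."""
    return len(set().union(*(sets[i] for i in chosen))) if chosen else 0

def greedy(sets, K):
    """Pick K sets, each time taking the one with the largest marginal gain."""
    chosen = []
    for _ in range(K):
        gains = {i: coverage(sets, chosen + [i])
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6, 7}, {2, 5}]
K = 2
g = greedy(sets, K)
opt = max(coverage(sets, list(c))
          for c in itertools.combinations(range(len(sets)), K))
bound = 1 - ((K - 1) / K) ** K  # = 0.75 for K = 2
print(coverage(sets, g), opt, coverage(sets, g) >= bound * opt)  # 6 6 True
```

Here greedy happens to find an optimal pair, comfortably satisfying the 3/4 bound that the theorem guarantees for K = 2.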
Posted Content
An analysis of approximations for maximizing submodular set functions II
TL;DR: In this article, the authors considered the problem of finding a maximum weight independent set in a matroid, where the elements of the matroid are colored and the items of the independent set can have no more than K colors.
Journal Article (DOI)
The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming
TL;DR: This method can be regarded as a generalization of the methods discussed in [1–4] and applied to the approximate solution of problems in linear and convex programming.
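The core idea behind such relaxation methods can be sketched as cyclic projection onto convex sets: project the iterate onto each set in turn until it (approximately) lies in the intersection. The two sets below, a disk and a half-plane, and the iteration count are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Minimal sketch of cyclic (alternating) projection for finding a common
# point of convex sets. Sets chosen for illustration: a disk of radius r
# centered at the origin, and the half-plane {y : a.y <= b}.

def proj_disk(x, center, r):
    """Euclidean projection onto the disk {y : |y - center| <= r}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

def proj_halfplane(x, a, b):
    """Euclidean projection onto the half-plane {y : a.y <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

x = np.array([5.0, 5.0])                 # starting point outside both sets
center, r = np.zeros(2), 2.0             # disk of radius 2 at the origin
a, b = np.array([1.0, 0.0]), 1.0         # half-plane: first coordinate <= 1
for _ in range(100):
    x = proj_halfplane(proj_disk(x, center, r), a, b)
print(np.round(x, 3))                    # a point in the intersection
```

After a few sweeps the iterate settles at a point satisfying both constraints, which is the behavior the relaxation method generalizes.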