Open Access Book

Optimization and nonsmooth analysis

TLDR
The book develops nonsmooth analysis around the generalized gradient and applies it across many areas of analysis, including the calculus of variations, optimal control, and mathematical programming.
Abstract
1. Introduction and Preview
2. Generalized Gradients
3. Differential Inclusions
4. The Calculus of Variations
5. Optimal Control
6. Mathematical Programming
7. Topics in Analysis
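For orientation, the central object of Chapter 2 is the generalized gradient of a locally Lipschitz function f : R^n -> R; a standard representation and the classic example, stated here for the reader's convenience:

```latex
% Clarke generalized gradient of a locally Lipschitz f : R^n -> R
\partial f(x) = \operatorname{co}\left\{ \lim_{i \to \infty} \nabla f(x_i) \;:\; x_i \to x,\ \nabla f(x_i) \text{ exists} \right\}
% Classic example: f(x) = |x| gives \partial f(0) = [-1, 1].
```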


Citations
Journal Article

Iterative Averaging of Entropic Projections for Solving Stochastic Convex Feasibility Problems

TL;DR: The main results show that this problem can be solved by an iterative method that, at each step, averages the Bregman projections of the current iterate onto the given sets, taken with respect to f(x) = ∑_{i=1}^{n} x_i ln x_i.
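As a minimal sketch of the averaging scheme, assuming two sets with closed-form entropic projections (the probability simplex and an upper-bound box; the sets, names, and iteration count are illustrative, not taken from the paper):

```python
import numpy as np

def proj_simplex_entropic(y):
    # Bregman (KL) projection of y > 0 onto the probability simplex:
    # under f(x) = sum_i x_i ln x_i this is plain normalization.
    return y / y.sum()

def proj_box_entropic(y, upper):
    # Bregman projection onto {x : 0 < x_i <= upper_i}; for a separable
    # divergence this reduces to componentwise clamping.
    return np.minimum(y, upper)

def averaged_entropic_projections(y0, upper, iters=500):
    x = np.asarray(y0, dtype=float)
    for _ in range(iters):
        # average the Bregman projections of the current iterate onto the sets
        x = 0.5 * (proj_simplex_entropic(x) + proj_box_entropic(x, upper))
    return x

x = averaged_entropic_projections([5.0, 1.0, 1.0], np.array([0.5, 1.0, 1.0]))
print(x, x.sum())  # approaches the intersection: sum(x) ~ 1 and x <= upper
```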
Journal Article

A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints

TL;DR: It is proved that, from any initial state, the state of the proposed neural network reaches the feasible region in finite time, remains there thereafter, and converges to an optimal solution of the problem.
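The paper's exact network is not reproduced here; the sketch below only illustrates the qualitative behavior described in the TL;DR (a state driven into the feasible region by an exact penalty term and then toward a minimizer), using an assumed convex, hence pseudoconvex, toy problem and a plain forward-Euler discretization:

```python
import numpy as np

# Assumed toy problem (not from the paper):
#   minimize f(x) = (x0 - 2)^2 + (x1 - 2)^2
#   subject to x0 + x1 = 1 (equality) and x >= 0 (inequality);
# the constrained minimizer is (0.5, 0.5).

def grad_f(x):
    return 2.0 * (x - 2.0)

def penalty_subgrad(x):
    # a subgradient of p(x) = |x0 + x1 - 1| + sum_i max(-x_i, 0)
    g = np.sign(x.sum() - 1.0) * np.ones_like(x)  # equality part
    g += np.where(x < 0.0, -1.0, 0.0)             # inequality part
    return g

x = np.array([3.0, -1.0])
sigma, dt = 10.0, 1e-3   # penalty weight and Euler step (assumed values)
for _ in range(20000):
    # forward-Euler step of the flow  x' = -grad f(x) - sigma * subgrad p(x)
    x -= dt * (grad_f(x) + sigma * penalty_subgrad(x))
print(x)  # chatters near the constrained minimizer (0.5, 0.5)
```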
Journal Article

Robust Neuro-Adaptive Containment of Multileader Multiagent Systems With Uncertain Dynamics

TL;DR: A new kind of containment controller, consisting of a linear feedback term based on local information, a neuro-adaptive approximation term, and a nonsmooth feedback term, is designed to achieve quasi-containment in multiagent systems (MASs) subject to unknown nonlinear dynamics and external disturbances.
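A stripped-down illustration of the containment goal itself, keeping only a linear local-information feedback term (the neuro-adaptive and nonsmooth terms of the paper are omitted; the all-to-all topology, scalar states, and gains are assumed for brevity):

```python
import numpy as np

leaders = np.array([0.0, 1.0])    # two stationary leaders (scalar states)
x = np.array([5.0, -3.0, 2.0])    # three follower initial states

dt = 0.05
for _ in range(2000):
    # each follower moves toward the other followers and both leaders
    u = np.array([sum(xj - xi for xj in x) + sum(l - xi for l in leaders)
                  for xi in x])
    x = x + dt * u
print(x)  # every follower ends inside [0, 1], the leaders' convex hull
```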
Journal Article

Convergence analysis of gradient descent stochastic algorithms

TL;DR: This paper proves convergence of a sample-path-based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete-event systems.
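A toy instance of the sample-path idea, with an assumed objective J(x) = E[(x - W)^2], whose minimizer is E[W], and diminishing Robbins-Monro step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

x = 0.0
for k in range(1, 100001):
    w = rng.normal(3.0, 1.0)   # one sampled realization per iteration
    g = 2.0 * (x - w)          # unbiased sample-path estimate of J'(x)
    x -= g / k                 # diminishing step size ~ 1/k
print(x)  # close to E[W] = 3.0
```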
Journal Article

Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization

TL;DR: This paper derives affine-scaled second-order necessary and sufficient conditions for local minimizers of minimization problems with nonconvex, nonsmooth, possibly non-Lipschitz penalty functions, and proposes a globally convergent smoothing trust region Newton method.
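Not the paper's trust-region method, but a minimal sketch of the smoothing idea on an assumed one-dimensional problem: the non-Lipschitz penalty |x|^p (0 < p < 1) is replaced by the smooth surrogate (x^2 + mu^2)^(p/2), Newton steps are taken on the surrogate, and mu is driven toward zero:

```python
p, lam = 0.5, 0.1   # assumed exponent and penalty weight

def grad_hess(x, mu):
    # gradient and second derivative of (x - 1)^2 + lam * (x^2 + mu^2)^(p/2)
    s = x * x + mu * mu
    g = 2.0 * (x - 1.0) + lam * p * x * s ** (p / 2.0 - 1.0)
    H = 2.0 + lam * p * (s ** (p / 2.0 - 1.0)
                         + (p - 2.0) * x * x * s ** (p / 2.0 - 2.0))
    return g, H

x, mu = 1.0, 1.0
for _ in range(60):
    g, H = grad_hess(x, mu)
    x -= g / max(H, 1e-8)   # guarded Newton step on the smoothed problem
    mu *= 0.8               # shrink the smoothing parameter
print(x)  # stationary point of (x - 1)^2 + lam * sqrt(|x|), near 0.975
```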