Open Access Book

Optimization and nonsmooth analysis

TLDR
The book develops the theory of generalized gradients and nonsmooth analysis, which is used in many areas of analysis, with applications to differential inclusions, the calculus of variations, optimal control, and mathematical programming.
Abstract
1. Introduction and Preview
2. Generalized Gradients
3. Differential Inclusions
4. The Calculus of Variations
5. Optimal Control
6. Mathematical Programming
7. Topics in Analysis
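As an illustration of the generalized gradients the book centers on (a sketch of the standard definition, not an excerpt from the book): Clarke's generalized gradient at a point of nondifferentiability is the convex hull of limits of gradients taken at nearby points where the function is differentiable. For f(x) = |x| this can be seen numerically:

```python
import numpy as np

def f(x):
    return abs(x)

def numerical_grad(x, h=1e-7):
    # Central difference; valid where f is differentiable (x != 0).
    return (f(x + h) - f(x - h)) / (2 * h)

# Gradients at points of differentiability approaching 0:
right = numerical_grad(1e-3)   # from the right -> +1
left = numerical_grad(-1e-3)   # from the left  -> -1
print(left, right)

# Clarke's generalized gradient of |x| at 0 is therefore the
# interval [-1, 1], the convex hull of {-1, +1}.
```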



Citations
Journal Article

Optimality and Duality for Invex Nonsmooth Multiobjective Programming Problems

TL;DR: In this article, the authors consider nonsmooth multiobjective programming problems with inequality and equality constraints involving locally Lipschitz functions and present sufficient optimality conditions under various generalized invexity assumptions and certain regularity conditions.
Journal Article

A parameterized Newton method and a quasi-Newton method for nonsmooth equations

TL;DR: This paper presents a parameterized Newton method using generalized Jacobians, together with a Broyden-like quasi-Newton method, for solving nonsmooth equations; the parameterization ensures that the method remains well-defined even when the generalized Jacobian is singular.
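The paper's exact algorithm is not reproduced here, but the flavor of Newton iterations built on generalized Jacobians can be sketched for a scalar nonsmooth equation (the test equation and the Jacobian element chosen at the kink are illustrative assumptions, not the paper's method):

```python
def semismooth_newton(f, g, x0, tol=1e-10, max_iter=50):
    """Newton iteration x <- x - f(x)/g(x), where g(x) returns an
    element of the (here scalar) generalized Jacobian of f at x."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / g(x)
    return x

# Nonsmooth test equation: f(x) = |x| + x/2 - 1, with root x = 2/3.
f = lambda x: abs(x) + 0.5 * x - 1.0

# Generalized derivative: sign(x) + 1/2. At x = 0 the generalized
# Jacobian is the interval [-0.5, 1.5]; we pick the element 1.5.
g = lambda x: (1.0 if x >= 0 else -1.0) + 0.5

root = semismooth_newton(f, g, x0=2.0)
print(root)
```

Because f is piecewise linear, the iteration lands on the root in a single step once it enters the correct piece.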
Journal Article

Generalized directional gradients, backward stochastic differential equations and mild solutions of semilinear parabolic equations

TL;DR: In this paper, a forward-backward system of stochastic differential equations in an infinite-dimensional framework is studied, together with its relationship to a semilinear parabolic differential equation on a Hilbert space.
Journal Article

The SECQ, Linear Regularity, and the Strong CHIP for an Infinite System of Closed Convex Sets in Normed Linear Spaces

TL;DR: A property relating their epigraphs to the epigraph of their intersection is studied, and its relations to other constraint qualifications (such as linear regularity, the strong CHIP, and Jameson's ($G$)-property) are established.

On algorithms for solving least squares problems under an L1 penalty or an L1 constraint

TL;DR: Several algorithms can be used to compute the LASSO solution, which minimises the residual sum of squares subject to a constraint (or, equivalently, a penalty) on the sum of the absolute values of the coefficient estimates.
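One standard such algorithm (a generic sketch, not necessarily one of those the paper analyses) is coordinate descent with soft-thresholding applied to the penalised form (1/2)||y − Xβ||² + λ||β||₁:

```python
import numpy as np

def soft_threshold(z, gamma):
    # Proximal operator of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)   # per-coordinate curvature
    r = y - X @ b                   # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]     # remove coordinate j's contribution
            rho = X[:, j] @ r       # partial least-squares fit for j
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]     # restore the residual
    return b

# With an orthonormal design the solution is just soft-thresholded y:
X = np.eye(3)
y = np.array([3.0, 0.5, -2.0])
b = lasso_cd(X, y, lam=1.0)
print(b)
```

In the orthonormal case above, each coefficient reduces to soft_threshold(y_j, λ), so the penalty shrinks large coefficients by λ and sets small ones exactly to zero — the behaviour that makes the L1 penalty attractive for variable selection.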