Open Access

√(x² + μ) is the Most Computationally Efficient Smooth Approximation to |x|: a Proof

TLDR
In this paper, the authors show that the most computationally efficient smooth approximation to |x| is the function √(x² + μ), a function which has indeed been successfully used in such optimization.
Abstract
In many practical situations, we need to minimize an expression of the type ∑ᵢ |cᵢ|. The problem is that most efficient optimization techniques use the derivative of the objective function, but the function |x| is not differentiable at 0. To make optimization efficient, it is therefore reasonable to approximate |x| by a smooth function. We show that in some reasonable sense, the most computationally efficient smooth approximation to |x| is the function √(x² + μ), a function which has indeed been successfully used in such optimization.
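As a minimal illustration of how such a smooth surrogate can be used, the sketch below minimizes ∑ᵢ √(cᵢ² + μ) with a gradient-based solver in place of the nondifferentiable ∑ᵢ |cᵢ|. It is not taken from the paper: the residuals cᵢ(θ) = yᵢ − aᵢ·θ, the data (A, y), and the value of μ are all illustrative assumptions.

```python
# Minimal sketch (not from the paper): minimizing sum_i sqrt(c_i^2 + mu)
# as a smooth surrogate for sum_i |c_i|, so that a gradient-based
# optimizer can be used. The residuals c_i(theta) = y_i - a_i . theta,
# the data (A, y), and the value of mu are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))                      # illustrative design matrix
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
mu = 1e-6                                         # smoothing parameter (assumed value)

def smooth_l1(theta):
    r = y - A @ theta                             # residuals c_i(theta)
    return np.sum(np.sqrt(r**2 + mu))             # smooth stand-in for sum |c_i|

def smooth_l1_grad(theta):
    r = y - A @ theta
    # d/dx sqrt(x^2 + mu) = x / sqrt(x^2 + mu), well defined at x = 0
    return -A.T @ (r / np.sqrt(r**2 + mu))

result = minimize(smooth_l1, x0=np.zeros(3), jac=smooth_l1_grad)
print(result.x)                                   # approximate least-absolute-deviations fit
```

Because √(x² + μ) has the derivative x/√(x² + μ), the gradient above is defined everywhere, including at residuals equal to zero.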



Citations
Dissertation

Sparse and redundant representations for inverse problems and recognition

TL;DR: This research investigates how domain adaptation, dictionary learning, object recognition, activity recognition, and shape representation can be combined in machine learning to address the challenge of sparse representation in signal/image processing.
Journal ArticleDOI

Negativity Bounds for Weyl–Heisenberg Quasiprobability Representations

TL;DR: In this paper, the authors define a family of negativity measures that includes Zhu's as a special case, consider another member of the family which they call “sum negativity,” prove a sufficient condition for local maxima in sum negativity, and find exact global minima in dimensions 3 and 4.
Journal ArticleDOI

Construction of an efficient portfolio of power purchase decisions based on risk-diversification tradeoff

TL;DR: In this paper, the authors present a methodology based on the tradeoff between risk and diversification for evaluating an energy purchase portfolio, where the assets are the purchasing strategies of a retailer-generator of electricity in three markets: spot, regulated, and non-regulated.
Journal ArticleDOI

Sigmoid functions for the smooth approximation to the absolute value function

TL;DR: In this article, the authors presented smooth approximations to the absolute value function |x| using sigmoid functions and provided sharp hyperbolic bounds for the error function.
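For comparison with the √(x² + μ) surrogate of the main paper, the short sketch below uses x·tanh(x/μ) as one sigmoid-style smooth approximation to |x|. This particular form and the value of μ are assumptions chosen for illustration; they are not necessarily the exact family studied in that article.

```python
# Minimal sketch (assumed form, not necessarily the article's exact family):
# a sigmoid-style smooth surrogate for |x|, compared with sqrt(x**2 + mu).
import numpy as np

mu = 0.1                                   # smoothing parameter (assumed value)
x = np.linspace(-2.0, 2.0, 401)

sigmoid_surrogate = x * np.tanh(x / mu)    # tanh is a sigmoid-shaped function
sqrt_surrogate = np.sqrt(x**2 + mu)        # surrogate from the main paper

print("max error, x*tanh(x/mu):  ", np.max(np.abs(sigmoid_surrogate - np.abs(x))))
print("max error, sqrt(x^2 + mu):", np.max(np.abs(sqrt_surrogate - np.abs(x))))
```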
Proceedings ArticleDOI

Certified machine learning: A posteriori error estimation for physics-informed neural networks

TL;DR: This paper derives a rigorous upper bound on the PINN prediction error and applies the bound to two academic toy problems, one of which falls in the category of model-predictive control, thereby demonstrating the practical use of the derived results.
References
Book

Compressed sensing

TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit from signal processing.
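The linear-programming recast mentioned in that summary (Basis Pursuit: minimize ‖x‖₁ subject to Ax = y) can be sketched as follows. The problem sizes, the random data, and the splitting x = u − v are illustrative assumptions, not the paper's own experiments.

```python
# Minimal sketch (illustrative data): Basis Pursuit, min ||x||_1 s.t. Ax = y,
# recast as a linear program by splitting x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 20, 60                              # n measurements, signal length m (assumed sizes)
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]     # sparse signal with N = 3 important coefficients
y = A @ x_true

# Variables z = [u, v]; objective sum(u) + sum(v) = ||x||_1; constraint A(u - v) = y.
c = np.ones(2 * m)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```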
Journal ArticleDOI

Decoding by linear programming

TL;DR: f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Journal ArticleDOI

Stable signal recovery from incomplete and inaccurate measurements

TL;DR: In this paper, the authors considered the problem of recovering a vector x ∈ R^m from incomplete and contaminated observations y = Ax + e, where e is an error term.
Posted Content

Stable Signal Recovery from Incomplete and Inaccurate Measurements

TL;DR: It is shown that x₀ can be recovered accurately from the incomplete and contaminated observations y.
Posted Content

Decoding by Linear Programming

TL;DR: In this paper, it was shown that under suitable conditions on the coding matrix, the input vector can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program).