Topic: Concave function

About: Concave function is a research topic. Over its lifetime, 1415 publications have been published within this topic, receiving 33278 citations.
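For reference, a function f is concave on a convex domain when, for all x, y in the domain and all λ ∈ [0, 1]:

```latex
f\big(\lambda x + (1-\lambda)\,y\big) \;\ge\; \lambda f(x) + (1-\lambda)\,f(y).
```

Equivalently, f is concave exactly when -f is convex, so results stated below for convex functions transfer directly.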


Papers
Posted Content
Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang
TL;DR: This paper designs and analyzes new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates to solve a family of convex-concave problems with no bilinear structure assumed.
Abstract: Previous studies on stochastic primal-dual algorithms for solving min-max problems with faster convergence rely heavily on the bilinear structure of the problem, which restricts their applicability to a narrow range of problems. The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed. Convergence rates faster than $O(1/\sqrt{T})$, with $T$ being the number of stochastic gradient updates, are established under mild conditions on the involved functions of the primal and dual variables. For example, for a family of problems that satisfy a weak strong convexity condition in the primal variable and are strongly concave in the dual variable, the convergence rate of the proposed algorithm is $O(1/T)$. We also investigate the effectiveness of the proposed algorithms for learning robust models and for empirical AUC maximization.
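For orientation, here is a minimal sketch of the baseline stochastic gradient descent-ascent loop that primal-dual methods of this kind refine. The oracle names grad_x/grad_y and the diminishing step-size schedule are illustrative assumptions; this plain baseline attains only the $O(1/\sqrt{T})$ rate the paper improves on, and is not the paper's algorithm.

```python
import numpy as np

def stochastic_gda(grad_x, grad_y, x0, y0, T, eta=0.1):
    """Baseline stochastic gradient descent-ascent for min_x max_y f(x, y).

    grad_x / grad_y are assumed stochastic-gradient oracles for the primal
    and dual variables.  This plain scheme attains the O(1/sqrt(T)) rate
    that the paper improves upon; it is NOT the paper's algorithm.
    """
    x, y = x0.copy(), y0.copy()
    x_avg, y_avg = np.zeros_like(x0), np.zeros_like(y0)
    for t in range(1, T + 1):
        step = eta / np.sqrt(t)            # diminishing step size
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - step * gx                  # descent on the primal variable
        y = y + step * gy                  # ascent on the dual variable
        x_avg += (x - x_avg) / t           # running averages of the iterates,
        y_avg += (y - y_avg) / t           # returned as the final estimate
    return x_avg, y_avg
```

Averaging the iterates, as done here, is the standard device for obtaining the $O(1/\sqrt{T})$ guarantee in the general convex-concave setting.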

21 citations

Posted Content
TL;DR: In this article, the convergence analysis of the proximal DC algorithm with extrapolation (pDCA$_e$) was refined: the whole sequence generated by the algorithm converges when the objective is level-bounded, without differentiability assumptions on the concave part, and the algorithm is locally linearly convergent when the KL exponent of the potential function is 1/2.
Abstract: We consider the problem of minimizing a difference-of-convex (DC) function, which can be written as the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function, and a continuous, possibly nonsmooth, concave function. We refine the convergence analysis in [38] for the proximal DC algorithm with extrapolation (pDCA$_e$) and show that the whole sequence generated by the algorithm is convergent when the objective is level-bounded, {\em without} imposing differentiability assumptions on the concave part. Our analysis is based on a new potential function, which we assume to be a Kurdyka-{\L}ojasiewicz (KL) function. We also establish a relationship between our KL assumption and the one used in [38]. Finally, we demonstrate how the pDCA$_e$ can be applied to a class of simultaneous sparse recovery and outlier detection problems arising from robust compressed sensing in signal processing and least trimmed squares regression in statistics. Specifically, we show that the objectives of these problems can be written as level-bounded DC functions whose concave parts are {\em typically nonsmooth}. Moreover, for a large class of loss functions and regularizers, the KL exponent of the corresponding potential function is shown to be 1/2, which implies that the pDCA$_e$ is locally linearly convergent when applied to these problems. Our numerical experiments show that the pDCA$_e$ usually outperforms the proximal DC algorithm with nonmonotone linesearch [24, Appendix A] in both CPU time and solution quality for this application.
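A minimal sketch of a pDCA$_e$-style iteration may help fix ideas. The oracle signatures (grad_g, prox_h, subgrad_c) and the fixed extrapolation weight are assumptions of this sketch, not the paper's exact parameter rules.

```python
import numpy as np

def pdca_e(grad_g, prox_h, subgrad_c, x0, L, beta=0.5, n_iter=500):
    """Sketch of a proximal DC algorithm with extrapolation (pDCA_e)
    for minimizing g(x) + h(x) - c(x), where g is smooth convex with
    L-Lipschitz gradient, h is proper closed convex (accessed through
    its proximal map), and -c is the concave part (c convex, accessed
    through a subgradient oracle).  The fixed weight `beta` and the
    oracle signatures are assumptions of this sketch, not the paper's
    exact parameter rules.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)        # extrapolated point
        xi = subgrad_c(x)                  # subgradient of c at the iterate
        # proximal gradient step on g + h with the concave part linearized
        x_prev, x = x, prox_h(y - (grad_g(y) - xi) / L, 1.0 / L)
    return x
```

For instance, with h = λ‖·‖₁ the proximal map prox_h is soft-thresholding, which covers sparse recovery applications of the kind mentioned above.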

21 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that f is a matrix concave function of order [n/2] and that a norm inequality holds for all n-by-n positive semidefinite matrices A and B in every unitarily invariant norm; f is not assumed to be a monotone matrix function of all orders.
Abstract: Let f : (0, ∞) → R be a monotone matrix function of order n for some arbitrary but fixed value of n. We show that f is a matrix concave function of order [n/2] and that a corresponding norm inequality holds, for all n-by-n positive semidefinite matrices A and B, in every unitarily invariant norm. Because f is not assumed to be a monotone matrix function of all orders, Loewner's integral representation of functions that are monotone of all orders is not applicable; instead, we use the functional characterization of f in proving these results.
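For readers unfamiliar with the order-n notions, the standard definitions (assumed here, not stated in the abstract) are as follows. f is matrix monotone of order n if, for all n-by-n Hermitian matrices A, B with spectra in (0, ∞),

```latex
A \succeq B \;\Longrightarrow\; f(A) \succeq f(B),
```

and f is matrix concave of order n if, for all such A, B and all λ ∈ [0, 1],

```latex
f\big(\lambda A + (1-\lambda)B\big) \;\succeq\; \lambda f(A) + (1-\lambda)\,f(B).
```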

21 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a method for finding the global minimum of a Lipschitzian function subject to Lipschitzian constraints by converting it into the problem of globally minimizing a concave function subject to a convex constraint and a reverse convex constraint; the resulting algorithm has the same complexity as the outer approximation algorithm for a concave minimization problem.
Abstract: We present a new method for finding the global minimum of a Lipschitzian function under Lipschitzian constraints. The method consists in converting the given problem into one of globally minimizing a concave function subject to a convex constraint and a reverse convex constraint. The resulting algorithm has the same complexity as the outer approximation algorithm for a concave minimization problem.
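Concretely, the converted problem described in the abstract has the form

```latex
\min_{z}\;\varphi(z)
\quad\text{s.t.}\quad z \in C,\qquad z \notin \operatorname{int} C',
```

where φ is concave and C, C' are convex sets; the exclusion z ∉ int C' is the reverse convex constraint.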

21 citations

Journal ArticleDOI
01 Jul 2020
TL;DR: In this article, the Hermite-Hadamard inequality is reconstructed via a relatively new method called the Green function technique, yielding many new results for functions whose second derivative is convex, monotone, or concave in absolute value.
Abstract: The Hermite-Hadamard inequality by means of the Riemann-Liouville fractional integral operators is already known in the literature. In this paper, our purpose is to reconstruct this inequality via a relatively new method called the Green function technique. In the process, some identities are established. Using these identities, we obtain many new results for functions whose second derivative is convex, monotone, or concave in absolute value. We anticipate that the method outlined in this article will stimulate further investigation in this direction.
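For context, the classical Hermite-Hadamard inequality states that for a convex function f : [a, b] → R,

```latex
f\!\left(\frac{a+b}{2}\right)
\;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx
\;\le\; \frac{f(a)+f(b)}{2},
```

with both inequalities reversed when f is concave; the fractional-integral version studied in the paper generalizes this statement.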

21 citations


Network Information
Related Topics (5)

Topic                          Papers   Citations   Relatedness
Markov chain                   51.9K    1.3M        74%
Bounded function               77.2K    1.3M        74%
Polynomial                     52.6K    853.1K      72%
Upper and lower bounds         56.9K    1.1M        72%
Eigenvalues and eigenvectors   51.7K    1.1M        72%
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   16
2022   40
2021   58
2020   49
2019   52
2018   60