scispace - formally typeset

Concave function

About: Concave function is a research topic. Over the lifetime, 1415 publications have been published within this topic receiving 33278 citations.
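As background for the papers below: a function f is concave when it lies above its chords, i.e. f(t*x + (1-t)*y) >= t*f(x) + (1-t)*f(y) for all t in [0, 1]. As an illustrative aside (not drawn from any paper on this page), this defining inequality can be spot-checked numerically on sample points:

```python
import numpy as np

def is_concave_on_sample(f, xs, n_pairs=1000, seed=0):
    """Numerically test Jensen's inequality
    f(t*x + (1-t)*y) >= t*f(x) + (1-t)*f(y)
    on random pairs drawn from the sample points xs."""
    rng = np.random.default_rng(seed)
    x = rng.choice(xs, n_pairs)
    y = rng.choice(xs, n_pairs)
    t = rng.uniform(0, 1, n_pairs)
    lhs = f(t * x + (1 - t) * y)
    rhs = t * f(x) + (1 - t) * f(y)
    return bool(np.all(lhs >= rhs - 1e-12))

xs = np.linspace(0.1, 10.0, 200)
print(is_concave_on_sample(np.sqrt, xs))    # sqrt is concave on (0, inf)
print(is_concave_on_sample(np.square, xs))  # x^2 is convex, not concave
```

A check like this only certifies concavity on the sampled pairs, not globally, but it catches gross violations quickly.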


Papers

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of learning a concave function on a compact convex domain using linear combinations of "bump-like" components (neurons).
Abstract: Fitting a function by using linear combinations of a large number $N$ of “simple” components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is nonconvex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here, we consider the problem of learning a concave function $f$ on a compact convex domain $\Omega \subset {\mathbb{R}}^{d}$, using linear combinations of “bump-like” components (neurons). The parameters to be fitted are the centers of $N$ bumps, and the resulting empirical risk minimization problem is highly nonconvex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega $. Further, when the bump width $\delta $ tends to $0$, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for $N\to \infty $, $\delta \to 0$. Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of $\delta $, $N$. Explaining this phenomenon, and understanding the dependence on $\delta $, $N$ in a quantitative manner remains an outstanding challenge.

44 citations
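A minimal 1-D sketch of the setup in this abstract: fit a concave target by an average of N Gaussian "bumps", training only the centers by plain gradient descent on the empirical risk. The target, bump kernel, width delta, learning rate, and step count below are illustrative choices, and this toy does not reach the N -> infinity, delta -> 0 regime the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 200)            # compact domain Omega = [0, 1]
target = 1.0 - (xs - 0.5) ** 2             # a concave target function

N, delta, lr, steps = 50, 0.1, 0.5, 300    # illustrative hyperparameters

def risk_and_grad(w):
    """Empirical L2 risk of the bump average, and its gradient in the centers."""
    d = xs[:, None] - w[None, :]            # pairwise x - w_i
    K = np.exp(-d ** 2 / (2 * delta ** 2))  # Gaussian bump around each center
    resid = K.mean(axis=1) - target         # model h(x) = (1/N) sum_i K_i(x)
    grad = (2.0 / N) * np.mean(resid[:, None] * K * d, axis=0) / delta ** 2
    return np.mean(resid ** 2), grad

w = rng.uniform(0.0, 1.0, N)               # random initial bump centers
r0, _ = risk_and_grad(w)
for _ in range(steps):
    risk, grad = risk_and_grad(w)
    w = w - lr * grad                      # plain gradient descent on centers
print(r0, risk)                            # risk decreases from its initial value
```

The risk is nonconvex in the centers, exactly as the abstract describes; the paper's result is that the many-neuron limit of this dynamic is a Wasserstein gradient flow with good convergence properties.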

Posted Content
TL;DR: In this paper, the authors show that L-divergence induces a new information geometry on the simplex consisting of a Riemannian metric and a pair of dually coupled affine connections which defines two kinds of geodesics.
Abstract: A function is exponentially concave if its exponential is concave. We consider exponentially concave functions on the unit simplex. In a previous paper we showed that gradient maps of exponentially concave functions provide solutions to a Monge-Kantorovich optimal transport problem and give a better gradient approximation than those of ordinary concave functions. The approximation error, called L-divergence, is different from the usual Bregman divergence. Using tools of information geometry and optimal transport, we show that L-divergence induces a new information geometry on the simplex consisting of a Riemannian metric and a pair of dually coupled affine connections which defines two kinds of geodesics. We show that the induced geometry is dually projectively flat but not flat. Nevertheless, we prove an analogue of the celebrated generalized Pythagorean theorem from classical information geometry. On the other hand, we consider displacement interpolation under a Lagrangian integral action that is consistent with the optimal transport problem and show that the action minimizing curves are dual geodesics. The Pythagorean theorem is also shown to have an interesting application of determining the optimal trading frequency in stochastic portfolio theory.

44 citations
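A quick numerical illustration of the opening definition, assuming nothing from the paper beyond that definition: on the simplex, the log of the geometric mean and the log-return of a fixed portfolio w (a hypothetical weight vector chosen here for illustration) are both exponentially concave, since their exponentials (the geometric mean and a linear function) are concave.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_is_concave(f, d=3, n_pairs=2000, tol=1e-12):
    """Numerically test concavity of exp(f) on random pairs in the unit simplex."""
    p = rng.dirichlet(np.ones(d), n_pairs)
    q = rng.dirichlet(np.ones(d), n_pairs)
    t = rng.uniform(0, 1, (n_pairs, 1))
    lhs = np.exp(f(t * p + (1 - t) * q))
    rhs = t[:, 0] * np.exp(f(p)) + (1 - t[:, 0]) * np.exp(f(q))
    return bool(np.all(lhs >= rhs - tol))

log_geo_mean = lambda p: np.log(p).mean(axis=1)  # exp(f) = geometric mean
w = np.array([0.2, 0.5, 0.3])                    # a fixed portfolio (illustrative)
log_return = lambda p: np.log(p @ w)             # exp(f) = linear in p
print(exp_is_concave(log_geo_mean), exp_is_concave(log_return))
```

Gradient maps of such functions are what the paper connects to optimal transport and to trading strategies in stochastic portfolio theory.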

Journal ArticleDOI
TL;DR: An efficient algorithm is given to decide whether a competitive equilibrium exists when the cost functions of the producers are M♮-convex and the utility functions of the consumers are M♮-concave and quasilinear in money, where M♮-convexity is closely related to the gross substitutes condition.
Abstract: This paper considers an economic model in which producers and consumers trade various indivisible commodities through a perfectly divisible commodity, money. On the basis of the recent developments in discrete mathematics (combinatorial optimization), we give an efficient algorithm to decide whether a competitive equilibrium exists or not, when cost functions of the producers are M♮-convex and utility functions of the consumers are M♮-concave and quasilinear in money, where M♮-convexity is closely related to the gross substitutes condition.

44 citations
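The existence question in this abstract can be made concrete in a tiny assignment market: two unit-demand consumers, two indivisible goods, utilities quasilinear in money. Unit-demand valuations are a special case of gross substitutes, so an equilibrium exists; the brute-force search below (with illustrative valuations) merely exhibits one, and does not attempt the paper's efficient M♮-convexity-based decision algorithm.

```python
import itertools

values = [[4, 2],      # consumer 0's values for goods 0 and 1
          [3, 5]]      # consumer 1's values

def demand(i, prices):
    """Goods (or None for 'buy nothing') maximizing v_ij - p_j for consumer i."""
    options = [(values[i][j] - prices[j], j) for j in (0, 1)] + [(0, None)]
    best = max(u for u, _ in options)
    return {j for u, j in options if u == best}

def find_equilibrium(max_price=6):
    """Search integer price vectors for prices and an assignment such that
    every consumer receives a good in their demand set and the market clears."""
    for prices in itertools.product(range(max_price + 1), repeat=2):
        for assignment in itertools.permutations((0, 1)):
            if all(assignment[i] in demand(i, prices) for i in (0, 1)):
                return prices, assignment
    return None

print(find_equilibrium())
```

With these valuations the efficient assignment gives good 0 to consumer 0 and good 1 to consumer 1, and zero prices already support it as an equilibrium.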

Journal ArticleDOI
TL;DR: In this paper, the authors show that a matrix version of Choi's inequality for positive unital maps and operator convex functions remains valid for monotone convex functions, at the cost of unitary congruences.

43 citations
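A numerical illustration of the inequality this TL;DR refers to, for the simplest operator convex function f(x) = x^2 and the pinching map Phi(A) = diag(A), which is positive and unital: in this special case Choi's inequality f(Phi(A)) <= Phi(f(A)) reduces to Kadison's inequality Phi(A)^2 <= Phi(A^2). (The paper's contribution, the monotone convex case with unitary congruences, is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(0)

def pinch(A):
    """Phi(A) = diagonal part of A: a positive, unital linear map."""
    return np.diag(np.diag(A))

# Check f(Phi(A)) <= Phi(f(A)) for f(x) = x^2 on random symmetric matrices:
# the gap Phi(A^2) - Phi(A)^2 should be positive semidefinite.
ok = True
for _ in range(100):
    X = rng.standard_normal((4, 4))
    A = (X + X.T) / 2                      # random real symmetric matrix
    gap = pinch(A @ A) - pinch(A) @ pinch(A)
    ok &= np.all(np.linalg.eigvalsh(gap) >= -1e-10)
print(ok)
```

For the pinching map the gap is diagonal with entries sum_{j != i} A_ij^2, which makes the positivity visible directly.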

Book ChapterDOI
13 Dec 2010
TL;DR: It is proved that when the cost functions have the form f(x) = cr/x, computing a Pure Nash Equilibrium is PLS-complete even when the strategies of the players are paths on a directed network; for cost functions of the form f(x) = cr(x)/x with cr(x) non-decreasing and concave, PLS-completeness holds in undirected networks as well.
Abstract: We study Congestion Games with non-increasing cost functions (Cost Sharing Games) from a complexity perspective and resolve their computational hardness, which has been an open question. Specifically, we prove that when the cost functions have the form f(x) = cr/x (Fair Cost Allocation), it is PLS-complete to compute a Pure Nash Equilibrium even in the case where the strategies of the players are paths on a directed network. For cost functions of the form f(x) = cr(x)/x, where cr(x) is a non-decreasing concave function, we also prove PLS-completeness in undirected networks. Thus we extend the results of [7, 1] to the non-increasing case. For the case of Matroid Cost Sharing Games, where tractability of Pure Nash Equilibria is known from [1], we give a greedy polynomial time algorithm that computes a Pure Nash Equilibrium with social cost at most the potential of the optimal strategy profile. Hence, for this class of games we give a polynomial time version of the Potential Method introduced in [2] for bounding the Price of Stability.

43 citations
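The potential argument underlying such games can be sketched in a singleton version, where each player picks a single resource rather than a network path (so none of the paper's hardness applies): every improving move strictly decreases Rosenthal's potential, hence best-response dynamics terminate at a Pure Nash Equilibrium. The resource costs and starting profile below are illustrative.

```python
costs = [6.0, 10.0, 3.0]   # c_r for three shareable resources
n_players = 4

def shares(choice):
    """Fair cost share c_r / x_r paid by each player under profile `choice`."""
    load = [choice.count(r) for r in range(len(costs))]
    return [costs[r] / load[r] for r in choice]

def potential(choice):
    """Rosenthal potential: sum_r c_r * (1 + 1/2 + ... + 1/x_r)."""
    load = [choice.count(r) for r in range(len(costs))]
    return sum(c * sum(1.0 / k for k in range(1, x + 1))
               for c, x in zip(costs, load))

def best_response_dynamics(choice):
    """Let players deviate greedily until no one can lower their own share."""
    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            for r in range(len(costs)):
                trial = choice[:i] + [r] + choice[i + 1:]
                if shares(trial)[i] < shares(choice)[i] - 1e-12:
                    choice, improved = trial, True
                    break
    return choice

eq = best_response_dynamics([0, 0, 1, 2])
print(eq, potential(eq))
```

Here all four players end up sharing the cheap resource. In the paper's network setting the same potential exists, but following improving moves can take exponentially long, which is exactly what PLS-completeness captures.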


Network Information
Related Topics (5)
- Markov chain: 51.9K papers, 1.3M citations, 74% related
- Bounded function: 77.2K papers, 1.3M citations, 74% related
- Polynomial: 52.6K papers, 853.1K citations, 72% related
- Upper and lower bounds: 56.9K papers, 1.1M citations, 72% related
- Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 72% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023      16
2022      40
2021      58
2020      49
2019      52
2018      60