
Showing papers on "Concave function published in 2013"


Journal ArticleDOI
TL;DR: In this paper, the authors present a combinatorial characterization of the Bethe entropy function of a factor graph, such a characterization being in contrast to the original, analytical, definition of this function.
Abstract: We present a combinatorial characterization of the Bethe entropy function of a factor graph, such a characterization being in contrast to the original, analytical, definition of this function. We achieve this combinatorial characterization by counting valid configurations in finite graph covers of the factor graph. Analogously, we give a combinatorial characterization of the Bethe partition function, whose original definition was also of an analytical nature. As we point out, our approach has similarities to the replica method, but also stark differences. The above findings are a natural backdrop for introducing a decoder for graph-based codes that we will call symbolwise graph-cover decoding, a decoder that extends our earlier work on blockwise graph-cover decoding. Both graph-cover decoders are theoretical tools that help toward a better understanding of message-passing iterative decoding, namely blockwise graph-cover decoding links max-product (min-sum) algorithm decoding with linear programming decoding, and symbolwise graph-cover decoding links sum-product algorithm decoding with Bethe free energy function minimization at temperature one. In contrast to the Gibbs entropy function, which is a concave function, the Bethe entropy function is in general not concave everywhere. In particular, we show that every code picked from an ensemble of regular low-density parity-check codes with minimum Hamming distance growing (with high probability) linearly with the block length has a Bethe entropy function that is convex in certain regions of its domain.
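The contrast drawn above between the always-concave Gibbs entropy and the not-everywhere-concave Bethe entropy can be illustrated with a quick numerical concavity check. The binary entropy function and the grid below are illustrative choices, not taken from the paper:

```python
import math

def binary_entropy(p):
    """Gibbs (Shannon) entropy of a Bernoulli(p) source, in nats."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def midpoint_concave(f, xs):
    """Check f((x+y)/2) >= (f(x)+f(y))/2 on every pair of grid points."""
    return all(f((x + y) / 2) >= (f(x) + f(y)) / 2 - 1e-12
               for x in xs for y in xs)

grid = [i / 100 for i in range(101)]
print(midpoint_concave(binary_entropy, grid))  # True: Gibbs entropy is concave
```

The Bethe entropy of a factor graph, by contrast, can fail exactly this midpoint test in parts of its domain, which is the paper's point.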

89 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the first variation of the total mass functional, which corresponds to the volume of convex bodies when restricted to the subclass of characteristic functions, and prove integral representation formulae for this first variation, which suggest a natural way to define the notion of area measure for a log-concave function.

65 citations


Journal ArticleDOI
TL;DR: In this article, the authors prove that the curvature of the largest interior ball touching the hypersurface at each point is a subsolution of the linearized flow equation if the speed is concave.

54 citations


Journal ArticleDOI
TL;DR: It is shown that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor unless P = NP; this is proved via a similar hardness-of-approximation result for supermodular function minimization, which may be of independent interest.
Abstract: We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS, which always returns a solution which is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness of approximation result for supermodular function minimization, a result that may be of independent interest.
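The hardness result above rests on supermodular function minimization. As a quick illustration of the definition involved (not the paper's construction), one can brute-force the supermodularity inequality f(A∪B) + f(A∩B) ≥ f(A) + f(B) on a toy ground set; the weights below are made up:

```python
from itertools import combinations

def subsets(ground):
    """All subsets of a finite ground set, as frozensets."""
    items = sorted(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_supermodular(f, ground):
    """Brute-force check of f(A | B) + f(A & B) >= f(A) + f(B)."""
    subs = subsets(ground)
    return all(f(a | b) + f(a & b) >= f(a) + f(b) - 1e-9
               for a in subs for b in subs)

w = {1: 1.0, 2: 2.0, 3: 0.5}             # made-up nonnegative weights
f = lambda s: sum(w[i] for i in s) ** 2  # convex function of a modular function
print(is_supermodular(f, set(w)))        # True
```

A convex function of a nonnegative modular function, as here, is supermodular; replacing the square with a square root (a concave function) breaks the inequality.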

48 citations


Journal ArticleDOI
TL;DR: This work presents an algorithm for solving the Pareto surface approximation problem that is practical with 10 or fewer conflicting objectives, motivated by an application to radiation therapy optimization.
Abstract: We consider the problem of approximating Pareto surfaces of convex multicriteria optimization problems by a discrete set of points and their convex combinations. Finding the scalarization parameters that optimally limit the approximation error when generating a single Pareto optimal solution is a nonconvex optimization problem. This problem can be solved by enumerative techniques, but at a cost that increases exponentially with the number of objectives. We present an algorithm for solving the Pareto surface approximation problem that is practical with 10 or fewer conflicting objectives, motivated by an application to radiation therapy optimization. Our enumerative scheme is, in a sense, dual to a family of previous algorithms. The proposed technique retains the quality of the best previous algorithm in this class while solving fewer subproblems. A further improvement is provided by a procedure for discarding subproblems based on reusing information from previous solves. The combined effect of the enhancements is empirically demonstrated to reduce the computational expense of solving the Pareto surface approximation problem by orders of magnitude. For problems where the objectives have positive curvature, an improved bound on the approximation error is demonstrated using transformations of the initial objectives with strictly increasing and concave functions.

46 citations


Journal ArticleDOI
TL;DR: This paper showed that the generating function for the number of concave compositions, denoted v(q), is a mixed mock modular form in a more general sense than is typically used.
Abstract: A composition of an integer constrained to have decreasing then increasing parts is called concave. We prove that the generating function for the number of concave compositions, denoted v(q), is a mixed mock modular form in a more general sense than is typically used. We relate v(q) to generating functions studied in connection with “Moonshine of the Mathieu group” and the smallest parts of a partition. We demonstrate this connection in four different ways. We use the elliptic and modular properties of Appell sums as well as q-series manipulations and holomorphic projection. As an application of the modularity results, we give an asymptotic expansion for the number of concave compositions of n. For comparison, we give an asymptotic expansion for the number of concave compositions of n with strictly decreasing and increasing parts, the generating function of which is related to a false theta function rather than a mock theta function.

33 citations


Journal ArticleDOI
TL;DR: The existence and uniqueness of the Nash equilibrium (NE) are investigated using the concavity of the utility function and the exact potential game associated with it, and numerical simulations validate the proposed framework.
Abstract: The amplify-and-forward (AF) cooperative communication scheme is modeled using the Stackelberg market framework, where a relay is willing to sell its resources, power and bandwidth, to multiple users to maximize its revenue. The relay determines the prices for relaying the users' information, depending on its available resources and the users' demands. Subsequently, each user maximizes its own utility function by determining the optimum power and bandwidth to buy from the relay. The utility function of the user is formulated as a jointly concave function of power and bandwidth. The existence and uniqueness of the Nash equilibrium (NE) are investigated using the concavity of the utility function and the exact potential game associated with the proposed utility function. The NE solution can be obtained in a centralized manner, but this requires full knowledge of the channel gains of all users, which may be difficult to obtain in practice. Hence, a distributed algorithm can be applied to obtain power and bandwidth allocations with minimum information exchange between the relay and the users. Similarly, the optimum prices for the power and bandwidth can also be obtained in a distributed manner. The convergence of the algorithms is investigated using the Jacobian matrix at the NE. Numerical simulations are used to validate the proposed framework.

32 citations


Posted Content
TL;DR: In this article, the concept of f-divergence and relative entropy for s-concave and log-concave functions was introduced and the affine invariant valuation property was established.
Abstract: We prove new entropy inequalities for log-concave and s-concave functions that strengthen and generalize recently established reverse log Sobolev and Poincare inequalities for such functions. This leads naturally to the concept of f-divergence and, in particular, relative entropy for s-concave and log-concave functions. We establish their basic properties, among them the affine invariant valuation property. Applications are given in the theory of convex bodies.
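As a minimal illustration of the log-concavity property this entry works with (not taken from the paper), one can check numerically that a Gaussian density has a concave logarithm:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Normal density; its logarithm is a concave quadratic in x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

xs = [i / 10 - 5 for i in range(101)]   # grid on [-5, 5]
log_concave = all(
    math.log(gaussian_pdf((x + y) / 2))
    >= (math.log(gaussian_pdf(x)) + math.log(gaussian_pdf(y))) / 2 - 1e-9
    for x in xs for y in xs)
print(log_concave)  # True
```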

30 citations


Journal ArticleDOI
Liran Rotem1
TL;DR: In this paper, the authors extend some notions, previously defined for log-concave functions, to the larger domain of so-called α-concave functions, and demonstrate how such geometric results can imply Poincare type inequalities.

26 citations


Journal ArticleDOI
TL;DR: The algorithm solves a polynomial number of linear minimization problems, computes an extreme-point near-optimal solution, and applies directly to combinatorial 0-1 problems where the convex hull of feasible solutions is known.

24 citations


Posted Content
TL;DR: In this paper, a Lyapunov method is developed that solves the problem in an online \emph{max-weight} fashion by selecting actions based on a set of time-varying weights.
Abstract: This paper considers a time-varying game with $N$ players. Every time slot, players observe their own random events and then take a control action. The events and control actions affect the individual utilities earned by each player. The goal is to maximize a concave function of time average utilities subject to equilibrium constraints. Specifically, participating players are provided access to a common source of randomness from which they can optimally correlate their decisions. The equilibrium constraints incentivize participation by ensuring that players cannot earn more utility if they choose not to participate. This form of equilibrium is similar to the notions of Nash equilibrium and correlated equilibrium, but is simpler to attain. A Lyapunov method is developed that solves the problem in an online \emph{max-weight} fashion by selecting actions based on a set of time-varying weights. The algorithm does not require knowledge of the event probabilities and has polynomial convergence time. A similar method can be used to compute a standard correlated equilibrium, albeit with increased complexity.

Journal ArticleDOI
TL;DR: Using variational methods based on the critical point theory and the Ekeland variational principle, it is shown that for small values of the parameter, the problem has at least two nontrivial smooth positive solutions.
Abstract: We consider a nonlinear parametric Dirichlet problem driven by the anisotropic p-Laplacian with the combined effects of "concave" and "convex" terms. The "superlinear" nonlinearity need not satisfy the Ambrosetti-Rabinowitz condition. Using variational methods based on the critical point theory and the Ekeland variational principle, we show that for small values of the parameter, the problem has at least two nontrivial smooth positive solutions.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of optimal reinsurance under VaR and CTE optimization criteria when the ceded loss functions are in the class of increasing concave functions and proved that under the VaR optimization criterion, the quota-share reinsurance with a policy limit is always optimal.
Abstract: Most of the studies on optimal reinsurance are from the viewpoint of the insurer, and the optimal ceded loss functions always turn out to be convex. However, reinsurance contracts always involve a limit on the ceded loss function in practice, so it may not be enough to confine the analysis to the class of convex functions only. In this paper, we study the problem of optimal reinsurance under VaR and CTE optimization criteria when the ceded loss functions are in the class of increasing concave functions. By using a simple geometric approach, we prove that under the VaR optimization criterion, quota-share reinsurance with a policy limit is always optimal, while full reinsurance with a policy limit is optimal under the CTE optimization criterion. Some illustrative examples are presented.
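The VaR-optimal quota-share contract with a policy limit described above is the ceded loss function f(x) = min(c·x, L), which is increasing and concave. A small numerical sanity check, with illustrative parameters rather than the paper's:

```python
def ceded(x, c=0.5, limit=10.0):
    """Quota-share ceded loss with a policy limit: f(x) = min(c*x, L).
    c and limit are illustrative, not taken from the paper."""
    return min(c * x, limit)

xs = [0.5 * i for i in range(101)]   # loss amounts on [0, 50]
vals = [ceded(x) for x in xs]
increasing = all(b >= a for a, b in zip(vals, vals[1:]))
concave_ok = all(ceded((x + y) / 2) >= (ceded(x) + ceded(y)) / 2 - 1e-12
                 for x in xs for y in xs)
print(increasing, concave_ok)  # True True
```

The minimum of two increasing linear functions is always increasing and concave, which is why this contract lies inside the class of ceded loss functions the paper optimizes over.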

Journal ArticleDOI
TL;DR: It is shown that utility functions act as the Lagrange multipliers of the stochastic order constraints in this general setting, and that the dual problem is a search over utility functions.
Abstract: We study convex optimization problems with a class of multivariate integral stochastic order constraints defined in terms of parametrized families of increasing concave functions. We show that utility functions act as the Lagrange multipliers of the stochastic order constraints in this general setting, and that the dual problem is a search over utility functions. Practical implementation issues are discussed.

Book ChapterDOI
26 Aug 2013
TL;DR: A generalization of the classical knapsack problem is considered in which the hard capacity constraint of the standard setting (the weight of the chosen items may not exceed a fixed capacity) is replaced by a weight-dependent cost function.
Abstract: In this paper we consider a generalization of the classical knapsack problem. While in the standard setting a fixed capacity may not be exceeded by the weight of the chosen items, we replace this hard constraint by a weight-dependent cost function. The objective is to maximize the total profit of the chosen items minus the cost induced by their total weight. We study two natural classes of cost functions, namely convex and concave functions. For the concave case, we show that the problem can be solved in polynomial time; for the convex case we present an FPTAS and a 2-approximation algorithm with running time O(n log n), where n is the number of items. Previously, only a 3-approximation algorithm was known.
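The objective described above (total profit minus a cost of the total weight) can be illustrated by brute force on a toy instance. This is not the paper's FPTAS, just an exhaustive baseline with made-up data:

```python
from itertools import combinations

items = [(10.0, 4), (7.0, 3), (5.0, 2), (3.0, 1)]   # (profit, weight), toy data

def cost(w):
    """A convex weight-dependent cost replacing the hard capacity."""
    return 0.5 * w * w

def net(sub):
    """Total profit of a chosen subset minus the cost of its total weight."""
    return sum(p for p, _ in sub) - cost(sum(w for _, w in sub))

best = max(net(sub) for r in range(len(items) + 1)
           for sub in combinations(items, r))
print(best)  # → 3.5 (picking the items of weight 2 and 1)
```

With a convex cost, packing everything is penalized quadratically, so the optimum here keeps only the two light items; the paper's contribution is doing this efficiently instead of enumerating all 2^n subsets.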

Book ChapterDOI
28 Aug 2013
TL;DR: Two dual methods that have been proposed independently for computing solutions of the discrete or semi-discrete instances of optimal transport are presented and compared.
Abstract: The goal of this expository article is to present and compare two dual methods that have been proposed independently for computing solutions of the discrete or semi-discrete instances of optimal transport.

DissertationDOI
01 Jan 2013
TL;DR: A novel method is developed for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions, and an explicit connection is demonstrated between the problem of learning set functions from random evaluations and that of recovering sparse signals.
Abstract: The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions. First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques. Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set. Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine some different function classes under which uniform reconstruction is possible.
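The function class in the first part above, sums of concave functions composed with modular functions, is a standard source of submodular functions. A brute-force check on a toy example (illustrative, not from the thesis):

```python
import math
from itertools import combinations

def subsets(ground):
    """All subsets of a finite ground set, as frozensets."""
    items = sorted(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_submodular(f, ground):
    """Brute-force check of f(A | B) + f(A & B) <= f(A) + f(B)."""
    subs = subsets(ground)
    return all(f(a | b) + f(a & b) <= f(a) + f(b) + 1e-9
               for a in subs for b in subs)

w = {1: 2.0, 2: 1.0, 3: 3.0}                   # toy modular weights
f = lambda s: math.sqrt(sum(w[i] for i in s))  # concave of a modular function
print(is_submodular(f, set(w)))                # True
```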

Journal ArticleDOI
TL;DR: In this paper, a general concave integral control (GCIC) strategy is proposed, derived by normalizing the bounded integral control action and the concave function gain integrator, introducing the partial derivative of the Lyapunov function into the integrator, and originating a new strategy to transform ordinary control into general integral control.
Abstract: In this paper, a new class of general integral control, named general concave integral control, is proposed. It is derived by normalizing the bounded integral control action and concave function gain integrator, introducing the partial derivative of the Lyapunov function into the integrator, and originating a new strategy to transform ordinary control into general integral control. By using the Lyapunov method along with LaSalle's invariance principle, a theorem ensuring regionally as well as semi-globally asymptotic stability is established using only some bounded information. Moreover, a highlight of this integral control strategy is that the integrator output can tend to infinity while the integral control action remains finite. Therefore, a simple and ingenious method to design general integral control is founded. Simulation results show that under both the normal and perturbed cases, the optimum response over the whole domain of interest can be achieved by a single set of control gains, even when the payload changes abruptly.

Journal ArticleDOI
TL;DR: In this paper, Yang's method is used under a probit model to obtain the minimal number of support points that maximize any concave function of the Fisher information matrix; optimal designs based on this minimal number of support points are obtained and their efficiencies compared.
Abstract: This article studies optimal designs to analyze dose-response functions with a downturn. Two interesting challenges are estimating the entire dose-response curve and estimating the ED50. Here, I obtain and compare optimal designs for these objectives, separately and together in a two-stage design. I adopt a probit model with a quadratic term to describe the dose-response. Under the probit model, Yang's method is used to obtain the minimal number of support points that maximize any concave function of the Fisher information matrix. Optimal designs are obtained based on the minimal number of support points, and their efficiencies are compared.

Journal ArticleDOI
TL;DR: In this article, the authors consider generalized measure-theoretic entropy, where instead of the Shannon entropy function an arbitrary concave function defined on the unit interval, vanishing at the origin, is used, and show that this isomorphism invariant is linearly dependent on the Kolmogorov-Sinai entropy.
Abstract: We consider the concept of generalized measure-theoretic entropy, where instead of the Shannon entropy function we consider an arbitrary concave function defined on the unit interval, vanishing at the origin. Under mild assumptions on this function we show that this isomorphism invariant is linearly dependent on the Kolmogorov-Sinai entropy.
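The generalized entropy above replaces the Shannon summand with an arbitrary concave g vanishing at the origin. A small sketch with two such choices of g, for illustration only:

```python
import math

def generalized_entropy(g, probs):
    """H_g(p) = sum_i g(p_i), for a concave g with g(0) = 0."""
    return sum(g(p) for p in probs)

shannon = lambda x: -x * math.log(x) if x > 0 else 0.0
quadratic = lambda x: x * (1.0 - x)   # another concave g vanishing at 0

p = [0.25] * 4                        # uniform distribution on 4 outcomes
print(generalized_entropy(shannon, p))    # ln 4 ≈ 1.3863 (Shannon case)
print(generalized_entropy(quadratic, p))  # 0.75
```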

Journal ArticleDOI
TL;DR: In this article, a numerical method for solving concave continuous-state dynamic programming problems is introduced, based on a pair of polyhedral approximations of concave functions; the method is globally convergent and produces computable upper and lower bounds on the value function.

Posted Content
TL;DR: In this paper, a classical static result of convex duality theory is extended, via the maximum principle of stochastic control theory, to a dynamic relation for markets modeled by Itô-Lévy processes, and the existence of an optimal scenario is shown to be equivalent to the replicability of a related claim.
Abstract: A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) The optimal terminal wealth $X^*(T) : = X_{\varphi^*}(T)$ of the problem to maximize the expected $U$-utility of the terminal wealth $X_{\varphi}(T)$ generated by admissible portfolios $\varphi(t), 0 \leq t \leq T$ in a market with the risky asset price process modeled as a semimartingale; (ii) The optimal scenario $\frac{dQ^*}{dP}$ of the dual problem to minimize the expected $V$-value of $\frac{dQ}{dP}$ over a family of equivalent local martingale measures $Q$, where $V$ is the convex conjugate function of the concave function $U$. In this paper we consider markets modeled by Itô-Lévy processes. In the first part we use the maximum principle in stochastic control theory to extend the above relation to a dynamic relation, valid for all $t \in [0,T]$. We prove in particular that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, $0 \leq t \leq T$. In the terminal time case $t=T$ we recover the classical duality connection above. We get moreover an explicit relation between the optimal portfolio $\varphi^*$ and the optimal measure $Q^*$. We also obtain that the existence of an optimal scenario is equivalent to the replicability of a related $T$-claim. In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to get from the solution of one of the problems to the other. We illustrate the results with explicit examples.

Journal ArticleDOI
TL;DR: In this paper, the authors study an optimal portfolio selection problem under instantaneous price impact and show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of quasi-variational inequality.
Abstract: In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a “piecewise constant” form, reflecting a more practical perspective.

Posted Content
TL;DR: The key idea of the algorithm is to learn the input data pattern dynamically: it solves a sequence of carefully chosen partial allocation problems and uses their optimal solutions to assist with future decisions.
Abstract: We consider an online matching problem with concave returns. This problem is a significant generalization of the Adwords allocation problem and has vast applications in online advertising. In this problem, items arrive sequentially and each has to be allocated to one of the bidders, who bid a certain value for each item. At each time, the decision maker has to allocate the current item to one of the bidders without knowing the future bids, and the objective is to maximize the sum of some concave functions of each bidder's aggregate value. In this work, we propose an algorithm that achieves near-optimal performance for this problem when the bids arrive in a random order and the input data satisfies certain conditions. The key idea of our algorithm is to learn the input data pattern dynamically: we solve a sequence of carefully chosen partial allocation problems and use their optimal solutions to assist with future decisions. Our analysis belongs to the primal-dual paradigm; however, the absence of linearity in the objective function and the dynamic nature of the algorithm make our analysis quite distinctive.
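The objective above, maximizing a sum of concave functions of each bidder's aggregate value, can be contrasted with a naive greedy baseline. The code below is such a baseline with toy data, not the paper's primal-dual algorithm:

```python
import math

def greedy_online_matching(arrivals, n_bidders, utility=math.sqrt):
    """Allocate each arriving item to the bidder whose concave utility of
    aggregate value increases the most. A greedy baseline for illustration,
    not the learning-based algorithm from the paper."""
    agg = [0.0] * n_bidders
    for bids in arrivals:                      # bids[j]: bidder j's bid
        gains = [utility(agg[j] + bids[j]) - utility(agg[j])
                 for j in range(n_bidders)]
        j_star = max(range(n_bidders), key=gains.__getitem__)
        agg[j_star] += bids[j_star]
    return sum(utility(v) for v in agg), agg

arrivals = [(1.0, 0.5), (0.2, 0.9), (0.6, 0.6)]   # toy bid sequence
total, agg = greedy_online_matching(arrivals, 2)
print(total, agg)
```

Because the utility is concave, the marginal gain of a bidder shrinks as its aggregate value grows, so even this greedy rule tends to spread items across bidders rather than piling them on one.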

Journal ArticleDOI
TL;DR: Simulation results show that mass functions with concave curves may generally obtain a satisfactory solution within the allowed iterations; mass functions are classified into four different types of curvilinear functions according to their curvilinear styles.
Abstract: Inspired by physicomimetics, artificial physics optimisation (APO) is a novel population-based stochastic algorithm. In the APO framework, the mass of each individual corresponds to a user-defined function of the value of an objective to be optimised, which can supply some important information for searching for global optima. There are many functions that can be used as the mass function, and no doubt some will be better than others for specific optimisation problems or perhaps classes of problems. This paper proposes the basic requirements and design method of the mass function, and classifies mass functions into four different types of curvilinear functions according to their curvilinear styles, such as linear functions, convex functions, and concave functions. Simulation results show that mass functions with concave curves may generally obtain a satisfactory solution within the allowed iterations.

Journal ArticleDOI
TL;DR: In this article, a new definition of functional Steiner symmetrizations on log-concave functions is given, and with it a new proof of the classical Prekopa-Leindler inequality.
Abstract: In this paper, we give a new definition of functional Steiner symmetrizations on log-concave functions. Using the functional Steiner symmetrization, we give a new proof of the classical Prekopa-Leindler inequality on log-concave functions. Mathematics subject classification (2010): 46E30, 52A40.

Posted Content
TL;DR: In this article, the authors consider markets modeled by Ito-Levy processes, and in the first part they give a new proof of the above result in this setting, based on the maximum principle in stochastic control theory.
Abstract: A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) The optimal terminal wealth $X^*(T) : = X_{\varphi^*}(T)$ of the classical problem to maximize the expected $U$-utility of the terminal wealth $X_{\varphi}(T)$ generated by admissible portfolios $\varphi(t); 0 \leq t \leq T$ in a market with the risky asset price process modeled as a semimartingale; (ii) The optimal scenario $\frac{dQ^*}{dP}$ of the dual problem to minimize the expected $V$-value of $\frac{dQ}{dP}$ over a family of equivalent local martingale measures $Q$. Here $V$ is the convex dual function of the concave function $U$. In this paper we consider markets modeled by Itô-Lévy processes, and in the first part we give a new proof of the above result in this setting, based on the maximum principle in stochastic control theory. An advantage with our approach is that it also gives an explicit relation between the optimal portfolio $\varphi^*$ and the optimal measure $Q^*$, in terms of backward stochastic differential equations. In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a relation between them. In particular, we show explicitly how to get from the solution of one of the problems to the solution of the other. We illustrate the results with explicit examples.

Journal ArticleDOI
TL;DR: Simulation results show that mass functions with concave curves may generally obtain a satisfactory solution within the allowed iterations; mass functions are classified into three different types of curvilinear functions according to their curvilinear styles, namely linear functions, convex functions, and concave functions.
Abstract: Artificial physics optimisation (APO) is a novel population-based stochastic algorithm inspired by physicomimetics. APO with the feasibility and dominance method (EAD-APO) is employed to solve constrained optimisation problems. In EAD-APO, the mass of each feasible individual corresponds to a user-defined function of the value of an objective to be optimised, and the mass of each infeasible individual corresponds to a user-defined function of the constraint violation value, which can supply some important information for searching for global optima. There are many functions that can be used as the mass function, and no doubt some will be better than others for specific optimisation problems or perhaps classes of problems. This paper proposes the basic regulation and design method of the mass function, and classifies mass functions into three different types of curvilinear functions according to their curvilinear styles, namely linear functions, convex functions, and concave functions. Simulation results show that mass functions with concave curves may generally obtain a satisfactory solution within the allowed iterations.

Posted Content
TL;DR: In this paper, the authors show that the market impact of volume weighted average price (VWAP) orders is a convex function of the trading rate, whereas most empirical estimates of transaction cost are concave functions.
Abstract: The market impact (MI) of Volume Weighted Average Price (VWAP) orders is a convex function of a trading rate, but most empirical estimates of transaction cost are concave functions. How is this possible? We show that isochronic (constant trading time) MI is slightly convex, and isochoric (constant trading volume) MI is concave. We suggest a model that fits all trading regimes and guarantees no-dynamic-arbitrage.

Book ChapterDOI
21 Aug 2013
TL;DR: Two stochastic multi-armed bandit problems are considered in this paper in the Bayesian setting, where the reward or the available information derived from an arm is not a function of just the current play of that arm.
Abstract: We consider two stochastic multi-armed bandit problems in this paper in the Bayesian setting. In the first problem the accrued reward in a step is a concave function (such as the maximum) of the observed values of the arms played in that step. In the second problem, the observed value from a play of arm i is revealed after δ_i steps. Both of these problems have been considered in the bandit literature but no solutions with provably good performance guarantees are known over short horizons. The two problems are similar in the sense that the reward (for the first) or the available information (for the second) derived from an arm is not a function of just the current play of that arm. This interdependence between arms renders most existing analysis techniques inapplicable.