
Showing papers on "Concave function published in 2018"


Journal ArticleDOI
TL;DR: A proximal difference-of-convex algorithm with extrapolation is proposed to possibly accelerate the proximal DCA, and it is shown that any cluster point of the sequence generated by the algorithm is a stationary point of the DC optimization problem for a fairly general choice of extrapolation parameters.
Abstract: We consider a class of difference-of-convex (DC) optimization problems whose objective is level-bounded and is the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function and a continuous concave function. While this kind of problem can be solved by the classical difference-of-convex algorithm (DCA) (Pham et al. Acta Math Vietnam 22:289–355, 1997), the difficulty of the subproblems of this algorithm depends heavily on the choice of DC decomposition. Simpler subproblems can be obtained by using a specific DC decomposition described in Pham et al. (SIAM J Optim 8:476–505, 1998). This decomposition has been proposed in numerous works such as Gotoh et al. (DC formulations and algorithms for sparse optimization problems, 2017), and we refer to the resulting DCA as the proximal DCA. Although the subproblems are simpler, the proximal DCA is the same as the proximal gradient algorithm when the concave part of the objective is void, and hence is potentially slow in practice. In this paper, motivated by the extrapolation techniques for accelerating the proximal gradient algorithm in the convex setting, we consider a proximal difference-of-convex algorithm with extrapolation to possibly accelerate the proximal DCA. We show that any cluster point of the sequence generated by our algorithm is a stationary point of the DC optimization problem for a fairly general choice of extrapolation parameters: in particular, the parameters can be chosen as in FISTA with fixed restart (O'Donoghue and Candès in Found Comput Math 15:715–732, 2015). In addition, by assuming the Kurdyka-Łojasiewicz property of the objective and the differentiability of the concave part, we establish global convergence of the sequence generated by our algorithm and analyze its convergence rate.
Our numerical experiments on two difference-of-convex regularized least squares models show that our algorithm usually outperforms the proximal DCA and the general iterative shrinkage and thresholding algorithm proposed in Gong et al. (A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems, 2013).
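To make the scheme concrete, here is a minimal sketch of a proximal DC algorithm with FISTA-style extrapolation for one kind of DC-regularized least squares model, with regularizer λ(‖x‖₁ − ‖x‖₂); the problem sizes, λ, and iteration count are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdca_e(A, b, lam, iters=500):
    """Proximal DCA with FISTA-style extrapolation for
    min 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x_prev = x = np.zeros(A.shape[1])
    t_prev = t_cur = 1.0
    for _ in range(iters):
        t_prev, t_cur = t_cur, (1.0 + np.sqrt(1.0 + 4.0 * t_cur ** 2)) / 2.0
        beta = (t_prev - 1.0) / t_cur      # extrapolation parameter
        y = x + beta * (x - x_prev)
        nx = np.linalg.norm(x)
        # subgradient of the concave part -lam*||.||_2 at the current iterate
        xi = lam * x / nx if nx > 0 else np.zeros_like(x)
        grad = A.T @ (A @ y - b)
        x_prev, x = x, soft_threshold(y - (grad - xi) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = [3.0, -2.0, 4.0, 1.0, -3.0]
b = A @ x_true
x_hat = pdca_e(A, b, lam=0.1)
```

On this noiseless toy instance the iterates drive the DC objective well below its value at the zero vector; with fixed restart the parameter sequence would periodically be reset, as the paper's parameter choice allows.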

112 citations


Proceedings Article
03 Jul 2018
TL;DR: This paper shows that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation, which implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret.
Abstract: To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that have small dynamic regrets for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity is utilized to upper bound the dynamic regret. Moreover, all of those adaptive algorithms do not need any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.
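Schematically, and with constants suppressed (the precise statement and conditions are in the paper), the claimed connection says that for any partition of the horizon $[T]$ into intervals $I_1,\dots,I_k$, the dynamic regret is controlled by the adaptive regret on each interval plus the functional variation within it:

```latex
\mathrm{D\text{-}Regret}_T \;\lesssim\; \min_{I_1,\dots,I_k}\; \sum_{j=1}^{k}
\Bigl( \mathrm{SA\text{-}Regret}(|I_j|) \;+\; |I_j|\, V_T(I_j) \Bigr),
\qquad
V_T(I_j) \;=\; \sum_{t \in I_j} \sup_{x} \, \lvert f_t(x) - f_{t-1}(x) \rvert .
```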

83 citations


Journal ArticleDOI
TL;DR: It is shown that L-divergence induces a new information geometry on the simplex, consisting of a Riemannian metric and a pair of dually coupled affine connections that define two kinds of geodesics, and an analogue of the celebrated generalized Pythagorean theorem from classical information geometry is proved.
Abstract: A function is exponentially concave if its exponential is concave. We consider exponentially concave functions on the unit simplex. In a previous paper, we showed that gradient maps of exponentially concave functions provide solutions to a Monge–Kantorovich optimal transport problem and give a better gradient approximation than those of ordinary concave functions. The approximation error, called L-divergence, is different from the usual Bregman divergence. Using tools of information geometry and optimal transport, we show that L-divergence induces a new information geometry on the simplex consisting of a Riemannian metric and a pair of dually coupled affine connections which defines two kinds of geodesics. We show that the induced geometry is dually projectively flat but not flat. Nevertheless, we prove an analogue of the celebrated generalized Pythagorean theorem from classical information geometry. On the other hand, we consider displacement interpolation under a Lagrangian integral action that is consistent with the optimal transport problem and show that the action minimizing curves are dual geodesics. The Pythagorean theorem is also shown to have an interesting application to determining the optimal trading frequency in stochastic portfolio theory.
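For concreteness, the L-divergence of an exponentially concave function $\varphi$ on the unit simplex, as used here, is the gradient-approximation error (nonnegativity follows from concavity of $e^{\varphi}$):

```latex
L_{\varphi}\left[\, p : q \,\right] \;=\;
\log\bigl( 1 + \nabla\varphi(q) \cdot (p - q) \bigr)
\;-\; \bigl( \varphi(p) - \varphi(q) \bigr) \;\ge\; 0 .
```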

69 citations


Proceedings ArticleDOI
20 Jun 2018
TL;DR: In this paper, the capacity of the binary deletion channel with deletion probability d is shown to be at most (1−d) log ϕ for d ≥ 1/2 and, assuming the capacity function is convex, at most 1 − d log(4/ϕ) for d < 1/2, where ϕ is the golden ratio; these are the first fully explicit capacity upper bounds, proved without computer assistance, for d outside the limiting case d → 0.
Abstract: We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show that the capacity of the binary deletion channel with deletion probability d is at most (1−d) log ϕ for d ≥ 1/2, and, assuming the capacity function is convex, is at most 1 − d log(4/ϕ) for d < 1/2, where ϕ = (1+√5)/2 is the golden ratio. This is the first nontrivial capacity upper bound, for any value of d outside the limiting case d → 0, that is fully explicit and proved without computer assistance. Furthermore, we derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel, and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel. Finally, we derive several novel upper bounds on the capacity of the deletion channel.
All upper bounds are maximums of efficiently computable, and concave, univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maximums can be found even more efficiently (and sometimes, analytically, for example for d=1/2). Along the way, we develop several new techniques of potentially independent interest. For example, we develop systematic techniques to study channels with mean constraints over the reals. Furthermore, we motivate the study of novel probability distributions over non-negative integers, as well as novel special functions which could be of interest to mathematical analysis.
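The computational core described above, maximizing a univariate concave function over a bounded interval, can be carried out reliably with golden-section search. The sketch below is generic (the objective here is a placeholder, not one of the paper's carefully designed bound functions) and also evaluates the explicit bound (1−d)·log₂ ϕ at d = 1/2.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-9):
    """Maximize a concave univariate function on [lo, hi] by golden-section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):                  # maximizer lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                             # maximizer lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

phi = (1.0 + math.sqrt(5.0)) / 2.0            # golden ratio
bound = lambda d: (1.0 - d) * math.log2(phi)  # explicit bound for d >= 1/2, in bits
x_star = golden_section_max(lambda x: x * (1.0 - x), 0.0, 1.0)  # toy concave objective
```

At d = 1/2 the explicit bound evaluates to roughly 0.347 bits per channel use, already well below the trivial bound of 1 − d = 0.5.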

50 citations


Proceedings Article
01 Oct 2018
TL;DR: A PTAS is developed for the underlying optimization problem of determining a reward-maximizing sequence of arm pulls and it is shown how to use this PTAS in a learning setting to obtain sublinear regret.
Abstract: We introduce a general model of bandit problems in which the expected payout of an arm is an increasing concave function of the time since it was last played. We first develop a PTAS for the underlying optimization problem of determining a reward-maximizing sequence of arm pulls. We then show how to use this PTAS in a learning setting to obtain sublinear regret.
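A toy version of this model is easy to write down; the myopic policy below is only a baseline for intuition (the paper's contribution is a PTAS and a regret bound, neither of which this sketch implements), and the three recharge curves are arbitrary illustrative choices.

```python
import math

# Each arm's expected payout is an increasing concave function of the time
# tau >= 1 since that arm was last pulled (illustrative curves, not from the paper).
arms = [
    lambda tau: math.sqrt(tau),
    lambda tau: math.log1p(tau),
    lambda tau: 1.0 - 0.5 ** tau,
]

def myopic_schedule(arms, horizon):
    """Greedy baseline: at each step, pull the arm with the largest recharged payout."""
    since = [1] * len(arms)        # time since each arm was last pulled
    total, pulls = 0.0, []
    for _ in range(horizon):
        i = max(range(len(arms)), key=lambda j: arms[j](since[j]))
        total += arms[i](since[i])
        pulls.append(i)
        since = [s + 1 for s in since]   # all waiting times grow by one step...
        since[i] = 1                     # ...except the arm just pulled, which resets
    return total, pulls

total, pulls = myopic_schedule(arms, horizon=20)
```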

45 citations


Proceedings ArticleDOI
01 Feb 2018
TL;DR: This paper comprehensively discusses mathematical properties of the class of exponentially concave functions, like closedness under linear and convex combination and relations to quasi-, Jensen- and Schur-concavity, and new inequalities for the Kullback-Leibler divergence and for mutual information are derived.
Abstract: Concave functions play a central role in optimization. So-called exponentially concave functions are of similar importance in information theory. In this paper, we comprehensively discuss mathematical properties of the class of exponentially concave functions, like closedness under linear and convex combination and relations to quasi-, Jensen- and Schur-concavity. Information theoretic quantities such as self-information and (scaled) entropy are shown to be exponentially concave. Furthermore, new inequalities for the Kullback-Leibler divergence, for the entropy of mixture distributions, and for mutual information are derived.
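A quick numerical sanity check of one of the listed facts, exponential concavity of entropy on the simplex, tests the midpoint inequality exp H((p+q)/2) ≥ ½(exp H(p) + exp H(q)) on random Dirichlet draws. This verifies a necessary condition only, and the dimension and sample count are arbitrary choices.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def midpoint_gap(p, q):
    """Concavity-of-exp(H) midpoint test: nonnegative if H is exponentially concave."""
    return np.exp(entropy((p + q) / 2)) - 0.5 * (np.exp(entropy(p)) + np.exp(entropy(q)))

rng = np.random.default_rng(1)
gaps = [midpoint_gap(rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4)))
        for _ in range(1000)]
```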

40 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extend the notion of John's ellipsoid to the setting of integrable log-concave functions and define the integral ratio of a log-concave function; a reverse functional affine isoperimetric inequality is given, which can be viewed as a stability version of the functional affine isoperimetric inequality.
Abstract: We extend the notion of John’s ellipsoid to the setting of integrable log-concave functions. This will allow us to define the integral ratio of a log-concave function, which will extend the notion of volume ratio, and we will find the log-concave function maximizing the integral ratio. A reverse functional affine isoperimetric inequality will be given, written in terms of this integral ratio. This can be viewed as a stability version of the functional affine isoperimetric inequality.

38 citations


Journal ArticleDOI
TL;DR: In this paper, the LYZ ellipsoid and the Petty projection body for log-concave functions were developed, and the continuous, SL(n) contravariant valuations on a subclass of log-concave functions were classified.

29 citations


Journal ArticleDOI
TL;DR: In this paper, the size issue of the densest subgraph problem was addressed by generalizing the density of the subgraph induced by the vertices in the graph to a monotonically non-decreasing function.
Abstract: In the densest subgraph problem, given an edge-weighted undirected graph $$G=(V,E,w)$$ , we are asked to find $$S\subseteq V$$ that maximizes the density, i.e., w(S) / |S|, where w(S) is the sum of weights of the edges in the subgraph induced by S. This problem has often been employed in a wide variety of graph mining applications. However, the problem has a drawback; it may happen that the obtained subset is too large or too small in comparison with the size desired in the application at hand. In this study, we address the size issue of the densest subgraph problem by generalizing the density of $$S\subseteq V$$ . Specifically, we introduce the f-density of $$S\subseteq V$$ , which is defined as w(S) / f(|S|), where $$f:\mathbb {Z}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}$$ is a monotonically non-decreasing function. In the f-densest subgraph problem (f-DS), we aim to find $$S\subseteq V$$ that maximizes the f-density w(S) / f(|S|). Although f-DS does not explicitly specify the size of the output subset of vertices, we can handle the above size issue using a convex/concave size function f appropriately. For f-DS with convex function f, we propose a nearly-linear-time algorithm with a provable approximation guarantee. On the other hand, for f-DS with concave function f, we propose an LP-based exact algorithm, a flow-based $$O(|V|^3)$$ -time exact algorithm for unweighted graphs, and a nearly-linear-time approximation algorithm.
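On toy instances the effect of the size function f is easy to see by brute force (exponential enumeration, so only for illustration; the graph and its weights below are invented):

```python
import itertools, math

# Toy weighted graph as an edge list (u, v, weight) on vertices 0..4.
edges = [(0, 1, 3.0), (1, 2, 2.0), (0, 2, 2.0), (2, 3, 1.5), (3, 4, 0.5)]
vertices = sorted({u for e in edges for u in e[:2]})

def f_densest(edges, vertices, f):
    """Brute-force f-densest subgraph: maximize w(S) / f(|S|) over all subsets S."""
    best, best_set = float("-inf"), None
    for r in range(1, len(vertices) + 1):
        for S in itertools.combinations(vertices, r):
            s = set(S)
            w = sum(wt for u, v, wt in edges if u in s and v in s)
            if w / f(len(s)) > best:
                best, best_set = w / f(len(s)), s
    return best, best_set

_, S_lin = f_densest(edges, vertices, lambda s: s)   # classical density w(S)/|S|
_, S_sqrt = f_densest(edges, vertices, math.sqrt)    # concave f favours larger subsets
```

Here the classical density picks the tight triangle {0, 1, 2}, while the concave size function sqrt admits the larger set {0, 1, 2, 3}: exactly the size-control effect the f-density is meant to provide.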

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied real-valued, continuous and translation invariant valuations defined on the space of quasi-concave functions of N variables and proved a homogeneous decomposition theorem of McMullen type, and a representation formula for those valuations which are N-homogeneous.
Abstract: We study real-valued, continuous and translation invariant valuations defined on the space of quasi-concave functions of N variables. In particular, we prove a homogeneous decomposition theorem of McMullen type, and we find a representation formula for those valuations which are N-homogeneous. Moreover, we introduce the notion of Klain's functions for this type of valuation.

23 citations


Posted Content
TL;DR: In this article, the convergence analysis for the proximal DC algorithm with extrapolation (pDCA_e) was refined, and the whole sequence generated by the algorithm was shown to be convergent when the objective is level-bounded, without differentiability assumptions on the concave part.
Abstract: We consider the problem of minimizing a difference-of-convex (DC) function, which can be written as the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function and a continuous possibly nonsmooth concave function. We refine the convergence analysis in [38] for the proximal DC algorithm with extrapolation (pDCA_e) and show that the whole sequence generated by the algorithm is convergent when the objective is level-bounded, without imposing differentiability assumptions in the concave part. Our analysis is based on a new potential function and we assume such a function is a Kurdyka-Łojasiewicz (KL) function. We also establish a relationship between our KL assumption and the one used in [38]. Finally, we demonstrate how the pDCA_e can be applied to a class of simultaneous sparse recovery and outlier detection problems arising from robust compressed sensing in signal processing and least trimmed squares regression in statistics. Specifically, we show that the objectives of these problems can be written as level-bounded DC functions whose concave parts are typically nonsmooth. Moreover, for a large class of loss functions and regularizers, the KL exponent of the corresponding potential function is shown to be 1/2, which implies that the pDCA_e is locally linearly convergent when applied to these problems. Our numerical experiments show that the pDCA_e usually outperforms the proximal DC algorithm with nonmonotone linesearch [24, Appendix A] in both CPU time and solution quality for this particular application.

Proceedings Article
01 Jan 2018
TL;DR: It is proved that local minima of probably conditionally concave energies on general matching polytopes are with high probability extreme points of the matching polytope (e.g., permutations).
Abstract: In this paper we address the graph matching problem. Following the recent works of Zaslavskiy et al. (2009) and Vestner et al. (2017), we analyze and generalize the idea of concave relaxations. We introduce the concepts of conditionally concave and probably conditionally concave energies on polytopes and show that they encapsulate many instances of the graph matching problem, including matching Euclidean graphs and graphs on surfaces. We further prove that local minima of probably conditionally concave energies on general matching polytopes (e.g., doubly stochastic) are with high probability extreme points of the matching polytope (e.g., permutations).

Journal ArticleDOI
TL;DR: In this article, the fixed point index was used to establish existence theorems for positive solutions to a system of semipositone fractional difference boundary value problems, where nonnegative concave functions and nonnegative matrices were adopted to characterize the coupling behavior of nonlinear terms.
Abstract: Using the fixed point index, we establish two existence theorems for positive solutions to a system of semipositone fractional difference boundary value problems. We adopt nonnegative concave functions and nonnegative matrices to characterize the coupling behavior of our nonlinear terms.

Journal ArticleDOI
TL;DR: In this article, it was shown that M♮-concave functions can be characterized by a combination of two simpler exchange properties, and that for a function defined on an integral polymatroid, a much simpler exchange axiom characterizes M♮-concavity.
Abstract: M♮-concave functions form a class of discrete concave functions in discrete convex analysis, and are defined by a certain exchange axiom. We show in this paper that M♮-concave functions can be characterized by a combination of two simpler exchange properties. It is also shown that for a function defined on an integral polymatroid, a much simpler exchange axiom characterizes M♮-concavity. These results have some significant implications in discrete convex analysis.
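For reference, the exchange axiom in question, in its standard discrete-convex-analysis form (as in Murota's work), says that $f:\mathbb{Z}^n \to \mathbb{R}\cup\{-\infty\}$ is M♮-concave when, for all $x, y \in \mathrm{dom}\, f$ and every index $i$ with $x_i > y_i$,

```latex
f(x) + f(y) \;\le\; \max\Bigl\{\, f(x - e_i) + f(y + e_i),\;
\max_{j:\, x_j < y_j} \bigl( f(x - e_i + e_j) + f(y + e_i - e_j) \bigr) \Bigr\},
```

where $e_i$ denotes the $i$-th unit vector; the paper shows how this single axiom splits into two simpler exchange properties.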

Posted Content
TL;DR: In this paper, the authors consider an optimal transport problem on the unit simplex whose solutions are given by gradients of exponentially concave functions and prove two main results, the first being that the optimal transport is the large deviation limit of a particle system of Dirichlet processes.
Abstract: We consider an optimal transport problem on the unit simplex whose solutions are given by gradients of exponentially concave functions and prove two main results. First, we show that the optimal transport is the large deviation limit of a particle system of Dirichlet processes transporting one probability measure on the unit simplex to another by coordinatewise multiplication and normalizing. The structure of our Lagrangian and the appearance of the Dirichlet process relate our problem closely to the entropic measure on the Wasserstein space as defined by von Renesse and Sturm in the context of Wasserstein diffusion. The limiting procedure is a triangular limit where we allow simultaneously the number of particles to grow to infinity while the 'noise' tends to zero. The method, which generalizes easily to many other cost functions, including the squared Euclidean distance, provides a novel combination of the Schrödinger problem approach due to C. Léonard and the related Brownian particle systems by Adams et al., which does not require gamma convergence. Second, we analyze the behavior of entropy along the paths of transport. The reference measure on the simplex is taken to be the Dirichlet measure with all zero parameters which relates to the finite-dimensional distributions of the entropic measure. The interpolating curves are not the usual McCann lines. Nevertheless we show that entropy plus a multiple of the transport cost remains convex, which is reminiscent of the semiconvexity of entropy along lines of McCann interpolations in negative curvature spaces. We also obtain, under suitable conditions, dimension-free bounds of the optimal transport cost in terms of entropy.

Posted Content
TL;DR: In this article, the authors introduced the concepts of conditionally concave and probably conditionally concave energies on polytopes and showed that they encapsulate many instances of the graph matching problem, including matching Euclidean graphs and graphs on surfaces.
Abstract: In this paper we address the graph matching problem. Following the recent works of Zaslavskiy et al. (2009) and Vestner et al. (2017), we analyze and generalize the idea of concave relaxations. We introduce the concepts of conditionally concave and probably conditionally concave energies on polytopes and show that they encapsulate many instances of the graph matching problem, including matching Euclidean graphs and graphs on surfaces. We further prove that local minima of probably conditionally concave energies on general matching polytopes (e.g., doubly stochastic) are with high probability extreme points of the matching polytope (e.g., permutations).

Journal ArticleDOI
TL;DR: In this article, it is shown that for certain hierarchical models, a simple alternating scheme to compute fully Bayesian maximum a posteriori (MAP) estimates leads to the exact same sequence of updates as a standard MM strategy (as with the adaptive lasso).
Abstract: Majorization-minimization (MM) is a standard iterative optimization technique which consists in minimizing a sequence of convex surrogate functionals. MM approaches have been particularly successful to tackle inverse problems and statistical machine learning problems where the regularization term is a sparsity-promoting concave function. However, due to non-convexity, the solution found by MM depends on its initialization. Uniform initialization is the most natural and often employed strategy as it boils down to penalizing all coefficients equally in the first MM iteration. Yet, this arbitrary choice can lead to unsatisfactory results in severely under-determined inverse problems such as source imaging with magneto- and electro-encephalography (M/EEG). The framework of hierarchical Bayesian modeling (HBM) is an alternative approach to encode sparsity. This work shows that for certain hierarchical models, a simple alternating scheme to compute fully Bayesian maximum a posteriori (MAP) estimates leads to the exact same sequence of updates as a standard MM strategy (see the adaptive lasso). With this parallel outlined, we show how to improve upon these MM techniques by probing the multimodal posterior density using Markov chain Monte Carlo (MCMC) techniques. Firstly, we show that these samples can provide well-informed initializations that help MM schemes to reach better local minima. Secondly, we demonstrate how it can reveal the different modes of the posterior distribution in order to explore and quantify the inherent uncertainty and ambiguity of such an ill-posed inference procedure. In the context of M/EEG, each mode corresponds to a plausible configuration of neural sources, which is crucial for data interpretation, especially in clinical contexts. Results on both simulations and real datasets show how the number or the type of sensors affect the uncertainties on the estimates.

Journal ArticleDOI
TL;DR: CB (for Convex/Concave Bounds) defines local conditions using suitably chosen convex and concave functions, and its superiority over the state of the art is demonstrated by its reduced runtime and power consumption.
Abstract: As data becomes dynamic, large, and distributed, there is increasing demand for what have become known as distributed stream algorithms. Since continuously collecting the data to a central server and processing it there is infeasible, a common approach is to define local conditions at the distributed nodes, such that—as long as they are maintained—some desirable global condition holds.

Previous methods derived local conditions focusing on communication efficiency. While proving very useful for reducing the communication volume, these local conditions often suffer from a heavy computational burden at the nodes. The computational complexity of the local conditions affects both the runtime and the energy consumption. These are especially critical for resource-limited devices like smartphones and sensor nodes. Such devices are becoming more ubiquitous due to the recent trend toward smart cities and the Internet of Things. To accommodate the high data rates and limited resources of these devices, it is crucial that the local conditions be quickly and efficiently evaluated.

Here we propose a novel approach, designated CB (for Convex/Concave Bounds). CB defines local conditions using suitably chosen convex and concave functions. Lightweight and simple, these local conditions can be rapidly checked on the fly. CB's superiority over the state of the art is demonstrated in its reduced runtime and power consumption, by up to six orders of magnitude in some cases. As an added bonus, CB also reduced communication overhead in all the tested application scenarios.
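The flavour of such local conditions can be conveyed in a few lines: for a convex scoring function g, Jensen's inequality makes the cheap per-node test g(x_i) ≤ T a sound certificate for the global condition g(mean) ≤ T. The function g, the threshold, and the data below are invented for illustration and are not CB's actual conditions.

```python
import numpy as np

g = lambda x: float(np.sum(x ** 2))   # a convex scoring function (illustrative)
T = 10.0                              # global threshold to be maintained

rng = np.random.default_rng(2)
local = [rng.normal(0.0, 0.8, size=3) for _ in range(5)]   # data held at 5 nodes

# Cheap local check at each node; convexity of g makes it sound, since
# g(mean(x_i)) <= mean(g(x_i)) <= max(g(x_i)) <= T, with no communication needed.
locally_safe = all(g(x) <= T for x in local)
globally_safe = g(np.mean(local, axis=0)) <= T
```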

Journal ArticleDOI
TL;DR: This work studies the general continuous capacity case, accounting for economies of scale in the capacity cost through an increasing concave function, and uses Lagrangian relaxation to decompose the problem and reformulate the subproblems as second-order cone programs solved at multiple utilization levels.

Journal ArticleDOI
TL;DR: In this paper, sufficient and necessary conditions for the minimax equality for extended real-valued abstract convex–concave functions are provided.
Abstract: In this paper, we provide sufficient and necessary conditions for the minimax equality for extended real-valued abstract convex–concave functions. As an application, we get sufficient and necessary conditions for the minimax equality for extended real-valued convex–concave functions.

Journal ArticleDOI
TL;DR: A novel linear programming relaxation is defined for solving optimization problems with a diseconomy of scale, and it is shown that the integrality gap of the relaxation is $A_q$, where $A_q$ is the $q$-th moment of the Poisson random variable with parameter 1.
Abstract: We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as $x^q$, $q \ge 1$, with the amount $x$ of resources used. We define a novel linear programming relaxation for such problems and then show that the integrality gap of the relaxation is $A_q$, where $A_q$ is the $q$-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that $\|\sum_{i=1}^n X_i\|_q \le C_q \|\sum_{i=1}^n Y_i\|_q$, where $X_i$ are independent nonnegative random variables, $Y_i$ are possibly dependent nonnegative random variables, and each $Y_i$ has the same distribution as $X_i$. The inequality was proved by de la Peña in 1990. De la Peña, Ibragimov, and Sharakhmetov showed that $C_q \le 2$ for $q \in (1,2)$ and $C_q \le A_q^{1/q}$ for $q \ge 2$. We show that the optimal constant is $C_q = A_q^{1/q}$ for any $q \ge 1$. We then prove a more general inequality: for every convex function $v$, $\mathbb{E}[v(\sum_{i=1}^n X_i)] \le \mathbb{E}[v(P \sum_{i=1}^n Y_i)]$, and, for every concave function $\psi$, $\mathbb{E}[\psi(\sum_{i=1}^n X_i)] \ge \mathbb{E}[\psi(P \sum_{i=1}^n Y_i)]$, where $P$ is a Poisson random variable with parameter 1 independent of the random variables $Y_i$.
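The constant $A_q$ is easy to compute from a Dobiński-style series; for integer $q$ these moments of a Poisson(1) variable are the Bell numbers, giving, e.g., an integrality gap of $A_2 = 2$ for $q = 2$, with the optimal decoupling constant $C_q = A_q^{1/q}$.

```python
import math

def poisson_moment(q, terms=60):
    """A_q: the q-th moment of a Poisson random variable with parameter 1,
    via the rapidly converging series e^{-1} * sum_k k^q / k!."""
    return math.exp(-1.0) * sum(k ** q / math.factorial(k) for k in range(1, terms))

# For integer q, A_q is the q-th Bell number: 1, 2, 5, 15, ...
moments = [poisson_moment(q) for q in range(1, 5)]
decoupling_C = [poisson_moment(q) ** (1.0 / q) for q in range(1, 5)]  # C_q = A_q^{1/q}
```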

Journal ArticleDOI
TL;DR: In this paper, the coefficient regions of analytic self-maps of the unit disk with a prescribed fixed point were discussed, and the Fekete–Szegő problem for normalized concave functions with a pole in the disk was solved.
Abstract: In this article, we discuss the coefficient regions of analytic self-maps of the unit disk with a prescribed fixed point. As an application, we solve the Fekete–Szegő problem for normalized concave functions with a pole in the unit disk.

Posted Content
TL;DR: This paper considers ambiguity in choice functions over a multi-attribute prospect space and proposes two approaches based respectively on the support functions and level functions of quasi-concave functions to develop tractable formulations of the maximin preference robust optimization model.
Abstract: Decision maker's preferences are often captured by some choice functions which are used to rank prospects. In this paper, we consider ambiguity in choice functions over a multi-attribute prospect space. Our main result is a robust preference model where the optimal decision is based on the worst-case choice function from an ambiguity set constructed through preference elicitation with pairwise comparisons of prospects. Differing from existing works in the area, our focus is on quasi-concave choice functions rather than concave functions and this enables us to cover a wide range of utility/risk preference problems including multi-attribute expected utility and $S$-shaped aspirational risk preferences. The robust choice function is increasing and quasi-concave but not necessarily translation invariant, a key property of monetary risk measures. We propose two approaches based respectively on the support functions and level functions of quasi-concave functions to develop tractable formulations of the maximin preference robust optimization model. The former gives rise to a mixed integer linear programming problem whereas the latter is equivalent to solving a sequence of convex risk minimization problems. To assess the effectiveness of the proposed robust preference optimization model and numerical schemes, we apply them to a security budget allocation problem and report some preliminary results from experiments.

Journal ArticleDOI
TL;DR: This paper proposes an alternative to nonparametric segmented concave least squares by using a differentiable approximation to an arbitrary functional form based on smoothly mixing Cobb-Douglas anchor functions over the data space.

Journal ArticleDOI
TL;DR: In this paper, the existence and multiplicity of positive solutions for a system of nonlinear second-order difference equations subject to multi-point boundary conditions, under some assumptions on the nonlinearities of the system which contains concave functions, were studied.
Abstract: We study the existence and multiplicity of positive solutions for a system of nonlinear second-order difference equations subject to multi-point boundary conditions, under some assumptions on the nonlinearities of the system which contains concave functions. In the proofs of our main results we use some theorems from the fixed point index theory.

Journal ArticleDOI
TL;DR: In this article, fractional integrals are considered, and some new upper bounds on the distance between the middle and left parts of Hermite–Hadamard type inequalities for fractional integrals are established for (s,m)-convex or s-concave functions.
Abstract: In this article, the fractional integral is considered. Some new upper bounds of the distance between the middle and left of Hermite–Hadamard type inequalities for fractional integrals are established for $(s,m)$-convex or s-concave functions.

Journal ArticleDOI
TL;DR: The multiple exchange property for matroid bases is generalized to valuated matroids and M-natural concave set functions, and it is shown that the strong no complementarities condition of Gul and Stacchetti is equivalent to the gross substitutes condition of Kelso and Crawford.
Abstract: The multiple exchange property for matroid bases is generalized for valuated matroids and M-natural concave set functions. The proof is based on the Fenchel-type duality theorem in discrete convex analysis. The present result has an implication in economics: The strong no complementarities condition of Gul and Stacchetti is, in fact, equivalent to the gross substitutes condition of Kelso and Crawford.

Posted Content
TL;DR: In this article, two-sided attainable bounds of Jensen type are provided for the generalized Sugeno integral of any measurable function, extending previous results for increasing functions and for convex and concave real-valued functions.
Abstract: In this paper we provide two-sided attainable bounds of Jensen type for the generalized Sugeno integral of any measurable function. The results extend the previous results of Roman-Flores et al. for increasing functions and Abbaszadeh et al. for convex and concave functions. We also give corrections of some results of Abbaszadeh et al. As a by-product, we obtain sharp inequalities for the symmetric integral of Grabisch. To the best of our knowledge, the results in the real-valued functions context are presented for the first time here.

Journal ArticleDOI
TL;DR: For strictly increasing concave functions φ whose inverse functions are log-concave, the φ-Brunn–Minkowski inequality for planar convex bodies is established in this paper, and for convex bodies in R^n it is shown to be equivalent to the φ-Minkowski mixed volume inequalities.
Abstract: For strictly increasing concave functions $${\varphi}$$ whose inverse functions are log-concave, the $${\varphi}$$ -Brunn–Minkowski inequality for planar convex bodies is established. It is shown that for convex bodies in $${\mathbb{R}^n}$$ the $${\varphi}$$ -Brunn–Minkowski inequality is equivalent to the $${\varphi}$$ -Minkowski mixed volume inequalities.