Topic: Concave function

About: Concave function is a research topic. Over the lifetime of the topic, 1,415 publications have been published, receiving 33,278 citations.
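For reference, the standard defining inequality (a textbook statement, not taken from any paper below): a function $f$ defined on a convex set $C$ is concave when

$$f(\lambda x + (1 - \lambda) y) \;\ge\; \lambda f(x) + (1 - \lambda) f(y) \qquad \text{for all } x, y \in C,\ \lambda \in [0, 1],$$

i.e. exactly when $-f$ is convex.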


Papers
Book Chapter
01 Jan 1997
TL;DR: In this article, the authors introduce new classes of vector generalized concave functions and point out their role in investigating local and global efficiency and in establishing sufficient optimality conditions for a vector optimization problem.
Abstract: One aim of this paper is to introduce new classes of vector generalized concave functions and to point out their role in investigating local and global efficiency and in establishing sufficient optimality conditions for a vector optimization problem. Another aim is to stress the role of the Bouligand tangent cone at a point of the feasible region in deriving optimality conditions.
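For context, the Bouligand (contingent) tangent cone that the abstract refers to has the standard definition (a general statement, not the paper's specific notation): for a feasible region $R$ and a point $\bar{x} \in R$,

$$T_R(\bar{x}) = \{\, d : \exists\, t_k \downarrow 0,\ \exists\, d_k \to d \ \text{such that} \ \bar{x} + t_k d_k \in R \ \text{for all } k \,\}.$$

A first-order necessary condition of the kind such papers build on then reads: if $\bar{x}$ is a local minimizer of a differentiable $f$ over $R$, then $\nabla f(\bar{x})^{T} d \ge 0$ for every $d \in T_R(\bar{x})$.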

7 citations

01 Jan 2014
TL;DR: In this article, the Hessian matrices of penalty functions, evaluated at their minimizing point, are studied through their condition number for small values of a controlling parameter, and different types of penalty functions are compared on that basis.
Abstract: This paper deals with the Hessian matrices of penalty functions, evaluated at their minimizing point. It is concerned with the condition number of these matrices for small values of a controlling parameter. At the end of the paper a comparison is made between different types of penalty functions on the grounds of the results obtained.

1. Classification of penalty-function techniques. Throughout this paper we shall be concerned with the problem

$$\text{minimize } f(x) \ \text{ subject to } \ g_i(x) \ge 0, \quad i = 1, \dots, m, \qquad (1.1)$$

where $x$ denotes an element of the $n$-dimensional vector space $E_n$. We shall be assuming that the functions $f, -g_1, \dots, -g_m$ are convex and twice differentiable with continuous second-order partial derivatives on an open, convex subset $V$ of $E_n$. The constraint set

$$R = \{\, x \mid g_i(x) \ge 0;\ i = 1, \dots, m \,\} \qquad (1.2)$$

is a bounded subset of $V$. The interior $R_0$ of $R$ is non-empty. We consider a number of penalty-function techniques for solving problem (1.1). One can distinguish two classes, both of which have been referred to by expressive names. The interior-point methods operate in the interior $R_0$ of $R$. The penalty function is given by

$$B_r(x) = f(x) - r \sum_{i=1}^{m} \varphi[g_i(x)], \qquad (1.3)$$

where $\varphi$ is a concave function of one variable, say $y$. Its derivative $\varphi'$ reads

$$\varphi'(y) = y^{-\nu} \qquad (1.4)$$

with a positive integer $\nu$. A point $x(r)$ minimizing (1.3) over $R_0$ then exists for any $r > 0$. Any convergent sequence $\{x(r_k)\}$, where $\{r_k\}$ is a monotonic, decreasing null sequence as $k \to \infty$, converges to a solution of (1.1). The exterior-point methods or outside-in methods present an approach to a minimum solution from outside the constraint set. The general form of the penalty function is given by

$$L_r(x) = f(x) - r^{-1} \sum_{i=1}^{m} \psi[g_i(x)], \qquad (1.5)$$

where $\psi$ is a concave function of one variable $y$ such that

$$\psi(y) = \begin{cases} 0 & \text{for } y \ge 0, \\ \omega(y) & \text{for } y \le 0. \end{cases}$$

The derivative $\omega'$ of $\omega$ is given by $\omega'(y) = (-y)^{\nu}$. Let $z(r)$ denote a point minimizing (1.5) over $E_n$. Any convergent sequence $\{z(r_k)\}$, where $\{r_k\}$ denotes again a monotonic, decreasing null sequence, converges to a solution of (1.1).

It will be convenient to extend the terminology that we have been using in previous papers. Following Murray 5) we shall refer to interior-point penalty functions of the type (1.3) as barrier functions. The exterior-point penalty functions (1.5) will briefly be indicated as loss functions, a name which has also been used by Fiacco and McCormick 1). Furthermore, we introduce a classification based on the behaviour of the functions $\varphi'$ and $\omega'$ in a neighbourhood of $y = 0$. A barrier function is said to be of order $\nu$ if the function $\varphi'$ has a pole of order $\nu$ at $y = 0$. Similarly, a loss function is of order $\nu$ if the function $\omega'$ has a zero of order $\nu$ at $y = 0$.

2. Conditioning. An intriguing point is the choice of a penalty function for numerical purposes. We shall not repeat here all the arguments supporting the choice of the first-order penalty functions for computational purposes. Our concern is an argument which has been introduced only by Murray 5), namely the question of "conditioning". This is a qualification referring to the Hessian matrix of a penalty function. The motivation for such a study is the idea that failures of (second-order) unconstrained-minimization techniques may be due to ill-conditioning of the Hessian matrix at some iteration points. Throughout this paper it is tacitly assumed that penalty functions are strictly convex, so that they have a unique minimizing point in their definition area.

We shall primarily be concerned with the Hessian matrix of penalty functions at the minimizing point. In what follows we shall refer to it as the principal Hessian matrix. The reason will be clear: in a neighbourhood of the minimizing point a useful approximation of a penalty function is given by a quadratic function, with the principal Hessian matrix as the coefficient matrix of the quadratic term. It is therefore reasonable to assume that unconstrained minimization […]
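To make the conditioning question concrete, here is a minimal runnable sketch (my own toy example, not the paper's code): a first-order ($\nu = 1$, logarithmic) barrier function of type (1.3) on a two-variable problem. The objective, the single constraint $g(x) = 1 - x_1 \ge 0$, the finite-difference step and the tolerances are all illustrative choices.

# Toy illustration: condition number of the barrier-function Hessian as r -> 0.
import numpy as np
from scipy.optimize import minimize

def f(x):                         # objective; its unconstrained minimum (2, 0) is infeasible
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def g(x):                         # single constraint g(x) >= 0
    return 1.0 - x[0]

def barrier(x, r):                # B_r(x) = f(x) - r * log g(x), i.e. (1.3) with nu = 1
    return f(x) - r * np.log(g(x)) if g(x) > 0.0 else np.inf

def hessian(x, r, h=1e-5):        # central-difference Hessian of B_r at x
    n = len(x)
    E = np.eye(n) * h
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (barrier(x + E[i] + E[j], r) - barrier(x + E[i] - E[j], r)
                       - barrier(x - E[i] + E[j], r) + barrier(x - E[i] - E[j], r)) / (4 * h * h)
    return H

x0 = np.array([0.0, 0.5])         # strictly feasible starting point
for r in [1.0, 1e-1, 1e-2, 1e-3]:
    res = minimize(lambda x: barrier(x, r), x0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 20000})
    cond = np.linalg.cond(hessian(res.x, r))
    print(f"r = {r:.0e}  x(r) = {np.round(res.x, 4)}  cond(Hessian) = {cond:.2e}")
    x0 = res.x                    # warm-start the next, more ill-conditioned subproblem

On this toy problem the minimizer $x(r)$ approaches the boundary at distance $O(r)$, and the printed condition numbers grow roughly like $2/r$, which is exactly the kind of ill-conditioning of the principal Hessian matrix that the paper sets out to quantify.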

7 citations

Proceedings Article
24 Jul 2011
TL;DR: This paper presents an extreme-point-based finite-termination procedure for globally maximizing the associated Lagrangian dual function, which is, in general, piecewise affine and concave.
Abstract: Prices in electricity markets are given by the dual variables associated with the supply-demand constraint in the dispatch problem. However, in unit-commitment-based day-ahead markets, these variables are less easy to obtain. A common approach relies on resolving the dispatch problem with the commitment decisions fixed and utilizing the associated dual variables. Yet, this avenue leads to inadequate revenues to generators and necessitates an uplift payment to be made by the market operator. Recently, a convex hull pricing scheme has been proposed to reduce the impact of such payments and requires the global maximization of the associated Lagrangian dual problem, which is, in general, a piecewise-affine concave function. In this paper, we present an extreme-point-based finite-termination procedure for obtaining such a global maximizer. Unlike standard subgradient schemes where an arbitrary subgradient is used, we present a novel technique where the steepest ascent direction is constructed by solving a continuous quadratic program. The scheme initiates a move along this direction with an a priori constant steplength, with the intent of reaching the boundary of the face. A backtracking scheme allows for mitigating the impact of excessively large steps. Termination of the scheme occurs when the set of subgradients contains the zero vector. Preliminary numerical tests are seen to be promising and display the finite-termination property. Furthermore, the scheme is seen to significantly outperform standard subgradient methods.
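As a concrete illustration of the key step, here is a minimal sketch (my own toy reconstruction, not the authors' implementation): for a piecewise-affine concave function $q(\lambda) = \min_i (a_i^T \lambda + b_i)$, the steepest ascent direction at $\lambda$ is the minimum-norm element of the convex hull of the gradients of the active pieces. The paper obtains it from a continuous quadratic program; the sketch below substitutes a small projected-gradient solve over the simplex, and the pieces, steplength and tolerances are illustrative.

# Steepest ascent on a piecewise-affine concave function via min-norm supergradients.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_norm_point(G, iters=500):
    """Min-norm point of the convex hull of the rows of G (projected gradient)."""
    w = np.full(G.shape[0], 1.0 / G.shape[0])
    Q = G @ G.T
    step = 1.0 / max(np.linalg.eigvalsh(Q).max(), 1e-12)
    for _ in range(iters):
        w = project_simplex(w - step * (Q @ w))
    return G.T @ w

def maximize(A, b, lam, t0=1.0, tol=1e-9, max_iter=200):
    """Maximize q(lam) = min_i (A[i]·lam + b[i]) by steepest ascent."""
    q = lambda x: (A @ x + b).min()
    for _ in range(max_iter):
        vals = A @ lam + b
        active = vals <= vals.min() + tol        # pieces attaining the minimum
        d = min_norm_point(A[active])            # steepest ascent direction
        if np.linalg.norm(d) < 1e-8:             # 0 is a supergradient: optimal
            return lam, q(lam)
        t = t0                                   # a priori constant steplength ...
        while q(lam + t * d) <= q(lam) and t > 1e-12:
            t *= 0.5                             # ... with backtracking on overshoot
        lam = lam + t * d
    return lam, q(lam)

# Five affine pieces whose pointwise minimum is maximized at the origin.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0]])
b = np.zeros(5)
lam_star, q_star = maximize(A, b, lam=np.array([2.0, -1.0]))
print(lam_star, q_star)   # close to [0, 0] and 0

On this example the scheme stops after three iterations, once the minimum-norm supergradient is numerically zero, mirroring the termination criterion (zero vector in the set of subgradients) described in the abstract.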

7 citations

Journal Article
TL;DR: In the regime where the empirical degree distribution approaches a limit μ with finite mean, the paper establishes the systematic convergence of a broad class of graph parameters, including the independence number, the maximum cut size, the logarithm of the Tutte polynomial, and the free energy of the anti-ferromagnetic Ising and Potts models.
Abstract: We consider large random graphs with prescribed degrees, such as those generated by the configuration model. In the regime where the empirical degree distribution approaches a limit $\mu$ with finite mean, we establish the systematic convergence of a broad class of graph parameters that includes in particular the independence number, the maximum cut size and the log-partition function of the antiferromagnetic Ising and Potts models. The corresponding limits are shown to be Lipschitz and concave functions of $\mu$. Our work extends the applicability of the celebrated interpolation method, introduced in the context of spin glasses, and recently related to the fascinating problem of right-convergence of sparse graphs.
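For readers unfamiliar with the configuration model mentioned above, a minimal sketch (my own illustration, with an arbitrary degree sequence): the model pairs up vertex "stubs" (half-edges) uniformly at random, producing a random multigraph with the prescribed degrees.

# Configuration model: uniform random pairing of half-edges ("stubs").
import random

def configuration_model(degrees, seed=0):
    """Edge list of a random multigraph with the prescribed degree sequence."""
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    rng.shuffle(stubs)                 # uniform random perfect matching of the stubs
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

# 1000 vertices; the empirical degree distribution (half degree 2, half degree 3)
# plays the role of the limit mu with finite mean in the abstract.
degrees = [2] * 500 + [3] * 500
edges = configuration_model(degrees)
print(len(edges), "edges,", sum(u == v for u, v in edges), "self-loops")

Self-loops and multiple edges are allowed in the model; in the sparse regime studied in the paper they are rare, and per the abstract, suitably normalized parameters such as the independence number converge to limits that are Lipschitz and concave in $\mu$.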

7 citations

01 Jan 2003
TL;DR: A generalized convexity framework similar to a classical concept of E. F. Beckenbach is presented, which provides an inequality that can be used to derive stable and effective descent algorithms for estimating the parameters of the Bayesian linear model when sub- or super-gaussian priors are used.
Abstract: Component estimation arises in Independent Component Analysis (ICA), Blind Source Separation (BSS), wavelet analysis and signal denoising [1], image reconstruction [2, 3], Factor Analysis [4], and sparse coding [5, 6]. In theoretical and algorithmic developments, an important distinction is commonly made between sub- and super-gaussian densities, super-gaussian densities being characterized as having high kurtosis, or having a sharp peak and heavy tails. In this paper we present a generalized convexity framework similar to a classical concept of E. F. Beckenbach [7], which we refer to as relative convexity. Based on a partial ordering induced by relative convexity, we derive a new measure of function curvature and a new criterion for super-gaussianity that is both simpler and of wider application than the kurtosis criterion. The relative convexity framework also provides an inequality that can be used to derive stable and effective descent algorithms for estimation of the parameters in the Bayesian linear model when sub- or super-gaussian priors are used. It appears that almost all common symmetric densities are comparable to the Gaussian in this ordering, and are thus either sub- or super-gaussian, despite the fact that the measure is instantaneous, in contrast to moment-based measures. We present several algorithms for component estimation that are shown to be descent algorithms based on the relative convexity inequality arising from the assumption of super-gaussian priors. We also show an interesting relationship between the curvature of a convex or concave function and the curvature of its Fenchel-Legendre conjugate, which results in an elegant duality relationship between estimation with sub- and super-gaussian densities.
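For context, the kurtosis criterion that the paper's relative-convexity ordering is said to simplify and generalize is the standard one (textbook definition, not the paper's notation): a zero-mean symmetric density $p$ is called super-gaussian when its excess kurtosis is positive,

$$\kappa(p) \;=\; \frac{\mathrm{E}[x^4]}{\left(\mathrm{E}[x^2]\right)^2} - 3 \;>\; 0,$$

with $\kappa = 0$ for the Gaussian and $\kappa < 0$ for sub-gaussian densities.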

7 citations


Network Information
Related Topics (5)
Markov chain
51.9K papers, 1.3M citations
74% related
Bounded function
77.2K papers, 1.3M citations
74% related
Polynomial
52.6K papers, 853.1K citations
72% related
Upper and lower bounds
56.9K papers, 1.1M citations
72% related
Eigenvalues and eigenvectors
51.7K papers, 1.1M citations
72% related
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    16
2022    40
2021    58
2020    49
2019    52
2018    60