
Showing papers on "Concave function published in 1981"


Journal ArticleDOI
TL;DR: This algorithm may be viewed as a generalization of the proximal point algorithm to cope with non-convexity of the objective function by linearizing the differentiable term at each iteration.
Abstract: An algorithm is presented for minimizing a function which is the sum of a continuously differentiable function and a convex function. The class of such problems contains as a special case that of minimizing a continuously differentiable function over a closed convex set. This algorithm may be viewed as a generalization of the proximal point algorithm to cope with non-convexity of the objective function by linearizing the differentiable term at each iteration. Convergence of the algorithm is proved and the rate of convergence is analysed.
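The iteration described here is in the spirit of what is now called the proximal gradient (forward-backward) method. Below is a minimal illustrative sketch, not the paper's algorithm: for concreteness the convex term is assumed to be a scaled l1 norm, whose proximal map is soft-thresholding, and the smooth term is a simple nonconvex example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step=0.05, iters=500):
    """Linearize the smooth term at x_k and handle the convex term by its
    proximal map:  x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative problem: nonconvex smooth term f(x) = sum((x_i^2 - 1)^2) / 4
# plus the convex term g(x) = lam * ||x||_1.
lam = 0.1
grad_f = lambda x: x * (x**2 - 1.0)               # gradient of the smooth term
prox_g = lambda v, t: soft_threshold(v, lam * t)  # prox of t * g
print(proximal_gradient(grad_f, prox_g, x0=np.array([2.0, -1.5, 0.3])))
```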

189 citations


Journal ArticleDOI
TL;DR: A method is described for globally minimizing concave functions over convex sets whose defining constraints may be nonlinear; the method allows the objective function to be lower semicontinuous and nonseparable, and is guaranteed to converge to the global solution.
Abstract: A method is described for globally minimizing concave functions over convex sets whose defining constraints may be nonlinear. The algorithm generates linear programs whose solutions minimize the convex envelope of the original function over successively tighter polytopes enclosing the feasible region. The algorithm does not involve cuts of the feasible region, requires only simplex pivot operations and univariate search computations to be performed, allows the objective function to be lower semicontinuous and nonseparable, and is guaranteed to converge to the global solution. Computational aspects of the algorithm are discussed.
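For intuition about the convex-envelope subproblems (a standard construction, stated here independently of the paper's notation): on a simplex, the convex envelope of a concave function is the affine interpolation of its vertex values, which is why the relaxed problems reduce to linear programs.

```latex
% Convex envelope of a concave f over a simplex S = conv{v_0, ..., v_n}:
% express x in barycentric coordinates and interpolate the vertex values.
\[
  \varphi_S(x) = \sum_{i=0}^{n} \lambda_i f(v_i),
  \qquad
  x = \sum_{i=0}^{n} \lambda_i v_i, \quad
  \lambda_i \ge 0, \quad \sum_{i=0}^{n} \lambda_i = 1 .
\]
% By concavity, phi_S <= f on S; phi_S is affine, and any convex
% underestimator g satisfies g(x) <= sum_i lambda_i g(v_i) <= phi_S(x),
% so phi_S is the greatest convex function below f on S.
```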

121 citations


Book ChapterDOI
TL;DR: Interactive Multiple Goal Programming (IMGP) starts from the assumption that the decision maker has defined a number of goal variables g1(x), …, gm(x), these being concave functions of the instrumental variables x1, …, xn (x in vector notation).
Abstract: Interactive Multiple Goal Programming (IMGP) starts from the assumption that the decision maker has defined a number of goal variables g1(x), …, gm(x), these being concave functions of the instrumental variables x1, …, xn (x in vector notation).

79 citations


Journal ArticleDOI
TL;DR: In this paper, the Neyman-Pearson lemma for 2-alternating capacities is applied to test problems between noncompact neighbourhoods of probability measures, and it is shown that the Radon-Nikodym derivative between the special capacities is usually a nondecreasing function of the truncated likelihood ratio of some probability measures.
Abstract: Solutions to minimax test problems between neighbourhoods generated by specially defined capacities are discussed. The capacities are superpositions of probability measures and concave functions, so the paper covers most of the earlier results of Huber and Rieder concerning minimax testing between ɛ-contamination and total variation neighbourhoods. It is shown that the Neyman-Pearson lemma for 2-alternating capacities, proved by Huber and Strassen, can be applied to test problems between noncompact neighbourhoods of probability measures. It turns out that the Radon-Nikodym derivative between the special capacities is usually a nondecreasing function of the truncated likelihood ratio of some probability measures.
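For orientation (these are the standard examples, not this paper's definitions): the ε-contamination and total-variation neighbourhoods are generated by capacities of the form v(A) = h(P(A)) for A ≠ ∅, with h concave and nondecreasing, which is the kind of superposition the abstract refers to.

```latex
% Two classical 2-alternating capacities (Huber--Strassen), defined for
% nonempty A, with v(\emptyset) = 0:
\[
  v_\varepsilon(A) = (1 - \varepsilon)\,P(A) + \varepsilon
  \quad (\varepsilon\text{-contamination}),
  \qquad
  v_\delta(A) = \min\{P(A) + \delta,\; 1\}
  \quad (\text{total variation}).
\]
% Both are concave, nondecreasing functions of P(A) and both are
% 2-alternating, so the Huber--Strassen Neyman--Pearson lemma applies to
% the induced minimax testing problems.
```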

49 citations


Journal ArticleDOI
TL;DR: In this article, solution algorithms for minimizing a concave function over a convex polytope are categorized into three distinct areas: (1) branch-and-bound methods, (2) cutting-plane methods, and (3) build-up of polyhedra methods.
Abstract: Methodology is described for the global solution of problems of the form: minimize f(x) subject to x ∈ G, ui ⩾ xi ⩾ li, where f(x) is a concave function of the vector x in R^n and G is a convex polytope. This paper surveys recent work on this problem. Solution algorithms can be categorized into three distinct areas: (1) branch-and-bound methods, (2) cutting-plane methods, and (3) build-up of polyhedra methods. Computational experience with these methods is also examined. Particular attention is paid to the possibility of extending these algorithms to large-scale problems, specifically those with separable objective functions subject to linear sets of constraints.
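A fact that all three families of methods exploit is that a concave function attains its minimum over a compact polytope at one of its vertices. The sketch below is only a brute-force illustration of that fact over a box (hypothetical data, not one of the surveyed algorithms); it is exponential in the dimension, which is exactly why the surveyed methods work with bounds and cuts instead of full enumeration.

```python
import itertools
import numpy as np

def concave_min_over_box(f, lower, upper):
    """Minimize a concave f over the box [lower, upper] by enumerating its
    2^n corners; valid because a concave function attains its minimum over
    a polytope at a vertex (only feasible for small n)."""
    corners = itertools.product(*zip(lower, upper))
    return min(corners, key=lambda v: f(np.array(v)))

# Example with a separable concave objective, f(x) = -sum(x_i^2).
f = lambda x: -np.sum(x**2)
best = concave_min_over_box(f, lower=[-1.0, -2.0, 0.0], upper=[2.0, 1.0, 3.0])
print(best, f(np.array(best)))
```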

35 citations


Journal ArticleDOI
TL;DR: A modification of Tuy's cone splitting algorithm for minimizing a concave function subject to linear inequality constraints is shown in this paper to be convergent, by demonstrating that the limit of a sequence of constructed convex polytopes contains the feasible region.
Abstract: A modification of Tuy's cone splitting algorithm for minimizing a concave function subject to linear inequality constraints is shown to be convergent by demonstrating that the limit of a sequence of constructed convex polytopes contains the feasible region. No geometric tolerance parameters are required.

20 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that under certain requirements on the operator and on the nature of its dependence on the real parameter k, the sum of the eigenvalues of an operator of a certain type is a concave function of k, for any positive integer n.
Abstract: Let λ1(k) ≦ λ2(k) ≦ ⋯ be the eigenvalues of an operator of a certain type depending on a real parameter k. The paper shows that under certain requirements on the operator and on the nature of its dependence on k, the sum λ1(k) + ⋯ + λN(k) is a concave function of k, for any positive integer N.
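The mechanism behind results of this kind is easiest to see when the operator depends affinely on the parameter, say A(k) = A0 + kB (an illustrative assumption; the paper's hypotheses are stated more generally): by the Ky Fan variational principle the partial sum of the lowest eigenvalues is an infimum of functions affine in k, hence concave.

```latex
% Assume A(k) = A_0 + k B is self-adjoint and bounded below, with
% lambda_1(k) <= ... <= lambda_N(k) its lowest eigenvalues.
\[
  \lambda_1(k) + \cdots + \lambda_N(k)
  = \inf_{\substack{\varphi_1, \dots, \varphi_N \\ \text{orthonormal}}}
    \sum_{i=1}^{N} \bigl\langle (A_0 + k B)\varphi_i,\ \varphi_i \bigr\rangle .
\]
% For each fixed orthonormal family the sum on the right is affine in k,
% and a pointwise infimum of affine functions is concave, so the partial
% sum of eigenvalues is concave in k.
```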

4 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm is given for minimizing a concave function over a convex polyhedral set; it is based on the extension principle developed by Schoch and yields an exact optimal solution of the problem after a finite number of steps.
Abstract: For the problem of minimizing a concave function over a convex polyhedral set an algorithm is given, which is based on the extension principle developed by Schoch. This algorithm yields an exact optimal solution of the problem after a finite number of steps. Moreover, in the course of the algorithm one can obtain an approximate optimal solution with any given precision.

3 citations




Journal ArticleDOI
TL;DR: In this article, it was shown that f is concave if and only if, for each w, c(w, ·) is convex, where c denotes the cost function.
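The TL;DR presumably refers to the standard duality between a production function f and its cost function c. As a hedged illustration (the definitions below are the standard ones, not quoted from this paper), one direction of the equivalence follows in a line:

```latex
% Cost function associated with a production function f (standard setup):
\[
  c(w, y) = \min\{\, \langle w, x \rangle \;:\; f(x) \ge y,\ x \ge 0 \,\}.
\]
% If f is concave and x_1, x_2 attain c(w, y_1), c(w, y_2), then for
% 0 <= t <= 1 the point t x_1 + (1 - t) x_2 satisfies
% f(t x_1 + (1 - t) x_2) >= t y_1 + (1 - t) y_2, so it is feasible for
% the output level t y_1 + (1 - t) y_2, giving
\[
  c\bigl(w,\ t y_1 + (1 - t) y_2\bigr)
  \le t\, c(w, y_1) + (1 - t)\, c(w, y_2),
\]
% i.e. c(w, .) is convex in y for each w.
```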

01 Jan 1981
TL;DR: In this article, a method is proposed for the unconstrained minimization of a nondifferentiable (generalized differentiable) function, in which the direction of descent is chosen from the convex hull of generalized (anti)gradients taken over a fixed number of preceding iterations, while the step size is adjusted programmatically.
Abstract: A METHOD OF MINIMIZING AN UNDIFFERENTIABLE FUNCTION WITH GENERALIZED-GRADIENT AVERAGING, by V. I. Norkin (UDC 519.853.6). A method is given here for the unconstrained minimization of a nondifferentiable (generalized differentiable) function, in which the direction of descent is chosen from the convex hull of generalized (anti)gradients taken over a fixed number of preceding iterations, while the step size is adjusted programmatically. The method resembles that of [1], occupying a position intermediate between relaxation and nonrelaxation methods. Definition [2, 3]. A function F(x), x ∈ R^m, is called generalized differentiable if there exists an upper semicontinuous point-to-set mapping G(F): x ∈ R^m → G(F, x) ⊂ R^m such that the sets G(F, x) are bounded, convex, and closed, and at each point y ∈ R^m the following holds: F(x) = F(y) + (g(x), x − y) + o(y, x, g), (1) where o(y, x, g)/|x − y| → 0 uniformly as x → y and g ∈ G(F, x). The elements of the set G(F, x) are called the generalized gradients of F at the point x. The class of generalized differentiable functions contains continuously differentiable functions, convex functions, and concave functions, and it is closed under the finite operations of maximum, minimum, and superposition. The gradients of continuously differentiable functions and the subgradients of convex functions are generalized gradients of those functions. To calculate the generalized gradients of composite functions there are rules analogous to the rules for calculating ordinary gradients. A generalized differentiable function satisfies a local Lipschitz condition. A necessary condition for an extremum of F at x is 0 ∈ G(F, x) [2, 3]. To minimize F the following algorithm is used:
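The algorithm's update formula is cut off at the end of the scanned abstract above. Purely as an illustration of the idea described in the text (a descent direction taken from the convex hull of recent generalized gradients, with an adjusted step size), here is a hedged sketch; the window length, uniform averaging weights, step-size schedule, and all names are assumptions for illustration, not the paper's method.

```python
from collections import deque
import numpy as np

def averaged_subgradient_descent(subgrad, x0, window=5, step0=1.0, iters=500):
    """Illustrative nonsmooth minimization with gradient averaging: the
    descent direction is the plain average (a point of the convex hull)
    of generalized gradients from the last `window` iterates, and the
    step size decreases on a fixed schedule."""
    x = np.asarray(x0, dtype=float)
    history = deque(maxlen=window)
    for k in range(1, iters + 1):
        history.append(subgrad(x))
        direction = np.mean(list(history), axis=0)  # in conv{recent gradients}
        x = x - (step0 / k) * direction             # simple diminishing step
    return x

# Example on a convex but nondifferentiable function, f(x) = ||x||_1,
# whose subgradient sign(x) is a valid generalized gradient.
x_min = averaged_subgradient_descent(np.sign, x0=np.array([3.0, -2.0, 0.5]))
print(x_min)
```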