
Showing papers on "Concave function published in 1992"


Journal ArticleDOI
TL;DR: Dynamic programming solutions to two recurrence equations, used to compute a sequence alignment from a set of matching fragments between two strings, and to predict RNA secondary structure, are considered.
Abstract: Dynamic programming solutions to two recurrence equations, used to compute a sequence alignment from a set of matching fragments between two strings, and to predict RNA secondary structure, are considered. These recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set matters for the result. Efficient algorithms are given for solving these problems, when the cost of a gap in the alignment or a loop in the secondary structure is taken as a convex or concave function of the gap or loop length. The time complexity of our algorithms depends almost linearly on the number of points that need to be considered; when the problems are sparse, this results in a substantial speed-up over known algorithms.
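The recurrences themselves are not reproduced here, so the sketch below gives a naive reference alignment DP with a general gap cost w(k) (a hypothetical concave logarithmic penalty; the scores and sequences are made-up examples). It runs in cubic time and is emphatically not the paper's sparse algorithm; it only shows where a concave gap cost enters such a recurrence.

```python
# Naive global alignment with a general gap cost w(k); a slow reference sketch only,
# not the sparse algorithms described in the abstract above.
import math

def w(k):
    # Assumed example of a concave gap cost: opening penalty plus logarithmic extension.
    return 2.0 + 1.0 * math.log(k)

def align(a, b, match=1.0, mismatch=-1.0):
    n, m = len(a), len(b)
    NEG = float("-inf")
    S = [[NEG] * (m + 1) for _ in range(n + 1)]
    S[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            best = NEG
            if i > 0 and j > 0:
                s = match if a[i - 1] == b[j - 1] else mismatch
                best = S[i - 1][j - 1] + s
            # a gap of length k in either sequence is charged the concave cost w(k)
            for k in range(1, i + 1):
                best = max(best, S[i - k][j] - w(k))
            for k in range(1, j + 1):
                best = max(best, S[i][j - k] - w(k))
            S[i][j] = best
    return S[n][m]

print(align("ACCGT", "ACGGTT"))
```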

101 citations


Journal ArticleDOI
TL;DR: This paper proposes different methods for finding the global minimum of a concave function subject to quadratic separable constraints, and shows how the complementary slackness condition of the bilevel problem can be replaced by an equivalent system of convex and separable quadratic constraints.
Abstract: When the follower's optimality conditions are both necessary and sufficient, the nonlinear bilevel program can be solved as a global optimization problem. The complementary slackness condition is usually the complicating constraint in such problems. We show how this constraint can be replaced by an equivalent system of convex and separable quadratic constraints. In this paper, we propose different methods for finding the global minimum of a concave function subject to quadratic separable constraints. The first method is of the branch and bound type, and is based on rectangular partitions to obtain upper and lower bounds. Convergence of the proposed algorithm is also proved. For computational purposes, different procedures that accelerate the convergence of the proposed algorithm are analysed. The second method is based on piecewise linear approximations of the constraint functions. When the constraints are convex, the problem is reduced to global concave minimization subject to linear constraints. In the case of non-convex constraints, we use zero-one integer variables to linearize the constraints. The number of integer variables depends only on the concave parts of the constraint functions.
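As a small, self-contained illustration of the building block used by the second method (the function, breakpoint count, and names below are my own, not the paper's): a univariate concave function can be approximated from below by the piecewise-linear interpolant through breakpoints, since every chord of a concave function lies on or below it; in a mixed-integer model, 0-1 variables would then select the active piece of a non-convex constraint.

```python
# Piecewise-linear approximation of a concave function on [lo, hi].
# For a concave f, each chord lies on or below f, so the interpolant never overestimates f.
# The sample function and number of pieces are arbitrary choices for the demo.
import bisect
import math

def pwl_breakpoints(f, lo, hi, pieces):
    xs = [lo + (hi - lo) * i / pieces for i in range(pieces + 1)]
    ys = [f(x) for x in xs]
    return xs, ys

def pwl_eval(xs, ys, x):
    # Linear interpolation between the two breakpoints bracketing x.
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return (1 - t) * ys[i - 1] + t * ys[i]

f = lambda x: math.sqrt(x)            # a concave example
xs, ys = pwl_breakpoints(f, 0.0, 4.0, 8)
for x in (0.3, 1.7, 3.9):
    assert pwl_eval(xs, ys, x) <= f(x) + 1e-12   # interpolant underestimates f
print([round(pwl_eval(xs, ys, x), 4) for x in (0.3, 1.7, 3.9)])
```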

89 citations



Proceedings ArticleDOI
24 Oct 1992
TL;DR: The author obtains the first polynomial approximation algorithms for many NP-hard problems such as the parametric Euclidean traveling salesman problem.
Abstract: Consider a convex set P in R^d and a piecewise polynomial concave function F: P → R. Let A be an algorithm that, given a point x in R^d, computes F(x) if x is in P, or returns a concave polynomial p such that p(x) < 0 but p(y) ≥ 0 for every y in P. The author assumes that d is fixed and that all comparisons in A depend on the sign of polynomial functions of the input point. He shows that under these conditions, one can find max_P F in time which is polynomial in the number of arithmetic operations of A. Using this method he gives the first strongly polynomial algorithms for many nonlinear parametric problems in fixed dimension, such as the parametric max flow problem, the parametric minimum s-t distance, the parametric spanning tree problem and other problems. In addition he shows that in one dimension, the same result holds even if one only knows how to approximate the value of F. Specifically, if one can obtain an α-approximation for F(x) then one can α-approximate the value of max F. He thus obtains the first polynomial approximation algorithms for many NP-hard problems such as the parametric Euclidean traveling salesman problem.
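The paper's construction runs the oracle algorithm A symbolically on a parametric input; the sketch below is only a one-dimensional illustration of the setting it exploits, with made-up graph data, an arbitrary interval, and names of my own. The minimum-spanning-tree value under edge weights a_e + λ·b_e is a concave, piecewise-linear function of λ (a minimum of linear functions), so its maximum over an interval can be located by ternary search using the MST routine as a black-box oracle. This is not the paper's algorithm, which attains strongly polynomial bounds.

```python
# Parametric MST illustration: F(lam) = weight of an MST with edge weights a + lam*b
# is concave and piecewise linear in lam, so a simple ternary search maximizes it.

def mst_weight(n, edges, lam):
    """Kruskal's algorithm; edges are (a, b, u, v) with weight a + lam*b."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0.0
    for a, b, u, v in sorted(edges, key=lambda e: e[0] + lam * e[1]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += a + lam * b
    return total

def maximize_concave(F, lo, hi, iters=100):
    """Ternary search for the maximum of a concave function of one variable on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if F(m1) < F(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

edges = [(4.0, 1.0, 0, 1), (2.0, -1.0, 1, 2), (3.0, 0.5, 0, 2),
         (5.0, -2.0, 2, 3), (1.0, 2.0, 1, 3)]
lam_star = maximize_concave(lambda l: mst_weight(4, edges, l), 0.0, 5.0)
print(round(lam_star, 3), round(mst_weight(4, edges, lam_star), 3))
```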

20 citations


Journal ArticleDOI
TL;DR: In this paper, conditions for the existence of upper and lower bounds on convex quadratic objective functions subject to concave and convex quadratic constraints are presented.

13 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that every reasonable concave preference ordering possesses a concave utility function assuming values in a suitable non-standard extension of the reals, unless a certain finiteness (or piecewise linearity) condition holds.

12 citations


Book ChapterDOI
29 Apr 1992
TL;DR: An O(MP(log M + log² P)) algorithm for approximate regular expression matching for an arbitrary δ and any concave w is presented.
Abstract: Given a sequence A of length M and a regular expression R of length P, an approximate regular expression pattern matching algorithm computes the score of the best alignment between A and one of the sequences exactly matched by R. There are a variety of schemes for scoring alignments. In a concave gap-penalty scoring scheme, a function δ(a, b) gives the score of each aligned pair of symbols a and b, and a concave function w(k) gives the score of a sequence of unaligned symbols, or gap, of length k. A function w is concave if and only if it has the property that for all k > 1, w(k+1) − w(k) ⩽ w(k) − w(k−1).
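As a quick sanity check of the concavity condition just quoted, the snippet below uses an assumed logarithmic gap penalty (a common concave choice; the constants are arbitrary and not taken from the paper) and verifies that its forward differences w(k+1) − w(k) are non-increasing.

```python
# Check the discrete concavity property of a hypothetical gap penalty
# w(k) = g_open + g_extend * log(k): its forward differences are non-increasing.
import math

def w(k, g_open=3.0, g_extend=1.5):
    return g_open + g_extend * math.log(k)

diffs = [w(k + 1) - w(k) for k in range(1, 50)]
assert all(d2 <= d1 + 1e-12 for d1, d2 in zip(diffs, diffs[1:]))
print("w is concave on the tested range; first differences:", [round(d, 4) for d in diffs[:3]])
```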

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the expected local time at zero of a reflected Lévy process with no negative jumps, starting from the origin, is a concave function of the time variable.
Abstract: Simple necessary and sufficient conditions for a function to be concave in terms of its shifted Laplace transform are given. As an application of this result, we show that the expected local time at zero of a reflected Lévy process with no negative jumps, starting from the origin, is a concave function of the time variable. A special case is the expected cumulative idle time in an M/G/1 queue. An immediate corollary is the concavity of the expected value of the reflected Lévy process itself. A special case is the virtual waiting time in an M/G/1 queue.
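A small Monte Carlo illustration of the queueing special case mentioned above: for an M/M/1 queue started empty (an M/G/1 queue with exponential service; the rates, time grid, and replication count are arbitrary choices), the estimated expected cumulative idle time grows with t with roughly non-increasing increments, consistent with the claimed concavity. This is a simulation sketch, not part of the paper.

```python
# Monte Carlo estimate of the expected cumulative idle time of an M/M/1 queue on [0, t].
import random

def idle_up_to(t_end, lam=0.9, mu=1.0, rng=None):
    """Cumulative idle time on [0, t_end] of an M/M/1 server that starts empty."""
    rng = rng or random
    t, workload, idle = 0.0, 0.0, 0.0
    while True:
        step = min(rng.expovariate(lam), t_end - t)   # run until next arrival or t_end
        idle += max(0.0, step - workload)             # server is idle once workload hits 0
        workload = max(0.0, workload - step)
        t += step
        if t >= t_end - 1e-12:
            return idle
        workload += rng.expovariate(mu)               # work brought by the new arrival

rng = random.Random(1)
grid = [2.0, 4.0, 6.0, 8.0, 10.0]
means = [sum(idle_up_to(t, rng=rng) for _ in range(10000)) / 10000 for t in grid]
print([round(m, 3) for m in means])   # successive increments should be non-increasing
```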

11 citations



Journal ArticleDOI
TL;DR: It is shown that the linearly constrained minimization of the sum of a concave function of p variables and a linear function of q variables can be solved by a conical algorithm in a space of dimension p + 1 involving only linear programming subproblems in a space of dimension p + q + 1, where q may be much larger than p.
Abstract: In this paper, we are concerned with the linearly constrained global minimization of the sum of a concave function defined on a p-dimensional space and a linear function defined on a q-dimensional space, where q may be much larger than p. It is shown that a conical algorithm can be applied in a space of dimension p + 1 that involves only linear programming subproblems in a space of dimension p + q + 1. Some computational results are given.
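Written out (with generic symbols A, B, b, d for the linear data; this notation is illustrative and not necessarily the paper's), the problem class is

\[
\min_{x \in \mathbb{R}^p,\; y \in \mathbb{R}^q} \; f(x) + d^{\top} y
\quad \text{subject to} \quad Ax + By \le b ,
\]

where f is concave on R^p; per the abstract, the conical algorithm operates only in a space of dimension p + 1, while the q linear variables appear solely inside the linear programming subproblems.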

10 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that every function f : (0, ∞) → [0, ∞) satisfying the inequality af(s) + bf(t) ⩽ f(as + bt), s, t > 0, for some a and b such that 0 0).
Abstract: In the present note we prove that every function f: (0, ∞) → [0, ∞) satisfying the inequality af(s) + bf(t) ⩽ f(as + bt), s, t > 0, for some a and b such that 0 0). This improves our recent result in [2], where the inequality is assumed to hold for all s, t ⩾ 0, and gives a positive answer to the question raised there.

Journal ArticleDOI
TL;DR: This paper considers a network supply problem in which flows between any pair of nodes are possible, and argues that the cost function is approximately supportable in a well-defined sense when link costs are independent of capacity in such a network.
Abstract: This paper considers a network supply problem in which flows between any pair of nodes are possible. It is assumed that users place a value on connection to other users in the network, and (possibly) on access to an external source. Cost on each link is an arbitrary concave function of link capacity. The objective is to study coalitional stability in this situation, when collections of flows can be served by competing suppliers. In contrast to other network games, this approach focuses on the cost of serving flows rather than the cost of attaching nodes to the network. The network is said to be stable if the derived cost function is supportable. Supportable cost functions, defined by Sharkey and Telser [9], are cost functions for which there exists a price vector which covers total cost, and simultaneously deters entry at any lower output by a rival firm with the same cost function. If the minimal cost network includes a link between every pair of nodes, then the cost function is shown to be supportable. In the special case in which link cost is independent of capacity, the cost function is also supportable. The paper also considers “Steiner” networks in which new nodes may be created in order to minimize total cost, or in which access may be obtained at more than one source location. When link costs are independent of capacity in such a network, it is argued that the cost function is approximately supportable in a well defined sense.
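For reference, one way to formalize the supportability notion described verbally above (the symbols are mine, following the abstract's paraphrase of Sharkey and Telser [9]): a cost function C is supportable at the output vector q if there exists a price vector p ≥ 0 with

\[
p \cdot q \ge C(q) \qquad \text{and} \qquad p \cdot q' \le C(q') \quad \text{for all } 0 \le q' \le q ,
\]

so that the prices cover total cost while no rival firm with the same cost function can profitably undercut them at any smaller output q'.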


Journal ArticleDOI
M. Li Calzi1
TL;DR: This work defines and studies three classes of generalized order concave functions from chains to chains whose properties are very close to those of the family of quasiconcave functions on R.
Abstract: We introduce order-theoretic generalized concavity. This is the study of those properties of generalized concave functions which require only that some order relations are defined on the domain and the image of the functions. In this work we define and study three classes of generalized order concave functions from chains to chains whose properties are very close to those of the family of quasiconcave functions on R. Purely order-theoretic characterization theorems are provided.
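For comparison, the classical notion on R that these classes mimic: a function f defined on a convex set is quasiconcave when

\[
f(\lambda x + (1-\lambda) y) \;\ge\; \min\{f(x), f(y)\} \qquad \text{for all } x, y \text{ and } \lambda \in [0,1].
\]

The paper's classes replace this convex-combination structure by conditions stated purely in terms of the order relations on the (chain) domain and image.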

Journal ArticleDOI
TL;DR: In this article, the authors show that for a special function, which is proportional to the density of a Wishart distribution, reparametrization can lead to maximization of a concave function.

Journal ArticleDOI
TL;DR: In this paper, the authors give a proof of convergence of an iterative method for maximizing a concave function subject to inequality constraints involving convex functions, where each iteration is very simple computationally, involving only one of the constraints.
Abstract: This paper gives a proof of convergence of an iterative method for maximizing a concave function subject to inequality constraints involving convex functions. The linear programming problem is an important special case. The primary feature is that each iteration is very simple computationally, involving only one of the constraints. Although the paper is theoretical in nature, some numerical results are included.
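The abstract does not give the iteration itself, so the sketch below is only a generic illustration of the "one constraint per iteration" idea, not the method whose convergence the paper proves: gradient ascent on a concave objective, where each step additionally enforces a single, cyclically chosen linear constraint by projecting onto its halfspace if violated. All data, names, and the step rule are made up.

```python
# Illustrative only: maximize the concave f(x) = -((x1-2)^2 + (x2-2)^2) subject to
# x1 <= 1.5, x2 <= 1.0, x1 + x2 <= 2, touching one constraint per iteration.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.5, 1.0, 2.0]
c = (2.0, 2.0)                                     # unconstrained maximizer of f

x = [0.0, 0.0]
for k in range(3000):
    step = 0.5 / (k + 1)                           # diminishing step size
    x = [x[j] + step * 2.0 * (c[j] - x[j]) for j in range(2)]   # ascent step on f
    a = A[k % 3]                                   # consider a single constraint this iteration
    viol = a[0] * x[0] + a[1] * x[1] - b[k % 3]
    if viol > 0.0:                                 # project onto the halfspace a.x <= b
        nrm = a[0] * a[0] + a[1] * a[1]
        x = [x[j] - viol * a[j] / nrm for j in range(2)]
print([round(v, 3) for v in x])                    # settles near the constrained maximizer (1, 1)
```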

Book ChapterDOI
01 Jan 1992
Abstract: A class of generalized convex functions, the hyperbolic-concave functions, is defined, and used to characterize the collection of Hardy-Littlewood maximal functions. These maximal functions and the probability measures associated with these maximal functions, the maximal probability measures, are used in representations and inequalities within martingale theory. A related collection of minimal probability measures is also characterized, through a class of hyperbolic-concave envelopes.

Journal ArticleDOI
M. Behara1, Z. Dudek1
TL;DR: It is shown that for a sufficiently large number N of probability components the greatest value of a paraconcave entropy function is attained at the center of gravity of the simplex p1 + … + pN = 1, p1 ⩾ 0, …, pN ⩾ 0.
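The classical special case may help fix ideas (this is the standard fact for Shannon entropy, not a statement about the general paraconcave entropies studied in the paper): the Shannon entropy

\[
H(p_1, \dots, p_N) = -\sum_{i=1}^{N} p_i \log p_i
\]

is concave and symmetric on the simplex, so its maximum, log N, is attained at the center of gravity p1 = … = pN = 1/N.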


Journal ArticleDOI
TL;DR: In this article, the authors examined a model of optimal economic growth with an infinite continuous-time planning horizon, convex technology, and an aggregate discounted utility; they established the existence of optimal capital accumulation paths and then studied their variations, as well as those of the optimal value, as the data of the model are perturbed.