
Showing papers on "Linear-fractional programming published in 1998"


Book
01 Mar 1998
TL;DR: A monograph on the theory of linear semi-infinite programming, covering modelling with both the primal and the dual problem, linear semi-infinite systems, optimality and duality theory, and numerical methods.
Abstract: MODELLING. Modelling with the Primal Problem. Modelling with the Dual Problem. LINEAR SEMI-INFINITE SYSTEMS. Alternative Theorems. Consistency. Geometry. Stability. THEORY OF LINEAR SEMI-INFINITE PROGRAMMING. Optimality. Duality. Extremality and Boundedness. Stability and Well-Posedness. METHODS OF LINEAR SEMI-INFINITE PROGRAMMING. Local Reduction and Discretization Methods. Simplex-Like and Exchange Methods. Appendix. Symbols and Abbreviations. References. Index.

551 citations


Journal ArticleDOI
TL;DR: An edited volume on sensitivity analysis and parametric programming, ranging from a historical sketch (T. Gal) and the basic principles of linear programming (H.J. Greenberg) to the optimal set and optimal partition approach (A.B. Berkelaar et al.).
Abstract: Foreword. Preface. 1. A Historical Sketch on Sensitivity Analysis and Parametric Programming T. Gal. 2. A Systems Perspective: Entity Set Graphs H. Muller-Merbach. 3. Linear Programming 1: Basic Principles H.J. Greenberg. 4. Linear Programming 2: Degeneracy Graphs T. Gal. 5. Linear Programming 3: The Tolerance Approach R.E. Wendell. 6. The Optimal Set and Optimal Partition Approach A.B. Berkelaar, et al. 7. Network Models G.L. Thompson. 8. Qualitative Sensitivity Analysis A. Gautier, et al. 9. Integer and Mixed-Integer Programming C. Blair. 10. Nonlinear Programming A.S. Drud, L. Lasdon. 11. Multi-Criteria and Goal Programming J. Dauer, Yi-Hsin Liu. 12. Stochastic Programming and Robust Optimization H. Vladimirou, S.A. Zenios. 13. Redundancy R.J. Caron, et al. 14. Feasibility and Viability J.W. Chinneck. 15. Fuzzy Mathematical Programming H.-J. Zimmermann. Subject Index.

195 citations


Journal ArticleDOI
TL;DR: Computational tests of three approaches to feature selection via concave minimization on publicly available real-world databases are carried out and compared with an adaptation of the optimal brain damage method for reducing neural network complexity.
Abstract: The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints. Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage method for reducing neural network complexity. One feature selection algorithm via concave minimization reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4.

191 citations
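The concave-exponential approximation of the step function mentioned in the abstract above can be sketched in a few lines. The function name, the choice of alpha, and the sample points below are illustrative, not taken from the paper:

```python
import math

def step_approx(v, alpha=5.0):
    """Concave-exponential approximation 1 - exp(-alpha*v) of the step
    function on the nonnegative real line (0 at v = 0, ~1 for v > 0)."""
    return 1.0 - math.exp(-alpha * v)

# The approximation is exact at v = 0 and approaches 1 as v grows;
# a larger alpha gives a sharper approximation of the step.
for alpha in (1.0, 5.0, 25.0):
    print(alpha, [round(step_approx(v, alpha), 3) for v in (0.0, 0.1, 0.5, 1.0)])
```

Because the approximation is concave on the nonnegative reals, minimizing a sum of such terms over the absolute values of the weights suppresses features (drives weights to exactly zero) while remaining tractable.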


Reference EntryDOI
01 Jan 1998
TL;DR: An edited volume presenting a comprehensive survey of linear semi-infinite optimization theory, numerical methods for semi-infinite programming, and applications including reliability testing, digital filter design via convex optimization, and control.
Abstract: Preface. Part I: Theory. 1. A Comprehensive Survey of Linear Semi-Infinite Optimization Theory M.A. Goberna, M.A. Lopez. 2. On Stability and Deformation in Semi-Infinite Optimization H.Th. Jongen, J.J. Ruckmann. 3. Regularity and Stability in Nonlinear Semi-Infinite Optimization D. Klatte, R. Henrion. 4. First and Second Order Optimality Conditions and Perturbation Analysis of Semi-Infinite Programming Problems A. Shapiro. Part II: Numerical Methods. 5. Exact Penalty Function Methods for Nonlinear Semi-Infinite Programming I.D. Coope, C.J. Price. 6. Feasible Sequential Quadratic Programming for Finely Discretized Problems from SIP C.T. Lawrence, A.L. Tits. 7. Numerical Methods for Semi-Infinite Programming: A Survey R. Reemtsen, S. Goerner. 8. Connections Between Semi-Infinite and Semidefinite Programming L. Vandenberghe, S. Boyd. Part III: Applications. 9. Reliability Testing and Semi-Infinite Linear Programming I. Kuban Altinel, S. OEzekici. 10. Semi-Infinite Programming in Orthogonal Wavelet Filter Design K.O. Kortanek, P. Moulin. 11. The Design of Nonrecursive Digital Filters via Convex Optimization A.W. Potchinkov. 12. Semi-Infinite Programming in Control E.W. Sachs.

114 citations


Journal ArticleDOI
TL;DR: This work considers the problem of constructing prefix-free codes of minimum cost when the encoding alphabet contains letters of unequal length and introduces a new dynamic programming solution that optimally encodes n words in O(n^(C+2)) time.
Abstract: We consider the problem of constructing prefix-free codes of minimum cost when the encoding alphabet contains letters of unequal length. The complexity of this problem has been unclear for thirty years with the only algorithm known for its solution involving a transformation to integer linear programming. We introduce a new dynamic programming solution to the problem. It optimally encodes n words in O(n^(C+2)) time, if the costs of the letters are integers between 1 and C. While still leaving open the question of whether the general problem is solvable in polynomial time, our algorithm seems to be the first one that runs in polynomial time for fixed letter costs.

95 citations


Journal ArticleDOI
TL;DR: The softness and robustness of optimality for linear programming problems with a fuzzy objective function are discussed, and a solution algorithm for the best necessarily soft-optimal solution is proposed.

92 citations


Book ChapterDOI
Xiaotie Deng1
01 Jan 1998
TL;DR: This chapter reviews results obtained with several co-authors on problems related to multiple level programming along three directions: complexity, polynomial algorithms for special cases, and potentially important application areas of multiple level programming techniques.
Abstract: In this chapter, we review some results with several co-authors for problems related to multiple level programming along three directions: complexity, polynomial algorithms for special cases, and potentially important application areas of multiple level programming techniques.

91 citations


Journal ArticleDOI
TL;DR: In this paper, a new computational test is proposed for nonexistence of a solution to a system of nonlinear equations in a convex polyhedral region X. The basic idea is to formulate a linear programming problem whose feasible region contains all solutions in X.
Abstract: A new computational test is proposed for nonexistence of a solution to a system of nonlinear equations in a convex polyhedral region X. The basic idea proposed here is to formulate a linear programming problem whose feasible region contains all solutions in X. Therefore, if the feasible region is empty (which can be easily checked by Phase I of the simplex method), then the system of nonlinear equations has no solution in X. The linear programming problem is formulated by surrounding the component nonlinear functions by rectangles using interval extensions. This test is much more powerful than the conventional test if the system of nonlinear equations consists of many linear terms and a relatively small number of nonlinear terms. By introducing the proposed test to interval analysis, all solutions of nonlinear equations can be found very efficiently.

89 citations
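The nonexistence test described above can be illustrated on a toy system. The equation and the box below are made up for illustration (the paper's formulation uses interval extensions of all component nonlinear functions); the idea is to replace each nonlinear term by a fresh variable bounded by its interval range over the box:

```python
import math
from scipy.optimize import linprog

# Toy equation on the box X = [0,1]^2:
#     x1 + x2 + sin(x1) - 5 = 0
# Over X the nonlinear term sin(x1) lies in [0, sin(1)].  Replacing it by a
# variable z restricted to that interval gives a linear feasibility problem
# whose feasible set contains every solution of the equation in X:
#     x1 + x2 + z = 5,  0 <= x1, x2 <= 1,  0 <= z <= sin(1).
res = linprog(
    c=[0, 0, 0],                      # Phase-I style: any objective works
    A_eq=[[1, 1, 1]], b_eq=[5],
    bounds=[(0, 1), (0, 1), (0, math.sin(1))],
)
if not res.success:
    print("LP infeasible: the equation has no solution in X")
```

Here the left-hand side is at most 1 + 1 + sin(1) < 3, so the LP is infeasible and the box X can be discarded without any further nonlinear analysis.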


Journal ArticleDOI
TL;DR: It is shown that, under suitable assumptions, the program's optimum value can be approximated by the values of finite-dimensional linear programs, and that every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the original problem.
Abstract: This paper presents approximation schemes for an infinite linear program. In particular, it is shown that, under suitable assumptions, the program's optimum value can be approximated by the values of finite-dimensional linear programs, and that, in addition, every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the original problem.

71 citations
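The flavor of approximating an infinite program by finite-dimensional LPs can be illustrated with a small semi-infinite program. The instance below (a quadratic dominating an exponential) is a hypothetical example, not one from the paper:

```python
import math
from scipy.optimize import linprog

def discretized_value(n):
    """Minimize the integral of p(t) = x0 + x1*t + x2*t^2 over [0,1] subject
    to p(t) >= exp(t), enforcing the constraint only on an n-point grid
    (a relaxation of the infinite constraint set 'for all t in [0,1]')."""
    c = [1.0, 0.5, 1.0 / 3.0]          # integrals of 1, t, t^2 over [0,1]
    ts = [k / (n - 1) for k in range(n)]
    # p(t) >= exp(t)  <=>  -x0 - x1*t - x2*t^2 <= -exp(t)
    A = [[-1.0, -t, -t * t] for t in ts]
    b = [-math.exp(t) for t in ts]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 3)
    return res.fun

# Nested grids: each refinement only adds constraints, so the finite optimal
# values increase monotonically toward the value of the infinite program.
vals = [discretized_value(n) for n in (5, 9, 17, 33)]
print(vals)
```

The monotone convergence of the finite values mirrors the approximation result in the abstract, though the paper's setting (general infinite linear programs) is broader than this semi-infinite example.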


Journal ArticleDOI
TL;DR: In this paper, a piecewise linear structure of the cost coefficients is exploited to find the minimax regret solution to a linear program with interval objective function coefficients using an algorithm that, at each iteration, solves a linear program to generate a candidate solution and a mixed integer program to find a corresponding maximum regret.

63 citations


Journal ArticleDOI
TL;DR: A method for solving probabilistic linear programming problems with exponential random variables is presented; a non-linear programming algorithm can then be used to solve the resulting deterministic problem.

Book
01 Jan 1998
TL;DR: A monograph covering stochastic linear programming models and algorithms, their implementation in a testing environment, computational results, and algorithmic concepts in convex programming.
Abstract: 1. Stochastic Linear Programming Models 2. Stochastic Linear Programming Algorithms 3. Implementation. The Testing Environment 4. Computational Results 5. Algorithmic Concepts in Convex Programming

Journal ArticleDOI
TL;DR: A variety of variance-reduction techniques that can be used to improve the quality of the objective-function approximations derived from sampled data are presented within the context of SLP objective-function evaluations.
Abstract: Planning under uncertainty requires the explicit representation of uncertain quantities within an underlying decision model. When the underlying model is a linear program, the representation of certain data elements as random variables results in a stochastic linear program (SLP). Precise evaluation of an SLP objective function requires the solution of a large number of linear programs, one for each possible combination of the random variables' outcomes. To reduce the effort required to evaluate the objective function, approximations, especially those derived from randomly sampled data, are often used. In this article, we explore a variety of variance-reduction techniques that can be used to improve the quality of the objective-function approximations derived from sampled data. These techniques are presented within the context of SLP objective-function evaluations. Computational results offering an empirical comparison of the level of variance reduction afforded by the various methods are included.
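Antithetic variates are one classic example of the variance-reduction techniques this article surveys. The sketch below is a made-up stand-in for an SLP objective evaluation: the integrand max(0, Z − 0.5) plays the role of a simple recourse term, and the pairing (Z, −Z) induces negative correlation because the integrand is monotone in Z:

```python
import numpy as np

rng = np.random.default_rng(0)

def plain_estimate(n):
    """Crude Monte Carlo estimate of E[max(0, Z - 0.5)], Z ~ N(0,1)."""
    z = rng.standard_normal(n)
    return np.maximum(0.0, z - 0.5).mean()

def antithetic_estimate(n):
    """Same estimate from n/2 antithetic pairs (Z, -Z); the two halves of
    each pair are negatively correlated, reducing the estimator variance."""
    z = rng.standard_normal(n // 2)
    vals = 0.5 * (np.maximum(0.0, z - 0.5) + np.maximum(0.0, -z - 0.5))
    return vals.mean()

# Compare the variability of the two estimators over repeated runs.
plain = [plain_estimate(1000) for _ in range(200)]
anti = [antithetic_estimate(1000) for _ in range(200)]
print(np.var(plain), np.var(anti))
```

Both estimators use the same number of random draws per run; the antithetic version simply recycles each draw with its negation, so the variance reduction comes for free.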

Journal ArticleDOI
TL;DR: A new method, based on the nested dissection heuristic, provides significantly better orderings than the most commonly used ordering method, minimum degree, on a variety of large-scale linear programming problems.
Abstract: The main cost of solving a linear programming problem using an interior point method is usually the cost of solving a series of sparse, symmetric linear systems of equations, AΘAᵀx = b. These systems are typically solved using a sparse direct method. The first step in such a method is a reordering of the rows and columns of the matrix to reduce fill in the factor and/or reduce the required work. This article evaluates several methods for performing fill-reducing ordering on a variety of large-scale linear programming problems. We find that a new method, based on the nested dissection heuristic, provides significantly better orderings than the most commonly used ordering method, minimum degree.
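SciPy's SuperLU interface does not expose a nested dissection ordering, but the effect of fill-reducing orderings described above can still be demonstrated by comparing no reordering against minimum degree on a model problem. The grid Laplacian below is an illustrative stand-in for a matrix of the form AΘAᵀ:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def grid_laplacian(k):
    """Sparse SPD matrix: 2-D k-by-k grid Laplacian, shifted by I so the
    LU factorization is well defined without pivoting concerns."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    return (sp.kron(I, T) + sp.kron(T, I) + sp.identity(k * k)).tocsc()

A = grid_laplacian(30)

def factor_nnz(permc_spec):
    """Total nonzeros in the L and U factors under a given ordering."""
    lu = splu(A, permc_spec=permc_spec)
    return lu.L.nnz + lu.U.nnz

natural = factor_nnz("NATURAL")           # no reordering
min_degree = factor_nnz("MMD_AT_PLUS_A")  # minimum-degree ordering
print(natural, min_degree)
```

The minimum-degree ordering produces markedly fewer nonzeros in the factors than the natural ordering; the article's point is that nested dissection improves on minimum degree by a further margin on large LP matrices.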

Journal ArticleDOI
TL;DR: In this paper, several approaches for overcoming the difficulties of the multi-surface method are suggested, and it is shown that, using the suggested methods, additional learning can easily be performed.
Abstract: Pattern classification is one of the main themes in pattern recognition, and has been tackled by several methods such as the statistic one, artificial neural networks, mathematical programming and so on. Among them, the multi-surface method proposed by Mangasarian is very attractive, because it can provide an exact discrimination function even for highly nonlinear problems without any assumption on the data distribution. However, the method often causes many slits on the discrimination curve. In other words, the piecewise linear discrimination curve is sometimes too complex resulting in a poor generalization ability. In this paper, several trials in order to overcome the difficulties of the multi-surface method are suggested. One of them is the utilization of goal programming in which the auxiliary linear programming problem is formulated as a goal programming in order to get as simple discrimination curves as possible. Another one is to apply fuzzy programming by which we can get fuzzy discrimination curves with gray zones. In addition, it will be shown that using the suggested methods, the additional learning can be easily made. These features of the methods make the discrimination more realistic. The effectiveness of the methods is shown on the basis of some applications.

01 Jan 1998
TL;DR: This paper considers the inverse linear programming problem under the L1 norm (where we minimize Σ_{j∈J} |d_j − c_j|, with J denoting the index set of variables x_j) and under the L∞ norm (where we minimize max{|d_j − c_j| : j ∈ J}), and shows that (under reasonable regularity conditions) the inverse versions of P under the L1 and L∞ norms are also polynomially solvable.

Abstract: In this paper, we study inverse optimization problems defined as follows: Let S denote the set of feasible solutions of an optimization problem P, let c be a specified cost vector, and let x⁰ be a given feasible solution. The solution x⁰ may or may not be an optimal solution of P with respect to the cost vector c. The inverse optimization problem is to perturb the cost vector c to d so that x⁰ is an optimal solution of P with respect to d and ||d − c||_p is minimum, where ||d − c||_p is some selected L_p norm. In this paper, we consider the inverse linear programming problem under the L1 norm (where we minimize Σ_{j∈J} |d_j − c_j|, with J denoting the index set of variables x_j) and under the L∞ norm (where we minimize max{|d_j − c_j| : j ∈ J}). We show that the dual of the inverse linear programming problem under the L1 norm reduces to a modification of the original problem obtained by eliminating the non-binding constraints (with respect to x⁰) and imposing the following additional lower and upper bound constraints: |x_j − x_j⁰| ≤ 1 for all j ∈ J. We next study the inverse linear programming problem under the L∞ norm and show that its dual reduces to a modification of the original problem obtained by eliminating the non-binding constraints (with respect to x⁰) and imposing the following single additional constraint: Σ_{j∈J} |x_j − x_j⁰| ≤ 1. Finally, we show that (under reasonable regularity conditions) if the problem P is polynomially solvable, then the inverse versions of P under the L1 and L∞ norms are also polynomially solvable. This result uses ideas from the ellipsoid algorithm and, therefore, does not lead to combinatorial algorithms for solving inverse optimization problems.
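Once the binding pattern of x⁰ is fixed, the L1 inverse problem can be written as an ordinary linear program via the LP optimality conditions. The tiny instance below is a made-up illustration of that idea, not the paper's dual reduction:

```python
from scipy.optimize import linprog

# Toy forward problem P:  min c^T x  s.t.  x1 + x2 >= 2,  x >= 0,  c = (1, 2).
# The given point x0 = (0, 2) is feasible but NOT optimal under c.
# x0 is optimal under a cost vector d iff there are multipliers y >= 0 (for
# x1 + x2 >= 2) and s >= 0 (for x >= 0) with d = A^T y + s and complementary
# slackness.  For this x0 the inequality is binding (y may be positive) and
# x0_2 > 0 forces s2 = 0.  Writing d = c + p - q with p, q >= 0, minimizing
# sum(p + q) = ||d - c||_1 gives an LP.
# Variable order: [p1, p2, q1, q2, y, s1]
c_obj = [1, 1, 1, 1, 0, 0]
# d1 = y + s1  ->  p1 - q1 - y - s1 = -c1 = -1
# d2 = y       ->  p2 - q2 - y      = -c2 = -2
A_eq = [[1, 0, -1, 0, -1, -1],
        [0, 1, 0, -1, -1, 0]]
b_eq = [-1, -2]
res = linprog(c_obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.fun)  # minimal L1 perturbation of c that makes x0 optimal
```

For this instance the minimal perturbation cost is 1: for example, lowering c2 from 2 to 1 makes x⁰ = (0, 2) optimal.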

Journal ArticleDOI
TL;DR: It is proved that the problem of checking existence of optimal solutions to all linear programming problems whose data range in prescribed intervals is NP‐hard.
Abstract: We prove that the problem of checking existence of optimal solutions to all linear programming problems whose data range in prescribed intervals is NP-hard.

01 Jan 1998
TL;DR: Linear programming is a branch of applied mathematics that deals with solving optimization problems of a particular form, and is closely related to linear algebra; the most noticeable difference is that linear programming often uses inequalities in the problem statement rather than equalities.
Abstract: 1.1 Definition Linear programming is the name of a branch of applied mathematics that deals with solving optimization problems of a particular form. Linear programming problems consist of a linear cost function (consisting of a certain number of variables) which is to be minimized or maximized subject to a certain number of constraints. The constraints are linear inequalities of the variables used in the cost function. The cost function is also sometimes called the objective function. Linear programming is closely related to linear algebra; the most noticeable difference is that linear programming often uses inequalities in the problem statement rather than equalities. 1.2 History Linear programming is a relatively young mathematical discipline, dating from the invention of the simplex method by G. B. Dantzig in 1947. Historically, development in linear programming has been driven by its applications in economics and management. Dantzig initially developed the simplex method to solve U.S. Air Force planning problems, and planning and scheduling problems still dominate the applications of linear programming. One reason that linear programming is a relatively new field is that only the smallest linear programming problems can be solved without a computer. 1.3 Example (Adapted from [1].) Linear programming problems arise naturally in production planning. Suppose a particular Ford plant can build Escorts at the rate of one per minute, Explorers at the rate of one every 2 minutes, and Lincoln Navigators at the rate of one every 3 minutes. The vehicles get 25, 15, and 10 miles per gallon, respectively, and Congress mandates that the average fuel economy of vehicles produced be at least 18 miles per gallon. Ford loses $1000 on each Escort, but makes a profit of $5000 on each Explorer and $15,000 on each Navigator. What is the maximum profit this Ford plant can make in one 8-hour day?
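The closing question of the example can be answered by solving the LP relaxation directly (fractional vehicle counts allowed). A sketch using scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Decision variables: Escorts, Explorers, Navigators built in one 8-hour day.
# Build times: 1, 2, 3 minutes each; 480 minutes available.
# Fleet-average fuel economy (25, 15, 10 mpg) must be at least 18 mpg:
#   (25x1 + 15x2 + 10x3) / (x1 + x2 + x3) >= 18
#   <=>  7x1 - 3x2 - 8x3 >= 0   (linear once cleared of the denominator)
# Profit per vehicle: -1000, 5000, 15000.  linprog minimizes, so negate.
c = [1000, -5000, -15000]
A_ub = [[1, 2, 3],        # production time <= 480 minutes
        [-7, 3, 8]]       # fuel-economy mandate
b_ub = [480, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print([round(v, 2) for v in res.x], round(-res.fun, 2))
```

Under this relaxation the optimum builds roughly 132.4 Escorts and 115.9 Navigators (no Explorers) for a daily profit of about $1.6 million; note the averaging constraint implicitly assumes at least one vehicle is built.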

Journal ArticleDOI
TL;DR: A linear programming-based interactive decision-making method with decomposition procedures for deriving a satisficing solution for the decision maker efficiently from an α-Pareto optimal solution set is presented.

Book ChapterDOI
Bernd Gärtner1
08 Oct 1998
TL;DR: This work identifies a "geometric" property of linear programming that goes beyond all abstract notions previously employed in generalized linear programming frameworks, and that can be exploited by the simplex method in a nontrivial setting.
Abstract: We consider a class A of generalized linear programs on the d-cube (due to Matousek) and prove that Kalai's subexponential simplex algorithm Random-Facet is polynomial on all actual linear programs in the class. In contrast, the subexponential analysis is known to be best possible for general instances in A. Thus, we identify a "geometric" property of linear programming that goes beyond all abstract notions previously employed in generalized linear programming frameworks, and that can be exploited by the simplex method in a nontrivial setting.

Journal ArticleDOI
TL;DR: In this article, a deterministic algorithm for solving two-dimensional convex programs with a linear objective function is presented, which requires O(k log k) primitive operations for k constraints; if a feasible point is given, the bound reduces to O(log k / log log k).

Journal ArticleDOI
TL;DR: An unconstrained convex programming dual approach for solving a class of linear semi-infinite programming problems is proposed and primal and dual convergence results are established under some basic assumptions.
Abstract: In this paper, an unconstrained convex programming dual approach for solving a class of linear semi-infinite programming problems is proposed. Both primal and dual convergence results are established under some basic assumptions. Numerical examples are also included to illustrate this approach.

Journal ArticleDOI
TL;DR: Exactly solving Positive Linear Programming (PLP), the special case of Linear Programming in packing/covering form where the input constraint matrix and constraint vector consist entirely of positive entries, is shown to be P-complete.
Abstract: In this paper we study the parallel complexity of Positive Linear Programming (PLP), i.e. the special case of Linear Programming in packing/covering form where the input constraint matrix and constraint vector consist entirely of positive entries. We show that the problem of exactly solving PLP is P-complete.

Journal Article
TL;DR: All solutions of the nonlinear system of equations describing equilibrium conditions of the "high polymer liquid system", which is a well-known ill-conditioned system of equations, are identified by the method.
Abstract: A linear programming-based method is presented for finding all solutions of nonlinear systems of equations with guaranteed accuracy. In this method, a new effective linear programming-based method is used to delete regions in which solutions do not exist. On the other hand, Krawczyk's method is used to find regions in which solutions exist. As an illustrative example, all solutions of the nonlinear system of equations describing equilibrium conditions of the "high polymer liquid system", which is a well-known ill-conditioned system of equations, are identified by the method.

Journal ArticleDOI
01 Dec 1998-Top
TL;DR: In this paper, the authors investigated the simple uncapacitated plant location problem on a line and showed that under general conditions the special structure of the problem allows the optimal solution to be obtained directly from a linear programming relaxation.
Abstract: This paper investigates the simple uncapacitated plant location problem on a line. We show that under general conditions the special structure of the problem allows the optimal solution to be obtained directly from a linear programming relaxation. This result may be extended to the related p-median problem on a line. Thus, the practitioner is now able to use readily available LP codes in place of specialized algorithms to solve these one-dimensional models. The findings also shed some light on the “integer friendliness” of the general problem.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the analyticity of certain paths that arise in the context of feasible interior-point methods and showed that there exists a neighborhood surrounding a strictly complementary optimal point where the path is analytic and all its derivatives with respect to the path parameter exist, even if the linear program is degenerate.
Abstract: This paper investigates the analyticity of certain paths that arise in the context of feasible interior-point methods. It is shown that there exists a neighborhood surrounding a strictly complementary optimal point where the path is analytic and all its derivatives with respect to the path parameter exist, even if the linear program is degenerate. For this reason it is possible to extend the path through the feasible region from the positive real axis to the left complex half plane. This is done by a canonical transformation of the linear program. The analyticity provides the theoretical foundation for numerical methods following the path by higher-order approximations.

Journal ArticleDOI
TL;DR: A constant-potential infeasible-start interior-point (INFCP) algorithm for linear programming (LP) problems with a worst-case iteration complexity analysis as well as some computational results.
Abstract: We present a constant-potential infeasible-start interior-point (INFCP) algorithm for linear programming (LP) problems with a worst-case iteration complexity analysis as well as some computational results.The performance of the INFCP algorithm is compared to those of practical interior-point algorithms. New features of the algorithm include a heuristic method for computing a “good” starting point and a procedure for solving the augmented system arising from stochastic programming with simple recourse. We also present an application to large scale planning problems under uncertainty.


Proceedings ArticleDOI
01 Jun 1998
TL;DR: This work unifies in a single framework the two most important methods for solving the problem of scheduling systems of affine recurrence equations, the Farkas method and the vertex method, both using linear programming.
Abstract: We study the problem of scheduling systems of affine recurrence equations (SAREs), a convenient formalism for modeling massively parallel computations. We unify in a single framework the two most important methods for solving the problem: the Farkas method and the vertex method, both using linear programming. Then we compare the efficiency of the methods, in terms of the number of variables, the number of constraints, and the execution time of the resolution, on real-world examples arising from parallelization problems. Our conclusions show that the Farkas method is significantly better than the vertex method.