
Showing papers on "Linear programming published in 1997"


Book
01 Jan 1997
TL;DR: A list of errata for the book, giving page- and line-referenced corrections.
Abstract: p. 27, l. −11: replace “Schwartz” by “Schwarz”; p. 69, l. −13: “a_{i∗}x = b_i” should be “a_{i∗}x = b_{i∗}”; p. 126, l. 16: replace “inequality constraints” by “linear inequality constraints”; p. 153, l. −8: replace a_i x ≠ b_i by a_i x ≠ b_i; p. 163, Example 4.9, first line: replace “from” with “form”; p. 165, l. 11: replace p′Ax ≥ 0 by p′Ax ≥ 0; p. 175, l. 1: replace “To this see” by “To see this”; p. 203, l. 12: replace x ≥ 0 by x ≥ 0, x_{n+1} ≥ 0; p. 216, l. −6: replace “≤ c}” by “≤ c′}”; p. 216, l. −3: replace c′ by (c^1)′; p. 216, l. −2: replace c′ by (c^2)′; p. 216, l. −1: right-hand side should be λ(c^1)′ + (1 − λ)(c^2)′; p. 220, l. −12: replace “added to the pivot row” by “added to the zeroth row”

2,780 citations


Journal ArticleDOI
TL;DR: This paper provides a theoretical foundation for efficient interior-point algorithms for convex programming problems expressed in conic form, when the cone and its associated barrier are self-scaled, and devises long-step and symmetric primal-dual methods for such problems.
Abstract: This paper provides a theoretical foundation for efficient interior-point algorithms for convex programming problems expressed in conic form, when the cone and its associated barrier are self-scaled. For such problems we devise long-step and symmetric primal-dual methods. Because of the special properties of these cones and barriers, our algorithms can take steps that go typically a large fraction of the way to the boundary of the feasible region, rather than being confined to a ball of unit radius in the local norm defined by the Hessian of the barrier.
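For context, the "local norm" mentioned above is the standard barrier-induced norm (generic interior-point background, not a restatement of the paper's results): for a barrier F with Hessian \nabla^2 F(x), a step d taken from an interior point x has local length
\[ \|d\|_x = \sqrt{d^{\top} \nabla^2 F(x)\, d}, \]
and classical short-step methods keep \|d\|_x < 1, which guarantees the new point remains in the domain of the barrier; the long-step methods devised here are instead allowed to move a large fraction of the distance to the boundary along the search direction.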

632 citations


Book
01 Jan 1997
TL;DR: This book discusses Duality Theory for Linear Optimization, a Polynomial Algorithm for the Skew-Symmetric Model, and Parametric and Sensitivity Analysis, as well as the implementation of Interior Point Methods.
Abstract: Partial table of contents: INTRODUCTION: THEORY AND COMPLEXITY. Duality Theory for Linear Optimization. A Polynomial Algorithm for the Skew-Symmetric Model. Solving the Canonical Problem. THE LOGARITHMIC BARRIER APPROACH. The Dual Logarithmic Barrier Method. Initialization. THE TARGET-FOLLOWING APPROACH. The Primal-Dual Newton Method. Application to the Method of Centers. MISCELLANEOUS TOPICS. Karmarkar's Projective Method. More Properties of the Central Path. Partial Updating. High-Order Methods. Parametric and Sensitivity Analysis. Implementing Interior Point Methods. Appendices. Bibliography. Indexes.

554 citations


Journal ArticleDOI
TL;DR: The near-optimality, speed and simplicity of heuristic algorithms suggest that they are acceptable alternatives for many reserve selection problems, especially when dealing with large data sets or complicated analyses.

456 citations


Journal ArticleDOI
TL;DR: This work considers a variant of the classical symmetric Traveling Salesman Problem in which the nodes are partitioned into clusters and the salesman has to visit at least one node for each cluster.
Abstract: We consider a variant of the classical symmetric Traveling Salesman Problem in which the nodes are partitioned into clusters and the salesman has to visit at least one node for each cluster. This NP-hard problem is known in the literature as the symmetric Generalized Traveling Salesman Problem (GTSP), and finds practical applications in routing, scheduling and location-routing. In a companion paper (Fischetti et al. [Fischetti, M., J. J. Salazar, P. Toth. 1995. The symmetric generalized traveling salesman polytope. Networks 26 113–123]) we modeled GTSP as an integer linear program, and studied the facial structure of two polytopes associated with the problem. Here we propose exact and heuristic separation procedures for some classes of facet-defining inequalities, which are used within a branch-and-cut algorithm for the exact solution of GTSP. Heuristic procedures are also described. Extensive computational results for instances taken from the literature and involving up to 442 nodes are reported.
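For illustration only (the precise model and its facets are in the companion paper cited above), a generic integer linear programming formulation of this kind uses binary edge variables x_e, binary node variables y_v, clusters C_1, …, C_m, and cut sets \delta(S):
\[ \min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 2 y_v \ (v \in V), \qquad \sum_{v \in C_h} y_v \ge 1 \ (h = 1, \dots, m), \]
\[ \sum_{e \in \delta(S)} x_e \ge 2\,(y_i + y_j - 1) \ (S \subset V,\ i \in S,\ j \notin S), \qquad x_e, y_v \in \{0, 1\}, \]
where the last family of connectivity constraints is the one a branch-and-cut algorithm separates on the fly.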

405 citations


Journal ArticleDOI
TL;DR: In this paper, the author uses linear programming models to define standardised, aggregate environmental performance indicators for firms. The best practice frontier obtained corresponds to decision making units showing the best environmental behaviour. Results are obtained with data from U.S. fossil fuel-fired electric utilities, starting from four alternative models, among which are three linear programming models that differ in the way they account for undesirable outputs (pollutants) and resources used as inputs.
Abstract: I use linear programming models to define standardised, aggregate environmental performance indicators for firms. The best practice frontier obtained corresponds to decision making units showing the best environmental behaviour. Results are obtained with data from U.S. fossil fuel-fired electric utilities, starting from four alternative models, among which are three linear programming models that differ in the way they account for undesirable outputs (pollutants) and resources used as inputs. The results indicate important discrepancies in the rankings obtained by the four models. Rather than contradictory, these results are interpreted as giving different, complementary kinds of information that should all be taken into account by public decision-makers.
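One common way such LP-based indicators are constructed (a generic data envelopment analysis sketch, not necessarily one of the paper's four models) treats pollutants as inputs to be contracted: for a unit o with inputs x_j, desirable outputs y_j and pollutants b_j over peer units j,
\[ \theta_o^{*} = \min_{\theta,\, \lambda \ge 0} \theta \quad \text{s.t.} \quad \sum_j \lambda_j y_j \ge y_o, \qquad \sum_j \lambda_j x_j \le \theta x_o, \qquad \sum_j \lambda_j b_j \le \theta b_o, \]
so that \theta_o^{*} \le 1 serves as the aggregate score and the units with \theta_o^{*} = 1 define the best-practice frontier.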

386 citations


Book
14 Aug 1997
TL;DR: Part I: General methodologies: complexity and approximability, polyhedral combinatorics, branch-and-cut algorithms, matroids and submodular functions, advances in linear programming, decomposition and column generation, stochastic integer programming, randomized algorithms, local search, graphs and matrices.
Abstract: Part I: General methodologies: complexity and approximability, polyhedral combinatorics, branch-and-cut algorithms, matroids and submodular functions, advances in linear programming, decomposition and column generation, stochastic integer programming, randomized algorithms, local search, graphs and matrices. Part II: Specific topics and applications: sequencing and scheduling, "Travelling Salesman Problem", max cut, location problems, network design, flows and paths, quadratic and 3-dimensional assignments, linear assignment, vehicle routing, cutting and packing, combinatorial topics in VLSI design, applications in computational biology.

376 citations


Journal ArticleDOI
TL;DR: Two search directions within their family are characterized as being (unique) solutions of systems of linear equations in symmetric variables, and, for the first time, a polynomially convergent long-step path-following algorithm for SDP is presented, which requires an extra $\sqrt{n}$ factor in its iteration-complexity order as compared to its linear programming counterpart.
Abstract: This paper deals with a class of primal-dual interior-point algorithms for semidefinite programming (SDP) which was recently introduced by Kojima, Shindoh, and Hara [SIAM J. Optim., 7 (1997), pp. 86–125]. These authors proposed a family of primal-dual search directions that generalizes the one used in algorithms for linear programming based on the scaling matrix X^{1/2}S^{-1/2}. They study three primal-dual algorithms based on this family of search directions: a short-step path-following method, a feasible potential-reduction method, and an infeasible potential-reduction method. However, they were not able to provide an algorithm which generalizes the long-step path-following algorithm introduced by Kojima, Mizuno, and Yoshise [Progress in Mathematical Programming: Interior Point and Related Methods, N. Megiddo, ed., Springer-Verlag, Berlin, New York, 1989, pp. 29–47]. In this paper, we characterize two search directions within their family as being (unique) solutions of systems of linear equations in symmetric variables. Based on this characterization, we present a simplified polynomial convergence proof for one of their short-step path-following algorithms and, for the first time, a polynomially convergent long-step path-following algorithm for SDP which requires an extra $\sqrt{n}$ factor in its iteration-complexity order as compared to its linear programming counterpart, where n is the number of rows (or columns) of the matrices involved.
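To fix notation (standard semidefinite programming background rather than the paper's derivation): the primal-dual central path is defined by
\[ \mathcal{A}(X) = b, \qquad \mathcal{A}^{*}(y) + S = C, \qquad X S = \mu I, \qquad X \succeq 0, \ S \succeq 0, \]
and since the product XS is generally not symmetric, different symmetrizations of the Newton linearization of XS = \mu I yield different members of the search-direction family; the directions analyzed here correspond to scalings built from matrices such as X^{1/2} and S^{-1/2}.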

330 citations


Journal ArticleDOI
TL;DR: This article proves new necessary and sufficient conditions for equilibrium and force closure, and presents a geometric characterization of all possible types of four-finger equilibrium grasps, and uses linear optimization within the valid configuration space regions to compute the maximal object regions where fingers can be positioned while ensuring force closure.
Abstract: This article addresses the problem of computing stable grasps of three-dimensional polyhedral objects. We consider the case of a hand equipped with four hard fingers and assume point contact with friction. We prove new necessary and sufficient conditions for equilibrium and force closure, and present a geometric characterization of all possible types of four-finger equilibrium grasps. We then focus on concurrent grasps, for which the lines of action of the four contact forces all intersect in a point. In this case, the equilibrium conditions are linear in the unknown grasp parameters, which reduces the problem of computing the stable grasp regions in configuration space to the problem of constructing the eight-dimensional projection of an 11-dimensional polytope. We present two projection methods: the first one uses a simple Gaussian elimination approach, while the second one relies on a novel output-sensitive contour-tracking algorithm. Finally, we use linear optimization within the valid configuration...
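For reference, the equilibrium and friction constraints underlying such formulations take the classical form (a textbook statement, not the paper's new necessary and sufficient conditions): with contact points p_i, contact forces f_i and friction coefficient \mu,
\[ \sum_{i=1}^{4} f_i + F = 0, \qquad \sum_{i=1}^{4} p_i \times f_i + T = 0, \qquad \|f_i^{t}\| \le \mu f_i^{n} \ (i = 1, \dots, 4), \]
where (F, T) is the external wrench and f_i^{n}, f_i^{t} are the normal and tangential components of f_i; force closure requires that every external wrench can be balanced by forces satisfying these constraints. For concurrent grasps the forces enter these conditions linearly, which is what turns the stable-region computation into a polytope projection.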

295 citations


Journal ArticleDOI
TL;DR: The presented method uses a concise notation to characterize the static structure of a program and its possible execution paths and allows for a description of the feasible paths through the program code that characterizes the behavior of the code sufficiently to compute the exact maximum execution time of the program.
Abstract: The knowledge of program execution times is crucial for the development and the verification of real-time software. Therefore, there is a need for methods and tools to predict the timing behavior of pieces of program code and entire programs. This paper presents a novel method for the analysis of program execution times. The computation of MAximum eXecution Times (MAXTs) is mapped onto a graph-theoretical problem that is a generalization of the computation of a maximum cost circulation in a directed graph. Programs are represented by T-graphs, timing graphs, which are similar to flow graphs. These graphs reflect the structure and the timing behavior of the code. Relative capacity constraints, a generalization of capacity constraints that bound the flow in the edges, express user-supplied information about infeasible paths. To compute MAXTs, T-graphs are searched for those execution paths which correspond to a maximum cost circulation. The search problem is transformed into an integer linear programming problem. The solution of the linear programming problem yields the MAXT. The special merits of the presented method are threefold: It uses a concise notation to characterize the static structure of a program and its possible execution paths. Furthermore, the notation allows for a description of the feasible paths through the program code that characterizes the behavior of the code sufficiently to compute the exact maximum execution time of the program – not just a bound thereof. Finally, linear program solving does not only yield maximum execution times, but also produces detailed information about the execution time and the number of executions of every single program construct in the worst case. This knowledge is valuable for a more comprehensive analysis of the timing of a program.
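A minimal sketch of the resulting integer linear program (an IPET-style skeleton with illustrative variable names; the T-graph constraints in the paper are richer): let x_i be the execution count of program construct i, t_i its worst-case execution time, and f_e the traversal count of edge e; then
\[ \mathrm{MAXT} = \max \sum_i t_i x_i \quad \text{s.t.} \quad \sum_{e \in \mathrm{in}(i)} f_e = x_i = \sum_{e \in \mathrm{out}(i)} f_e \ \text{(flow conservation)}, \qquad f_e \le k \cdot f_{e'} \ \text{(relative capacity constraints)}, \]
where the relative capacity constraints encode loop bounds and user-supplied knowledge about infeasible paths, and the optimal f_e values give the per-construct execution counts in the worst case.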

261 citations


Journal ArticleDOI
TL;DR: In this paper, an exact dual is derived for Semidefinite Programming (SDP), for which strong duality properties hold without any regularity assumptions, and the dual is then applied to derive certain complexity results for SDP.
Abstract: In this paper, we present a new and more complete duality for Semidefinite Programming (SDP), with the following features: (i) the dual is an explicit semidefinite program whose number of variables and coefficient bitlengths are polynomial in those of the primal; (ii) if the primal is feasible, then it is bounded if and only if the dual is feasible; (iii) the duality gap, i.e., the difference between the primal and the dual objective function values, is zero whenever the primal is feasible and bounded, and in this case the dual attains its optimum; (iv) it yields a precise Farkas Lemma for semidefinite feasibility systems, i.e., a characterization of the infeasibility of a semidefinite inequality in terms of the feasibility of another polynomial-size semidefinite inequality. Note that the standard duality for Linear Programming satisfies all of the above features, but no such duality theory was previously known for SDP without Slater-like conditions being assumed. We then apply the dual to derive certain complexity results for Semidefinite Programming problems. The decision problem of Semidefinite Feasibility (SDFP), i.e., determining whether a given semidefinite inequality system is feasible, is the central problem of interest. The complexity of SDFP is unknown, but we show the following: 1) in the Turing machine model, SDFP is not NP-Complete unless NP = Co-NP; 2) in the real number model of Blum, Shub and Smale, SDFP is in NP ∩ Co-NP. We then give polynomial reductions from the following problems to SDFP: 1) checking whether an SDP is bounded; 2) checking whether a feasible and bounded SDP attains the optimum; 3) checking the optimality of a feasible solution.
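For orientation, the primal-dual pair whose gap this duality closes is the standard one (textbook form; the exact dual constructed in the paper is a larger, explicitly written semidefinite program):
\[ \text{(P)} \ \inf_{X \succeq 0} \langle C, X \rangle \ \text{ s.t. } \langle A_i, X \rangle = b_i \ (i = 1, \dots, m), \qquad \text{(D)} \ \sup_{y} b^{\top} y \ \text{ s.t. } C - \sum_i y_i A_i \succeq 0. \]
Weak duality always holds for this pair, but without a Slater-type condition the two optimal values can differ and need not be attained, which is precisely the situation the exact dual is designed to handle.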

Journal ArticleDOI
TL;DR: The Covering Tour Problem is first formulated as an integer linear program, polyhedral properties of several classes of constraints are investigated, and an exact branch-and-cut algorithm is developed.
Abstract: The Covering Tour Problem (CTP) is defined on a graph G = (V ∪ W, E), where W is a set of vertices that must be covered. The CTP consists of determining a minimum length Hamiltonian cycle on a subset of V such that every vertex of W is within a prespecified distance from the cycle. The problem is first formulated as an integer linear program, polyhedral properties of several classes of constraints are investigated, and an exact branch-and-cut algorithm is developed. A heuristic is also described. Extensive computational results are presented.

Proceedings ArticleDOI
19 Oct 1997
TL;DR: A randomized approximation algorithm that takes an instance of MAX 3SAT (a collection of clauses each of length at most three) as input and is optimal if the instance is satisfiable, and a method of obtaining direct semidefinite relaxations of any constraint satisfaction problem of the form MAX CSP(F), where F is a finite family of Boolean functions.
Abstract: We describe a randomized approximation algorithm which takes an instance of MAX 3SAT as input. If the instance (a collection of clauses each of length at most three) is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. We provide strong evidence (but not a proof) that the algorithm performs equally well on arbitrary MAX 3SAT instances. Our algorithm uses semidefinite programming and may be seen as a sequel to the MAX CUT algorithm of Goemans and Williamson (1995) and the MAX 2SAT algorithm of Feige and Goemans (1995). Though the algorithm itself is fairly simple, its analysis is quite complicated as it involves the computation of volumes of spherical tetrahedra. Håstad has recently shown that, assuming P ≠ NP, no polynomial-time algorithm for MAX 3SAT can achieve a performance ratio exceeding 7/8, even when restricted to satisfiable instances of the problem. Our algorithm is therefore optimal in this sense. We also describe a method of obtaining direct semidefinite relaxations of any constraint satisfaction problem of the form MAX CSP(F), where F is a finite family of Boolean functions. Our relaxations are the strongest possible within a natural class of semidefinite relaxations.
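The 7/8 threshold itself comes from elementary counting: a uniformly random assignment satisfies a clause with three distinct literals with probability 1 − (1/2)^3 = 7/8, so on instances where every clause has exactly three distinct literals, random guessing already attains 7/8 of the optimum in expectation; the semidefinite programming machinery is what is needed to handle clauses of length one and two and, for satisfiable instances, to guarantee the 7/8 ratio provably.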

Journal ArticleDOI
TL;DR: This paper considers, in a unified framework, both the symmetric and the asymmetric versions of the vehicle routing problem with backhauls, for which a new integer linear programming model and a Lagrangian lower bound, strengthened in a cutting plane fashion, are presented.
Abstract: The Vehicle Routing Problem with Backhauls is an extension of the capacitated Vehicle Routing Problem where the customers' set is partitioned into two subsets. The first is the set of Linehaul, or Delivery, customers, while the second is the set of Backhaul, or Pickup, customers. The problem is known to be NP-hard in the strong sense and finds many practical applications in distribution planning. In this paper we consider, in a unified framework, both the symmetric and the asymmetric versions of the vehicle routing problem with backhauls, for which we present a new integer linear programming model and a Lagrangian lower bound which is strengthened in a cutting plane fashion. The Lagrangian lower bound is then combined, according to the additive approach, with a lower bound obtained by dropping the capacity constraints, thus obtaining an effective overall bounding procedure. A branch-and-bound algorithm, reduction procedures and dominance criteria are also described. Computational tests on symmetric and as...

Book
01 Jan 1997
TL;DR: This book shows how to formulate and solve linear programs on the computer with the LINDO program, and walks through the model formulation process for covering, staffing, and cutting stock models, multiperiod planning problems, blending of input materials, and decision making under uncertainty and stochastic LP.
Abstract: 1. What is Linear Programming? 2. Solving LPS on the Computer: The LINDO Program 3. Sensitivity Analysis of LP Solutions 4. The Model Formulation Process 5. Product Mix Problems 6. Covering, Staffing, and Cutting Stock Models 7. Networks, Distribution, and PERT/CPM 8. Multiperiod Planning Problems 9. Blending of Input Materials 10. Decision Making Under Uncertainty and Stochastic LP 11. Economic Equilibria as LPs 12. Game Theory 13. Quadratic Programming 14. Formulating and Solving Integer Programs 15. Application to Statistical Estimation 16. Design and Implementation of Optimization-Based Decision Support Systems 17. Multiple Criteria and Goal Programming 18. Parametric Analysis 19. Methods for Solving Linear Programs References / Index

Journal ArticleDOI
TL;DR: The relationships among various duals are discussed and a unified treatment for strong duality in semidefinite programming is given.
Abstract: It is well known that the duality theory for linear programming (LP) is powerful and elegant and lies behind algorithms such as simplex and interior-point methods. However, the standard Lagrangian for nonlinear programs requires constraint qualifications to avoid duality gaps. Semidefinite linear programming (SDP) is a generalization of LP where the nonnegativity constraints are replaced by a semidefiniteness constraint on the matrix variables. There are many applications, e.g., in systems and control theory and combinatorial optimization. However, the Lagrangian dual for SDP can have a duality gap. We discuss the relationships among various duals and give a unified treatment for strong duality in semidefinite programming. These duals guarantee strong duality, i.e., a zero duality gap and dual attainment. This paper is motivated by the recent paper by Ramana where one of these duals is introduced.

Journal ArticleDOI
TL;DR: A version of dynamic programming, which computes level sets of the value function rather than the value function itself, is used to design robust nonlinear controllers for linear, discrete-time dynamical systems subject to hard constraints on controls and states.

Journal ArticleDOI
TL;DR: This paper presents a new model based on breaking the decision process into two stages, which provides a tighter linear programming bound than that of the conventional set partitioning formulation but is more difficult to solve.
Abstract: Airline crew scheduling is concerned with finding a minimum cost assignment of flight crews to a given flight schedule while satisfying restrictions dictated by collective bargaining agreements and the Federal Aviation Administration. Traditionally, the problem has been modeled as a set partitioning problem. In this paper, we present a new model based on breaking the decision process into two stages. In the first stage we select a set of duty periods that cover the flights in the schedule. Then, in the second stage, we attempt to build pairings using those duty periods. We suggest a decomposition approach for solving the model and present computational results for test problems provided by a major carrier. Our formulation provides a tighter linear programming bound than that of the conventional set partitioning formulation but is more difficult to solve.
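For comparison, the conventional one-stage model referred to above is the set partitioning formulation (standard form, not the paper's two-stage model): with binary variables x_p for candidate pairings p, pairing costs c_p, and a_{fp} = 1 if pairing p covers flight f,
\[ \min \sum_{p} c_p x_p \quad \text{s.t.} \quad \sum_{p} a_{fp}\, x_p = 1 \ \text{ for every flight } f, \qquad x_p \in \{0, 1\}, \]
and it is the linear programming relaxation (x_p \ge 0) of this model that the two-stage duty-period formulation tightens.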

Journal ArticleDOI
TL;DR: This volume discusses Interior Point Approaches for the VLSI Placement Problem, as well as the implementation of Interior-Point Methods for Large Scale Linear Programs, and Semidefinite Programming.
Abstract: Preface. Part I: Linear Programming. 1. Introduction to the Theory of Interior Point Methods B. Jansen, et al. 2. Affine Scaling Algorithm T. Tsuchiya. 3. Target-Following Methods for Linear Programming B. Jansen, et al. 4. Potential Reduction Algorithms K.M. Anstreicher. 5. Infeasible-Interior-Point Algorithms S. Mizuno. 6. Implementation of Interior-Point Methods for Large Scale Linear Programs E.D. Andersen, et al. Part II: Convex Programming. 7. Interior-Point Methods for Classes of Convex Programs F. Jarre. 8. Complementarity Problems A. Yoshise. 9. Semidefinite Programming M.V. Ramana, P.M. Pardalos. 10. Implementing Barrier Methods for Nonlinear Programming D.F. Shanno, et al. Part III: Applications, Extensions. 11. Interior Point Methods for Combinatorial Optimization J.E. Mitchell. 12. Interior Point Methods for Global Optimization P.M. Pardalos, M.G.C. Resende. 13. Interior Point Approaches for the VLSI Placement Problem A. Vannelli, et al.

Journal ArticleDOI
TL;DR: The following three optimizations are discussed: (1) inheriting dual variables and partial solutions during partitioning, (2) sorting subproblems by lower cost bounds before solving, and (3) partitioning in an optimized order.
Abstract: We describe an implementation of an algorithm due to Murty for determining a ranked set of solutions to assignment problems. The intended use of the algorithm is in the context of multitarget tracking, where it has been shown that real-time multitarget tracking is feasible for some problems, but many other uses of the algorithm are also possible. The following three optimizations are discussed: (1) inheriting dual variables and partial solutions during partitioning, (2) sorting subproblems by lower cost bounds before solving, and (3) partitioning in an optimized order. When used to find the 100 best solutions to random 100×100 assignment problems, these optimizations produce a speedup of over a factor of 20, finding all 100 solutions in about 0.6 s. For a random cost matrix, the average time complexity for finding k solutions to random N×N problems appears to be nearly linear in both k and N, for sufficiently large k.
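A minimal, un-optimized sketch of Murty-style ranked enumeration is given below, assuming SciPy's Hungarian solver; the function name murty_k_best and the BIG surrogate cost are illustrative, and none of the paper's three optimizations (dual inheritance, bound-based sorting, optimized partitioning order) are included.

```python
import heapq
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # surrogate cost marking forbidden assignments (assumes much smaller real costs)

def solve(cost):
    """Solve one assignment problem; return (total, pairs) or None if a forbidden cell is forced."""
    rows, cols = linear_sum_assignment(cost)
    total = cost[rows, cols].sum()
    if total >= BIG:
        return None
    return total, list(zip(rows, cols))

def murty_k_best(cost, k):
    """Return up to k cheapest assignments of a square cost matrix, cheapest first."""
    cost = np.asarray(cost, dtype=float)
    first = solve(cost)
    if first is None:
        return []
    counter = 0  # unique tie-breaker so the heap never compares matrices
    heap = [(first[0], counter, cost, first[1])]
    ranked = []
    while heap and len(ranked) < k:
        total, _, mat, assign = heapq.heappop(heap)
        ranked.append((total, assign))
        # Murty partition: subproblem i forces the first i-1 pairs and forbids pair i.
        forced = mat.copy()
        for r, c in assign:
            sub = forced.copy()
            sub[r, c] = BIG              # forbid this pair
            res = solve(sub)
            if res is not None:
                counter += 1
                heapq.heappush(heap, (res[0], counter, sub, res[1]))
            forced[r, :] = BIG           # force (r, c) in all later subproblems
            forced[:, c] = BIG
            forced[r, c] = mat[r, c]
    return ranked

# Example: the ten cheapest assignments of a random 5 x 5 cost matrix, in order.
print(murty_k_best(np.random.rand(5, 5), 10))
```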

Journal ArticleDOI
TL;DR: In this article, a decentralized stock control policy for empty equipment in hub-and-spoke (i.e., center-terminal) networks is proposed, developed by first analytically modeling the stochastic processes representing various stock-control variables and then comparing the analytical results to Monte Carlo simulations.
Abstract: Fleet sizing and empty equipment redistribution are important issues in managing transportation systems. Most of the mathematical models that have been developed for these problems are complex and computationally demanding, including dynamic linear programming and stochastic/dynamic mathematical programs. Our research takes an alternate approach by building from inventory theory and developing decentralized stock control policies for empty equipment. This approach is applied to hub-and-spoke networks (i.e., center-terminal networks), by first analytically modeling the stochastic processes representing various stock-control variables, and then comparing the analytical results to Monte Carlo simulations. A decomposition approach is also developed to determine stock-out probabilities as a function of the fleet size as a whole, and as a function of localized control parameters.

Journal ArticleDOI
TL;DR: In this article, a parametric programming approach is proposed for the analysis of linear process engineering problems under uncertainty, and a novel branch and bound algorithm is presented for the solut....
Abstract: In this paper, a parametric programming approach is proposed for the analysis of linear process engineering problems under uncertainty. A novel branch and bound algorithm is presented for the solut...

Journal ArticleDOI
TL;DR: In this paper, a new reformulation of the linear mixed 0-1 programming problem into a linear bilevel programming one, which does not require the introduction of a large finite constant, is presented.
Abstract: We study links between the linear bilevel and linear mixed 0–1 programming problems. A new reformulation of the linear mixed 0–1 programming problem into a linear bilevel programming one, which does not require the introduction of a large finite constant, is presented. We show that solving a linear mixed 0–1 problem by a classical branch-and-bound algorithm is equivalent in a strong sense to solving its bilevel reformulation by a bilevel branch-and-bound algorithm. The mixed 0–1 algorithm is embedded in the bilevel algorithm through the aforementioned reformulation; i.e., when applied to any mixed 0–1 instance and its bilevel reformulation, they generate sequences of subproblems which are identical via the reformulation.

Journal ArticleDOI
TL;DR: In this article, the authors argue that a concave utility function should be incorporated in a model whenever the decision maker is risk averse and present applications in which the traditional stochastic linear program fails to identify a robust solution, despite the presence of a cheap robust point.
Abstract: Robust optimization searches for recommendations that are relatively immune to anticipated uncertainty in the problem parameters. Stochasticities are addressed via a set of discrete scenarios. This paper presents applications in which the traditional stochastic linear program fails to identify a robust solution, despite the presence of a cheap robust point. Limitations of piecewise linearization are discussed. We argue that a concave utility function should be incorporated in a model whenever the decision maker is risk averse. Examples are taken from telecommunications and financial planning.

Proceedings ArticleDOI
19 Oct 1997
TL;DR: A distributed algorithm that obtains a (1+ε) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds, which is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings.
Abstract: Flow control in high speed networks requires distributed routers to make fast decisions based only on local information in allocating bandwidth to connections. While most previous work on this problem focuses on achieving local objective functions, in many cases it may be necessary to achieve global objectives such as maximizing the total flow. This problem illustrates one of the basic aspects of distributed computing: achieving global objectives using local information. Papadimitriou and Yannakakis (1993) initiated the study of such problems in a framework of solving positive linear programs by distributed agents. We take their model further, by allowing the distributed agents to acquire more information over time. We therefore turn attention to the tradeoff between the running time and the quality of the solution to the linear program. We give a distributed algorithm that obtains a (1+ε) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds. While comparable in running time, our results exhibit a significant improvement on the logarithmic ratio previously obtained by Awerbuch and Azar (1994). Our algorithm, which draws from techniques developed by Luby and Nisan (1993) is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings.

Journal ArticleDOI
TL;DR: The paper deals with nonlinear multicommodity flow problems with convex costs and proposes a decomposition method that takes full advantage of the supersparsity of the network in the linear algebra operations.
Abstract: The paper deals with nonlinear multicommodity flow problems with convex costs. A decomposition method is proposed to solve them. The approach applies a potential reduction algorithm to solve the master problem approximately and a column generation technique to define a sequence of primal linear programming problems. Each subproblem consists of finding a minimum cost flow between an origin and a destination node in an uncapacitated network. It is thus formulated as a shortest path problem and solved with Dijkstra's d-heap algorithm. An implementation is described that takes full advantage of the supersparsity of the network in the linear algebra operations. Computational results show the efficiency of this approach on well-known nondifferentiable problems and also large scale randomly generated problems (up to 1000 arcs and 5000 commodities).
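In generic path-flow notation (a sketch of this decomposition structure, not the paper's exact master problem): with convex arc costs \phi_a, path variables x_p for commodity k with demand d_k, and arc-path incidence \delta_{ap},
\[ \min \ \sum_a \phi_a\Big( \sum_k \sum_{p \in P_k} \delta_{ap}\, x_p \Big) \quad \text{s.t.} \quad \sum_{p \in P_k} x_p = d_k \ (\forall k), \qquad x \ge 0, \]
and pricing out a new column for commodity k reduces to a shortest-path computation with arc lengths given by the current marginal costs \phi_a', which is why Dijkstra's algorithm serves as the subproblem solver.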

01 May 1997
TL;DR: A Minimax Regret formulation suitable for large-scale linear programming models is proposed, and it is experimentally verified that the minimax regret strategy depends only on the extremal scenarios and not on the intermediate ones, making the approach computationally efficient.
Abstract: Classical stochastic programming has already been used with large-scale LP models for long-term analysis of energy-environment systems. We propose a Minimax Regret formulation suitable for large-scale linear programming models. It has been experimentally verified that the minimax regret strategy depends only on the extremal scenarios and not on the intermediate ones, thus making the approach computationally efficient. Key results of minimax regret and minimum expected value strategies for Greenhouse Gas abatement in the Province of Quebec are compared.
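In the usual notation for this construction (a generic statement, not the paper's exact energy model): if scenario s has data (c_s, A_s, b_s) and scenario-wise optimum z_s^{*} = \min\{ c_s^{\top} x : A_s x \ge b_s \}, the minimax regret decision solves
\[ \min_{x} \ \max_{s} \ \big( c_s^{\top} x - z_s^{*} \big) \quad \text{s.t.} \quad A_s x \ge b_s \ \text{ for all } s, \]
which remains a linear program once the inner maximum is replaced by an auxiliary variable \rho with constraints \rho \ge c_s^{\top} x - z_s^{*} for every scenario s; the experimental finding reported above is that only the extremal scenarios end up mattering in this program.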

Journal ArticleDOI
TL;DR: This paper describes the problems that may occur when using standard software, advocates a framework for performing complete sensitivity analysis, elucidates problems and solutions with an academic example, and gives results from an implementation of these approaches to a large practical linear programming model of an oil refinery.

Journal ArticleDOI
TL;DR: Linear programming, integer programming, graph theory and networks, dynamic programming, nonlinear programming, multiobjective programming, stochastic programming, heuristic methods.
Abstract: Linear programming, integer programming, graph theory and networks, dynamic programming, nonlinear programming, multiobjective programming, stochastic programming, heuristic methods.

Journal ArticleDOI
TL;DR: This work provides a systematic way to generate penalty and barrier functions in this class of penalty methods for convex programming, and analyzes the existence of primal and dual optimal paths generated by these penalty methods, as well as their convergence to the primal andDual optimal sets.
Abstract: We consider a wide class of penalty and barrier methods for convex programming which includes a number of specific functions proposed in the literature. We provide a systematic way to generate penalty and barrier functions in this class, and we analyze the existence of primal and dual optimal paths generated by these penalty methods, as well as their convergence to the primal and dual optimal sets. For linear programming we prove that these optimal paths converge to single points.