scispace - formally typeset

Showing papers on "Linear programming published in 2007"


Book
01 Jan 2007
TL;DR: This textbook surveys deterministic, probabilistic, and nonlinear operations research models, covering linear programming, the simplex method, duality, integer and dynamic programming, and nonlinear programming algorithms.
Abstract: 1. Overview of Operations Research. I. DETERMINISTIC MODELS. 2. Introduction to Linear Programming. 3. The Simplex Method. 4. Duality and Sensitivity Analysis. 5. Transportation Model and Its Variants. 6. Network Models. 7. Advanced Linear Programming. 8. Goal Programming. 9. Integer Linear Programming. 10. Deterministic Dynamic Programming. 11. Deterministic Inventory Models. II. PROBABILISTIC MODELS. 12. Review of Basic Probability. 13. Forecasting Models. 14. Decision Analysis and Games. 15. Probabilistic Dynamic Programming. 16. Probabilistic Inventory Models. 17. Queueing Systems. 18. Simulation Modeling. 19. Markovian Decision Process. III. NONLINEAR MODELS. 20. Classical Optimization Theory. 21. Nonlinear Programming Algorithms. Appendix A: Review of Matrix Algebra. Appendix B: Introduction to Simnet II. Appendix C: Tora and Simnet II Installation and Execution. Appendix D: Statistical Tables. Appendix E: Answers to Odd-Numbered Problems. Index.
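The early chapters on linear programming and the simplex method can be illustrated with a toy problem. The sketch below (hypothetical data, pure Python) solves a two-variable LP by enumerating vertices of the feasible region, relying on the textbook fact that a bounded, feasible LP attains its optimum at a vertex; a production solver would use the simplex method or an interior-point code instead.

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, for 2-variable LPs.
    Enumerates pairwise constraint intersections: an optimum of a
    bounded, feasible LP is attained at a vertex of the feasible polygon."""
    # Store each constraint as [a1, a2, rhs]; add x1 >= 0 and x2 >= 0.
    rows = [list(r) + [bi] for r, bi in zip(A, b)]
    rows += [[-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]]
    best = None
    for (a1, b1_, c1), (a2, b2_, c2) in combinations(rows, 2):
        det = a1 * b2_ - a2 * b1_
        if abs(det) < 1e-12:
            continue  # parallel constraint lines, no intersection point
        x = (c1 * b2_ - c2 * b1_) / det
        y = (a1 * c2 - a2 * c1) / det
        # keep only feasible intersection points (vertices)
        if all(r[0] * x + r[1] * y <= r[2] + 1e-9 for r in rows):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6
# optimum is at the vertex (4, 0) with objective value 12
print(solve_2var_lp([3, 2], [[1, 1], [1, 3]], [4, 6]))
```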

1,819 citations


Journal ArticleDOI
TL;DR: The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD, and empirical results show that the new algorithm has an average time complexity of O(N^2), which significantly improves on the best reported supercubic complexity of the original EMD.
Abstract: We propose EMD-L1: a fast and exact algorithm for computing the earth mover's distance (EMD) between a pair of histograms. The efficiency of the new algorithm enables its application to problems that were previously prohibitive due to high time complexities. The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD. Exploiting the L1 metric structure, the number of unknown variables in EMD-L1 is reduced to O(N) from O(N^2) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function of the linear program is simplified. Formally, without any approximation, we prove that the EMD-L1 formulation is equivalent to the original EMD with an L1 ground distance. To perform the EMD-L1 computation, we propose an efficient tree-based algorithm, Tree-EMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex algorithm-based solver forms a spanning tree when we interpret EMD-L1 as a network flow optimization problem. We empirically show that this new algorithm has an average time complexity of O(N^2), which significantly improves the best reported supercubic complexity of the original EMD. The accuracy of the proposed methods is evaluated by experiments for two computation-intensive problems: shape recognition and interest point matching using multidimensional histogram-based local features. For shape recognition, EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape data set, as well as an articulated shape data set. For interest point matching, SIFT, shape context and spin image are tested on both synthetic and real image pairs with large geometrical deformation, illumination change, and heavy intensity noise. The results demonstrate that our EMD-L1-based solutions outperform previously reported state-of-the-art features and distance measures in solving the two tasks.
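For the special case of one-dimensional histograms with an L1 ground distance, the EMD reduces to a closed form that needs no linear program at all: the sum of absolute differences of the cumulative histograms. A minimal sketch of that well-known special case (not the paper's Tree-EMD algorithm):

```python
def emd_1d(h1, h2):
    """Earth mover's distance between two equal-mass 1-D histograms
    with |i - j| (L1) ground distance: for 1-D histograms this equals
    the sum of absolute differences of the cumulative sums."""
    assert abs(sum(h1) - sum(h2)) < 1e-9, "histograms must have equal mass"
    cum, total = 0.0, 0.0
    for a, b in zip(h1, h2):
        cum += a - b          # running surplus that must flow rightward
        total += abs(cum)     # cost of moving that surplus one bin
    return total

print(emd_1d([0, 1, 0], [0, 0, 1]))  # one unit moved one bin -> 1.0
```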

456 citations


Journal ArticleDOI
TL;DR: The synthesis of state-feedback controllers is solved in terms of a linear programming problem, including the requirement of positiveness of the controller and its extension to uncertain plants.
Abstract: This brief solves some synthesis problems for a class of linear systems for which the state takes nonnegative values whenever the initial conditions are nonnegative. In particular, the synthesis of state-feedback controllers is solved in terms of a linear programming problem, including the requirement of positiveness of the controller and its extension to uncertain plants. In addition, the synthesis problem with nonsymmetrical bounds on the stabilizing control is treated.

424 citations


Journal ArticleDOI
TL;DR: This work reviews a not widely known approach to the max-sum labeling problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and shows how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product.
Abstract: The max-sum labeling problem, defined as maximizing a sum of binary (ie, pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product In particular, we review Schlesinger et al's upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound We revisit problems with Boolean variables and supermodular problems We describe two algorithms for decreasing the upper bound We present an example application for structural image analysis
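On chains and trees the max-sum problem is solvable exactly by dynamic programming, which is also the regime where the LP relaxation discussed above is tight. A small sketch with hypothetical scores:

```python
def max_sum_chain(unary, pairwise):
    """Exact MAP for a max-sum (pairwise) labeling problem on a chain
    via dynamic programming; on chains and trees the LP relaxation
    of the max-sum problem is tight."""
    n, k = len(unary), len(unary[0])
    # m[i][x] = best score of prefix 0..i with variable i set to label x
    m = [list(unary[0])]
    back = []
    for i in range(1, n):
        row, ptr = [], []
        for x in range(k):
            best = max(range(k), key=lambda y: m[-1][y] + pairwise[y][x])
            row.append(m[-1][best] + pairwise[best][x] + unary[i][x])
            ptr.append(best)
        m.append(row)
        back.append(ptr)
    x = max(range(k), key=lambda v: m[-1][v])
    labels = [x]
    for ptr in reversed(back):      # trace back the optimal labeling
        labels.append(ptr[labels[-1]])
    return list(reversed(labels)), max(m[-1])

unary = [[0, 1], [0, 0], [2, 0]]    # hypothetical per-variable scores
pairwise = [[1, 0], [0, 1]]         # rewards equal neighbouring labels
print(max_sum_chain(unary, pairwise))  # -> ([0, 0, 0], 4)
```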

410 citations


Journal ArticleDOI
TL;DR: A mixed-integer linear programming model is presented and new additional valid inequalities used to strengthen the linear relaxation of the model are derived and the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy is compared.
Abstract: We consider a distribution problem in which a product has to be shipped from a supplier to several retailers over a given time horizon. Each retailer defines a maximum inventory level. The supplier monitors the inventory of each retailer and determines its replenishment policy, guaranteeing that no stockout occurs at the retailer (vendor-managed inventory policy). Every time a retailer is visited, the quantity delivered by the supplier is such that the maximum inventory level is reached (deterministic order-up-to level policy). Shipments from the supplier to the retailers are performed by a vehicle of given capacity. The problem is to determine for each discrete time instant the quantity to ship to each retailer and the vehicle route. We present a mixed-integer linear programming model and derive new additional valid inequalities used to strengthen the linear relaxation of the model. We implement a branch-and-cut algorithm to solve the model optimally. We then compare the optimal solution of the problem with the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy. Computational results are presented on a set of randomly generated problem instances.

386 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: A linear programming relaxation scheme is proposed for the class of multiple object tracking problems in which the inter-object interaction metric is convex and the intra-object term quantifying object state continuity may use any metric; the scheme is found to locate the global optimum with high probability.
Abstract: We propose a linear programming relaxation scheme for the class of multiple object tracking problems where the inter-object interaction metric is convex and the intra-object term quantifying object state continuity may use any metric. The proposed scheme models object tracking as a multi-path searching problem. It explicitly models track interaction, such as object spatial layout consistency or mutual occlusion, and optimizes multiple object tracks simultaneously. The proposed scheme does not rely on track initialization and complex heuristics. It has much less average complexity than previous efficient exhaustive search methods such as extended dynamic programming and is found to be able to find the global optimum with high probability. We have successfully applied the proposed method to multiple object tracking in video streams.
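The one-object special case of this multi-path view is an ordinary cheapest-path search over per-frame candidate detections, solvable by dynamic programming; the paper's LP handles several interacting tracks jointly. A sketch with hypothetical detection costs:

```python
def best_track(frames, motion):
    """Cheapest single track through per-frame candidate detections
    (the one-object special case of the multi-path search view).
    frames[t] is a list of (position, detection_cost) candidates."""
    # dp[i] = best cost of a track ending at candidate i of current frame
    dp = [cost for _, cost in frames[0]]
    back = []
    for t in range(1, len(frames)):
        nxt, ptr = [], []
        for pos, cost in frames[t]:
            j = min(range(len(dp)),
                    key=lambda i: dp[i] + motion(frames[t - 1][i][0], pos))
            nxt.append(dp[j] + motion(frames[t - 1][j][0], pos) + cost)
            ptr.append(j)
        dp, back = nxt, back + [ptr]
    i = min(range(len(dp)), key=lambda k: dp[k])
    track = [i]
    for ptr in reversed(back):      # trace the chosen candidates back
        track.append(ptr[track[-1]])
    return list(reversed(track)), dp[i]

frames = [[(0.0, 0.1), (5.0, 0.1)],   # hypothetical detections: (pos, cost)
          [(0.5, 0.2), (4.0, 0.1)],
          [(1.0, 0.1), (3.0, 0.3)]]
dist = lambda a, b: abs(a - b)        # motion (state-continuity) cost
print(best_track(frames, dist))
```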

375 citations


Journal ArticleDOI
TL;DR: An approach is introduced for constructing uncertainty sets for robust optimization using new deviation measures for random variables, termed the forward and backward deviations; the framework converts the original model into a second-order cone program, which is computationally tractable both in theory and in practice.
Abstract: In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.

358 citations


Journal ArticleDOI
TL;DR: The coordination of directional overcurrent relays (DOCR) is treated in this paper using particle swarm optimization (PSO), a recently proposed optimizer that utilizes the swarm behavior in searching for an optimum.
Abstract: The coordination of directional overcurrent relays (DOCR) is treated in this paper using particle swarm optimization (PSO), a recently proposed optimizer that utilizes the swarm behavior in searching for an optimum. PSO has gained considerable interest for its simplicity, robustness, and easy implementation. The problem of setting DOCR is a highly constrained optimization problem that has been stated and solved as a linear programming (LP) problem. To deal with such constraints, a modification to the standard PSO algorithm is introduced. Three case studies are presented, and the results are compared to those of the LP technique to demonstrate the effectiveness of the proposed methodology.
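For orientation, here is a minimal sketch of the standard (unconstrained) PSO loop, not the paper's constraint-handling variant; all parameter values below are conventional choices, not taken from the paper:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the global best position found so far."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

sphere = lambda x: sum(v * v for v in x)       # toy test function
best_x, best_f = pso(sphere, dim=2, bounds=(-5, 5))
print(best_f)  # should be very close to the true minimum 0
```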

303 citations


Journal ArticleDOI
TL;DR: The honey-bee mating optimization (HBMO) algorithm is presented and tested with a nonlinear, continuous constrained problem with continuous decision and state variables to demonstrate the efficiency of the algorithm in handling single-reservoir operation optimization problems.
Abstract: In recent years, evolutionary and meta-heuristic algorithms have been extensively used as search and optimization tools in various problem domains, including science, commerce, and engineering. Ease of use, broad applicability, and a global perspective may be considered the primary reasons for their success. The honey-bee mating process has been considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of real honey-bee mating. In this paper, the honey-bee mating optimization (HBMO) algorithm is presented and tested with a nonlinear, continuous constrained problem with continuous decision and state variables to demonstrate the efficiency of the algorithm in handling single-reservoir operation optimization problems. It is shown that the performance of the model is quite comparable with the results of well-developed traditional linear programming (LP) solvers such as LINGO 8.0. The results obtained are quite promising and compare well with the final results of the other approach.

287 citations


Journal ArticleDOI
TL;DR: A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov random fields (MRFs) that are frequently encountered in computer vision.
Abstract: A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov random fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the alpha-expansion algorithm, which is included merely as a special case. Moreover, contrary to alpha-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.

273 citations


Journal ArticleDOI
TL;DR: A unifying treatment of max-min fairness is given, which encompasses all existing results in a simplifying framework, and extends its applicability to new examples, and shows that, if the set of feasible allocations has the free disposal property, then Max-min Programming reduces to a simpler algorithm, called Water Filling, whose complexity is much lower.
Abstract: Max-min fairness is widely used in various areas of networking. In every case where it is used, there is a proof of existence and one or several algorithms for computing it; in most, but not all cases, they are based on the notion of bottlenecks. In spite of this wide applicability, there are still examples, arising in the context of wireless or peer-to-peer networks, where the existing theories do not seem to apply directly. In this paper, we give a unifying treatment of max-min fairness, which encompasses all existing results in a simplifying framework, and extend its applicability to new examples. First, we observe that the existence of max-min fairness is actually a geometric property of the set of feasible allocations. There exist sets on which max-min fairness does not exist, and we describe a large class of sets on which a max-min fair allocation does exist. This class contains, but is not limited to, the compact, convex sets of R^N. Second, we give a general purpose centralized algorithm, called Max-min Programming, for computing the max-min fair allocation in all cases where it exists (whether the set of feasible allocations is in our class or not). Its complexity is of the order of N linear programming steps in R^N, in the case where the feasible set is defined by linear constraints. We show that, if the set of feasible allocations has the free disposal property, then Max-min Programming reduces to a simpler algorithm, called Water Filling, whose complexity is much lower. Free disposal corresponds to the cases where a bottleneck argument can be made, and Water Filling is the general form of all previously known centralized algorithms for such cases. All our results apply mutatis mutandis to min-max fairness. Our results apply to weighted, unweighted and util-max-min and min-max fairness. Distributed algorithms for the computation of max-min fair allocations are outside the scope of this paper.
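For a single shared capacity, Water Filling reduces to progressive filling: repeatedly split the remaining capacity equally among unsatisfied sources and cap those whose demand falls below the equal share. A sketch of that simplest (free-disposal) case:

```python
def max_min_fair(capacity, demands):
    """Progressive filling ('water filling') for the max-min fair
    allocation of a single shared capacity among several demands."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        # sources whose residual demand is below the equal share get capped
        capped = [i for i in active if demands[i] - alloc[i] <= share]
        if not capped:
            for i in active:            # everyone gets the equal share
                alloc[i] += share
            remaining = 0.0
        else:
            for i in capped:            # satisfy capped sources fully
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
                active.remove(i)
    return alloc

print(max_min_fair(10, [2, 8, 5]))  # -> [2.0, 4.0, 4.0]
```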

Journal ArticleDOI
TL;DR: This work combines mixed-integer linear programming (MILP) and constraint programming (CP) to solve an important class of planning and scheduling problems and obtains significant computational speedups, of several orders of magnitude for the first two objectives.
Abstract: We combine mixed-integer linear programming (MILP) and constraint programming (CP) to solve an important class of planning and scheduling problems. Tasks are allocated to facilities using MILP and scheduled using CP, and the two are linked via logic-based Benders decomposition. Tasks assigned to a facility may run in parallel subject to resource constraints (cumulative scheduling). We solve problems in which the objective is to minimize cost, makespan, or total tardiness. We obtain significant computational speedups, of several orders of magnitude for the first two objectives, relative to the state of the art in both MILP and CP. We also obtain better solutions and bounds for problems that cannot be solved to optimality.

Journal ArticleDOI
TL;DR: This work considers the problem of scheduling under uncertainty where the uncertain problem parameters can be described by a known probability distribution function and introduces a small number of auxiliary variables and additional constraints into the original MILP problem, generating a deterministic robust counterpart problem which provides the optimal/feasible solution.

Journal ArticleDOI
TL;DR: This work formally derives the standard deterministic linear program (LP) for bid-price control by making an affine functional approximation to the optimal dynamic programming value function, which gives rise to a new LP that yields tighter bounds than the standard LP.
Abstract: We formally derive the standard deterministic linear program (LP) for bid-price control by making an affine functional approximation to the optimal dynamic programming value function. This affine functional approximation gives rise to a new LP that yields tighter bounds than the standard LP. Whereas the standard LP computes static bid prices, our LP computes a time trajectory of bid prices. We show that there exist dynamic bid prices, optimal for the LP, that are individually monotone with respect to time. We provide a column generation procedure for solving the LP within a desired optimality tolerance, and present numerical results on computational and economic performance.

Journal ArticleDOI
Eunjeong Choi1, Dong-Wan Tcha1
TL;DR: An approach based on column generation (CG), hitherto successful only in the vehicle routing problem with time windows, is applied for its solution and outperforms all existing algorithms both in the quality of the solutions generated and in solution time.

01 Jan 2007
TL;DR: This work presents the theoretical background for an implementation of the simplex method based upon the LU decomposition, computed with row interchanges, of the basis matrix; the approach is slow but has good round-off error behavior.

Journal ArticleDOI
TL;DR: It is argued that two-stage (say linear) stochastic programming problems can be solved with reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly with the number of stages.
Abstract: In this paper we discuss computational complexity and risk-averse approaches to two-stage and multistage stochastic programming problems. We argue that two-stage (say linear) stochastic programming problems can be solved with reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly with the number of stages. We discuss an extension of coherent risk measures to a multistage setting and, in particular, dynamic programming equations for such problems.
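The Monte Carlo approach for the two-stage case can be illustrated with the classic newsvendor problem, whose sample average approximation (SAA) has a closed form: order the critical-ratio quantile of the empirical demand distribution. A sketch with hypothetical prices and a uniform demand model:

```python
import random

def newsvendor_saa(samples, price, cost):
    """Sample average approximation for the two-stage newsvendor
    problem (zero salvage value): the SAA-optimal order quantity is
    the critical-ratio quantile of the empirical demand distribution."""
    ratio = (price - cost) / price          # critical ratio (p - c) / p
    s = sorted(samples)
    idx = min(len(s) - 1, int(ratio * len(s)))
    return s[idx]

rng = random.Random(0)
demand = [rng.uniform(50, 150) for _ in range(10000)]  # sampled scenarios
q = newsvendor_saa(demand, price=10, cost=3)
print(q)  # roughly the 0.7-quantile of Uniform(50, 150), i.e. about 120
```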

Journal ArticleDOI
TL;DR: Various objectives of reactive power planning are reviewed, and various optimization models, identified as the optimal power flow model, the security-constrained OPF model, and SCOPF with voltage-stability consideration, are discussed.
Abstract: The key to reactive power planning (RPP), or Var planning, is the optimal allocation of reactive power sources considering location and size. Traditionally, the locations for placing new Var sources were either simply estimated or directly assumed. Recent research works have presented some rigorous optimization-based methods in RPP. This paper will first review various objectives of RPP. The objectives may consider many cost functions such as variable Var cost, fixed Var cost, real power losses, and fuel cost. Also considered may be the deviation of a given voltage schedule, voltage stability margin, or even a combination of different objectives as a multi-objective model. Secondly, different constraints in RPP are discussed. These different constraints are the key to various optimization models, identified as optimal power flow (OPF) model, security-constrained OPF (SCOPF) model, and SCOPF with voltage-stability consideration. Thirdly, the optimization-based models will be categorized as conventional algorithms, intelligent searches, and fuzzy set applications. The conventional algorithms include linear programming, nonlinear programming, mixed-integer nonlinear programming, etc. The intelligent searches include simulated annealing, evolutionary algorithms, and tabu search. The fuzzy set applications in RPP address the uncertainties in objectives and constraints. Finally, this paper will conclude the discussion with a summary matrix for different objectives, models, and algorithms.

Journal ArticleDOI
TL;DR: In this paper, the authors apply the technique of rounding mathematical programs to the problem of modularity maximization, presenting two novel algorithms; the linear programming algorithm comes with an a posteriori approximation guarantee: by comparing the solution quality to the fractional solution, a bound on the available "room for improvement" can be obtained.
Abstract: In many networks, it is of great interest to identify "communities", unusually densely knit groups of individuals. Such communities often shed light on the function of the networks or underlying properties of the individuals. Recently, Newman suggested "modularity" as a natural measure of the quality of a network partitioning into communities. Since then, various algorithms have been proposed for (approximately) maximizing the modularity of the partitioning determined. In this paper, we introduce the technique of rounding mathematical programs to the problem of modularity maximization, presenting two novel algorithms. More specifically, the algorithms round solutions to linear and vector programs. Importantly, the linear programming algorithm comes with an a posteriori approximation guarantee: by comparing the solution quality to the fractional solution of the linear program, a bound on the available "room for improvement" can be obtained. The vector programming algorithm provides a similar bound for the best partition into two communities. We evaluate both algorithms using experiments on several standard test cases for network partitioning algorithms, and find that they perform comparably or better than past algorithms.
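Newman's modularity for a given partition, the quantity both rounding algorithms try to maximize, is straightforward to compute directly. A sketch on a hypothetical toy graph of two triangles joined by a bridge edge:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (l_c/m - (d_c/(2m))^2), where l_c is
    the number of intra-community edges and d_c the total degree in c."""
    m = len(edges)
    comm = {}
    for c, nodes in enumerate(communities):
        for v in nodes:
            comm[v] = c
    intra = [0] * len(communities)
    degree = [0] * len(communities)
    for u, v in edges:
        degree[comm[u]] += 1
        degree[comm[v]] += 1
        if comm[u] == comm[v]:
            intra[comm[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2
               for c in range(len(communities)))

# hypothetical toy graph: two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [[0, 1, 2], [3, 4, 5]]))  # -> 5/14, about 0.357
```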

Journal ArticleDOI
TL;DR: This work establishes the dual of the linear programming problem with trapezoidal fuzzy variables and deduces some duality results, proving that the auxiliary problem is indeed the dual of the fuzzy-variable linear programming (FVLP) problem.

Journal ArticleDOI
Jiming Peng1, Yu Wei1
TL;DR: This paper first models MSSC as a so-called 0-1 semidefinite programming (SDP) problem, and shows that this model provides a unified framework for several clustering approaches such as normalized k-cut and spectral clustering.
Abstract: One of the fundamental clustering problems is to assign $n$ points into $k$ clusters based on minimal sum-of-squared distances (MSSC), which is known to be NP-hard. In this paper, by using matrix arguments, we first model MSSC as a so-called 0-1 semidefinite programming (SDP) problem. We show that our 0-1 SDP model provides a unified framework for several clustering approaches such as normalized k-cut and spectral clustering. Moreover, the 0-1 SDP model allows us to solve the underlying problem approximately via the linear programming and SDP relaxations. Second, we consider the issue of how to extract a feasible solution of the original 0-1 SDP model from the optimal solution of the relaxed SDP problem. By using principal component analysis, we develop a rounding procedure to construct a feasible partitioning from a solution of the relaxed problem. In our rounding procedure, we need to solve a K-means clustering problem in $\Re^{k-1}$, which can be done in $O(n^{k^2-2k+2})$ time. In case of biclustering, the running time of our rounding procedure can be reduced to $O(n\log n)$. We show that our algorithm provides a 2-approximate solution to the original problem. Promising numerical results for biclustering based on our new method are reported.
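The MSSC objective that the 0-1 SDP model encodes is usually attacked in practice with Lloyd's heuristic, which only finds a local minimum (it is not the paper's SDP relaxation or rounding procedure). A sketch with a naive deterministic initialization and hypothetical points:

```python
def kmeans(points, k, iters=50):
    """Lloyd's heuristic for the MSSC objective: alternate between
    assigning points to the nearest center and recomputing centroids."""
    centers = points[:k]                    # naive deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    # final sum-of-squared distances to the nearest center (MSSC value)
    sse = sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points)
    return centers, sse

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]  # two obvious groups
print(kmeans(pts, 2))
```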

Journal ArticleDOI
TL;DR: This paper presents an extended version of the R-model for multi-criteria inventory classification that provides a more reasonable and encompassing index since it uses two sets of weights that are most favourable and least favourable for each item.

01 Jan 2007
TL;DR: In this article, the authors propose to approximate the nonlinear objective function of the problem by means of piecewise-linear functions, so that UC can be approximated by a mixed-integer linear program (MILP).
Abstract: The short-term unit commitment (UC) problem in hydrothermal power generation is a large-scale, mixed-integer nonlinear program, which is difficult to solve efficiently, especially for large-scale instances. It is possible to approximate the nonlinear objective function of the problem by means of piecewise-linear functions, so that UC can be approximated by a mixed-integer linear program (MILP); applying the available efficient general-purpose MILP solvers to the resulting formulations, good quality solutions can be obtained in a relatively short amount of time. We build on this approach, presenting a novel way of approximating the nonlinear objective function based on a recently developed class of valid inequalities for the problem, called "perspective cuts." At least for many realistic instances of a general basic formulation of UC, an MILP-based heuristic obtains comparable or slightly better solutions in less time when employing the new approach rather than the standard piecewise linearizations, while being no more difficult to implement and use. Furthermore, "dynamic" formulations, whereby the approximation is iteratively improved, provide even better results if the approximation is appropriately controlled.
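The standard piecewise linearization that the paper improves upon replaces the nonlinear cost with the interpolant through sampled breakpoints; for a convex cost the interpolant overestimates the function between breakpoints. A sketch with a hypothetical quadratic fuel-cost curve:

```python
def pwl_breakpoints(f, lo, hi, pieces):
    """Sample equally spaced breakpoints for a piecewise-linear
    approximation of a cost function, as used when casting nonlinear
    UC costs as an MILP."""
    xs = [lo + (hi - lo) * i / pieces for i in range(pieces + 1)]
    return list(zip(xs, [f(x) for x in xs]))

def pwl_eval(bps, x):
    """Evaluate the piecewise-linear interpolant through the breakpoints."""
    for (x0, y0), (x1, y1) in zip(bps, bps[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside breakpoint range")

cost = lambda p: 0.01 * p * p + 2 * p + 5   # hypothetical quadratic fuel cost
bps = pwl_breakpoints(cost, 0, 100, pieces=4)
# between breakpoints the interpolant overestimates the convex cost
print(pwl_eval(bps, 60), cost(60))
```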

Proceedings ArticleDOI
01 May 2007
TL;DR: Simulation results show that solutions obtained by this algorithm are very close to lower bounds obtained via relaxation, thus suggesting that the solution produced by the algorithm is near-optimal.
Abstract: Software defined radio (SDR) capitalizes on advances in signal processing and radio technology and is capable of reconfiguring RF and switching to desired frequency bands. It is a frequency-agile data communication device that is vastly more powerful than recently proposed multi-channel multi-radio (MC-MR) technology. In this paper, we investigate the important problem of multi-hop networking with SDR nodes. In such a network, each node has a pool of frequency bands (not necessarily of equal size) that can be used for communication. The uneven size of bands in the radio spectrum prompts the need for further division into sub-bands for optimal spectrum sharing. We characterize behaviors and constraints for such a multi-hop SDR network from multiple layers, including modeling of spectrum sharing and sub-band division, scheduling and interference constraints, and flow routing. We give a formal mathematical formulation with the objective of minimizing the required network-wide radio spectrum resource for a set of user sessions. Since such a problem formulation falls into mixed integer non-linear programming (MINLP), which is NP-hard in general, we develop a lower bound for the objective by relaxing the integer variables and using linearization. Subsequently, we develop a near-optimal algorithm for this MINLP problem. This algorithm is based on a novel sequential fixing procedure, where the integer variables are determined iteratively via a sequence of linear programs. Simulation results show that solutions obtained by this algorithm are very close to the lower bounds obtained via relaxation, thus suggesting that the solution produced by the algorithm is near-optimal.

Proceedings Article
19 Jul 2007
TL;DR: Convex BP is defined as the class of BP algorithms based on a convex free energy approximation; this class includes ordinary BP on single-cycle graphs, tree-reweighted BP and many other BP variants, and fixed points of convex max-product BP provably give the MAP solution when there are no ties.
Abstract: Finding the most probable assignment (MAP) in a general graphical model is known to be NP-hard, but good approximations have been attained with max-product belief propagation (BP) and its variants. In particular, it is known that using BP on a single-cycle graph or tree-reweighted BP on an arbitrary graph will give the MAP solution if the beliefs have no ties. In this paper we extend the setting under which BP can be used to provably extract the MAP. We define Convex BP as BP algorithms based on a convex free energy approximation and show that this class includes ordinary BP on single-cycle graphs, tree-reweighted BP and many other BP variants. We show that when there are no ties, fixed points of convex max-product BP will provably give the MAP solution. We also show that convex sum-product BP at sufficiently small temperatures can be used to solve linear programs that arise from relaxing the MAP problem. Finally, we derive a novel condition that allows us to derive the MAP solution even if some of the convex BP beliefs have ties. In experiments, we show that our theorems allow us to find the MAP in many real-world instances of graphical models where exact inference using a junction tree is impossible.

Proceedings ArticleDOI
10 Nov 2007
TL;DR: A system is developed that determines a bound on the energy savings for an application; it is applied to three scientific programs, two of which, particle simulation and UMT2K, exhibit load imbalance.
Abstract: Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.
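With only two CPU gears, the underlying LP has a single free variable and solves in closed form: run as much of the work as possible at the frequency that is cheaper per unit of work without missing the deadline. A sketch with hypothetical cluster numbers, assuming the deadline is achievable at the high gear and the low gear is the more energy-efficient one:

```python
def min_energy_schedule(work, deadline, freqs, powers):
    """Tiny two-gear version of the LP energy bound: split a fixed
    amount of work between two CPU frequencies to minimize energy
    while finishing within the deadline. Assumes the low gear uses
    less energy per unit of work, so energy is minimized by
    maximizing the time spent in the low gear."""
    (f_lo, f_hi), (p_lo, p_hi) = freqs, powers
    # Constraints: f_lo*t_lo + f_hi*t_hi = work and t_lo + t_hi <= deadline,
    # which caps t_lo at (f_hi*deadline - work) / (f_hi - f_lo).
    t_lo = min(work / f_lo, (f_hi * deadline - work) / (f_hi - f_lo))
    t_lo = max(0.0, t_lo)
    t_hi = (work - f_lo * t_lo) / f_hi
    return p_lo * t_lo + p_hi * t_hi

# hypothetical cluster numbers: 1 GHz @ 10 W versus 2 GHz @ 25 W
print(min_energy_schedule(work=3.0, deadline=2.0,
                          freqs=(1.0, 2.0), powers=(10.0, 25.0)))  # -> 35.0
```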

Journal ArticleDOI
TL;DR: This paper provides an illustrated overview of the state of the art of Interval Programming in the context of multiple objective linear programming models.

Journal ArticleDOI
TL;DR: A procedure for solving multilevel programming problems in a large hierarchical decentralized organization through a linear fuzzy goal programming approach, which achieves the highest degree of each of the membership goals by minimizing the negative deviational variables.

Journal ArticleDOI
TL;DR: This paper formulates a model for finding a minimum cost routing in a network for a heterogeneous fleet of ships engaged in pickup and delivery of several liquid bulk products and shows that the model can be reformulated as an equivalent mixed-integer linear program with special structure.

Journal ArticleDOI
Yinyu Ye1
TL;DR: A continuous path leading to the set of the Arrow–Debreu equilibrium, similar to the central path developed for linear programming interior-point methods, is presented; the path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem.
Abstract: We present polynomial-time interior-point algorithms for solving the Fisher and Arrow–Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of O(n^4 log(1/ε)) for computing an ε-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an exact solution is O(n^4 L), which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound O(n^8 log(1/ε)) for approximating the two problems using other methods. The key ingredient to derive these results is to show that these problems admit convex optimization formulations, efficient barrier functions and fast rounding techniques. We also present a continuous path leading to the set of the Arrow–Debreu equilibrium, similar to the central path developed for linear programming interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some primal-dual structure for the application of the Newton-based path-following method.