
Showing papers in "Computational Optimization and Applications in 2010"


Journal ArticleDOI
TL;DR: Application of gradient projection (GP) algorithms to the dual formulation of total variation models and a sequential quadratic programming (SQP) approach that takes account of the curvature of the boundary of the dual feasible set are proposed.
Abstract: Image restoration models based on total variation (TV) have become popular since their introduction by Rudin, Osher, and Fatemi (ROF) in 1992. The dual formulation of this model has a quadratic objective with separable constraints, making projections onto the feasible set easy to compute. This paper proposes application of gradient projection (GP) algorithms to the dual formulation. We test variants of GP with different step length selection and line search strategies, including techniques based on the Barzilai-Borwein method. Global convergence can in some cases be proved by appealing to existing theory. We also propose a sequential quadratic programming (SQP) approach that takes account of the curvature of the boundary of the dual feasible set. Computational experiments show that the proposed approaches perform well in a wide range of applications and that some are significantly faster than previously proposed methods, particularly when only modest accuracy in the solution is required.
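As a rough illustration of the gradient projection machinery discussed in the abstract above, the following Python sketch applies projected gradient steps with a Barzilai-Borwein step length to a generic box-constrained quadratic. It is a minimal sketch with assumed, illustrative names and data: the dual ROF model actually uses pointwise disk constraints (whose projection rescales each dual 2-vector), and the paper's line search strategies and SQP variant are not reproduced.

# Gradient projection with a Barzilai-Borwein step on a box-constrained
# quadratic  min 0.5*x'Ax - b'x  s.t.  lo <= x <= hi.
# Generic sketch only; the dual ROF model has pointwise disk constraints.
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def gp_bb(A, b, lo, hi, x0, max_iter=500, tol=1e-8):
    x = project_box(x0, lo, hi)
    g = A @ x - b                     # gradient of the quadratic
    alpha = 1.0                       # initial step length
    for _ in range(max_iter):
        x_new = project_box(x - alpha * g, lo, hi)
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = A @ x_new - b
        y = g_new - g
        # BB1 step length, safeguarded to stay in a reasonable range
        alpha = np.dot(s, s) / max(np.dot(s, y), 1e-12)
        alpha = min(max(alpha, 1e-6), 1e6)
        x, g = x_new, g_new
    return x

# tiny usage example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gp_bb(A, b, lo=np.zeros(2), hi=np.ones(2), x0=np.zeros(2)))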

182 citations


Journal ArticleDOI
TL;DR: The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with overrelaxation for symmetric convex linear complementarity problems and is proved to be convergent under fairly standard assumptions.
Abstract: Aiming at a fast and robust simulation of large multibody systems with contacts and friction, this work presents a novel method for solving large cone complementarity problems by means of a fixed-point iteration. The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with overrelaxation for symmetric convex linear complementarity problems. The method is proved to be convergent under fairly standard assumptions and is shown by our tests to scale well up to 500,000 contact points and more than two million unknowns.
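For the classical symmetric linear complementarity problem, the fixed-point iteration described above reduces to projected Gauss-Seidel with over-relaxation (PSOR). The sketch below shows only that special case; the paper's extension to cone complementarity problems with friction cones and its large-scale implementation are not reproduced, and the toy data are illustrative.

# Projected SOR (Gauss-Seidel with over-relaxation) for the symmetric LCP
#   x >= 0,  Ax + b >= 0,  x'(Ax + b) = 0.
import numpy as np

def projected_sor(A, b, omega=1.0, max_iter=1000, tol=1e-10):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            r_i = A[i] @ x + b[i]                     # residual using the latest values
            x[i] = max(0.0, x[i] - omega * r_i / A[i, i])
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# toy instance with solution x = (0.5, 0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
print(projected_sor(A, b, omega=1.2))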

139 citations


Journal ArticleDOI
TL;DR: The advocated method first convexifies the problem and then solves a sequence of subproblems whose solutions form a trajectory that leads to the solution; computational results illustrate how well the algorithm performs.
Abstract: One of the challenging optimization problems is determining the minimizer of a nonlinear programming problem that has binary variables. A vexing difficulty is the rate the work to solve such problems increases as the number of discrete variables increases. Any such problem with bounded discrete variables, especially binary variables, may be transformed to that of finding a global optimum of a problem in continuous variables. However, the transformed problems usually have astronomically large numbers of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show that the approach is not futile if we use smoothing techniques. The method we advocate first convexifies the problem and then solves a sequence of subproblems, whose solutions form a trajectory that leads to the solution. To illustrate how well the algorithm performs we show the computational results of applying it to problems taken from the literature and new test problems with known optimal solutions.
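The following is a generic penalty-continuation sketch of the smoothing idea: relax the binary variables to the unit box, add a concave penalty that vanishes exactly at binary points, and follow the minimizers as the penalty grows. It is only an assumed, simplified illustration; the paper's convexification and trajectory-following scheme is more elaborate, and the objective and parameters below are made up for the example.

# Continuation sketch for a binary NLP: relax x in {0,1}^n to [0,1]^n and add
# the concave penalty mu * sum x_i*(1 - x_i), which is zero exactly at binary
# points, while increasing mu along a homotopy.
import numpy as np
from scipy.optimize import minimize

def solve_binary_by_continuation(f, n, mus=(0.0, 1.0, 10.0, 100.0)):
    x = np.full(n, 0.5)                       # start at the relaxed midpoint
    bounds = [(0.0, 1.0)] * n
    for mu in mus:
        penalized = lambda x, mu=mu: f(x) + mu * np.sum(x * (1.0 - x))
        res = minimize(penalized, x, bounds=bounds, method="L-BFGS-B")
        x = res.x                             # warm start for the next mu
    return np.round(x)

# toy objective with a known binary minimizer x = (1, 0, 1)
f = lambda x: (x[0] - 0.9) ** 2 + (x[1] - 0.1) ** 2 + (x[2] - 0.8) ** 2
print(solve_binary_by_continuation(f, 3))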

133 citations


Journal ArticleDOI
TL;DR: It is shown that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods, and that the minimization of the models within a trust region provides an enhanced search step.
Abstract: The goal of this paper is to show that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods. The approach taken here is to maintain the structure of directional direct-search methods, organized around a search and a poll step, and to use the set of previously evaluated points generated during a direct-search run to build the models. The minimization of the models within a trust region provides an enhanced search step. Our numerical results show that such a procedure can lead to a significant improvement of direct search for smooth, piecewise smooth, and noisy problems.

126 citations


Journal ArticleDOI
TL;DR: This paper first investigates a smoothing function in the context of symmetric cones and shows that it is coercive under suitable assumptions, then extends two generic frameworks of smoothing algorithms to solve complementarity problems over symmetric cones and proves that the proposed algorithms are globally convergent under suitable assumptions.
Abstract: There recently has been much interest in studying optimization problems over symmetric cones. In this paper, we first investigate a smoothing function in the context of symmetric cones and show that it is coercive under suitable assumptions. We then extend two generic frameworks of smoothing algorithms to solve the complementarity problems over symmetric cones, and prove the proposed algorithms are globally convergent under suitable assumptions. We also give a specific smoothing Newton algorithm which is globally and locally quadratically convergent under suitable assumptions. The theory of Euclidean Jordan algebras is a basic tool which is extensively used in our analysis. Preliminary numerical results for second-order cone complementarity problems are reported.
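As a concrete, much-simplified illustration of the smoothing idea, the sketch below applies the smoothed Fischer-Burmeister function to a standard complementarity problem over the nonnegative orthant (the simplest symmetric cone). The paper's algorithms work over general symmetric cones via Euclidean Jordan algebras and use a Newton method; the black-box root finder and toy data here are assumptions for illustration.

# Smoothing sketch for the standard NCP  x >= 0, F(x) >= 0, x'F(x) = 0, using
# the smoothed Fischer-Burmeister function
#   phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2),
# which is smooth for mu > 0 and recovers the FB NCP-function as mu -> 0.
import numpy as np
from scipy.optimize import fsolve

def smoothed_fb(x, F, mu):
    a, b = x, F(x)
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def smoothing_path(F, x0, mus=(1.0, 1e-1, 1e-3, 1e-6)):
    x = np.asarray(x0, dtype=float)
    for mu in mus:                                   # drive the smoothing parameter to 0
        x = fsolve(lambda z: smoothed_fb(z, F, mu), x)
    return x

# toy LCP: F(x) = Mx + q with solution x = (0, 1)
M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([1.0, -1.0])
print(smoothing_path(lambda x: M @ x + q, x0=np.ones(2)))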

100 citations


Journal ArticleDOI
TL;DR: This work deals with the analysis of cone-constrained eigenvalue problems and discusses theoretical issues such as the estimation of the maximal number of eigenvalues in a cone-constrained problem.
Abstract: Equilibria in mechanics or in transportation models are not always expressed through a system of equations, but sometimes they are characterized by means of complementarity conditions involving a convex cone. This work deals with the analysis of cone-constrained eigenvalue problems. We discuss some theoretical issues like, for instance, the estimation of the maximal number of eigenvalues in a cone-constrained problem. Special attention is paid to the Paretian case. As a short addition to the theoretical part, we introduce and study two algorithms for numerically solving this type of eigenvalue problem.

89 citations


Journal ArticleDOI
TL;DR: This paper compares instantiations of Mads under different strategies to handle constraints from feasible and/or infeasible starting points on three real engineering applications.
Abstract: The class of Mesh Adaptive Direct Search (Mads) algorithms is designed for the optimization of constrained black-box problems. The purpose of this paper is to compare instantiations of Mads under different strategies to handle constraints. Intensive numerical tests are conducted from feasible and/or infeasible starting points on three real engineering applications.

88 citations


Journal ArticleDOI
TL;DR: A modified BFGS method is proposed for unconstrained optimization, and global convergence and superlinear convergence for convex functions are established under suitable assumptions.
Abstract: A modified BFGS method is proposed for unconstrained optimization. Global convergence and superlinear convergence for convex functions are established under suitable assumptions. Numerical results show that this method is interesting.
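For context, the following Python sketch shows the baseline BFGS scheme (inverse Hessian update plus an Armijo backtracking line search) that such modifications build on; the paper's specific modification of the update is not reproduced here, and the test function is illustrative.

# Standard BFGS sketch for unconstrained minimization.
import numpy as np

def bfgs(f, grad, x0, max_iter=200, tol=1e-8):
    n = len(x0)
    x = np.asarray(x0, float)
    H = np.eye(n)                                 # inverse Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                                # quasi-Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5                              # backtracking (Armijo) line search
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                            # curvature condition keeps H positive definite
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock-like test function and its gradient
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1) - 40 * x[0] * (x[1] - x[0] ** 2),
                           20 * (x[1] - x[0] ** 2)])
print(bfgs(f, grad, np.zeros(2)))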

85 citations


Journal ArticleDOI
TL;DR: The problem of managing an Agile Earth Observing Satellite consists of selecting and scheduling a subset of photographs among a set of candidate ones that satisfy imperative constraints and maximize a gain function; a tabu search algorithm is proposed to solve this NP-hard problem.
Abstract: The problem of managing an Agile Earth Observing Satellite consists of selecting and scheduling a subset of photographs among a set of candidate ones that satisfy imperative constraints and maximize a gain function. We propose a tabu search algorithm to solve this NP-hard problem. The problem is formulated as a constrained optimization problem involving stereoscopic and time-window visibility constraints, and a convex evaluation function that increases its hardness. To obtain a wide-ranging and efficient exploration of the search space, we sample it by consistent and saturated configurations. Our algorithm is also hybridized with a systematic search that uses partial enumerations. To increase solution quality, we introduce and solve a secondary problem: the minimization of the sum of the transition durations between acquisitions. Upper bounds are also calculated by a dynamic programming algorithm on a relaxed problem. The obtained results show the efficiency of our approach.

81 citations


Journal ArticleDOI
TL;DR: This paper proposes a two-phase approach that is suitable for solving CVaR minimization problems having a large number of price scenarios and provides highly-accurate near-optimal solutions with a significantly improved performance over the interior point barrier implementation of CPLEX 9.0.
Abstract: Conditional Value-at-Risk (CVaR) is a portfolio evaluation function having appealing features such as sub-additivity and convexity. Although the CVaR function is nondifferentiable, scenario-based CVaR minimization problems can be reformulated as linear programs (LPs) that afford solutions via widely-used commercial software packages. However, finding solutions through LP formulations for problems having many financial instruments and a large number of price scenarios can be time-consuming as the dimension of the problem greatly increases. In this paper, we propose a two-phase approach that is suitable for solving CVaR minimization problems having a large number of price scenarios. In the first phase, conventional differentiable optimization techniques are used while circumventing nondifferentiable points, and in the second phase, we employ a theoretically convergent, variable target value nondifferentiable optimization technique. The resultant two-phase procedure guarantees infinite convergence to optimality. As an optional third phase, we additionally perform a switchover to a simplex solver starting with a crash basis obtained from the second phase when finite convergence to an exact optimum is desired. This three-phase procedure substantially reduces the effort required in comparison with the direct use of a commercial stand-alone simplex solver (CPLEX 9.0). Moreover, the two-phase method provides highly-accurate near-optimal solutions with a significantly improved performance over the interior point barrier implementation of CPLEX 9.0 as well, especially when the number of scenarios is large. We also provide some benchmarking results on using an alternative popular proximal bundle nondifferentiable optimization technique.
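The LP reformulation mentioned above is the Rockafellar-Uryasev scenario formulation. The sketch below assembles and solves it with SciPy's linprog as a point of reference; it is a minimal sketch with assumed data, not the paper's two-phase nondifferentiable method.

# Rockafellar-Uryasev LP reformulation of scenario-based CVaR minimization:
#   min_{x, alpha, z}  alpha + 1/((1-beta)*S) * sum_s z_s
#   s.t.  z_s >= -r_s'x - alpha,  z_s >= 0,  sum(x) = 1,  x >= 0,
# where r_s is the return vector under scenario s.
import numpy as np
from scipy.optimize import linprog

def cvar_lp(returns, beta=0.95):
    S, n = returns.shape
    # variable order: x (n), alpha (1), z (S)
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1.0 - beta) * S))])
    # -r_s'x - alpha - z_s <= 0   for every scenario s
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.fun           # portfolio weights and optimal CVaR

rng = np.random.default_rng(0)
weights, cvar = cvar_lp(rng.normal(0.01, 0.05, size=(500, 4)))
print(weights, cvar)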

80 citations


Journal ArticleDOI
TL;DR: This paper formally proves the equivalence between the approximating problems and the original nonsmooth problem, and defines an effective and efficient version of the Frank-Wolfe algorithm for the minimization of concave separable functions over polyhedral sets.
Abstract: Given a nonempty polyhedral set, we consider the problem of finding a vector belonging to it and having the minimum number of nonzero components, i.e., a feasible vector with minimum zero-norm. This combinatorial optimization problem is NP-hard and arises in various fields such as machine learning, pattern recognition, and signal processing. One of the contributions of this paper is to propose two new smooth approximations of the zero-norm function, where the approximating functions are separable and concave. We first formally prove the equivalence between the approximating problems and the original nonsmooth problem. To this aim, we preliminarily state, in a general setting, theoretical conditions sufficient to guarantee the equivalence between pairs of problems. Moreover, we define an effective and efficient version of the Frank-Wolfe algorithm for the minimization of concave separable functions over polyhedral sets, in which variables that are null at an iteration are eliminated for all the following ones, with significant savings in computational time, and we prove the global convergence of the method. Finally, we report numerical results on test problems showing both the usefulness of the new concave formulations and the efficiency, in terms of computational time, of the implemented minimization algorithm.
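A minimal sketch of the overall idea follows: minimize a smooth concave separable surrogate of the zero-norm over the polyhedron with unit-step Frank-Wolfe iterations, each of which solves an LP. The surrogate used here (a standard exponential approximation) and the data are assumptions for illustration; the paper's own approximations, equivalence conditions, and variable-elimination scheme are not reproduced.

# Frank-Wolfe sketch for minimizing a concave separable surrogate of the
# zero-norm, sum_i (1 - exp(-alpha * x_i)), over {x : A x >= b, x >= 0}.
# Unit steps are taken because a concave function attains its minimum over a
# polytope at a vertex.
import numpy as np
from scipy.optimize import linprog

def concave_fw_zero_norm(A, b, alpha=5.0, max_iter=50):
    m, n = A.shape
    # feasible starting point: any vertex of {Ax >= b, x >= 0}
    x = linprog(np.ones(n), A_ub=-A, b_ub=-b, bounds=[(0, None)] * n).x
    for _ in range(max_iter):
        grad = alpha * np.exp(-alpha * x)              # gradient of the concave surrogate
        y = linprog(grad, A_ub=-A, b_ub=-b, bounds=[(0, None)] * n).x
        if np.allclose(y, x, atol=1e-9):
            break
        x = y                                          # unit step to the LP vertex
    return x

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])  # sum(x) >= 1, x >= 0
print(concave_fw_zero_norm(A, b))                      # expect a 1-sparse solution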

Journal ArticleDOI
TL;DR: It is established that a greedy approximation with an integer-valued polymatroid potential function f is an H(γ)-approximation of the minimum submodular cover problem with linear cost, where γ is the maximum value of f over all singletons and H(γ) is the γ-th harmonic number.
Abstract: It is well-known that a greedy approximation with an integer-valued polymatroid potential function f is an H(γ)-approximation of the minimum submodular cover problem with linear cost, where γ is the maximum value of f over all singletons and H(γ) is the γ-th harmonic number. In this paper, we establish similar results for the minimum submodular cover problem with a submodular cost (possibly nonlinear) and/or fractional submodular potential function f.
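For reference, the classical greedy scheme to which the H(γ) guarantee applies can be sketched as follows; the instance and the unit costs are illustrative, and the paper's extensions to submodular costs and fractional potentials are not covered.

# Greedy sketch for minimum submodular cover with linear cost: repeatedly add
# the element with the best gain-to-cost ratio until the potential f reaches
# its maximum value f(ground_set).
def greedy_submodular_cover(ground_set, f, cost):
    covered = set()
    solution = []
    target = f(ground_set)
    while f(covered) < target:
        best = max(
            (e for e in ground_set if e not in covered),
            key=lambda e: (f(covered | {e}) - f(covered)) / cost(e),
        )
        covered.add(best)
        solution.append(best)
    return solution

# toy set-cover instance: f(A) = size of the union of the chosen subsets
subsets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda A: len(set().union(*(subsets[e] for e in A))) if A else 0
print(greedy_submodular_cover(set(subsets), f, cost=lambda e: 1.0))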

Journal ArticleDOI
TL;DR: This work proposes a (block) coordinate gradient descent method for SVM training posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint, and suggests that the method can be efficient for SVM training with a nonlinear kernel.
Abstract: Support vector machines (SVMs) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar's notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QP arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernel.

Journal ArticleDOI
TL;DR: A new algorithm for solving difficult large-scale global optimization problems, inspired by the well-known DIRECT algorithm, which produces a set of points that tries to cover the most interesting regions of the feasible set but whose covering strategy suffers when the dimension of the problem increases.
Abstract: In this paper we propose a new algorithm for solving difficult large-scale global optimization problems. We draw our inspiration from the well-known DIRECT algorithm which, by exploiting the objective function behavior, produces a set of points that tries to cover the most interesting regions of the feasible set. Unfortunately, it is well-known that this strategy suffers when the dimension of the problem increases. As a first step we define a multi-start algorithm using DIRECT as a deterministic generator of starting points. Then, the new algorithm consists in repeatedly applying the previous multi-start algorithm on suitable modifications of the variable space that exploit the information gained during the optimization process. The efficiency of the new algorithm is demonstrated by extensive numerical experimentation involving both standard test problems and the optimization of the Morse potential of molecular clusters.

Journal ArticleDOI
TL;DR: This paper investigates the use of column generation and effective solution procedures in the spirit of well-known local search metaheuristics, in which the search process is composed of two complementary stages.
Abstract: In this paper, we propose to solve large-scale multiple-choice multi-dimensional knapsack problems. We investigate the use of column generation and effective solution procedures. The method is in the spirit of well-known local search metaheuristics, in which the search process is composed of two complementary stages: (i) a rounding solution stage and (ii) a restricted exact solution procedure. The method is analyzed computationally on a set of problem instances from the literature and compared to the results reached by both the Cplex solver and a recent reactive local search. For these instances, most of which cannot be solved to proven optimality in a reasonable runtime, the proposed method improves 21 out of 27 instances.

Journal ArticleDOI
TL;DR: The mutual impact of linear algebra and optimization is discussed, focusing on interior point methods and on the iterative solution of the KKT system, with a focus on preconditioning, termination control for the inner iterations, and inertia control.
Abstract: The solution of KKT systems is ubiquitous in optimization methods and often dominates the computation time, especially when large-scale problems are considered. Thus, the effective implementation of such methods is highly dependent on the availability of effective linear algebra algorithms and software, that are able, in turn, to take into account specific needs of optimization. In this paper we discuss the mutual impact of linear algebra and optimization, focusing on interior point methods and on the iterative solution of the KKT system. Three critical issues are addressed: preconditioning, termination control for the inner iterations, and inertia control.

Journal ArticleDOI
TL;DR: A cutting plane algorithm is proposed that works in the space of the variables of the basic formulation of the Generalized Assignment Problem and whose core is an exact separation procedure for the knapsack polytopes induced by the capacity constraints.
Abstract: The Generalized Assignment Problem is a well-known NP-hard combinatorial optimization problem which consists of minimizing the assignment costs of a set of jobs to a set of machines satisfying capacity constraints. Most of the existing algorithms are of a Branch-and-Price type, with lower bounds computed through Dantzig-Wolfe reformulation and column generation. In this paper we propose a cutting plane algorithm working in the space of the variables of the basic formulation, whose core is an exact separation procedure for the knapsack polytopes induced by the capacity constraints. We show that an efficient implementation of the exact separation procedure allows us to deal with large-scale instances and to solve to optimality several previously unsolved instances.

Journal ArticleDOI
TL;DR: This work analyzes one-step direct methods for variational inequality problems, establishing convergence under paramonotonicity of the operator, an assumption that, unlike those required by previous results, does not imply uniqueness of the solution.
Abstract: We analyze one-step direct methods for variational inequality problems, establishing convergence under paramonotonicity of the operator. Previous results on the method required much more demanding assumptions, like strong or uniform monotonicity, implying uniqueness of solution, which is not the case for our approach.
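The iteration under study is the one-step projected scheme x_{k+1} = P_C(x_k - alpha_k * F(x_k)). A minimal sketch follows, with a box feasible set so the projection is a clip; the step-size rule and the paramonotonicity-based convergence analysis of the paper are not reproduced, and the toy problem is illustrative.

# One-step projection iteration for a variational inequality VI(F, C):
#   x_{k+1} = P_C(x_k - alpha_k * F(x_k)),  C a box here.
import numpy as np

def one_step_projection(F, project, x0, steps):
    x = np.asarray(x0, float)
    for alpha in steps:
        x = project(x - alpha * F(x))
    return x

# toy VI: F(x) = x - (1, 2) over the box [0, 1]^2, solution x* = (1, 1)
F = lambda x: x - np.array([1.0, 2.0])
project = lambda x: np.clip(x, 0.0, 1.0)
print(one_step_projection(F, project, x0=np.zeros(2),
                          steps=[1.0 / (k + 1) for k in range(200)]))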

Journal ArticleDOI
TL;DR: Results show that the HBMO algorithm is applicable to projects with or without resource constraints, and results obtained are promising and compare well with those of well-known heuristic approaches and gradient-based methods.
Abstract: Effective project management requires the development of a realistic plan and a clear communication of the plan from the beginning to the end of the project. The critical path method (CPM) of scheduling is the fundamental tool used to develop and interconnect project plans. Ensuring the integrity and transparency of those schedules is paramount for project success. The complex and discrete nature of the solution domain for such problems causes traditional and gradient-based methods to fail to find the optimal, or in some cases even a feasible, solution. The difficulties encountered in scheduling construction projects with resource constraints are highlighted by means of a simplified bridge construction problem and a basic masonry construction problem. The honey-bee mating optimization (HBMO) algorithm has been previously adopted to solve mathematical and engineering problems and has proven to be efficient for searching optimal solutions in large problem domains. This paper presents the HBMO algorithm for scheduling projects with both constrained and unconstrained resources. Results show that the HBMO algorithm is applicable to projects with or without resource constraints. Furthermore, results obtained are promising and compare well with those of well-known heuristic approaches and gradient-based methods.

Journal ArticleDOI
TL;DR: This research takes full advantage of the recently developed theory of strongly semismooth matrix-valued functions, which makes fast convergent numerical methods applicable to the underlying unconstrained optimization problem.
Abstract: Correlation stress testing is employed in several financial models for determining the value-at-risk (VaR) of a financial institution's portfolio. The possible lack of mathematical consistency in the target correlation matrix, which must be positive semidefinite, often causes breakdown of these models. The target matrix is obtained by fixing some of the correlations (often contained in blocks of submatrices) in the current correlation matrix while stressing the remaining ones to a certain level to reflect various stressing scenarios. The combination of fixing and stressing effects often leads to mathematical inconsistency of the target matrix. It is then natural to find the nearest correlation matrix to the target matrix with the fixed correlations unaltered. However, the number of fixed correlations can be very large, posing a computational challenge to existing methods. In this paper, we propose an unconstrained convex optimization approach by solving one or a sequence of continuously differentiable (but not twice continuously differentiable) convex optimization problems, depending on different stress patterns. This research fully takes advantage of the recently developed theory of strongly semismooth matrix-valued functions, which makes fast convergent numerical methods applicable to the underlying unconstrained optimization problem. Promising numerical results on practical data (RiskMetrics database) and randomly generated problems of larger sizes are reported.
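To make the underlying problem concrete, the sketch below uses simple alternating projections (project onto the positive semidefinite cone, then restore the unit diagonal and any prescribed fixed entries). This is a plain heuristic for illustration only, with an assumed toy target matrix; the paper instead solves an unconstrained convex dual with a fast semismooth-Newton-type method, which is what scales to many fixed correlations.

# Alternating-projections heuristic for a nearest-correlation-matrix-type
# problem with some entries held fixed.
import numpy as np

def nearest_correlation(G, fixed_mask=None, fixed_vals=None, n_iter=200):
    X = (G + G.T) / 2.0
    for _ in range(n_iter):
        # projection onto the positive semidefinite cone
        w, V = np.linalg.eigh(X)
        X = (V * np.maximum(w, 0.0)) @ V.T
        # projection onto {unit diagonal, prescribed fixed entries}
        np.fill_diagonal(X, 1.0)
        if fixed_mask is not None:
            X[fixed_mask] = fixed_vals[fixed_mask]
    return X

# a stressed target matrix that is not positive semidefinite
G = np.array([[1.0, 0.95, -0.9],
              [0.95, 1.0, 0.6],
              [-0.9, 0.6, 1.0]])
print(np.linalg.eigvalsh(nearest_correlation(G)))   # eigenvalues now >= 0 (up to round-off)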

Journal ArticleDOI
TL;DR: A Nonlinear Programming algorithm that converges to second-order stationary points that is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type.
Abstract: A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
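A minimal sketch of a PHR-type augmented Lagrangian outer loop is given below, with the inner subproblems solved by an off-the-shelf L-BFGS-B routine. This only illustrates the outer scheme with an assumed toy problem; the paper's contribution, a negative-curvature inner solver guaranteeing convergence to second-order stationary points, is not reproduced.

# Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian sketch for
#   min f(x)  s.t.  g(x) <= 0.
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, x0, rho=10.0, outer_iters=10):
    x = np.asarray(x0, float)
    lam = np.zeros(len(g(x)))
    for _ in range(outer_iters):
        # PHR augmented Lagrangian for inequality constraints g(x) <= 0
        def L(x, lam=lam, rho=rho):
            shifted = np.maximum(0.0, lam + rho * g(x))
            return f(x) + (np.sum(shifted ** 2) - np.sum(lam ** 2)) / (2.0 * rho)
        x = minimize(L, x, method="L-BFGS-B").x
        lam = np.maximum(0.0, lam + rho * g(x))   # PHR multiplier update
        if np.max(g(x)) < 1e-8:                   # (approximately) feasible: stop
            break
        rho *= 2.0                                # otherwise tighten the penalty
    return x, lam

# toy problem: min (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1 <= 2  (solution (1, 1))
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0])
print(phr_augmented_lagrangian(f, g, np.zeros(2)))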

Journal ArticleDOI
TL;DR: A semismooth Newton type method based on the reformulation of the nonsmooth system of equations involving the class of SOC complementarity functions is proposed, and the superlinear convergence is established under strict complementarity.
Abstract: In this paper, we present a detailed investigation for the properties of a one-parametric class of SOC complementarity functions, which include the globally Lipschitz continuity, strong semismoothness, and the characterization of their B-subdifferential. Moreover, for the merit functions induced by them for the second-order cone complementarity problem (SOCCP), we provide a condition for each stationary point to be a solution of the SOCCP and establish the boundedness of their level sets, by exploiting Cartesian P-properties. We also propose a semismooth Newton type method based on the reformulation of the nonsmooth system of equations involving the class of SOC complementarity functions. The global and superlinear convergence results are obtained, and among others, the superlinear convergence is established under strict complementarity. Preliminary numerical results are reported for DIMACS second-order cone programs, which confirm the favorable theoretical properties of the method.

Journal ArticleDOI
TL;DR: A new general scheme for Inexact Restoration methods for Nonlinear Programming is introduced, where after computing an inexactly restored point, the new iterate is determined in an approximate tangent affine subspace by means of a simple line search on a penalty function.
Abstract: A new general scheme for Inexact Restoration methods for Nonlinear Programming is introduced. After computing an inexactly restored point, the new iterate is determined in an approximate tangent affine subspace by means of a simple line search on a penalty function. This differs from previous methods, in which the tangent phase needs both a line search based on the objective function (or its Lagrangian) and a confirmation based on a penalty function or a filter decision scheme. Besides its simplicity the new scheme enjoys some nice theoretical properties. In particular, a key condition for the inexact restoration step could be weakened. To some extent this also enables the application of the new scheme to mathematical programs with complementarity constraints.

Journal ArticleDOI
TL;DR: This work addresses the short-term production planning and scheduling problem arising in the glass container industry, presents two mixed integer programming formulations for it, and relies on a Lagrangian decomposition based heuristic for generating good feasible solutions.
Abstract: We address the short-term production planning and scheduling problem arising in the glass container industry. A furnace melts the glass that is distributed to a set of parallel molding machines. Neither furnace nor machine idleness is allowed. The resulting multi-machine multi-item continuous setup lotsizing problem with a common resource has sequence-dependent setup times and costs. Production losses are penalized in the objective function since we deal with a capital intensive industry. We present two mixed integer programming formulations for this problem, which are reduced to a network flow type problem. The two formulations are improved by adding valid inequalities that lead to good lower bounds. We rely on a Lagrangian decomposition based heuristic for generating good feasible solutions. We report computational experiments for randomly generated instances and for real-life data on the aforementioned problem, as well as on a discrete lotsizing and scheduling version.

Journal ArticleDOI
TL;DR: Four heuristics are proposed for this hard-to-solve global optimization problem, namely a grid search procedure, an alternating method, and two evolutionary algorithms; computational experiments show that the evolutionary algorithm called UEGO_cent.SASS provides the best results.
Abstract: A chain (the leader) wants to set up a single new facility in a planar market where similar facilities of a competitor (the follower), and possibly of its own chain, are already present. The follower will react by locating another single facility after the leader locates its own facility. Fixed demand points split their demand probabilistically over all facilities in the market in proportion to their attraction to each facility, determined by the different perceived qualities of the facilities and the distances to them, through a gravitational model. Both the location and the quality (design) of the new leader’s facility are to be found. The aim is to maximize the profit obtained by the leader following the follower’s entry. Four heuristics are proposed for this hard-to-solve global optimization problem, namely, a grid search procedure, an alternating method and two evolutionary algorithms. Computational experiments show that the evolutionary algorithm called UEGO_cent.SASS provides the best results.

Journal ArticleDOI
TL;DR: Two projection-like methods for solving the generalized Nash equilibria are presented and it is shown that under certain assumptions, they are globally convergent.
Abstract: A generalized Nash game is an m-person noncooperative game in which each player's strategy depends on the rivals' strategies. Based on a quasi-variational inequality formulation for the generalized Nash game, we present two projection-like methods for solving the generalized Nash equilibria in this paper. It is shown that under certain assumptions, these methods are globally convergent. Preliminary computational experience is also reported.

Journal ArticleDOI
TL;DR: Using least squares and the canonical duality theory, the nonlinear system of m quadratic equations in n-dimensional space is first formulated as a nonconvex optimization problem, which is proved to be equivalent to a concave maximization problem in ℝ^m that can be solved easily by well-developed convex optimization techniques.
Abstract: This paper presents a canonical dual approach for solving general nonlinear algebraic systems. By using a least-squares method, the nonlinear system of m quadratic equations in n-dimensional space is first formulated as a nonconvex optimization problem. We then prove that, by the canonical duality theory developed by the second author, this nonconvex problem is equivalent to a concave maximization problem in ℝ^m, which can be solved easily by well-developed convex optimization techniques. Both existence and uniqueness of global optimal solutions are discussed, and several illustrative examples are presented.

Journal ArticleDOI
TL;DR: This work uses a specialized variant of bundle methods and shows that such an approach is related to bundle methods with inexact linearizations, and analyzes the convergence properties of two incremental-like bundle methods.
Abstract: An important field of application of non-smooth optimization refers to decomposition of large-scale or complex problems by Lagrangian duality. In this setting, the dual problem consists in maximizing a concave non-smooth function that is defined as the sum of sub-functions. The evaluation of each sub-function requires solving a specific optimization sub-problem, with specific computational complexity. Typically, some sub-functions are hard to evaluate, while others are practically straightforward. When applying a bundle method to maximize this type of dual function, the computational burden of solving sub-problems is preponderant in the whole iterative process. We propose to take full advantage of such separable structure by making a dual bundle iteration after having evaluated only a subset of the dual sub-functions, instead of all of them. This type of incremental approach has already been applied for subgradient algorithms. In this work we use instead a specialized variant of bundle methods and show that such an approach is related to bundle methods with inexact linearizations. We analyze the convergence properties of two incremental-like bundle methods. We apply the incremental approach to a generation planning problem over a horizon of one to three years. This is a large-scale stochastic program, unsolvable by a direct frontal approach. For a real-life application on the French power mix, we obtain encouraging numerical results, achieving a significant improvement in speed without losing accuracy.

Journal ArticleDOI
TL;DR: A new hybrid optimization method, combining Continuous Ant Colony System (CACS) and Tabu Search (TS) is proposed for minimization of continuous multi-minima functions, which incorporates the concepts of promising list, tabu list and tabu balls from TS into the framework of CACS.
Abstract: A new hybrid optimization method, combining Continuous Ant Colony System (CACS) and Tabu Search (TS) is proposed for minimization of continuous multi-minima functions. The new algorithm incorporates the concepts of promising list, tabu list and tabu balls from TS into the framework of CACS. This enables the resultant algorithm to avoid bad regions and to be guided toward the areas more likely to contain the global minimum. New strategies are proposed to dynamically tune the radius of the tabu balls during the execution and also to handle the variable correlations. The promising list is also used to update the pheromone distribution over the search space. The parameters of the new method are tuned based on the results obtained for a set of standard test functions. The results of the proposed scheme are also compared with those of some recent ant based and non-ant based meta-heuristics, showing improvements in terms of accuracy and efficiency.

Journal ArticleDOI
TL;DR: A posteriori error estimates are derived for the solution of a state-constrained optimization problem subject to an elliptic PDE, discretized by a finite element method and solved with an interior point method.
Abstract: In this paper we are concerned with a posteriori error estimates for the solution of a state-constrained optimization problem subject to an elliptic PDE. The solution is obtained using an interior point method combined with a finite element method for the discretization of the problem. We derive separate estimates for the error in the cost functional introduced by the interior point parameter and by the discretization of the problem. Finally, we show numerical examples to illustrate the findings for pointwise state constraints and pointwise constraints on the gradient of the state.