
Showing papers in "Mathematical Programming in 2004"


Journal ArticleDOI
TL;DR: The Affinely Adjustable Robust Counterpart (AARC) problem is shown to be, in certain important cases, equivalent to a tractable optimization problem, and in other cases, to admit a tight approximation that is tractable.
Abstract: We consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters ("non-adjustable variables"), while the others can be chosen after the realization ("adjustable variables"). We extend the Robust Optimization methodology ([1, 3-6, 9, 13, 14]) to this situation by introducing the Adjustable Robust Counterpart (ARC) associated with an LP of the above structure. The ARC is often significantly less conservative than the usual Robust Counterpart (RC); however, in most cases the ARC is computationally intractable (NP-hard). This difficulty is addressed by restricting the adjustable variables to be affine functions of the uncertain data. The ensuing Affinely Adjustable Robust Counterpart (AARC) problem is then shown to be, in certain important cases, equivalent to a tractable optimization problem (typically an LP or a semidefinite problem), and in other cases to admit a tight approximation that is tractable. The AARC approach is illustrated by applying it to a multi-stage inventory management problem.

1,407 citations
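As a toy illustration of the adjustable idea (not the paper's model; all numbers are invented), the sketch below compares a static robust ordering policy with an affine decision rule q2 = y0 + y1*d1 on a two-period inventory problem with box demand uncertainty, optimizing both by coarse grid search over the vertices of the box:

```python
# Toy 2-period inventory: order q1 before demand d1 is seen, q2 after.
# Demands lie in [1, 3]; worst case is attained on box vertices {1, 3}^2.
# ORDER/HOLD/BACK are invented unit costs for ordering, holding, backlog.
ORDER, HOLD, BACK = 1.0, 0.5, 2.0

def cost(q1, q2, d1, d2):
    inv1 = q1 - d1
    inv2 = inv1 + q2 - d2
    stage = lambda s: HOLD * s if s >= 0 else -BACK * s
    return ORDER * (q1 + q2) + stage(inv1) + stage(inv2)

def worst_case(policy):
    # policy(d1) -> (q1, q2); q1 is here-and-now, q2 may react to d1
    return max(cost(*policy(d1), d1, d2) for d1 in (1, 3) for d2 in (1, 3))

grid = [0.5 * k for k in range(13)]   # candidate order sizes in [0, 6]

# Static robust: both orders fixed before any demand is revealed.
static = min(worst_case(lambda d1, a=a, b=b: (a, b))
             for a in grid for b in grid)

# Affinely adjustable: q2 = y0 + y1*d1 (clipped at 0) reacts to d1.
affine = min(worst_case(lambda d1, a=a, y0=y0, y1=y1: (a, max(0.0, y0 + y1 * d1)))
             for a in grid for y0 in grid
             for y1 in [0.25 * k for k in range(9)])
```

Because the affine family contains every constant second-stage policy (y1 = 0), its worst-case cost can never exceed the static one; the gap is what adjustability buys.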


Journal ArticleDOI
TL;DR: In this paper, the authors consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters.
Abstract: We consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters (`...

909 citations


Journal ArticleDOI
TL;DR: A new branch-and-cut algorithm for the capacitated vehicle routing problem (CVRP) is presented that uses a variety of cutting planes, including capacity, framed capacity, generalized capacity, strengthened comb, multistar, partial multistar, extended hypotour inequalities, and classical Gomory mixed-integer cuts.
Abstract: We present a new branch-and-cut algorithm for the capacitated vehicle routing problem (CVRP). The algorithm uses a variety of cutting planes, including capacity, framed capacity, generalized capacity, strengthened comb, multistar, partial multistar, extended hypotour inequalities, and classical Gomory mixed-integer cuts. For each of these classes of inequalities we describe our separation algorithms in detail. Also we describe the other important ingredients of our branch-and-cut algorithm, such as the branching rules, the node selection strategy, and the cut pool management. Computational results, for a large number of instances, show that the new algorithm is competitive. In particular, we solve three instances (B-n50-k8, B-n66-k9 and B-n78-k10) of Augerat to optimality for the first time.

593 citations
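The capacity inequalities mentioned above can be separated exactly by enumeration on tiny instances. The sketch below (a minimal illustration, not the paper's separation routines) checks every customer subset S against the rounded-capacity cut x(delta(S)) >= 2*ceil(d(S)/Q):

```python
import itertools, math

def separate_capacity(n_cust, demand, Q, xval):
    """Enumerate rounded-capacity cuts violated by a fractional point.
    Depot is node 0; xval maps frozenset({i, j}) -> LP value of edge (i, j)."""
    cuts = []
    nodes = range(1, n_cust + 1)
    for r in range(1, n_cust + 1):
        for S in itertools.combinations(nodes, r):
            S = set(S)
            rhs = 2 * math.ceil(sum(demand[i] for i in S) / Q)
            # x(delta(S)): total LP value on edges with exactly one end in S
            lhs = sum(v for e, v in xval.items() if len(e & S) == 1)
            if lhs < rhs - 1e-9:
                cuts.append((S, lhs, rhs))
    return cuts

# Tiny invented instance: 3 customers, unit demands, capacity 2; the
# fractional point below under-covers the cut around {1, 2, 3}.
x = {frozenset({0, 1}): 1.0, frozenset({0, 3}): 1.0,
     frozenset({1, 2}): 1.0, frozenset({2, 3}): 1.0}
cuts = separate_capacity(3, {1: 1, 2: 1, 3: 1}, 2, x)
```

Real codes replace the exponential enumeration with the heuristic and exact separation algorithms the paper describes; the inequality being checked is the same.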


Journal ArticleDOI
TL;DR: The development of an efficient solution strategy for obtaining global optima of continuous, integer, and mixed-integer nonlinear programs is addressed and novel relaxation schemes, range reduction tests, and branching strategies are developed which are incorporated into the prototypical branch-and-bound algorithm.
Abstract: This work addresses the development of an efficient solution strategy for obtaining global optima of continuous, integer, and mixed-integer nonlinear programs. Towards this end, we develop novel relaxation schemes, range reduction tests, and branching strategies which we incorporate into the prototypical branch-and-bound algorithm. In the theoretical/algorithmic part of the paper, we begin by developing novel strategies for constructing linear relaxations of mixed-integer nonlinear programs and prove that these relaxations enjoy quadratic convergence properties. We then use Lagrangian/linear programming duality to develop a unifying theory of domain reduction strategies as a consequence of which we derive many range reduction strategies currently used in nonlinear programming and integer linear programming. This theory leads to new range reduction schemes, including a learning heuristic that improves initial branching decisions by relaying data across siblings in a branch-and-bound tree. Finally, we incorporate these relaxation and reduction strategies in a branch-and-bound algorithm that incorporates branching strategies that guarantee finiteness for certain classes of continuous global optimization problems. In the computational part of the paper, we describe our implementation discussing, wherever appropriate, the use of suitable data structures and associated algorithms. We present computational experience with benchmark separable concave quadratic programs, fractional 0–1 programs, and mixed-integer nonlinear programs from applications in synthesis of chemical processes, engineering design, just-in-time manufacturing, and molecular design.

579 citations
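One of the simplest range reduction tests of this flavor is feasibility-based bound tightening on a single linear constraint. The sketch below (a generic illustration, not the paper's schemes) tightens each variable's bounds using the interval minimum of the remaining terms:

```python
# Feasibility-based range reduction on  sum_j a_j x_j <= b,
# with lo_j <= x_j <= up_j: variable x_k is bounded by assuming the
# other terms take their smallest feasible contribution.

def tighten(a, b, lo, up):
    """One pass of interval-based bound tightening; returns new bounds."""
    lo, up = lo[:], up[:]
    for k, ak in enumerate(a):
        if ak == 0:
            continue
        # smallest possible value of the remaining terms
        rest = sum(aj * (lo[j] if aj > 0 else up[j])
                   for j, aj in enumerate(a) if j != k)
        if ak > 0:
            up[k] = min(up[k], (b - rest) / ak)
        else:
            lo[k] = max(lo[k], (b - rest) / ak)
    return lo, up

# x1 + 2*x2 <= 4 with x in [0, 10]^2 tightens to x1 <= 4, x2 <= 2.
lo, up = tighten([1.0, 2.0], 4.0, [0.0, 0.0], [10.0, 10.0])
```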


Journal ArticleDOI
TL;DR: The structure of the value function of the second-stage integer problem is exploited to develop a novel global optimization algorithm that avoids explicit enumeration of the search space while guaranteeing finite termination.
Abstract: This paper addresses a general class of two-stage stochastic programs with integer recourse and discrete distributions. We exploit the structure of the value function of the second-stage integer problem to develop a novel global optimization algorithm. The proposed scheme departs from those in the current literature in that it avoids explicit enumeration of the search space while guaranteeing finite termination. Computational experiments on standard test problems indicate superior performance of the proposed algorithm in comparison to those in the existing literature.

266 citations


Journal ArticleDOI
TL;DR: In this paper, the filter technique is used to globalize a primal-dual interior-point algorithm for nonlinear programming, and global convergence to first-order critical points is proved without the use of merit functions and penalty parameters.
Abstract: In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, whose sizes are controlled by a trust-region-type parameter. Each entry in the filter is a pair of coordinates: one resulting from feasibility and centrality, and associated with the normal step; the other resulting from optimality (complementarity and duality), and related to the tangential step. Global convergence to first-order critical points is proved for the new primal-dual interior-point filter algorithm.

198 citations
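The core filter mechanism is easy to state in code: keep a list of (theta, f) pairs (infeasibility measure, objective) and accept a trial point only if no stored pair dominates it. This minimal sketch uses the usual envelope margins beta and gamma with invented values; it illustrates the bookkeeping, not the paper's algorithm:

```python
# A (theta, f) filter: theta measures infeasibility, f optimality.
# A trial pair is acceptable if, against every stored entry, it either
# sufficiently reduces infeasibility or sufficiently reduces f.

class Filter:
    def __init__(self, beta=0.99, gamma=0.01):
        self.entries, self.beta, self.gamma = [], beta, gamma

    def acceptable(self, theta, f):
        return all(theta <= self.beta * th or f <= fe - self.gamma * th
                   for th, fe in self.entries)

    def add(self, theta, f):
        # drop entries the new pair dominates, then insert it
        self.entries = [(th, fe) for th, fe in self.entries
                        if th < theta or fe < f]
        self.entries.append((theta, f))

flt = Filter()
flt.add(1.0, 5.0)
ok = flt.acceptable(0.5, 10.0)   # much more feasible: acceptable
bad = flt.acceptable(1.0, 5.0)   # dominated by the existing entry
```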


Journal ArticleDOI
TL;DR: It is shown how, using directed rounding and interval arithmetic, cheap pre- and postprocessing of the linear programs arising in a branch-and-cut framework can guarantee that no solution is lost, at least for mixed-integer programs in which all variables can be bounded rigorously by bounds of reasonable size.
Abstract: Current mixed-integer linear programming solvers are based on linear programming routines that use floating-point arithmetic. Occasionally, this leads to wrong solutions, even for problems where all coefficients and all solution components are small integers. An example is given where many state-of-the-art MILP solvers fail. It is then shown how, using directed rounding and interval arithmetic, cheap pre- and postprocessing of the linear programs arising in a branch-and-cut framework can guarantee that no solution is lost, at least for mixed-integer programs in which all variables can be bounded rigorously by bounds of reasonable size.

180 citations
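The directed-rounding idea can be mimicked in software by nudging each intermediate result one ulp outward. A minimal sketch (assuming the platform rounds to nearest, so one ulp of slack per operation suffices) computes a rigorous lower bound on a dot product, the kind of safe bound the pre- and postprocessing relies on:

```python
import math

# Rigorous lower bound on a dot product c.x: after each floating-point
# operation, push the running value one ulp downward, so the true real
# result can never fall below the computed bound (assuming the default
# round-to-nearest mode, whose error is at most half an ulp).

def down(v):
    return math.nextafter(v, -math.inf)

def dot_lower(c, x):
    s = 0.0
    for ci, xi in zip(c, x):
        s = down(s + down(ci * xi))
    return s

c = [0.1, 0.2, 0.3]
x = [1.0, 1.0, 1.0]
lb = dot_lower(c, x)   # guaranteed <= 0.6, and very close to it
```

Interval libraries switch the hardware rounding mode instead of calling nextafter, which is faster but otherwise plays the same role.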


Journal ArticleDOI
TL;DR: This paper addresses two second-best toll pricing problems, one with fixed and the other with elastic travel demands, as mathematical programs with equilibrium constraints, and several equivalent nonlinear programming formulations for the two problems are discussed.
Abstract: This paper addresses two second-best toll pricing problems, one with fixed and the other with elastic travel demands, as mathematical programs with equilibrium constraints. Several equivalent nonlinear programming formulations for the two problems are discussed. One formulation leads to properties that are of interest to transportation economists. Another produces an algorithm that is capable of solving large problems and easy to implement with existing software for linear and nonlinear programming problems. Numerical results using transportation networks from the literature are also presented.

152 citations


Journal ArticleDOI
TL;DR: This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza and incorporates a trust-region constraint.
Abstract: This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the l1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program. The EQP incorporates a trust-region constraint and is solved (inexactly) by means of a projected conjugate gradient method. Numerical experiments are presented illustrating the performance of the algorithm on the CUTEr [1, 15] test set.

129 citations
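The projected conjugate gradient method used for the EQP phase can be sketched with a dense orthogonal projector onto the null space of the constraint matrix. Practical codes use factorizations rather than the explicit projector; this is a minimal illustration on a tiny equality-constrained QP:

```python
import numpy as np

# Projected CG for  min 0.5 x'Gx + c'x  s.t.  Ax = 0:
# run standard CG, but keep all residuals in the null space of A via
# the orthogonal projector P = I - A'(AA')^{-1}A.

def projected_cg(G, c, A, tol=1e-10, maxit=100):
    n = G.shape[0]
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
    x = np.zeros(n)
    g = P @ (G @ x + c)          # projected residual
    d = -g
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        Gd = G @ d
        alpha = (g @ g) / (d @ Gd)
        x = x + alpha * d
        g_new = g + alpha * (P @ Gd)
        beta = (g_new @ g_new) / (g @ g)
        g, d = g_new, -g_new + beta * d
    return x

G = np.diag([2.0, 2.0, 2.0])
c = np.array([-1.0, 0.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]])
xstar = projected_cg(G, c, A)    # -> [0.5, 0, -0.5], feasible: sums to 0
```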


Journal ArticleDOI
TL;DR: A method is presented for updating all the coefficients of the quadratic Lagrange functions of the current interpolation problem in O((m+n)²) operations, which allows the model to be updated too; the method has a useful stability property that is investigated in some numerical experiments.
Abstract: Quadratic models of objective functions are highly useful in many optimization algorithms. They are updated regularly to include new information about the objective function, such as the difference between two gradient vectors. We consider the case, however, when each model interpolates some function values, so an update is required when a new function value replaces an old one. We let the number of interpolation conditions, m say, be such that there is freedom in each new quadratic model that is taken up by minimizing the Frobenius norm of the second derivative matrix of the change to the model. This variational problem is expressed as the solution of an (m+n+1)×(m+n+1) system of linear equations, where n is the number of variables of the objective function. Further, the inverse of the matrix of the system provides the coefficients of quadratic Lagrange functions of the current interpolation problem. A method is presented for updating all these coefficients in O((m+n)²) operations, which allows the model to be updated too. An extension to the method is also described that suppresses the constant terms of the Lagrange functions. These techniques have a useful stability property that is investigated in some numerical experiments.

126 citations
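The variational problem above can be reproduced in a few lines: the KKT conditions give a Hessian of the form G = sum_j lam_j y_j y_j' and a symmetric (m+n+1)×(m+n+1) system with A_ij = 0.5*(y_i'y_j)^2. The sketch below solves that system directly for one model (the paper's contribution is the O((m+n)²) update of the system's inverse, which this illustration does not attempt):

```python
import numpy as np

# Minimum-Frobenius-norm quadratic interpolation: among all quadratics
#   m(x) = c + g'x + 0.5 x'Gx  with  m(y_i) = f_i,
# pick the one minimizing ||G||_F.

def min_frob_model(Y, f):
    m, n = Y.shape
    A = 0.5 * (Y @ Y.T) ** 2                 # A_ij = 0.5 (y_i'y_j)^2
    X = np.hstack([np.ones((m, 1)), Y])      # rows (1, y_i')
    K = np.block([[A, X], [X.T, np.zeros((n + 1, n + 1))]])
    sol = np.linalg.solve(K, np.concatenate([f, np.zeros(n + 1)]))
    lam, c, g = sol[:m], sol[m], sol[m + 1:]
    G = sum(l * np.outer(y, y) for l, y in zip(lam, Y))
    return c, g, G

# Interpolate f(x) = x1^2 + x2 at five points in the plane (m=5 < 6,
# so the Frobenius-norm criterion picks one of many interpolants).
Y = np.array([[0., 0.], [1., 0.], [-1., 0.], [0., 1.], [1., 1.]])
fvals = np.array([y[0] ** 2 + y[1] for y in Y])
c, g, G = min_frob_model(Y, fvals)
model = lambda x: c + g @ x + 0.5 * x @ G @ x
```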


Journal ArticleDOI
TL;DR: This work shows that the BFGS method and other methods in the Broyden class, with exact line searches, may fail for non-convex objective functions.
Abstract: This work shows that the BFGS method and other methods in the Broyden class, with exact line searches, may fail for non-convex objective functions.

Journal ArticleDOI
TL;DR: A rigorous decomposition approach to solve separable mixed-integer nonlinear programs where the participating functions are nonconvex is presented and numerical results are compared with currently available algorithms for example problems, illuminating the potential benefits of the proposed algorithm.
Abstract: A rigorous decomposition approach to solve separable mixed-integer nonlinear programs where the participating functions are nonconvex is presented. The proposed algorithms consist of solving an alternating sequence of Relaxed Master Problems (mixed-integer linear program) and two nonlinear programming problems (NLPs). A sequence of valid nondecreasing lower bounds and upper bounds is generated by the algorithms which converge in a finite number of iterations. A Primal Bounding Problem is introduced, which is a convex NLP solved at each iteration to derive valid outer approximations of the nonconvex functions in the continuous space. Two decomposition algorithms are presented in this work. On finite termination, the first yields the global solution to the original nonconvex MINLP and the second finds a rigorous bound to the global solution. Convergence and optimality properties, and refinement of the algorithms for efficient implementation are presented. Finally, numerical results are compared with currently available algorithms for example problems, illuminating the potential benefits of the proposed algorithm.

Journal ArticleDOI
TL;DR: This paper shows that, under arbitrary measures for variability, the robust optimization approach might lead to suboptimal solutions to the second-stage planning problem, and proposes sufficient conditions on the variability measure to remedy this problem.
Abstract: Robust-optimization models belong to a special class of stochastic programs, where the traditional expected cost minimization objective is replaced by one that explicitly addresses cost variability. This paper explores robust optimization in the context of two-stage planning systems. We show that, under arbitrary measures for variability, the robust optimization approach might lead to suboptimal solutions to the second-stage planning problem. As a result, the variability of the second-stage costs may be underestimated, thereby defeating the intended purpose of the model. We propose sufficient conditions on the variability measure to remedy this problem. Under the proposed conditions, a robust optimization model can be efficiently solved using a variant of the L-shaped decomposition algorithm for traditional stochastic linear programs. We apply the proposed framework to standard stochastic-programming test problems and to an application that arises in auctioning excess electric power.

Journal ArticleDOI
TL;DR: Preprocessing techniques for (not-necessarily convex) quadratic programs are discussed, and numerical results indicate the potential of the resulting code for both linear and quadratic problems; insisting that bounds on the variables in the reduced problem be as tight as possible, rather than allowing some slack, is shown to be numerically significant.
Abstract: Techniques for the preprocessing of (not-necessarily convex) quadratic programs are discussed. Most of the procedures extend known ones from the linear to quadratic cases, but a few new preprocessing techniques are introduced. The implementation aspects are also discussed. Numerical results are finally presented to indicate the potential of the resulting code, both for linear and quadratic problems. The impact of insisting that bounds of the variables in the reduced problem be as tight as possible rather than allowing some slack in these bounds is also shown to be numerically significant.

Journal ArticleDOI
TL;DR: A mesh independence result for generalized Newton methods is established: the continuous and the discrete Newton processes, when initialized properly, converge q-linearly at the same rate.
Abstract: For a class of semismooth operator equations a mesh independence result for generalized Newton methods is established. The main result states that the continuous and the discrete Newton process, when initialized properly, converge q-linearly with the same rate. The problem class considered in the paper includes MCP-function based reformulations of first order conditions of a class of control constrained optimal control problems for partial differential equations for which a numerical validation of the theoretical results is given.

Journal ArticleDOI
TL;DR: It is shown that this vector-valued function inherits from f the properties of continuity, (local) Lipschitz continuity, directional differentiability, Fréchet differentiability, continuous differentiability, as well as (ρ-order) semismoothness.
Abstract: Let K^n be the Lorentz/second-order cone in ℝ^n. For any function f from ℝ to ℝ, one can define a corresponding function fsoc(x) on ℝ^n by applying f to the spectral values of the spectral decomposition of x ∈ ℝ^n with respect to K^n. We show that this vector-valued function inherits from f the properties of continuity, (local) Lipschitz continuity, directional differentiability, Fréchet differentiability, continuous differentiability, as well as (ρ-order) semismoothness. These results are useful for designing and analyzing smoothing methods and nonsmooth methods for solving second-order cone programs and complementarity problems.
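The spectral definition is short to implement. The sketch below (a plain-Python illustration) decomposes x = lam1*u1 + lam2*u2 with lam_{1,2} = x0 ∓ ||xbar|| and applies f to the spectral values:

```python
import math

# Spectral decomposition on the second-order cone: x = (x0, xbar) splits
# as x = lam1*u1 + lam2*u2 with lam1 = x0 - ||xbar||, lam2 = x0 + ||xbar||
# and u_{1,2} = 0.5*(1, -/+ xbar/||xbar||); then
# fsoc(x) = f(lam1)*u1 + f(lam2)*u2.

def fsoc(f, x):
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(v * v for v in xbar))
    # any unit vector works when xbar = 0 (the decomposition is not unique)
    w = [v / nrm for v in xbar] if nrm > 0 else [1.0] + [0.0] * (len(xbar) - 1)
    lam1, lam2 = x0 - nrm, x0 + nrm
    f1, f2 = f(lam1), f(lam2)
    head = 0.5 * (f1 + f2)
    tail = [0.5 * (f2 - f1) * wi for wi in w]
    return [head] + tail

x = [3.0, 1.0, 0.0]           # spectral values 2 and 4
ident = fsoc(lambda t: t, x)  # identity reproduces x
sq = fsoc(lambda t: t * t, x) # squares the spectral values
```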

Journal ArticleDOI
TL;DR: A new splitting approach to these models is developed, together with optimality conditions and duality theory, which are used to construct special decomposition methods for optimization problems with second-order stochastic dominance constraints.
Abstract: We consider a new class of optimization problems involving stochastic dominance constraints of second order. We develop a new splitting approach to these models, optimality conditions and duality theory. These results are used to construct special decomposition methods.

Journal ArticleDOI
TL;DR: The paper illustrates how complementarity systems arise in mathematical programming by means of a number of examples of various nature, followed by a brief survey of the results that are available concerning existence, uniqueness, and generation of solutions.
Abstract: Complementarity systems consist of ordinary differential equations coupled to complementarity conditions. They form a class of nonsmooth dynamical systems that is of use in mechanical and electrical engineering as well as in optimization and in other fields. The paper illustrates how complementarity systems arise in mathematical programming by means of a number of examples of various nature. This is followed by a brief survey of the results that are available concerning existence, uniqueness, and generation of solutions. The emphasis in this paper is on linear complementarity systems.

Journal ArticleDOI
TL;DR: It is shown that the modified trust-region filter-SQP method has the same global convergence properties as the original algorithm in [8], and the original trust-region SQP steps can be used without an additional second-order correction.
Abstract: Transition to superlinear local convergence is shown for a modified version of the trust-region filter-SQP method for nonlinear programming introduced by Fletcher, Leyffer, and Toint [8]. Hereby, the original trust-region SQP-steps can be used without an additional second order correction. The main modification consists in using the Lagrangian function value instead of the objective function value in the filter together with an appropriate infeasibility measure. Moreover, it is shown that the modified trust-region filter-SQP method has the same global convergence properties as the original algorithm in [8].

Journal ArticleDOI
TL;DR: Improved approximation algorithms are developed for column-restricted packing integer programs, and additional approximation algorithms for disjoint-path problems are presented that are simple to implement and achieve good performance when the input has a special structure; the study is motivated by the disjoint-paths applications.
Abstract: In a packing integer program, we are given a matrix $A$ and column vectors $b,c$ with nonnegative entries. We seek a vector $x$ of nonnegative integers, which maximizes $c^{T}x,$ subject to $Ax \leq b.$ The edge and vertex-disjoint path problems together with their unsplittable flow generalization are NP-hard problems with a multitude of applications in areas such as routing, scheduling and bin packing. These two categories of problems are known to be conceptually related, but this connection has largely been ignored in terms of approximation algorithms. We explore the topic of approximating disjoint-path problems using polynomial-size packing integer programs. Motivated by the disjoint paths applications, we introduce the study of a class of packing integer programs, called column-restricted. We develop improved approximation algorithms for column-restricted programs, a result that we believe is of independent interest. Additional approximation algorithms for disjoint-paths are presented that are simple to implement and achieve good performance when the input has a special structure.
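A packing integer program in this form is easy to state and attack heuristically. The toy below (a naive greedy sketch, not one of the paper's algorithms) treats two unit-capacity rows as edges and three columns as candidate paths competing for them:

```python
# A packing integer program: max c'x s.t. Ax <= b, x >= 0 integer.
# Greedy sketch: repeatedly raise the most profitable variable whose
# column still fits within every row's remaining capacity.

def greedy_packing(A, b, c):
    n = len(c)
    x = [0] * n
    slack = b[:]
    improved = True
    while improved:
        improved = False
        for j in sorted(range(n), key=lambda j: -c[j]):
            col = [A[i][j] for i in range(len(b))]
            if all(col[i] <= slack[i] for i in range(len(b))):
                x[j] += 1
                slack = [slack[i] - col[i] for i in range(len(b))]
                improved = True
                break
    return x

# Two unit-capacity rows model two edges; three "paths" compete:
# path 2 uses both edges, paths 1 and 3 are edge-disjoint.
A = [[1, 1, 0],
     [0, 1, 1]]
x = greedy_packing(A, [1, 1], [3, 2, 3])   # picks the disjoint pair
```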

Journal ArticleDOI
TL;DR: This work considers parametric families of constrained problems in mathematical programming and conducts a local sensitivity analysis for multivalued solution maps and estimates are computed for the coderivative of the stationary point multifunction associated with a general parametric optimization model.
Abstract: We consider parametric families of constrained problems in mathematical programming and conduct a local sensitivity analysis for multivalued solution maps. Coderivatives of set-valued mappings are our basic tool to analyze the parametric sensitivity of either stationary points or stationary point-multiplier pairs associated with parameterized optimization problems. An implicit mapping theorem for coderivatives is one key to this analysis for either of these objects, and in addition, a partial coderivative rule is essential for the analysis of stationary points. We develop general results along both of these lines and apply them to study the parametric sensitivity of stationary points alone, as well as stationary point-multiplier pairs. Estimates are computed for the coderivative of the stationary point multifunction associated with a general parametric optimization model, and these estimates are refined and augmented by estimates for the coderivative of the stationary point-multiplier multifunction in the case when the constraints are representable in a special composite form. When combined with existing coderivative formulas, our estimates are entirely computable in terms of the original data of the problem.

Journal ArticleDOI
TL;DR: This paper derives a constant factor approximation algorithm for the k-Steiner tree problem using ideas introduced by Jain and Vazirani via a Lagrangean relaxation technique together with the primal-dual method for approximation algorithms.
Abstract: Garg [10] gives two approximation algorithms for the minimum-cost tree spanning k vertices in an undirected graph. Recently Jain and Vazirani [15] discovered primal-dual approximation algorithms for the metric uncapacitated facility location and k-median problems. In this paper we show how Garg’s algorithms can be explained simply with ideas introduced by Jain and Vazirani, in particular via a Lagrangean relaxation technique together with the primal-dual method for approximation algorithms. We also derive a constant factor approximation algorithm for the k-Steiner tree problem using these ideas, and point out the common features of these problems that allow them to be solved with similar techniques.

Journal ArticleDOI
TL;DR: The combined procedure is shown to strike a balance between computing an approximately optimal solution and bounding its maximum possible suboptimality, an approach that holds promise for implementations able to offer the power and flexibility of mixed-integer linear programming models on instances of practical scale.
Abstract: Approximately 40% of all U.S. cancer cases are treated with radiation therapy. In Intensity-Modulated Radiation Therapy (IMRT) the treatment planning problem is to choose external beam angles and their corresponding intensity maps (showing how the intensity varies across a given beam) to maximize tumor dose subject to the tolerances of surrounding healthy tissues. Dose, like temperature, is a quantity defined at each point in the body, and the distribution of dose is determined by the choice of treatment parameters available to the planner. In addition to absolute dose limits in healthy tissues, some tissues have at least one dose-volume restriction that requires a fraction of its volume to not exceed a specified tighter threshold level for damage. There may also be a homogeneity limit for the tumor that restricts the allowed spread of dose across its volume. We formulate this planning problem as a mixed integer program over a coupled pair of column generation processes -- one designed to produce intensity maps, and a second specifying protected area choices for tissues under dose-volume restrictions. The combined procedure is shown to strike a balance between computing an approximately optimal solution and bounding its maximum possible suboptimality that we believe holds promise for implementations able to offer the power and flexibility of mixed-integer linear programming models on instances of practical scale.

Journal ArticleDOI
TL;DR: It is proved in this paper that the proposed smoothing Newton algorithm, which is a modified version of the Qi-Sun-Zhou algorithm, has the following convergence properties: it is well-defined and any accumulation point of the iteration sequence is a solution of the P0–LCP.
Abstract: Given M ∈ ℝ^{n×n} and q ∈ ℝ^n, the linear complementarity problem (LCP) is to find (x, s) ∈ ℝ^n × ℝ^n such that (x, s) ≥ 0, s = Mx + q, x^T s = 0. By using the Chen-Harker-Kanzow-Smale (CHKS) smoothing function, the LCP is reformulated as a system of parameterized smooth-nonsmooth equations. As a result, a smoothing Newton algorithm, which is a modified version of the Qi-Sun-Zhou algorithm [Mathematical Programming, Vol. 87, 2000, pp. 1–35], is proposed to solve the LCP with M being assumed to be a P0-matrix (P0–LCP). The proposed algorithm needs only to solve one system of linear equations and to do one line search at each iteration. It is proved in this paper that the proposed algorithm has the following convergence properties: (i) it is well-defined and any accumulation point of the iteration sequence is a solution of the P0–LCP; (ii) it generates a bounded sequence if the P0–LCP has a nonempty and bounded solution set; (iii) if an accumulation point of the iteration sequence satisfies a nonsingularity condition, which implies the P0–LCP has a unique solution, then the whole iteration sequence converges to this accumulation point sub-quadratically with a Q-rate 2–t, where t∈(0,1) is a parameter; and (iv) if M is positive semidefinite and an accumulation point of the iteration sequence satisfies a strict complementarity condition, then the whole sequence converges to the accumulation point quadratically.
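The CHKS reformulation is concrete enough to sketch: each complementarity pair is replaced by phi_mu(a, b) = a + b - sqrt((a-b)^2 + 4*mu^2), and Newton steps are taken while mu is driven to zero. The illustration below is a bare-bones variant without the line search and safeguards of the actual algorithm, run on a trivial 2×2 LCP:

```python
import numpy as np

# CHKS smoothing for the LCP: with s = Mx + q substituted, solve
#   F_i(x) = x_i + s_i - sqrt((x_i - s_i)^2 + 4 mu^2) = 0
# by Newton's method while halving mu each iteration.

def chks_newton(M, q, mu=1.0, iters=40):
    n = len(q)
    x = np.ones(n)
    for _ in range(iters):
        s = M @ x + q
        r = np.sqrt((x - s) ** 2 + 4.0 * mu ** 2)
        F = x + s - r
        # dF/dx = I + M - diag((x - s)/r) (I - M)
        D = np.diag((x - s) / r)
        J = np.eye(n) + M - D @ (np.eye(n) - M)
        x = x - np.linalg.solve(J, F)
        mu *= 0.5
    return x

M = np.eye(2)
q = np.array([-1.0, 2.0])
xsol = chks_newton(M, q)   # LCP solution x = (1, 0), s = (0, 2)
```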

Journal ArticleDOI
TL;DR: A polynomial-time combinatorial separation algorithm is given for the inequalities of the lot-sizing polytope that cut off all fractional extreme points of its linear programming relaxation, as well as liftings from those facets.
Abstract: The lot-sizing polytope is a fundamental structure contained in many practical production planning problems. Here we study this polytope and identify facet-defining inequalities that cut off all fractional extreme points of its linear programming relaxation, as well as liftings from those facets. We give a polynomial-time combinatorial separation algorithm for the inequalities when capacities are constant. We also report computational experiments on solving the lot-sizing problem with varying cost and capacity characteristics.

Journal ArticleDOI
TL;DR: The first prospective analysis of an online automated algorithm for detecting the preictal transition in ongoing EEG signals is reported; the algorithm constitutes a seizure warning system, and the results indicate that it may be possible to develop automated seizure warning devices for diagnostic and therapeutic purposes.
Abstract: There is growing evidence that temporal lobe seizures are preceded by a preictal transition, characterized by a gradual dynamical change from the asymptomatic interictal state to seizure. We herein report the first prospective analysis of an online automated algorithm for detecting the preictal transition in ongoing EEG signals. As such, the algorithm constitutes a seizure warning system. The algorithm estimates STLmax, a measure of the order or disorder of the signal, from EEG signals recorded at individual electrode sites. Optimization techniques were employed to select critical brain electrode sites that exhibit the preictal transition for the warning of epileptic seizures. Specifically, a quadratically constrained quadratic 0-1 programming problem is formulated to identify critical electrode sites. The automated seizure warning algorithm was tested on continuous, long-term EEG recordings obtained from 5 patients with temporal lobe epilepsy. For each patient, we used the first half of the seizures to train the parameter settings, evaluated by ROC (Receiver Operating Characteristic) curve analysis. With the best parameter setting, the algorithm predicted an average of 91.7% of seizures across all cases, with an average false prediction rate of 0.196 per hour. These results indicate that it may be possible to develop automated seizure warning devices for diagnostic and therapeutic purposes.

Journal ArticleDOI
TL;DR: It is shown that a proper lower semicontinuous function f on X has a Lipschitz error bound if and only if the pair {epi(f),X×{0}} of sets in the product space X×ℝ is linearly regular (resp., regular).
Abstract: In this paper, we mainly study various notions of regularity for a finite collection {C1,⋯,Cm} of closed convex subsets of a Banach space X and their relations with other fundamental concepts. We show that a proper lower semicontinuous function f on X has a Lipschitz error bound (resp., ϒ-error bound) if and only if the pair {epi(f),X×{0}} of sets in the product space X×ℝ is linearly regular (resp., regular). Similar results for multifunctions are also established. Next, we prove that {C1,⋯,Cm} is linearly regular if and only if it has the strong CHIP and the collection {NC1(z),⋯,NCm(z)} of normal cones at z has property (G) for each z∈C:=∩i=1mCi. Provided that C1 is a closed convex cone and that C2=Y is a closed vector subspace of X, we show that {C1,Y} is linearly regular if and only if there exists α>0 such that each positive (relative to the order induced by C1) linear functional on Y of norm one can be extended to a positive linear functional on X with norm bounded by α. Similar characterization is given in terms of normal cones.

Journal ArticleDOI
TL;DR: The classical trust-region method for unconstrained minimization can be augmented with a line search that finds a point that satisfies the Wolfe conditions and maintains a positive-definite approximation to the Hessian of the objective function.
Abstract: The classical trust-region method for unconstrained minimization can be augmented with a line search that finds a point that satisfies the Wolfe conditions. One can use this new method to define an algorithm that simultaneously satisfies the quasi-Newton condition at each iteration and maintains a positive-definite approximation to the Hessian of the objective function. This new algorithm has strong global convergence properties and is robust and efficient in practice.
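The Wolfe conditions at the heart of such a line search are simple to check and to satisfy by bisection on a smooth function. A minimal sketch (invented test function; not the paper's algorithm) with the standard constants c1 = 1e-4, c2 = 0.9:

```python
# Weak Wolfe conditions for a step t along direction d at x:
#   sufficient decrease: f(x + t d) <= f(x) + c1 t g'd
#   curvature:           grad(x + t d)'d >= c2 g'd     (0 < c1 < c2 < 1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9, maxit=50):
    f0, g0 = f(x), dot(grad(x), d)
    lo, hi, t = 0.0, float('inf'), 1.0
    for _ in range(maxit):
        xt = [xi + t * di for xi, di in zip(x, d)]
        if f(xt) > f0 + c1 * t * g0:        # decrease fails: step too long
            hi = t
        elif dot(grad(xt), d) < c2 * g0:    # curvature fails: step too short
            lo = t
        else:
            return t
        t = (lo + hi) / 2.0 if hi < float('inf') else 2.0 * lo
    return t

# Quadratic f(x) = x1^2 + 4 x2^2 from x = (1, 1) along -grad.
f = lambda x: x[0] ** 2 + 4.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 8.0 * x[1]]
x0 = [1.0, 1.0]
d = [-gi for gi in grad(x0)]
t = wolfe_search(f, grad, x0, d)
```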

Journal ArticleDOI
TL;DR: This work constructs two fixed-point iteration algorithms that solve convex subproblems and that are guaranteed, for sufficiently small friction coefficients, to retrieve the unique velocity solution of the nonconvex linear complementarity problem whenever the frictionless configuration can be disassembled.
Abstract: Acceleration–force setups for multi-rigid-body dynamics are known to be inconsistent for some configurations and sufficiently large friction coefficients (a Painleve paradox). This difficulty is circumvented by time-stepping methods using impulse-velocity approaches, which solve complementarity problems with possibly nonconvex solution sets. We show that very simple configurations involving two bodies may have a nonconvex solution set for any nonzero value of the friction coefficient. We construct two fixed-point iteration algorithms that solve convex subproblems and that are guaranteed, for sufficiently small friction coefficients, to retrieve, at a linear convergence rate, the unique velocity solution of the nonconvex linear complementarity problem whenever the frictionless configuration can be disassembled. In addition, we show that one step of one of the iterative algorithms provides an excellent approximation to the velocity solution of the original, possibly nonconvex, problem if for all contacts we have that either the friction coefficient is small or the slip velocity is small.

Journal ArticleDOI
TL;DR: It is shown that rather rough approximations to the gradient are sufficient to reduce the pollstep to a single function evaluation, and it is proved that using these less expensive pollsteps does not weaken the known convergence properties of the method, all of which depend only on the pollstep.
Abstract: A common question asked by users of direct search algorithms is how to use derivative information at iterates where it is available. This paper addresses that question with respect to Generalized Pattern Search (GPS) methods for unconstrained and linearly constrained optimization. Specifically, this paper concentrates on the GPS pollstep. Polling is done to certify the need to refine the current mesh, and it requires O(n) function evaluations in the worst case. We show that the use of derivative information significantly reduces the maximum number of function evaluations necessary for pollsteps, even to a worst case of a single function evaluation with certain algorithmic choices given here. Furthermore, we show that rather rough approximations to the gradient are sufficient to reduce the pollstep to a single function evaluation. We prove that using these less expensive pollsteps does not weaken the known convergence properties of the method, all of which depend only on the pollstep.
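A gradient estimate turns polling from "try all 2n directions" into "try the most promising direction first". The toy below illustrates only the ordering idea, not the GPS machinery: it ranks coordinate poll directions by the inner product with a rough gradient, so the steepest-looking direction is evaluated first.

```python
# Order poll directions by g'd (ascending): a direction with g'd < 0 is
# a predicted descent direction, so with a decent gradient estimate the
# first candidate often succeeds and one function evaluation suffices.

def poll_order(directions, g):
    """Sort poll directions, most promising (smallest g'd) first."""
    return sorted(directions, key=lambda d: sum(gi * di for gi, di in zip(g, d)))

# 2n coordinate poll directions in R^2 and an invented rough gradient.
D = [(1, 0), (-1, 0), (0, 1), (0, -1)]
g = (0.9, -0.4)
best = poll_order(D, g)[0]   # (-1, 0): steepest predicted descent in D
```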