
Showing papers in "Journal of Global Optimization in 2018"


Journal ArticleDOI
TL;DR: A modified version of the algorithm is proposed to find a common element of the set of solutions of a variational inequality and the set of fixed points of a nonexpansive mapping in H.
Abstract: In this article, we introduce an inertial projection and contraction algorithm by combining inertial type algorithms with the projection and contraction algorithm for solving a variational inequality in a Hilbert space H. In addition, we propose a modified version of our algorithm to find a common element of the set of solutions of a variational inequality and the set of fixed points of a nonexpansive mapping in H. We establish weak convergence theorems for both proposed algorithms. Finally, we present numerical experiments showing the efficiency and advantage of the inertial projection and contraction algorithm.
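To make the iteration concrete, here is a minimal sketch of a projection-and-contraction step with an inertial extrapolation, in the spirit described above. This is an illustrative reconstruction, not the authors' exact method: the inertial parameter `alpha`, stepsize `lam`, relaxation `gamma`, and the test problem below are all choices made for the example.

```python
import numpy as np

def inertial_pc(F, proj, x0, lam=0.5, alpha=0.1, gamma=1.0, iters=200):
    """Inertial projection-and-contraction sketch for VI(F, C).

    F    : operator of the variational inequality (monotone, Lipschitz)
    proj : projection onto the feasible set C
    lam  : stepsize; should satisfy lam * L < 1 for the Lipschitz constant L
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = x + alpha * (x - x_prev)          # inertial extrapolation
        y = proj(w - lam * F(w))              # forward-backward step
        d = (w - y) - lam * (F(w) - F(y))     # contraction direction
        dn = np.dot(d, d)
        if dn < 1e-16:                        # w - y ~ 0: w is (numerically) a solution
            return w
        beta = np.dot(w - y, d) / dn          # contraction steplength
        x_prev, x = x, w - gamma * beta * d
    return x
```

As a sanity check, the VI with F(x) = x − b over a box C is solved by the projection of b onto C, so the iterates should converge to that point.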

181 citations


Journal ArticleDOI
TL;DR: A new algorithm, TSEMO, is proposed, which uses Gaussian processes as surrogates; it is a simple algorithm without the requirement of a priori knowledge, reduces hypervolume calculations to approach linear scaling with respect to the number of objectives, and offers the capacity to handle noise and the ability for batch-sequential usage.
Abstract: Many engineering problems require the optimization of expensive, black-box functions involving multiple conflicting criteria, such that commonly used methods like multiobjective genetic algorithms are inadequate. To tackle this problem several algorithms have been developed using surrogates. However, these often have disadvantages such as the requirement of a priori knowledge of the output functions or exponentially scaling computational cost with respect to the number of objectives. In this paper a new algorithm, TSEMO, is proposed, which uses Gaussian processes as surrogates. The Gaussian processes are sampled using spectral sampling techniques to make use of Thompson sampling in conjunction with the hypervolume quality indicator and NSGA-II to choose a new evaluation point at each iteration. The reference point required for the hypervolume calculation is estimated within TSEMO. Further, a simple extension is proposed to carry out batch-sequential design. TSEMO was compared to ParEGO, an expected hypervolume implementation, and NSGA-II on nine test problems with a budget of 150 function evaluations. Overall, TSEMO shows promising performance while remaining a simple algorithm: it requires no a priori knowledge, reduces hypervolume calculations to approach linear scaling with respect to the number of objectives, can handle noise and, lastly, supports batch-sequential usage.
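The hypervolume indicator at the heart of TSEMO's selection step is easy to state for two objectives: it is the area dominated by the Pareto set, measured against the reference point. A minimal sweep-line sketch (minimisation convention; this is a generic illustration, not TSEMO's internal implementation):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by a set of 2-objective minimisation
    outcomes with respect to the reference point ref: sweep the points by
    the first objective and sum the staircase rectangles."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, level = 0.0, ref[1]
    for x, y in pts:
        if y < level:                  # point extends the Pareto staircase
            hv += (ref[0] - x) * (level - y)
            level = y
    return hv
```

Points dominated by another point contribute nothing, so the indicator rewards well-spread non-dominated fronts, which is exactly why it is used to rank candidate evaluation points.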

167 citations


Journal ArticleDOI
TL;DR: The simplicial homology global optimisation (SHGO) algorithm is applied to non-convex problems with linear and box constraints, and it is proven that SHGO will always outperform TGO on function evaluations if the objective function is Lipschitz smooth.
Abstract: The simplicial homology global optimisation (SHGO) algorithm is a general purpose global optimisation algorithm based on applications of simplicial integral homology and combinatorial topology. SHGO approximates the homology groups of a complex built on a hypersurface homeomorphic to a complex on the objective function. This provides both approximations of locally convex subdomains in the search space through Sperner’s lemma and a useful visual tool for characterising and efficiently solving higher dimensional black and grey box optimisation problems. This complex is built up using sampling points within the feasible search space as vertices. The algorithm specialises in efficiently finding all the local minima of an objective function with expensive function evaluations, which makes it especially suitable for applications such as energy landscape exploration. SHGO was initially developed as an improvement on the topographical global optimisation (TGO) method. It is proven that the SHGO algorithm will always outperform TGO on function evaluations if the objective function is Lipschitz smooth. In this paper SHGO is applied to non-convex problems with linear and box constraints. Numerical experiments on linearly constrained test problems show that SHGO gives competitive results compared to TGO and the recently developed Lc-DISIMPL algorithm as well as the PSwarm, LGO and DIRECT-L1 algorithms. Furthermore, SHGO is compared with the TGO, basinhopping (BH) and differential evolution (DE) global optimisation algorithms over a large selection of black-box problems with bounds placed on the variables from the SciPy benchmarking test suite. A Python implementation of the SHGO and TGO algorithms published under an MIT license is available at https://bitbucket.org/upiamcompthermo/shgo/ .
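SHGO was subsequently adopted into SciPy as `scipy.optimize.shgo`, which offers an easy way to try the algorithm. A minimal usage sketch on the Himmelblau test function (a classic multimodal function with four global minima), assuming SciPy ≥ 1.2 is installed; the sampling settings here are arbitrary example choices:

```python
from scipy.optimize import shgo

def himmelblau(x):
    # multimodal test function with four global minima, all with f = 0
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

res = shgo(himmelblau, bounds=[(-6.0, 6.0), (-6.0, 6.0)],
           n=128, sampling_method='sobol')
print(res.x, res.fun)   # one global minimizer and its value
print(res.xl)           # all local minimizers found, ordered by res.funl
```

Note how `res.xl` reflects the design goal stated in the abstract: the algorithm returns all local minima it maps, not just the incumbent.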

99 citations


Journal ArticleDOI
TL;DR: Two Bayesian Optimization approaches are proposed in this paper, where the surrogate model is based on a Gaussian Process and a Random Forest, respectively, and both approaches are tested with different acquisition functions on a set of test functions.
Abstract: Bayesian optimization has become a widely used tool in the optimization and machine learning communities. It is suited to simulation/optimization problems and/or problems whose objective function is computationally expensive to evaluate. Bayesian optimization is based on a surrogate probabilistic model of the objective, whose mean and variance are sequentially updated using the observations, and an “acquisition” function based on the model, which sets the next observation at the most “promising” point. The most widely used surrogate model is the Gaussian Process, which is the basis of the well-known Kriging algorithms. In this paper, the authors consider the pump scheduling optimization problem in a Water Distribution Network with both ON/OFF and variable speed pumps. In a global optimization model, accounting for time patterns of demand and energy price allows significant cost savings. Nonlinearities, and binary decisions in the case of ON/OFF pumps, make pump scheduling optimization computationally challenging, even for small Water Distribution Networks. The well-known EPANET simulator is used to compute the energy cost associated with a pump schedule and to verify that hydraulic constraints are not violated and demand is met. Two Bayesian Optimization approaches are proposed in this paper, where the surrogate model is based on a Gaussian Process and a Random Forest, respectively. Both approaches are tested with different acquisition functions on a set of test functions, a benchmark Water Distribution Network from the literature and a large-scale real-life Water Distribution Network in Milan, Italy.
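The surrogate-plus-acquisition loop described above can be sketched in a few dozen lines. This is a deliberately minimal illustration, not the paper's method: a zero-mean GP with an RBF kernel as the surrogate, expected improvement as the acquisition function, a 1-D toy objective instead of EPANET, and arbitrary choices for the length-scale, candidate grid, and initial design.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def bo_minimize(f, lo, hi, n_init=4, n_iter=15):
    X = np.linspace(lo, hi, n_init)            # deterministic initial design
    y = np.array([f(x) for x in X])
    cand = np.linspace(lo, hi, 201)            # acquisition maximised on a grid
    Phi = lambda z: 0.5 * (1.0 + np.array([erf(t / np.sqrt(2)) for t in z]))
    pdf = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
        ks = rbf(cand, X)
        mu = ks @ np.linalg.solve(K, y)        # GP posterior mean
        var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
        sd = np.sqrt(np.maximum(var, 1e-12))   # GP posterior std. dev.
        imp = y.min() - mu                     # improvement over incumbent
        z = imp / sd
        ei = imp * Phi(z) + sd * pdf(z)        # expected improvement
        x_new = cand[np.argmax(ei)]            # next "promising" point
        X = np.append(X, x_new)
        y = np.append(y, f(x_new))
    return X[np.argmin(y)]
```

Swapping the GP update for a Random Forest's mean/variance estimates gives the shape of the paper's second approach; only the surrogate changes, the loop stays the same.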

68 citations


Journal ArticleDOI
TL;DR: The main convergence issues of the line-search based proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function are discussed, and computational results on a set of academic benchmark test problems are provided.
Abstract: We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations, and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search-direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search-direction. We discuss the main convergence issues of the line-search based proximal bundle method, and provide computational results on a set of academic benchmark test problems.

52 citations


Journal ArticleDOI
TL;DR: The solvability of (DHVI), an abstract system consisting of a hemivariational inequality of parabolic type combined with a nonlinear evolution equation in the framework of an evolution triple of spaces, is proved without imposing any convexity condition on the nonlinear function.
Abstract: In this paper we investigate an abstract system which consists of a hemivariational inequality of parabolic type combined with a nonlinear evolution equation in the framework of an evolution triple of spaces which is called a differential hemivariational inequality [(DHVI), for short]. A hybrid iterative system corresponding to (DHVI) is introduced by using a temporally semi-discrete method based on the backward Euler difference scheme, i.e., the Rothe method, and a feedback iterative technique. We apply a surjectivity result for pseudomonotone operators and properties of the Clarke subgradient operator to establish existence and a priori estimates for solutions to an approximate problem. Finally, through a limiting procedure for solutions of the hybrid iterative system, the solvability of (DHVI) is proved without imposing any convexity condition on the nonlinear function u ↦ f(t, x, u) and compactness of the C₀-semigroup e^{A(t)}.

50 citations


Journal ArticleDOI
TL;DR: The paper considers two extragradient-like algorithms for solving variational inequality problems involving strongly pseudomonotone and Lipschitz continuous operators in Hilbert spaces; the algorithms are designed via the projection method, and so can be computed more easily than with the regularized method.
Abstract: The paper considers two extragradient-like algorithms for solving variational inequality problems involving strongly pseudomonotone and Lipschitz continuous operators in Hilbert spaces. The projection method is used to design the algorithms which can be computed more easily than the regularized method. The construction of solution approximations and the proof of convergence of the algorithms are performed without the prior knowledge of the modulus of strong pseudomonotonicity and the Lipschitz constant of the cost operator. Instead of that, the algorithms use variable stepsize sequences which are diminishing and non-summable. The numerical behaviors of the proposed algorithms on a test problem are illustrated and compared with those of several previously known algorithms.
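The key feature above, diminishing non-summable stepsizes replacing knowledge of the modulus and the Lipschitz constant, is easy to illustrate with the classical two-projection extragradient step. A minimal sketch (the stepsize rule λ_n = 1/(n+2) and the demo operator and set are illustrative choices, not taken from the paper):

```python
import numpy as np

def extragradient_vi(F, proj, x0, n_iter=5000):
    """Extragradient sketch with diminishing, non-summable stepsizes.

    Neither the strong-pseudomonotonicity modulus nor the Lipschitz
    constant of F is used: lam_n = 1/(n+2) is diminishing and
    non-summable, as required by this class of methods.
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        lam = 1.0 / (n + 2)
        y = proj(x - lam * F(x))       # extrapolation step
        x = proj(x - lam * F(y))       # correction step
    return x
```

For F(x) = x − b (strongly monotone with modulus 1) over the unit ball, the VI solution is the projection of b onto the ball, so the iterates should approach that point.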

49 citations


Journal ArticleDOI
TL;DR: This work builds an ensemble of surrogate models to be used within the search step of MADS to perform global exploration, and introduces an order-based error tailored to surrogate-based search.
Abstract: We investigate surrogate-assisted strategies for global derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS to perform global exploration, and examine different methods for selecting the best model for a given problem at hand. To do so, we introduce an order-based error tailored to surrogate-based search. We report computational experiments for ten analytical benchmark problems and three engineering design applications. Results demonstrate that different metrics may result in different model choices and that the use of order-based metrics improves performance.

38 citations


Journal ArticleDOI
TL;DR: Theoretical bounds on the complexity and the accuracy of the generated approximations are obtained, and the proposed approaches are compared theoretically and experimentally.
Abstract: In this paper we propose a method for solving systems of nonlinear inequalities with predefined accuracy based on nonuniform covering concept formerly adopted for global optimization. The method generates inner and outer approximations of the solution set. We describe the general concept and three ways of numerical implementation of the method. The first one is applicable only in a few cases when a minimum and a maximum of the constraints convolution function can be found analytically. The second implementation uses a global optimization method to find extrema of the constraints convolution function numerically. The third one is based on extrema approximation with Lipschitz under- and overestimations. We obtain theoretical bounds on the complexity and the accuracy of the generated approximations as well as compare proposed approaches theoretically and experimentally.

37 citations


Journal ArticleDOI
TL;DR: By using the inverse strongly monotone property of the underlying operator of the SFP, the “optimal” step length is improved to provide the modified projection and contraction methods.
Abstract: In this paper, first, we review the projection and contraction methods for solving the split feasibility problem (SFP), and then by using the inverse strongly monotone property of the underlying operator of the SFP, we improve the “optimal” step length to provide the modified projection and contraction methods. Also, we consider the corresponding relaxed variants for the modified projection and contraction methods, where the two closed convex sets are both level sets of convex functions. Some convergence theorems of the proposed methods are established under suitable conditions. Finally, we give some numerical examples to illustrate that the modified projection and contraction methods have an advantage over other methods, and improve greatly the projection and contraction methods.
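For context, the projection-type iteration that these methods build on is the classical CQ scheme for the SFP (find x ∈ C with Ax ∈ Q). Here is a fixed-step sketch of that baseline, without the "optimal" step-length refinement the paper develops; the sets, matrix, and γ in the demo are illustrative:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=200):
    """Basic CQ iteration for the split feasibility problem:
        x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))),
    a projected-gradient step on 0.5 * ||A x - P_Q(A x)||^2;
    requires gamma in (0, 2 / ||A||^2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x
```

The relaxed variants discussed in the abstract replace `proj_C` and `proj_Q` by projections onto half-spaces containing the level sets, which are cheap to compute; the loop structure is unchanged.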

37 citations


Journal ArticleDOI
TL;DR: This paper proposes an algorithm based on the alternating direction method of multipliers, and rigorously analyze its convergence properties (to the set of stationary solutions).
Abstract: In this paper, we study a class of nonconvex nonsmooth optimization problems with bilinear constraints, which have wide applications in machine learning and signal processing. We propose an algorithm based on the alternating direction method of multipliers, and rigorously analyze its convergence properties (to the set of stationary solutions). To test the performance of the proposed method, we specialize it to the nonnegative matrix factorization problem and certain sparse principal component analysis problem. Extensive experiments on real and synthetic data sets have demonstrated the effectiveness and broad applicability of the proposed methods.

Journal ArticleDOI
TL;DR: This paper investigates a single machine serial-batching scheduling problem considering release times, setup time, and group scheduling, with the combined effects of deterioration and truncated job-dependent learning, and develops a hybrid VNS–ASHLO algorithm incorporating variable neighborhood search (VNS) and adaptive simplified human learning optimization (ASHLO) algorithms to solve the general case.
Abstract: This paper investigates a single machine serial-batching scheduling problem considering release times, setup time, and group scheduling, with the combined effects of deterioration and truncated job-dependent learning. The objective of the studied problem is to minimize the makespan. Firstly, we analyze the special case where all groups have the same arrival time, and propose the optimal structural properties on jobs sequencing, jobs batching, batches sequencing, and groups sequencing. Next, the corresponding batching rule and algorithm are developed. Based on these properties and the scheduling algorithm, we develop a hybrid VNS–ASHLO algorithm incorporating variable neighborhood search (VNS) and adaptive simplified human learning optimization (ASHLO) algorithms to solve the general case of the studied problem. Computational experiments on randomly generated instances are conducted to compare the proposed VNS–ASHLO with the algorithms of VNS, ASHLO, Simulated Annealing (SA), and Particle Swarm Optimization (PSO). The results based on instances of different scales show the effectiveness and efficiency of the proposed algorithm.

Journal ArticleDOI
TL;DR: The goal of the current work is to generalize this approach to the computation of global Pareto fronts for multiobjective multimodal derivative-free optimization problems.
Abstract: The optimization of multimodal functions is a challenging task, in particular when derivatives are not available for use. Recently, in a directional direct search framework, a clever multistart strategy was proposed for global derivative-free optimization of single objective functions. The goal of the current work is to generalize this approach to the computation of global Pareto fronts for multiobjective multimodal derivative-free optimization problems. The proposed algorithm alternates between initializing new searches, using a multistart strategy, and exploring promising subregions, resorting to directional direct search. Components of the objective function are not aggregated and new points are accepted using the concept of Pareto dominance. The initialized searches are not all conducted until the end, merging when they start to be close to each other. The convergence of the method is analyzed under the common assumptions of directional direct search. Numerical experiments show its ability to generate approximations to the different Pareto fronts of a given problem.

Journal ArticleDOI
TL;DR: This paper introduces a new DIRECT-type algorithm, which is based on the well-known DIRECT (DIviding RECTangles) algorithm and motivated by the diagonal partitioning strategy, and uses a bisection instead of a trisection.
Abstract: We consider a global optimization problem for Lipschitz-continuous functions with an unknown Lipschitz constant. Our approach is based on the well-known DIRECT (DIviding RECTangles) algorithm and motivated by the diagonal partitioning strategy. One of the main advantages of the diagonal partitioning scheme is that the objective function is evaluated at two points at each hyper-rectangle and, therefore, more comprehensive information about the objective function is considered than using the central sampling strategy used in most DIRECT-type algorithms. In this paper, we introduce a new DIRECT-type algorithm, which we call BIRECT (BIsecting RECTangles). In this algorithm, a bisection is used instead of the trisection that is typical for diagonal-based and DIRECT-type algorithms. The bisection is preferable to the trisection because of the shapes of the hyper-rectangles, but the usual evaluation of the objective function at the center or at the endpoints of the diagonal is not favorable for bisection. In the proposed algorithm the objective function is evaluated at two points on the diagonal equidistant between themselves and the endpoints of a diagonal. This sampling strategy enables reuse of the sampling points in descendant hyper-rectangles. The developed algorithm gives very competitive numerical results compared to the DIRECT algorithm and its well-known modifications.
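The sampling rule is simple to state: on a box's main diagonal, place two points at one third and two thirds of the way between the corners, so the spacing corner–point–point–corner is uniform. A sketch of that rule together with bisection along the longest side (the helper names are mine, and the bookkeeping that reuses points in descendant boxes is omitted):

```python
import numpy as np

def diagonal_samples(lo, hi):
    """Two evaluation points on the diagonal of the box [lo, hi], at 1/3
    and 2/3 of the way along it: each is equidistant from the nearer
    corner and from the other point."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    return lo + (hi - lo) / 3.0, lo + 2.0 * (hi - lo) / 3.0

def bisect_box(lo, hi):
    """Bisect the box perpendicular to its longest side."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    j = int(np.argmax(hi - lo))            # longest coordinate direction
    mid = 0.5 * (lo[j] + hi[j])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[j], lo_right[j] = mid, mid
    return (lo, hi_left), (lo_right, hi)
```

Unlike the center point of a trisected box, these interior diagonal points can remain sampling points of descendant boxes, which is what saves function evaluations.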

Journal ArticleDOI
TL;DR: A practical derivative-free deterministic method is proposed that reduces the dimensionality of the problem by using space-filling curves and works simultaneously with all possible estimates of Lipschitz and Hölder constants; it shows clear superiority over the popular method DIRECT and other competitors.
Abstract: Global optimization is a field of mathematical programming dealing with finding global (absolute) minima of multi-dimensional multiextremal functions. Problems of this kind where the objective function is non-differentiable, satisfies the Lipschitz condition with an unknown Lipschitz constant, and is given as a “black-box” are very often encountered in engineering optimization applications. Due to the presence of multiple local minima and the absence of differentiability, traditional optimization techniques using gradients and working with problems having only one minimum cannot be applied in this case. These real-life applied problems are attacked here by employing one of the most abstract mathematical objects—space-filling curves. A practical derivative-free deterministic method reducing the dimensionality of the problem by using space-filling curves and working simultaneously with all possible estimates of Lipschitz and Hölder constants is proposed. A smart adaptive balancing of local and global information collected during the search is performed at each iteration. Conditions ensuring convergence of the new method to the global minima are established. Results of numerical experiments on 1000 randomly generated test functions show a clear superiority of the new method w.r.t. the popular method DIRECT and other competitors.

Journal ArticleDOI
TL;DR: In this article, the authors reformulate combinatorial problems as equivalent MPECs by the variational characterization of the zero-norm and rank function, and show that their penalized problems, yielded by moving the equilibrium constraint into the objective, are the global exact penalization.
Abstract: This paper proposes a mechanism to produce equivalent Lipschitz surrogates for zero-norm and rank optimization problems by means of the global exact penalty for their equivalent mathematical programs with an equilibrium constraint (MPECs). Specifically, we reformulate these combinatorial problems as equivalent MPECs by the variational characterization of the zero-norm and rank function, show that their penalized problems, yielded by moving the equilibrium constraint into the objective, are the global exact penalization, and obtain the equivalent Lipschitz surrogates by eliminating the dual variable in the global exact penalty. These surrogates, including the popular SCAD function in statistics, are also difference of two convex functions (D.C.) if the function and constraint set involved in zero-norm and rank optimization problems are convex. We illustrate an application by designing a multi-stage convex relaxation approach to the rank plus zero-norm regularized minimization problem.

Journal ArticleDOI
TL;DR: A parametric approach is taken to uncovering topological structure and sparsity in the single quality standard pooling problem in its p-formulation, validating Professor Christodoulos A. Floudas' intuition that pooling problems are rooted in piecewise-defined functions.
Abstract: The standard pooling problem is an NP-hard subclass of non-convex quadratically-constrained optimization problems that commonly arises in process systems engineering applications. We take a parametric approach to uncovering topological structure and sparsity, focusing on the single quality standard pooling problem in its p-formulation. The structure uncovered in this approach validates Professor Christodoulos A. Floudas’ intuition that pooling problems are rooted in piecewise-defined functions. We introduce dominant active topologies under relaxed flow availability to explicitly identify pooling problem sparsity and show that the sparse patterns of active topological structure are associated with a piecewise objective function. Finally, the paper explains the conditions under which sparsity vanishes and where the combinatorial complexity emerges to cross over the P / NP boundary. We formally present the results obtained and their derivations for various specialized single quality pooling problem subclasses.

Journal ArticleDOI
TL;DR: It is shown that the proposed algorithm has a consistently reliable performance for the vast majority of test problems, and this is attributed to the use of Chebyshev-based Sparse Grids and polynomial interpolants, which have not gained significant attention in surrogate-based optimization thus far.
Abstract: A surrogate-based optimization method is presented, which aims to locate the global optimum of box-constrained problems using input–output data. The method starts with a global search of the n-dimensional space, using a Smolyak (Sparse) grid which is constructed using Chebyshev extrema in the one-dimensional space. The collected samples are used to fit polynomial interpolants, which are used as surrogates towards the search for the global optimum. The proposed algorithm adaptively refines the grid by collecting new points in promising regions, and iteratively refines the search space around the incumbent sample until the search domain reaches a minimum hyper-volume and convergence has been attained. The algorithm is tested on a large set of benchmark problems with up to thirty dimensions and its performance is compared to a recent algorithm for global optimization of grey-box problems using quadratic, kriging and radial basis functions. It is shown that the proposed algorithm has a consistently reliable performance for the vast majority of test problems, and this is attributed to the use of Chebyshev-based Sparse Grids and polynomial interpolants, which have not gained significant attention in surrogate-based optimization thus far.
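The one-dimensional building block of the method, the Chebyshev extrema grid and the polynomial interpolant through it, can be sketched with NumPy; the Smolyak (Sparse) grid then tensorizes such nested 1-D grids, a construction omitted here for brevity. The helper names are mine:

```python
import numpy as np

def chebyshev_extrema(n, a=-1.0, b=1.0):
    """The n+1 extrema of the Chebyshev polynomial T_n, mapped to [a, b]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)   # extrema on [-1, 1]
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def cheb_interpolant(f, n, a=-1.0, b=1.0):
    """Degree-n polynomial interpolant of f on the extrema grid."""
    x = chebyshev_extrema(n, a, b)
    coef = np.polynomial.chebyshev.chebfit(x, f(x), deg=n)
    return lambda t: np.polynomial.chebyshev.chebval(t, coef)
```

These grids are nested as n doubles (the extrema for n are a subset of those for 2n), which is what makes the adaptive refinement described in the abstract reuse earlier samples.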

Journal ArticleDOI
TL;DR: An abstract nonsmooth regularization approach is developed that subsumes the total variation regularization and permits the identification of discontinuous parameters; the existence of a global minimizer is proved and convergence results are given for the considered optimization problem.
Abstract: In this short note, our aim is to investigate the inverse problem of parameter identification in quasi-variational inequalities. We develop an abstract nonsmooth regularization approach that subsumes the total variation regularization and permits the identification of discontinuous parameters. We study the inverse problem in an optimization setting using the output-least squares formulation. We prove the existence of a global minimizer and give convergence results for the considered optimization problem. We also discretize the identification problem for quasi-variational inequalities and provide the convergence analysis for the discrete problem. We give an application to the gradient obstacle problem.

Journal ArticleDOI
TL;DR: In this paper, the generalized Douglas-Rachford algorithm and its cyclic variants were studied for solving feasibility problems with finitely many closed possibly nonconvex sets under different assumptions.
Abstract: In this paper, we study the generalized Douglas–Rachford algorithm and its cyclic variants which include many projection-type methods such as the classical Douglas–Rachford algorithm and the alternating projection algorithm. Specifically, we establish several local linear convergence results for the algorithm in solving feasibility problems with finitely many closed possibly nonconvex sets under different assumptions. Our findings not only relax some regularity conditions but also improve linear convergence rates in the literature. In the presence of convexity, the linear convergence is global.

Journal ArticleDOI
TL;DR: A new dynamic strategy is proposed for activating and deactivating MIP relaxations in various stages of a branch-and-bound algorithm; it does not use meta-parameters, thus avoiding parameter tuning, and it capitalizes on the availability of parallel MIP solver technology to exploit multicore computing hardware while solving MINLPs.
Abstract: Solving mixed-integer nonlinear programming (MINLP) problems to optimality is an NP-hard problem, for which many deterministic global optimization algorithms and solvers have been recently developed. MINLPs can be relaxed in various ways, including via mixed-integer linear programming (MIP), nonlinear programming, and linear programming. There is a tradeoff between the quality of the bounds and CPU time requirements of these relaxations. Unfortunately, these tradeoffs are problem-dependent and cannot be predicted beforehand. This paper proposes a new dynamic strategy for activating and deactivating MIP relaxations in various stages of a branch-and-bound algorithm. The primary contribution of the proposed strategy is that it does not use meta-parameters, thus avoiding parameter tuning. Additionally, this paper proposes a strategy that capitalizes on the availability of parallel MIP solver technology to exploit multicore computing hardware while solving MINLPs. Computational tests for various benchmark libraries reveal that our MIP activation strategy works efficiently in single-core and multicore environments.

Journal ArticleDOI
TL;DR: Under weak assumptions, sufficient conditions are established for the Berge-lower semicontinuity and lower Painlevé–Kuratowski convergence of weak efficient solutions for (SVO) under functional perturbations of both objective functions and constraint sets.
Abstract: This paper is concerned with the stability of semi-infinite vector optimization problems (SVO). Under weak assumptions, we establish sufficient conditions of the Berge-lower semicontinuity and lower Painlevé–Kuratowski convergence of weak efficient solutions for (SVO) under functional perturbations of both objective functions and constraint sets. Some examples are given to illustrate that our results are new and interesting.

Journal ArticleDOI
TL;DR: It is shown that if the dimension of the feasible domain is large, then no guarantee can be given that a general GRS algorithm finds the global minimizer with reasonable accuracy; the precision of statistical estimates of the global minimum in the case of large dimensions is also studied.
Abstract: We investigate the rate of convergence of general global random search (GRS) algorithms. We show that if the dimension of the feasible domain is large then it is impossible to give any guarantee that the global minimizer is found by a general GRS algorithm with reasonable accuracy. We then study precision of statistical estimates of the global minimum in the case of large dimensions. We show that these estimates also suffer the curse of dimensionality. Finally, we demonstrate that the use of quasi-random points in place of the random ones does not give any visible advantage in large dimensions.
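A back-of-the-envelope computation illustrates the curse of dimensionality that the authors quantify rigorously: the fraction of the unit cube within distance ε of the minimizer shrinks so fast that the expected number of uniform draws needed to land there explodes. This simple ball-volume bound is my illustration, not the paper's analysis (and it ignores boundary clipping, so it is if anything optimistic):

```python
from math import pi, gamma

def expected_draws(d, eps):
    """Expected number of uniform draws in [0,1]^d before one lands
    within distance eps of the minimizer, estimated as the reciprocal
    of the epsilon-ball volume fraction."""
    ball = pi ** (d / 2) * eps ** d / gamma(d / 2 + 1)
    return 1.0 / ball
```

With ε = 0.1, the expectation is about 32 draws in dimension 2 but already on the order of billions in dimension 10, consistent with the paper's conclusion that no reasonable accuracy guarantee survives in large dimensions.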

Journal ArticleDOI
TL;DR: It is proved that under mild assumptions it is possible to obtain tighter linear approximations for a type of functions referred to as almost additively separable, and it is shown that solvers, by a simple reformulation, can benefit from the tighter approximation.
Abstract: Several deterministic methods for convex mixed integer nonlinear programming generate a polyhedral approximation of the feasible region, and utilize this approximation to obtain trial solutions. Such methods are, e.g., outer approximation, the extended cutting plane method and the extended supporting hyperplane method. In order to obtain the optimal solution and verify global optimality, these methods often require a quite accurate polyhedral approximation. In case the nonlinear functions are convex and separable to some extent, it is possible to obtain a tighter approximation by using a lifted polyhedral approximation, which can be achieved by reformulating the problem. We prove that under mild assumptions, it is possible to obtain tighter linear approximations for a type of functions referred to as almost additively separable. Here it is also shown that solvers, by a simple reformulation, can benefit from the tighter approximation, and a numerical comparison demonstrates the potential of the reformulation. The reformulation technique can also be combined with other known transformations to make it applicable to some nonseparable convex functions. By using a power transform and a logarithmic transform the reformulation technique can for example be applied to p-norms and some convex signomial functions, and the benefits of combining these transforms with the reformulation technique are illustrated with some numerical examples.

Journal ArticleDOI
TL;DR: An inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles which provide inexact information is proposed.
Abstract: We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles which provide inexact information. The errors in function and subgradient evaluations might be unknown, but are merely bounded. To handle the nonconvexity, we first apply the redistributed-proximal idea, which is made more challenging by the inexactness of the available information. We further employ a modified improvement function to deal with the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method for a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are provided.

Journal ArticleDOI
TL;DR: This paper abstracts from the corresponding resolvents employed in these problems the natural notion of jointly firmly nonexpansive families of mappings, which leads to a streamlined method of proving weak convergence of this class of algorithms in the context of complete CAT(0) spaces (and hence also in Hilbert spaces).
Abstract: The proximal point algorithm is a widely used tool for solving a variety of convex optimization problems such as finding zeros of maximally monotone operators, fixed points of nonexpansive mappings, as well as minimizing convex functions. The algorithm works by applying successively so-called “resolvent” mappings associated to the original object that one aims to optimize. In this paper we abstract from the corresponding resolvents employed in these problems the natural notion of jointly firmly nonexpansive families of mappings. This leads to a streamlined method of proving weak convergence of this class of algorithms in the context of complete CAT(0) spaces (and hence also in Hilbert spaces). In addition, we consider the notion of uniform firm nonexpansivity in order to similarly provide a unified presentation of a case where the algorithm converges strongly. Methods which stem from proof mining, an applied subfield of logic, yield in this situation computable and low-complexity rates of convergence.
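The resolvent mappings the abstract refers to are proximal maps, and for some functions they are available in closed form. A minimal Euclidean sketch (the paper works in the much more general CAT(0) setting; the function and names below are illustrative) iterates the resolvent of f(x) = |x|, the soft-thresholding map, which is firmly nonexpansive:

```python
def prox_abs(x, lam):
    """Resolvent (proximal map) of f(x) = |x| with step lam:
    the soft-thresholding operator, a firmly nonexpansive mapping."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(prox, x0, lam, n_iter):
    """Proximal point algorithm: x_{k+1} = prox_{lam f}(x_k).
    The iterates converge to a minimizer of f."""
    x = x0
    for _ in range(n_iter):
        x = prox(x, lam)
    return x

# Starting far from the minimizer of |x|, the iterates shrink toward 0
# by lam per step and then stay there.
x_star = proximal_point(prox_abs, 10.0, 0.5, 50)
```

Each application of the resolvent strictly decreases the distance to the minimizer until it is reached, which is the nonexpansivity property the paper's jointly firmly nonexpansive families abstract and generalize.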

Journal ArticleDOI
TL;DR: In this paper, primal and dual second-order Fritz John necessary conditions for weak efficiency of nonsmooth vector equilibrium problems involving inequality, equality and set constraints in terms of the Pales-Zeidan secondorder directional derivatives are established under suitable secondorder constraint qualifications.
Abstract: This paper presents primal and dual second-order Fritz John necessary conditions for weak efficiency of nonsmooth vector equilibrium problems involving inequality, equality and set constraints in terms of the Pales–Zeidan second-order directional derivatives. Dual second-order Karush–Kuhn–Tucker necessary conditions for weak efficiency are established under suitable second-order constraint qualifications.

Journal ArticleDOI
TL;DR: A sampling-based exact algorithm aimed at solving large-sized datasets containing more than 500,000 observations within moderate time limits, this is two orders of magnitude larger than the limits of previous exact methods.
Abstract: We consider the problem of clustering a set of points so as to minimize the maximum intra-cluster dissimilarity, which is strongly NP-hard. Exact algorithms for this problem can handle datasets containing up to a few thousand observations, largely insufficient for today's needs. The most popular heuristic for this problem, the complete-linkage hierarchical algorithm, provides feasible solutions that are usually far from optimal. We introduce a sampling-based exact algorithm aimed at solving problems on large datasets. The algorithm alternates between the solution of an exact procedure on a small sample of points, and a heuristic procedure to prove the optimality of the current solution. Our computational experience shows that our algorithm is capable of solving to optimality problems containing more than 500,000 observations within moderate time limits, which is two orders of magnitude larger than the limits of previous exact methods.
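The complete-linkage heuristic mentioned in the abstract can be stated in a few lines: repeatedly merge the two clusters whose complete-linkage distance (the maximum cross-pair dissimilarity) is smallest, until the desired number of clusters remains. A minimal O(n³)-per-merge sketch (names and the toy data are illustrative, not from the paper):

```python
def complete_linkage(points, k, dist):
    """Agglomerative complete-linkage clustering: merge the two clusters
    with the smallest maximum cross-pair dissimilarity until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None  # (linkage distance, i, j)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# 1-D toy example with two well-separated groups.
pts = [0.0, 0.1, 0.2, 10.0, 10.1]
out = complete_linkage(pts, 2, lambda a, b: abs(a - b))
```

As the abstract notes, the greedy merge order gives no optimality guarantee for the min-max-diameter objective, which is why such solutions can be far from optimal and an exact method is of interest.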

Journal ArticleDOI
TL;DR: A new deterministic decomposition-based successive approximation method for general modular and/or sparse MINLPs, based on a block-separable reformulation of the model into sub-models that generates inner- and outer-approximations using column generation.
Abstract: Traditional deterministic global optimization methods are often based on a Branch-and-Bound (BB) search tree, which may grow rapidly, preventing the method from finding a good solution. Motivated by decomposition-based inner approximation (column generation) methods for solving transport scheduling problems with over 100 million variables, we present a new deterministic decomposition-based successive approximation method for general modular and/or sparse MINLPs. The new method, called Decomposition-based Inner- and Outer-Refinement, is based on a block-separable reformulation of the model into sub-models. It generates inner- and outer-approximations using column generation, which are successively refined by solving many easier MINLP and MIP subproblems in parallel (using BB), instead of searching over one (global) BB search tree. We present preliminary numerical results with Decogo (Decomposition-based Global Optimizer), a new parallel decomposition MINLP solver implemented in Python and Pyomo.

Journal ArticleDOI
TL;DR: A bilevel decomposition algorithm that iteratively solves a discretized MILP version of the Generalized Disjunctive Programming model, and its nonconvex NLP for a fixed selection of discrete variables is proposed.
Abstract: In this paper we propose a nonlinear Generalized Disjunctive Programming model to optimize the 2-dimensional continuous location and allocation of the potential facilities based on their maximum capacity and the given coordinates of the suppliers and customers. The model belongs to the class of Capacitated Multi-facility Weber Problem. We propose a bilevel decomposition algorithm that iteratively solves a discretized MILP version of the model and its nonconvex NLP counterpart for a fixed selection of discrete variables. Based on the bounding properties of the subproblems, ε-convergence is proved for this algorithm. We apply the proposed method to random instances varying from 2 suppliers and 2 customers to 40 suppliers and 40 customers, from one type of facility to 3 different types, and from 2 to 32 potential facilities. The results show that the algorithm is more effective at finding global optimal solutions than the general-purpose global optimization solvers tested.
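The capacitated multi-facility model of the paper is far richer than its classical ancestor, but the continuous-location kernel inside it is the Weber problem: place a facility minimizing the weighted sum of Euclidean distances to given customers. A sketch of the classic Weiszfeld iteration for the single-facility case (this is standard background, not the paper's algorithm; the smoothing constant eps guards against division by zero at a data point):

```python
import math

def weiszfeld(points, weights, x0, n_iter=200, eps=1e-12):
    """Weiszfeld iteration for the single-facility Weber problem:
    minimize sum_i w_i * ||x - p_i||_2 over x in R^2.
    Each step is a distance-weighted average of the customer points."""
    x = list(x0)
    for _ in range(n_iter):
        num = [0.0, 0.0]
        den = 0.0
        for w, p in zip(weights, points):
            d = math.hypot(x[0] - p[0], x[1] - p[1]) + eps
            num[0] += w * p[0] / d
            num[1] += w * p[1] / d
            den += w / d
        x = [num[0] / den, num[1] / den]
    return x

# Four unit-weight customers at the corners of a square: by symmetry the
# optimal location is the centre (1, 1).
x = weiszfeld([(0, 0), (0, 2), (2, 0), (2, 2)], [1, 1, 1, 1], x0=(0.5, 0.7))
```

The multi-facility capacitated version couples many such subproblems through discrete allocation decisions, which is what makes the Generalized Disjunctive Programming formulation and the bilevel decomposition necessary.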