
Showing papers on "Discrete optimization published in 2017"


Journal ArticleDOI
TL;DR: The proposed improved variant of the differential grouping (DG) algorithm, DG2, finds a reliable threshold value by estimating the magnitude of roundoff errors; this automatic calculation of the threshold parameter makes DG2 parameter-free.
Abstract: Identification of variable interaction is essential for an efficient implementation of a divide-and-conquer algorithm for large-scale black-box optimization. In this paper, we propose an improved variant of the differential grouping (DG) algorithm, which has better efficiency and grouping accuracy. The proposed algorithm, DG2, finds a reliable threshold value by estimating the magnitude of roundoff errors. With respect to efficiency, DG2 reuses the sample points that are generated for detecting interactions and saves up to half of the computational resources on fully separable functions. We mathematically show that the new sampling technique achieves the lower bound with respect to the number of function evaluations. Unlike its predecessor, DG2 checks all possible pairs of variables for interactions and has the capacity to identify overlapping components of an objective function. On the accuracy aspect, DG2 outperforms the state-of-the-art decomposition methods on the latest large-scale continuous optimization benchmark suites. DG2 also performs reliably in the presence of imbalance among the contributions of components in an objective function. Another major advantage of DG2 is the automatic calculation of its threshold parameter ($\epsilon$), which makes it parameter-free. Finally, the experimental results show that when DG2 is used within a cooperative co-evolutionary framework, it generates results competitive with several state-of-the-art algorithms.
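The pairwise test at the heart of differential grouping is simple to state: perturbing variable i should change f by the same amount whether or not variable j has also been perturbed, unless the two variables interact. The following minimal sketch illustrates that finite-difference check; the function name, the fixed perturbation size, and the fixed threshold eps are illustrative assumptions (estimating that threshold from roundoff-error magnitudes, rather than fixing it by hand, is precisely DG2's contribution).

```python
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-10):
    """Finite-difference interaction test between variables i and j of f.

    For an additively separable pair, the change in f caused by perturbing
    x_i is independent of x_j. `eps` plays the role of the threshold that
    DG2 estimates automatically from roundoff-error magnitudes.
    """
    xi = x.copy(); xi[i] += delta        # perturb i
    xj = x.copy(); xj[j] += delta        # perturb j
    xij = xi.copy(); xij[j] += delta     # perturb both

    d1 = f(xi) - f(x)                    # effect of i at the base point
    d2 = f(xij) - f(xj)                  # effect of i after moving j
    return abs(d1 - d2) > eps            # difference implies interaction

# Example: x0*x1 couples variables 0 and 1, while x2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
x = np.zeros(3)
print(interacts(f, x, 0, 1))   # True
print(interacts(f, x, 0, 2))   # False
```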

243 citations


Journal Article
TL;DR: A canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method, which achieves maximal invariance properties.
Abstract: We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. In continuous search spaces, IGO algorithms take a form related to natural evolution strategies (NES). The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). When applied to the family of Gaussian distributions on $\mathbb{R}^d$, the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES. For the family of Bernoulli distributions on $\{0,1\}^d$, we recover the seminal PBIL algorithm and cGA. For the distributions of restricted Boltzmann machines, we naturally obtain a novel algorithm for discrete optimization on $\{0,1\}^d$. All these algorithms are natural instances of, and unified under, the single information-geometric optimization framework. The IGO method achieves, thanks to its intrinsic formulation, maximal invariance properties: invariance under reparametrization of the search space X, under a change of parameters of the probability distribution, and under increasing transformation of the function to be optimized. The latter is achieved through an adaptive, quantile-based formulation of the objective. Theoretical considerations strongly suggest that IGO algorithms are essentially characterized by a minimal change of the distribution over time. Therefore they have minimal loss in diversity through the course of optimization, provided the initial diversity is high. First experiments using restricted Boltzmann machines confirm this insight. As a simple consequence, IGO seems to provide, from information theory, an elegant way to simultaneously explore several valleys of a fitness landscape in a single run.
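To make the IGO recipe concrete, the sketch below instantiates it for independent Bernoulli distributions on $\{0,1\}^d$, where the natural-gradient update in mean parameters reduces to a PBIL-like rule with quantile (rank) based weights. All names, hyperparameters, and the clipping used to preserve diversity are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def igo_bernoulli(f, d, pop=20, eta=0.1, iters=200, seed=0):
    """Toy IGO instance on {0,1}^d with independent Bernoulli marginals.

    Rank-based weights make the update invariant under monotone
    transformations of f; in mean parameters the natural-gradient step is
    p += eta * sum_k w_k * (x_k - p), a PBIL-like rule. Minimizes f.
    """
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)
    # Decreasing rank weights summing to 1 (best sample weighted most).
    w = np.log(pop + 0.5) - np.log(np.arange(1, pop + 1))
    w /= w.sum()
    for _ in range(iters):
        X = (rng.random((pop, d)) < p).astype(float)   # sample population
        order = np.argsort([f(x) for x in X])          # rank: best first
        step = sum(wk * (X[k] - p) for wk, k in zip(w, order))
        p = np.clip(p + eta * step, 0.05, 0.95)        # keep some diversity
    return p

# Example: minimize the number of zero bits (OneMax-style objective).
d = 10
print(np.round(igo_bernoulli(lambda x: d - x.sum(), d), 2))
```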

175 citations


Journal ArticleDOI
01 Oct 2017
TL;DR: The proposed approach mimics the lightning attachment procedure, including the downward leader movement, the upward leader propagation, the unpredictable trajectory of the downward leader, and the branch fading feature of lightning.
Abstract: In this article, a novel nature-inspired meta-heuristic optimization algorithm known as Lightning Attachment Procedure Optimization (LAPO) is proposed. The proposed approach mimics the lightning attachment procedure, including the downward leader movement, the upward leader propagation, the unpredictable trajectory of the downward leader, and the branch fading feature of lightning; the final optimum result is the lightning striking point. The proposed method is free from any parameter tuning, and a two-phase solution update in each iteration improves the balance between exploration and exploitation, so the method rarely becomes stuck in local optima. To evaluate the proposed algorithm, 29 mathematical benchmark functions are employed and the results are compared with those of 9 well-known, high-quality optimization methods from different points of view, including quality of the results, convergence behavior, robustness, and CPU time consumption. The superiority and high-quality performance of the proposed method are demonstrated through these comparisons. Moreover, the proposed method is also tested on five classical engineering design problems (tension/compression spring, welded beam, pressure vessel, gear train, and cantilever beam design) and on Optimal Power Flow (OPF), a highly constrained electrical engineering problem. The excellent performance of the proposed method in solving problems with large numbers of constraints, as well as discrete optimization problems, is also concluded from the results of these six engineering problems.

146 citations


Journal ArticleDOI
01 Jun 2017
TL;DR: A taxonomy is introduced, which is useful as a guideline for selecting adequate model-based optimization tools and a new approach for combining surrogate information via stacking is proposed in the third part.
Abstract: The use of surrogate models is a standard method for dealing with complex real-world optimization problems. The first surrogate models were applied to continuous optimization problems; in recent years, surrogate models have gained importance for discrete optimization problems as well. This article takes this development into consideration. The first part presents an up-to-date survey of model-based methods, focusing on continuous optimization. It introduces a comprehensive taxonomy, which is useful as a guideline for selecting adequate model-based optimization tools. The second part examines discrete optimization problems, introducing six strategies for dealing with discrete data structures. A new approach for combining surrogate information via stacking is proposed in the third part; its implementation will be available in the open-source R package SPOT2. The article concludes with a discussion of recent developments and challenges (model selection, dimensionality, benchmarks, definiteness) in continuous and discrete application domains.
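The evaluate/fit/propose cycle shared by the surveyed model-based methods fits in a few lines. In the sketch below, a deliberately crude nearest-neighbour predictor stands in for the Kriging, random forest, or stacked surrogates the article actually discusses; every name and parameter is an illustrative assumption, not the SPOT implementation.

```python
import numpy as np

def smbo(f, bounds, n_init=5, n_iter=20, n_cand=200, seed=0):
    """Skeleton of surrogate-model-based optimization.

    Repeatedly: fit a cheap surrogate to all evaluated points, predict the
    objective on random candidates, and spend the next expensive evaluation
    on the candidate the surrogate likes best. Real tools replace the
    3-nearest-neighbour surrogate with Kriging/forests and an infill
    criterion such as expected improvement.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))    # initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
        dist = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
        pred = y[np.argsort(dist, axis=1)[:, :3]].mean(axis=1)  # surrogate
        x_new = cand[np.argmin(pred)]        # propose predicted optimum
        X = np.vstack([X, x_new])            # evaluate expensively, refit
        y = np.append(y, f(x_new))
    return X[np.argmin(y)], y.min()

x_best, y_best = smbo(lambda x: np.sum((x - 0.3) ** 2),
                      (np.zeros(2), np.ones(2)))
print(x_best, y_best)
```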

132 citations


Proceedings ArticleDOI
04 Aug 2017
TL;DR: This work presents the design and implementation of a custom discrete optimization technique for building rule lists over a categorical feature space, and demonstrates that this approach produces optimal rule lists on practical problems in seconds.
Abstract: We present the design and implementation of a custom discrete optimization technique for building rule lists over a categorical feature space. Our algorithm provides the optimal solution, with a certificate of optimality. By leveraging algorithmic bounds, efficient data structures, and computational reuse, we achieve several orders of magnitude speedup in time and a massive reduction of memory consumption. We demonstrate that our approach produces optimal rule lists on practical problems in seconds. This framework is a novel alternative to CART and other decision tree methods.
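The flavour of a branch-and-bound search over rule lists can be conveyed with a toy version: prefixes are extended one rule at a time and pruned with a simple lower bound (a prefix's own errors plus its length penalty cannot decrease when rules are appended). The objective, bound, and data structures below are far weaker than the algorithmic bounds and computational reuse the paper describes; this is a hypothetical sketch, not the published algorithm.

```python
import numpy as np

def best_rule_list(rules, y, lam=0.01, max_len=3):
    """Tiny branch-and-bound over rule lists.

    `rules` maps rule names to boolean capture masks over the data. A rule
    list predicts the majority label of each rule's newly captured points,
    with a majority default for the rest. Objective = error rate + lam * #rules.
    """
    n = len(y)
    best = (np.inf, None)

    def evaluate(prefix):
        captured = np.zeros(n, bool)
        errs = 0
        for name in prefix:
            m = rules[name] & ~captured
            if m.any():
                errs += min(y[m].sum(), (~y[m]).sum())   # majority-label errors
            captured |= rules[name]
        rest = ~captured
        default = min(y[rest].sum(), (~y[rest]).sum()) if rest.any() else 0
        lb = errs / n + lam * len(prefix)   # appending rules cannot reduce this
        return lb + default / n, lb

    stack = [()]
    while stack:
        prefix = stack.pop()
        obj, lb = evaluate(prefix)
        if obj < best[0]:
            best = (obj, prefix)
        if lb < best[0] and len(prefix) < max_len:       # prune by lower bound
            stack += [prefix + (r,) for r in rules if r not in prefix]
    return best

# Toy data: the true concept is "feature 0 and not feature 1".
rng = np.random.default_rng(1)
X = rng.integers(0, 2, (100, 4)).astype(bool)
y = X[:, 0] & ~X[:, 1]
print(best_rule_list({f"f{j}": X[:, j] for j in range(4)}, y))
```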

104 citations


Journal ArticleDOI
TL;DR: Progress on the application of the firefly algorithm to optimization problems with binary, integer, and mixed variables is discussed, and possible future work is highlighted.
Abstract: The firefly algorithm is a nature-inspired metaheuristic algorithm inspired by the flashing behavior of fireflies. It was originally proposed for continuous problems; however, owing to its effectiveness and success on continuous problems, various studies have modified the algorithm to suit discrete problems. Many engineering problems, as well as optimization problems from other disciplines, involve discrete variables. Recent reviews of the applications and modifications of the firefly algorithm mainly focus on continuous problems. This paper is devoted to a detailed review of the modifications made to the firefly algorithm in order to solve optimization problems with discrete variables. Hence, advances in the application of the firefly algorithm to optimization problems with binary, integer, and mixed variables are discussed. Possible future work is also highlighted.

76 citations


Journal ArticleDOI
TL;DR: It is shown in this paper that this method effectively generates the Pareto front; moreover, the method is easy to implement and algorithmically simple.

76 citations


Proceedings Article
01 Jan 2017
TL;DR: This paper provides an easily computable approximation to the Jacobian complemented with a complete theoretical analysis that lets us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation.
Abstract: Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to use in deep architectures a layer whose output is the minimal cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, the introduction of such layers seems very challenging due to their non-continuous output. In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that we can continuously relax the output without sacrificing guarantees. We provide an easily computable approximation to the Jacobian complemented with a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation.
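The standard continuous relaxation behind such layers is the Lovász extension, which is convex exactly when the set function is submodular and matches F on indicator vectors (for normalized F with F(∅) = 0), so minimizing over the cube [0,1]^n matches minimizing over subsets. A minimal sketch, with an illustrative graph-cut example (not the paper's experiments), follows.

```python
import numpy as np

def lovasz_extension(F, x):
    """Lovász extension of a set function F at a point x in [0,1]^n.

    Sort coordinates in decreasing order and accumulate marginal gains of F,
    each weighted by the corresponding coordinate of x.
    """
    order = np.argsort(-x)                 # decreasing coordinate order
    f_prev, ext, S = F(set()), 0.0, set()
    for i in order:
        S = S | {int(i)}
        f_cur = F(S)
        ext += x[i] * (f_cur - f_prev)     # marginal gain of adding i
        f_prev = f_cur
    return ext

# Example: cut function of the 3-node path graph 0-1-2 (submodular).
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(lovasz_extension(cut, np.array([1.0, 0.0, 1.0])))  # = cut({0, 2}) = 2.0
```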

74 citations


Journal ArticleDOI
01 Jan 2017
TL;DR: Six metaheuristic optimization algorithms are proposed to solve the community detection (CD) problem; HDSA is observed to be more efficient and competitive than the other algorithms.
Abstract: In order to analyze complex networks to find significant communities, several methods have been proposed in the literature. Modularity optimization is an interesting and valuable approach for the detection of network communities in complex networks. Because of the characteristics of the problem dealt with in this study, exact solution methods consume far too much time. Therefore, we propose six metaheuristic optimization algorithms, each of which contains a modularity optimization approach: the original Bat Algorithm (BA), the Gravitational Search Algorithm (GSA), a modified Big Bang-Big Crunch algorithm (BB-BC), an improved Bat Algorithm based on the Differential Evolutionary algorithm (BADE), an effective Hyperheuristic Differential Search Algorithm (HDSA), and a Scatter Search algorithm based on the Genetic Algorithm (SSGA). Four of these algorithms (HDSA, BADE, SSGA, BB-BC) are supported by new techniques or hybrid methods in addition to their original versions, whereas the remaining two (BA and GSA) use the original methods. Note that modularity optimization is a discrete optimization problem, and the proposed algorithms were modified accordingly. To clearly demonstrate the performance of the proposed algorithms, experimental studies were conducted on nine real-world complex networks, five of which are social networks and four of which are biological networks. The algorithms were compared in terms of statistical significance. According to the obtained test results, the HDSA proposed in this study is more efficient and competitive than the other algorithms that were tested.
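For reference, the modularity objective maximized by all six metaheuristics follows directly from its definition, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). The sketch below assumes an unweighted, undirected network, and the two-triangle graph is an illustrative toy, not one of the paper's benchmarks.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition given a symmetric adjacency matrix."""
    k = A.sum(axis=1)                                  # degrees
    two_m = k.sum()                                    # total edge endpoints
    same = labels[:, None] == labels[None, :]          # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by one bridge edge, split into natural communities.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))     # 5/14, about 0.357
```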

71 citations


Journal ArticleDOI
TL;DR: A computationally efficient iterative algorithm for the continuous phase case (IA-CPC) is proposed to sequentially optimize the quadratic objective function and an iterative block optimization algorithm is presented for the discrete phase case.
Abstract: This paper considers unimodular sequence synthesis under similarity constraint for both the continuous and discrete phase cases. A computationally efficient iterative algorithm for the continuous phase case (IA-CPC) is proposed to sequentially optimize the quadratic objective function. The quadratic optimization problem is turned into multiple one-dimensional optimization problems with closed-form solutions. For the discrete phase case, we present an iterative block optimization algorithm. Specifically, we partition the design variables into $K$ blocks, and then, we sequentially optimize each block via exhaustive search while fixing the remaining $K-1$ blocks. Finally, we evaluate the computational costs and performance gains of the proposed algorithms in comparison with power method-like and semidefinite relaxation related techniques.
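The discrete-phase block strategy is easy to sketch: hold K-1 blocks fixed and exhaustively enumerate the M-ary phase combinations of the remaining block, cycling until no block improves. The generic quadratic objective and every parameter below are illustrative stand-ins; the paper's objective additionally carries the similarity constraint.

```python
import numpy as np
from itertools import product

def block_exhaustive(R, M=4, K=3, sweeps=10, seed=0):
    """Block-coordinate exhaustive search for a discrete-phase unimodular
    sequence minimizing the quadratic form x^H R x, with phases drawn from
    an M-ary alphabet and the variables partitioned into K blocks."""
    N = R.shape[0]
    rng = np.random.default_rng(seed)
    phases = rng.integers(0, M, N)                      # phase indices 0..M-1
    alphabet = np.exp(2j * np.pi * np.arange(M) / M)    # unit-modulus symbols
    blocks = np.array_split(np.arange(N), K)
    obj = lambda p: np.real(alphabet[p].conj() @ R @ alphabet[p])
    for _ in range(sweeps):
        for b in blocks:
            best, best_val = phases[b].copy(), obj(phases)
            for combo in product(range(M), repeat=len(b)):  # exhaustive block
                phases[b] = combo
                val = obj(phases)
                if val < best_val:
                    best, best_val = np.array(combo), val
            phases[b] = best                             # keep block optimum
    return alphabet[phases]

# Toy Hermitian objective; the result is unimodular by construction.
rng = np.random.default_rng(1)
Amat = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
x = block_exhaustive(Amat.conj().T @ Amat)
print(np.abs(x))                                         # all ones
```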

69 citations


Posted Content
TL;DR: The results indicate that it is possible to construct optimal sparse rule lists that are approximately as accurate as the COMPAS proprietary risk prediction tool on data from Broward County, Florida, but that are completely interpretable.
Abstract: We present the design and implementation of a custom discrete optimization technique for building rule lists over a categorical feature space. Our algorithm produces rule lists with optimal training performance, according to the regularized empirical risk, with a certificate of optimality. By leveraging algorithmic bounds, efficient data structures, and computational reuse, we achieve several orders of magnitude speedup in time and a massive reduction of memory consumption. We demonstrate that our approach produces optimal rule lists on practical problems in seconds. Our results indicate that it is possible to construct optimal sparse rule lists that are approximately as accurate as the COMPAS proprietary risk prediction tool on data from Broward County, Florida, but that are completely interpretable. This framework is a novel alternative to CART and other decision tree methods for interpretable modeling.

Proceedings ArticleDOI
04 Aug 2017
TL;DR: A Discrete Content-aware Matrix Factorization (DCMF) model is proposed to derive compact yet informative binary codes in the presence of user/item content information, and an efficient discrete optimization algorithm for parameter learning is developed.
Abstract: Precisely recommending relevant items from massive candidates to a large number of users is an indispensable yet computationally expensive task in many online platforms (e.g., Amazon.com and Netflix.com). A promising way is to project users and items into a Hamming space and then recommend items via Hamming distance. However, previous studies did not address the cold-start challenges and could not make the best use of preference data like implicit feedback. To fill this gap, we propose a Discrete Content-aware Matrix Factorization (DCMF) model, 1) to derive compact yet informative binary codes in the presence of user/item content information; 2) to support the classification task based on a local upper bound of the logit loss; 3) to introduce an interaction regularization for dealing with the sparsity issue. We further develop an efficient discrete optimization algorithm for parameter learning. Based on extensive experiments on three real-world datasets, we show that DCMF outperforms the state-of-the-art methods on both regression and classification tasks.

Journal ArticleDOI
01 Oct 2017
TL;DR: The experimental results demonstrate that TVT-BPSO outperforms existing BPSO variants on both low-dimensional and high-dimensional classical knapsack problems, as well as a 200-member truss problem, suggesting that the new transfer function is able to better scale to high dimensional combinatorial problems than the existing B PSO variants and other metaheuristic algorithms.
Abstract: Many real-world problems belong to the family of discrete optimization problems. Most of these problems are NP-hard and difficult to solve efficiently using classical linear and convex optimization methods. In addition, the computational difficulty of these optimization tasks increases rapidly with the number of decision variables. A further difficulty can be caused by the search space being intrinsically multimodal and non-convex. In such cases, it is desirable to have an effective optimization method that can cope with these problem characteristics. Binary particle swarm optimization (BPSO) is a simple and effective discrete optimization method. The original BPSO and its variants have been used to solve a number of classic discrete optimization problems; however, it is reported that they are unable to provide satisfactory results due to the use of inappropriate transfer functions. More specifically, these transfer functions are unable to provide BPSO with a good balance between exploration and exploitation in the search space, limiting their performance. To overcome this problem, this paper proposes to employ a time-varying transfer function in BPSO, namely TVT-BPSO. To understand the search behaviour of TVT-BPSO, we provide a systematic analysis of its exploration and exploitation capability. Our experimental results demonstrate that TVT-BPSO outperforms existing BPSO variants on both low-dimensional and high-dimensional classical 0-1 knapsack problems, as well as a 200-member truss problem, suggesting that TVT-BPSO scales better to high-dimensional combinatorial problems than existing BPSO variants and other metaheuristic algorithms.
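The mechanism is simple to illustrate: in BPSO, a transfer function maps each velocity to a bit-sampling probability, and letting its shape parameter decay over time shifts the search from exploration (flat curve) to exploitation (sharp curve). The sigmoid form, the linear schedule, and all constants below are plausible assumptions for illustration, not necessarily the exact function used in the paper.

```python
import numpy as np

def tv_transfer(v, phi):
    """Time-varying sigmoid transfer function: velocity -> bit probability.
    Large phi flattens the curve (exploration); small phi sharpens it."""
    return 1.0 / (1.0 + np.exp(-v / phi))

def bpso_step(X, V, pbest, gbest, t, T, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One iteration of binary PSO with a time-varying transfer function."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # usual update
    phi = 5.0 - 4.0 * t / T                 # anneal phi from 5 down to 1
    X = (rng.random(X.shape) < tv_transfer(V, phi)).astype(float)
    return X, V

# Skeleton usage: 8 particles, 12 bits (pbest/gbest bookkeeping omitted).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, (8, 12)).astype(float)
V = np.zeros_like(X)
pbest, gbest = X.copy(), X[0].copy()
for t in range(50):
    X, V = bpso_step(X, V, pbest, gbest, t, 50, rng=rng)
```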

Journal ArticleDOI
TL;DR: Related research on PSO is surveyed along three axes: multi-objective large-scale optimization, many-objective optimization, and distributed parallelism; the proposed methodologies and future research trends are also illuminated.
Abstract: With the advent of the big data era, complex optimization problems with many objectives and large numbers of decision variables are constantly emerging. Traditional research on multi-objective particle swarm optimization (PSO) focuses on multi-objective optimization problems (MOPs) with small numbers of variables and fewer than four objectives. At present, MOPs with large numbers of variables and many objectives (greater than or equal to four) are constantly emerging, and when tackling this type of MOP, traditional multi-objective PSO algorithms have low efficiency. Aiming at these multi-objective large-scale optimization problems (MOLSOPs) and many-objective large-scale optimization problems (MaOLSOPs), we need to thoroughly explore the parallel attributes of the particle swarm and design novel PSO algorithms according to the characteristics of distributed parallel computation. We survey the related research on PSO: multi-objective large-scale optimization, many-objective optimization, and distributed parallelism. Based on these three aspects, multi-objective large-scale distributed parallel PSO and many-objective large-scale distributed parallel PSO methodologies are proposed and discussed, and other future research trends are also illuminated.

Journal ArticleDOI
TL;DR: This survey paper discusses key challenges in using embedded optimization methods and summarizes their main use cases in current industrial practice; a number of dedicated embedded optimization algorithms and their actual implementations are also reviewed.

Journal ArticleDOI
01 Aug 2017
TL;DR: This paper presents a new CS algorithm, called NCS, for solving flow shop scheduling problems (FSSP), which hybridizes four strategies and obtains better performance than the standard CS and some other meta-heuristic algorithms.
Abstract: Cuckoo search (CS) is a recently developed meta-heuristic algorithm, which has shown good performance on many continuous optimization problems. In this paper, we present a new CS algorithm, called NCS, for solving flow shop scheduling problems (FSSP). The NCS hybridizes four strategies: (1) the FSSP is a typical NP-hard problem with discrete characteristics, so to deal with the discrete variables, the smallest position value (SPV) rule is employed to convert continuous solutions into discrete job permutations; (2) to generate high-quality initial solutions, a new method based on the Nawaz-Enscore-Ham (NEH) heuristic is used for population initialization; (3) a modified generalized opposition-based learning (GOBL) is utilized to accelerate the convergence speed; and (4) to enhance exploitation, a local search strategy is proposed. An experimental study is conducted on a set of Taillard's benchmark instances. Results show that NCS obtains better performance than the standard CS and some other meta-heuristic algorithms.
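Strategy (1), the SPV rule, is a one-liner: ranking the components of a continuous position vector yields a job permutation. The sketch below pairs it with the standard permutation flow shop makespan recursion used to evaluate such permutations; the toy numbers are illustrative.

```python
import numpy as np

def spv(position):
    """Smallest position value rule: a continuous vector becomes the job
    permutation that visits jobs in ascending order of their components."""
    return np.argsort(position)

def makespan(perm, P):
    """Completion time of the last job in a permutation flow shop,
    where P[j][m] is the processing time of job j on machine m."""
    C = [0.0] * len(P[0])
    for j in perm:
        C[0] += P[j][0]
        for m in range(1, len(C)):
            C[m] = max(C[m], C[m - 1]) + P[j][m]   # wait for machine and job
    return C[-1]

pos = np.array([0.7, -1.2, 0.1, 2.3])
perm = spv(pos)                       # -> [1 2 0 3]
P = [[3, 2], [1, 4], [2, 2], [4, 1]]  # 4 jobs, 2 machines (toy data)
print(perm, makespan(perm, P))
```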

Journal ArticleDOI
TL;DR: In this paper, a novel metaheuristic optimization method, namely human behavior-based optimization (HBBO), is presented, and it is shown how it can be used for solving practical optimization problems.
Abstract: Optimization techniques, especially evolutionary algorithms, have been widely used for solving various scientific and engineering optimization problems because of their flexibility and simplicity. In this paper, a novel metaheuristic optimization method, namely human behavior-based optimization (HBBO), is presented. Unlike the many optimization algorithms that use nature as their principal source of inspiration, HBBO uses human behavior as its main source of inspiration. In this paper, first the human behaviors needed to understand the algorithm are discussed, and then it is shown how the algorithm can be used for solving practical optimization problems. HBBO is capable of solving many types of optimization problems, such as high-dimensional multimodal functions, which have multiple local minima, and unimodal functions. In order to demonstrate the performance of HBBO, the proposed algorithm has been tested on a set of well-known benchmark functions and compared with other optimization algorithms. The results show that this algorithm outperforms other optimization algorithms in terms of algorithm reliability, result accuracy and convergence speed.

Journal ArticleDOI
TL;DR: This work proposes shape convexity as a new high-order regularization constraint for binary image segmentation and derives a second order approximation model that is more accurate but computationally intensive.
Abstract: Convexity is a known important cue in human vision. We propose shape convexity as a new high-order regularization constraint for binary image segmentation. In the context of discrete optimization, object convexity is represented as a sum of three-clique potentials penalizing any 1-0-1 configuration on all straight lines. We show that these non-submodular potentials can be efficiently optimized using an iterative trust region approach. At each iteration the energy is linearly approximated and globally optimized within a small trust region around the current solution. While the quadratic number of all three-cliques is prohibitively high, we design a dynamic programming technique for evaluating and approximating these cliques in linear time. We also derive a second-order approximation model that is more accurate but computationally intensive. We discuss the limitations of our local optimization and propose a gradual non-submodularization scheme that alleviates some of them. Our experiments demonstrate the general usefulness of the proposed convexity shape prior on synthetic and real image segmentation examples. Unlike standard second-order length regularization, our convexity prior does not have a shrinking bias, and it is robust to changes in scale and parameter selection.
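The linear-time evaluation idea is easiest to see on a single discrete line: every 0 with a 1 somewhere on each side forms a violating 1-0-1 triple, so each zero contributes (#ones to its left) × (#ones to its right), which prefix sums deliver without enumerating all O(n^3) triples. The 1-D sketch below is illustrative only; the paper's method sums this over all discrete lines of a 2-D image inside trust-region optimization.

```python
import numpy as np

def convexity_penalty_1d(line):
    """Count 1-0-1 triples along a binary line in linear time: each zero
    contributes (ones strictly left) * (ones strictly right)."""
    ones_left = np.cumsum(line) - line
    ones_right = line.sum() - np.cumsum(line)
    return int(((line == 0) * ones_left * ones_right).sum())

print(convexity_penalty_1d(np.array([1, 0, 0, 1])))  # 2 triples: (0,1,3),(0,2,3)
print(convexity_penalty_1d(np.array([0, 1, 1, 0])))  # 0: the segment is convex
```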

Journal ArticleDOI
TL;DR: It is crucial to select an adequate binarization approach to ensure that the solving algorithm reaches its full potential when solving a discrete optimization problem, concretely the Set Covering Problem.
Abstract: The Set Covering Problem (SCP) is one of Karp's 21 classical NP-complete problems. Although it is a traditional optimization problem, we find many papers in the current literature applying metaheuristics to the SCP. However, while the SCP is a discrete problem, most metaheuristics, especially Swarm Intelligence Algorithms (SIAs), are defined for solving continuous optimization problems. Hence, such algorithms must be adapted to the discrete scope, yet most authors perform no study to select a concrete binarization approach. This situation might suggest that the choice of binarization technique does not influence the behavior of the algorithm, only the general approach of the metaheuristic does. This circumstance led us to write this paper, which focuses on the inherent difficulty of binarizing metaheuristics designed for continuous optimization when solving a discrete optimization problem, concretely the SCP. To this end, we consider a recent SIA inspired by the behavior of cats and adapted to the discrete scope, called Binary Cat Swarm Optimization (BCSO). We replace the binarization technique assumed in the original BCSO with forty different approaches from the current literature. The results obtained while solving a standard SCP benchmark are analyzed through a widely accepted statistical method, and we conclude that it is crucial to select an adequate binarization approach to ensure that the solving algorithm reaches its full potential. Thus, the task of adapting a metaheuristic to the discrete scope is not a simple matter and should be carefully studied. As a result of this study, we give some recommendations for performing this task.
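Most of the forty studied approaches share a two-component structure: a transfer function mapping a continuous velocity to a probability, followed by a discretization rule turning that probability into a bit. The generic sketch below shows two common combinations from the literature; it is illustrative and does not single out the approaches the paper ends up recommending.

```python
import numpy as np

def binarize(v, x_prev, transfer="s_shaped", rule="standard", rng=None):
    """Two-step binarization of a continuous velocity v.

    Step 1 (transfer): map v to a probability p.
    Step 2 (rule): turn p into a bit, possibly using the previous bit.
    """
    rng = rng or np.random.default_rng()
    if transfer == "s_shaped":             # sigmoid family
        p = 1.0 / (1.0 + np.exp(-v))
    else:                                  # V-shaped family
        p = np.abs(np.tanh(v))
    r = rng.random(np.shape(v))
    if rule == "standard":                 # bit := [r < p]
        return (r < p).astype(int)
    # "complement" rule: flip the previous bit with probability p
    return np.where(r < p, 1 - x_prev, x_prev)

v = np.array([-2.0, 0.0, 2.0])
x_prev = np.array([1, 1, 1])
print(binarize(v, x_prev, rng=np.random.default_rng(0)))
print(binarize(v, x_prev, transfer="v_shaped", rule="complement",
               rng=np.random.default_rng(0)))
```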

Journal ArticleDOI
TL;DR: It can be concluded that FATLBO is able to deliver excellent and competitive performance in solving various structural optimization problems.
Abstract: This paper presents a new optimization algorithm called fuzzy adaptive teaching-learning-based optimization (FATLBO) for solving numerical structural problems. This new algorithm introduces three new mechanisms for increasing the searching capability of teaching-learning-based optimization, namely a status monitor, fuzzy adaptive teaching-learning strategies, and a remedial operator. The performance of FATLBO is compared with that of well-known optimization methods on 26 unconstrained mathematical problems and five structural engineering design problems. Based on the obtained results, it can be concluded that FATLBO is able to deliver excellent and competitive performance in solving various structural optimization problems.

Journal ArticleDOI
TL;DR: Methods from vector optimization in general spaces, set-valued optimization, and scalarization techniques are applied to develop a unified characterization of different concepts of robust optimization and stochastic programming, providing new insights into these interrelated concepts for handling uncertainty.

Journal ArticleDOI
TL;DR: It is shown that the distributed extremum-seeking control system achieves the optimization of the total network cost.
Abstract: In this study, a distributed extremum-seeking control approach is proposed to solve a class of constrained real-time optimization problems. The agents operate over a sensor network. Each agent has access to the measurement of a local cost and local constraints, which it can communicate to neighbouring agents over the network. A dynamic consensus algorithm is used to provide all agents with an estimate of the total network cost and the global constraints. A local extremum-seeking controller is used to manipulate the local input variables. It is shown that the distributed extremum-seeking control system achieves the optimization of the total network cost.

Journal ArticleDOI
TL;DR: A novel tailor-made modeling approach is proposed, with which the computational cost of the dynamic analysis required to form the IE for the entire periodic structure can be dramatically reduced, regardless of the number of periodic units.
Abstract: The number of sensors and their locations strongly affect the information content of the measured data, which is a recognized challenge for large-scale structural systems. This article pays special attention to sensor placement on a large-scale periodically articulated structure, representative of typical pipelines, with the aim of extracting the most information from measured data for model identification. Minimizing the model parameter estimation uncertainty, quantified by the information entropy (IE) measure, is taken as the optimality criterion for sensor placement. By utilizing the inherent periodicity of this type of structure together with the Bloch theorem, a novel tailor-made modeling approach is proposed, and the computational cost of the dynamic analysis required to form the IE for the entire periodic structure can be dramatically reduced, regardless of the number of periodic units. In addition, to avoid the dynamic modeling error induced by the conventional finite element method based on static shape functions, the spectral element method, a highly accurate dynamic modeling method, is employed for modeling the periodic unit. Moreover, a novel discrete optimization method is developed, which is very efficient in terms of the number of function evaluations. The proposed methodology is demonstrated by both numerical and laboratory experiments conducted on a bolt-connected periodic beam model.

Journal ArticleDOI
TL;DR: A direct-search derivative-free Matlab optimizer for bound-constrained problems is described, whose remarkable features are its ability to handle a mix of continuous and discrete variables, a versatile interface as well as a novel self-training option.
Abstract: A direct-search derivative-free Matlab optimizer for bound-constrained problems is described, whose remarkable features are its ability to handle a mix of continuous and discrete variables, a versatile interface as well as a novel self-training option. Its performance compares favorably with that of NOMAD (Nonsmooth Optimization by Mesh Adaptive Direct Search), a well-known derivative-free optimization package. It is also applicable to multilevel equilibrium- or constrained-type problems. Its easy-to-use interface provides a number of user-oriented features, such as checkpointing and restart, variable scaling, and early termination tools.

Journal ArticleDOI
TL;DR: The multilevel framework is extended to enable efficient optimization under uncertainty; new treatments include the use of a sample validation procedure for realization selection and the use of the standalone MADS optimizer (rather than PSO–MADS) after the first optimization stage.
Abstract: The robust optimization of reservoir performance under geological uncertainty typically requires the simulation of multiple geological realizations at each iteration of the optimization run. This results in high computational expense, particularly when simulation models are highly resolved and many realizations are employed to characterize geological uncertainty. In recent work we introduced a multilevel optimization procedure that uses a sequence of upscaled models to accelerate field development optimization. The core optimizer is a particle swarm optimization–mesh adaptive direct search (PSO–MADS) hybrid technique. Coarse-scale models are constructed from the fine-grid geological characterization using an accurate global transmissibility upscaling procedure. In this paper we extend the multilevel framework to enable efficient optimization under uncertainty. New treatments include the use of a sample validation procedure for realization selection and the use of the standalone MADS optimizer (rather than PSO–MADS) after the first optimization stage. Numerical results are presented for two example systems, and for each case optimization over both ten and 100 realizations is performed. For ten-realization cases we achieve comparable results, and speedups of a factor of 10 or more, relative to the conventional single-level optimization procedure. Speedups are estimated to be even more substantial for 100 realization cases, for which conventional optimization is not practical. We also investigate the application of a multilevel Monte Carlo approach as an alternative to our proposed techniques for optimization under uncertainty. Although this method is faster than the conventional approach, it is not as efficient as the multilevel procedures developed in this work.

Journal ArticleDOI
TL;DR: Experiments with the standard and the electric vehicle routing problem with time windows as well as the vehicle routing and truck driver scheduling problem confirm that dynamic half-way points better balance forward and backward labeling and reduce the overall runtime.

Journal ArticleDOI
TL;DR: Modified versions of Teaching–Learning-Based Optimization (TLBO), Heat Transfer Search (HTS), Water Wave Optimization (WWO), and Passing Vehicle Search (PVS) are proposed for solving discrete TTO problems by integrating a random mutation-based search technique into these four basic meta-heuristics.

Journal ArticleDOI
TL;DR: A multi-objective discrete Biogeography-Based Optimization (BBO) algorithm is proposed to find communities in social networks with node attributes using a Pareto-based approach; it introduces a method for mutation probability approximation and uses a chaotic mechanism to dynamically tune the mutation probability in each iteration.

Proceedings ArticleDOI
Peng-Fei Zhang, Chuan-Xiang Li, Meng-Yuan Liu, Liqiang Nie, Xin-Shun Xu
23 Oct 2017
TL;DR: This paper proposes a novel supervised cross-modal hashing method---Semi-Relaxation Supervised Hashing (SRSH), which can learn the hash functions and the binary codes simultaneously and relaxes a part of binary constraints, instead of all of them, by introducing an intermediate representation variable.
Abstract: Recently, some cross-modal hashing methods have been devised for the cross-modal search task. Essentially, given a similarity matrix, most of these methods tackle a discrete optimization problem by separating it into two stages: first relaxing the binary constraints and finding a solution of the relaxed optimization problem, then quantizing the solution to obtain the binary codes. This scheme generates a large quantization error. Some discrete optimization methods have been proposed to tackle this; however, in these methods the generation of the binary codes is independent of the features in the original space, which makes them not robust to noise. To address these problems, in this paper we propose a novel supervised cross-modal hashing method, Semi-Relaxation Supervised Hashing (SRSH). It can learn the hash functions and the binary codes simultaneously. At the same time, to tackle the optimization problem, it relaxes a part of the binary constraints, instead of all of them, by introducing an intermediate representation variable. By doing this, the quantization error can be reduced, and the optimization problem can be easily solved by an iterative algorithm proposed in this paper. Extensive experimental results on three benchmark datasets demonstrate that SRSH obtains competitive results and outperforms state-of-the-art unsupervised and supervised cross-modal hashing methods.

Posted Content
TL;DR: In this paper, the authors introduce the problem of stochastic submodular optimization, where one needs to optimize a submodular objective which is given as an expectation.
Abstract: Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are of a discrete nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, on such discrete problems. We first introduce the problem of stochastic submodular optimization, where one needs to optimize a submodular objective which is given as an expectation. Our model captures situations where the discrete objective arises as an empirical risk (e.g., in the case of exemplar-based clustering), or is given as an explicit stochastic model (e.g., in the case of influence maximization in social networks). By exploiting the fact that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions. We focus on the rich and widely used family of weighted coverage functions. We show that our approach yields solutions that are guaranteed to match the optimal approximation guarantees, while reducing the computational cost by several orders of magnitude, as we demonstrate empirically.
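For weighted coverage the recipe is concrete: run projected stochastic gradient ascent on a concave relaxation, then round. The sketch below uses the standard concave upper bound Σ_u w_u · min(1, Σ_{v covers u} x_v), a bisection-based projection onto the capped simplex, and naive top-k rounding; step sizes, iteration counts, and the toy instance are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def project_capped_simplex(y, k, iters=50):
    """Euclidean projection onto {x in [0,1]^n : sum(x) = k}, by bisection
    on the shift tau in clip(y - tau, 0, 1)."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = (lo + hi) / 2
        if np.clip(y - tau, 0, 1).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - (lo + hi) / 2, 0, 1)

def stochastic_coverage_sga(cover, w, k, steps=500, lr=0.5, seed=0):
    """Projected stochastic gradient ascent on the concave relaxation
    F(x) = sum_u w_u * min(1, sum_{v covers u} x_v) of a weighted coverage
    function, followed by top-k rounding. cover[v] = elements item v covers."""
    rng = np.random.default_rng(seed)
    n, U = len(cover), len(w)
    x = np.full(n, k / n)                         # feasible start
    for _ in range(steps):
        u = rng.integers(U)                       # sample one element (stochastic)
        if sum(x[v] for v in range(n) if u in cover[v]) < 1.0:
            g = np.array([U * w[u] if u in cover[v] else 0.0
                          for v in range(n)])     # unbiased subgradient
            x = project_capped_simplex(x + lr * g, k)
    return np.argsort(-x)[:k]                     # round: keep top-k items

# Toy instance: 4 items covering subsets of 6 equally weighted elements;
# items 0 and 2 together cover everything.
cover = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(stochastic_coverage_sga(cover, np.ones(6) / 6, k=2))
```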