
Showing papers on "Discrete optimization" published in 2018


Journal ArticleDOI
TL;DR: This work proposes a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternatingly proceeds over three training modules: deep hash model training, similarity graph updating and binary code optimization.
Abstract: Recent vision and learning studies show that learning compact hash codes can facilitate massive data processing with significantly reduced storage and computation. Particularly, learning deep hash functions has greatly improved the retrieval performance, typically under the semantic supervision. In contrast, current unsupervised deep hashing algorithms can hardly achieve satisfactory performance due to either the relaxed optimization or absence of similarity-sensitive objective. In this work, we propose a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternatingly proceeds over three training modules: deep hash model training, similarity graph updating and binary code optimization. The key difference from the widely-used two-step hashing method is that the output representations of the learned deep model help update the similarity graph matrix, which is then used to improve the subsequent code optimization. In addition, for producing high-quality binary codes, we devise an effective discrete optimization algorithm which can directly handle the binary constraints with a general hashing loss. Extensive experiments validate the efficacy of SADH, which consistently outperforms the state of the art by large margins.

343 citations
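The alternating structure described above is easy to see in code. Below is a minimal sketch of SADH's three-module loop with toy stand-ins throughout: random features play the role of the deep model's outputs, a Gaussian affinity plays the similarity graph, and a crude sign iteration plays the binary code solver. None of this is the authors' implementation.

```python
import numpy as np

def similarity_graph(emb, sigma=1.0):
    # Gaussian affinity over the current deep representations.
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def optimize_codes(S, n_bits, n_iter=20, seed=1):
    # Crude stand-in for the paper's binary-constrained solver:
    # iterate B <- sign(S @ B) under the +/-1 constraint.
    rng = np.random.default_rng(seed)
    B = np.sign(rng.normal(size=(S.shape[0], n_bits)))
    for _ in range(n_iter):
        B = np.where(S @ B >= 0, 1.0, -1.0)
    return B

rng = np.random.default_rng(0)
emb = rng.normal(size=(64, 8))        # stand-in for deep hash features
for _ in range(3):                    # alternate the three modules
    S = similarity_graph(emb)         # 1) similarity graph updating
    B = optimize_codes(S, n_bits=16)  # 2) binary code optimization
    emb = 0.9 * emb + 0.1 * B[:, :8]  # 3) nudge the "model" toward codes (toy)
print(B.shape)
```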


Journal ArticleDOI
TL;DR: Particle swarm optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms as discussed by the authors.
Abstract: Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social co-operation of birds and fish schools and draws heavily from the evolutionary behavior of these organisms. This paper serves to provide a thorough survey of the PSO algorithm with special emphasis on the development, deployment and improvements of its most basic as well as some of the state-of-the-art implementations. Concepts and directions on choosing the inertia weight, constriction factor, cognition and social weights and perspectives on convergence, parallelization, elitism, niching and discrete optimization as well as neighborhood topologies are outlined. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered and an up-to-date review is put forward for the interested reader.

260 citations
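Since the survey's core object is the canonical velocity and position update, a compact reference implementation may be useful. The inertia weight w and acceleration coefficients c1, c2 below are common textbook defaults, not values prescribed by the survey, and the sphere function is a toy objective.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive + social components of the velocity
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda z: (z ** 2).sum())  # minimize the sphere function
print(best_x, best_f)
```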



Book
16 Jun 2018
TL;DR: This book introduces a novel approach to discrete optimization, providing both theoretical insights and algorithmic developments that lead to improvements over state-of-the-art technology.
Abstract: This book introduces a novel approach to discrete optimization, providing both theoretical insights and algorithmic developments that lead to improvements over state-of-the-art technology. The authors present chapters on the use of decision diagrams for combinatorial optimization and constraint programming, with attention to general-purpose solution methods as well as problem-specific techniques. The book will be useful for researchers and practitioners in discrete optimization and constraint programming. "Decision Diagrams for Optimization is one of the most exciting developments emerging from constraint programming in recent years. This book is a compelling summary of existing results in this space and a must-read for optimizers around the world." [Pascal Van Hentenryck]

123 citations
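As a flavor of the book's topic, here is a minimal layered decision diagram for a 0/1 knapsack: each layer's nodes are reachable capacity states, and the optimum is the best value over the final layer. This is a generic textbook construction, not an excerpt from the book; the relaxed and restricted diagrams that drive the book's bounds are omitted.

```python
# Exact layered decision diagram (as a DP over states) for 0/1 knapsack.
def dd_knapsack(values, weights, cap):
    layer = {0: 0}                      # state: used capacity -> best value
    for v, w in zip(values, weights):
        nxt = {}
        for used, val in layer.items():
            for take in (0, 1):         # each arc decides one variable
                u2 = used + take * w
                if u2 <= cap:
                    nxt[u2] = max(nxt.get(u2, -1), val + take * v)
        layer = nxt
    return max(layer.values())

print(dd_knapsack([6, 10, 12], [1, 2, 3], cap=5))   # -> 22
```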


Journal ArticleDOI
TL;DR: This work proposes a novel supervised hashing approach, termed as Robust Discrete Code Modeling (RDCM), which directly learns high-quality discretebinary codes and hash functions by effectively suppressing the influence of unreliable binary codes and potentially noisily-labeled samples.

101 citations


BookDOI
01 Jan 2018
TL;DR: This work intends to analyze nature-inspired algorithms both qualitatively and quantitatively; it briefly outlines the links between self-organization and algorithms, and then analyzes algorithms using Markov chain theory, dynamical systems theory, and other methods.
Abstract: Nature-inspired algorithms are a class of effective tools for solving optimization problems and these algorithms have good properties such as simplicity, flexibility and high efficiency. Despite their popularity in practice, a mathematical framework is yet to be developed to analyze these algorithms theoretically. This work intends to analyze nature-inspired algorithms both qualitatively and quantitatively. We briefly outline the links between self-organization and algorithms, and then analyze algorithms using Markov chain theory, dynamical systems theory, and other methods. This can serve as a basis for building a multidisciplinary framework for algorithm analysis.

98 citations
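To make the Markov-chain angle concrete, the toy below treats the fitness levels of a stochastic search as states of an absorbing chain and computes expected hitting times from the fundamental matrix N = (I - Q)^(-1). The transition probabilities are purely illustrative, not derived from any specific algorithm in the chapter.

```python
import numpy as np

P = np.array([[0.6, 0.4, 0.0],    # transition probabilities between
              [0.0, 0.7, 0.3],    # fitness levels (toy values)
              [0.0, 0.0, 1.0]])   # the optimum is absorbing
Q = P[:2, :2]                     # transient part of the chain
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix
print(N.sum(axis=1))              # expected steps to absorption per start state
```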


Journal Article
TL;DR: In this article, the authors present a custom discrete optimization technique for building rule lists over a categorical feature space, which produces rule lists with optimal training performance, according to the regularized empirical risk, with a certificate of optimality.
Abstract: We present the design and implementation of a custom discrete optimization technique for building rule lists over a categorical feature space. Our algorithm produces rule lists with optimal training performance, according to the regularized empirical risk, with a certificate of optimality. By leveraging algorithmic bounds, efficient data structures, and computational reuse, we achieve several orders of magnitude speedup in time and a massive reduction of memory consumption. We demonstrate that our approach produces optimal rule lists on practical problems in seconds. Our results indicate that it is possible to construct optimal sparse rule lists that are approximately as accurate as the COMPAS proprietary risk prediction tool on data from Broward County, Florida, but that are completely interpretable. This framework is a novel alternative to CART and other decision tree methods for interpretable modeling.

69 citations
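The objective being optimized is worth spelling out: the regularized empirical risk of a rule list is its misclassification rate plus a penalty proportional to its length. The sketch below evaluates that objective on a hypothetical two-feature example; the branch-and-bound search with algorithmic bounds and caching that the paper actually contributes is omitted.

```python
# Regularized empirical risk of a rule list: error rate + lam * length.
def rule_list_risk(rules, default, X, y, lam=0.01):
    errors = 0
    for xi, yi in zip(X, y):
        for cond, label in rules:
            if cond(xi):
                pred = label          # first matching rule fires
                break
        else:
            pred = default            # no rule matched
        errors += int(pred != yi)
    return errors / len(y) + lam * len(rules)

X = [{"age>25": True, "priors>3": False},
     {"age>25": False, "priors>3": True}]
y = [0, 1]
rules = [(lambda r: r["priors>3"], 1)]    # "if priors>3 then predict 1"
print(rule_list_risk(rules, default=0, X=X, y=y))
```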


Journal ArticleDOI
08 Mar 2018-PLOS ONE
TL;DR: More robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model based on Bertsimas’ theory of robust discrete optimization.
Abstract: To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, the distribution path problem for EVs with multiple distribution centers and charging facilities must be optimized. With minimum transport time as the goal, a robust optimization model of EV distribution paths with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme contains all road-by-road path data from the outset via the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, while infeasible solutions are naturally avoided during population evolution. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model.

64 citations
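The Bertsimas-style robustness used here can be illustrated independently of the routing details: with an uncertainty budget Gamma, at most Gamma arcs on a path take their worst-case deviation. A minimal evaluation of that Gamma-robust path cost follows; the arc names, nominal times, and deviations are hypothetical.

```python
# Gamma-robust path cost in the spirit of Bertsimas-Sim robust
# discrete optimization: nominal time plus the Gamma largest deviations.
def robust_cost(path_arcs, t, d, gamma):
    nominal = sum(t[a] for a in path_arcs)
    worst = sorted((d[a] for a in path_arcs), reverse=True)[:gamma]
    return nominal + sum(worst)

t = {"a": 10, "b": 12, "c": 7}    # nominal travel times (hypothetical)
d = {"a": 4, "b": 1, "c": 3}      # maximum deviations (hypothetical)
print(robust_cost(["a", "b", "c"], t, d, gamma=2))   # 29 + 4 + 3 = 36
```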


Journal ArticleDOI
TL;DR: A generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D, is proposed; to match the properties of permutation-based MOCOPs, it utilizes an element-based representation and a constructive approach.
Abstract: This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step, following the permutation-tree-shaped structure, and problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

63 citations
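The decomposition strategy mentioned above is the MOEA/D idea of turning one multiobjective problem into many scalar subproblems via weight vectors. A weighted-sum sketch is shown below; the paper's element-based representation and constructive, permutation-tree search are not reproduced.

```python
import numpy as np

# Weighted-sum scalarization of one objective vector under a set of
# weight vectors: each weight defines one single-objective subproblem.
def scalarize(objs, weight):
    return float(np.dot(weight, objs))

weights = [np.array([w, 1 - w]) for w in np.linspace(0, 1, 5)]
objs = np.array([3.0, 1.5])       # objective vector of one candidate solution
print([scalarize(objs, w) for w in weights])
```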


Proceedings Article
03 Jul 2018
TL;DR: In this article, an adaptive, scalable model that identifies useful combinatorial structure even when data is scarce is proposed, which uses semidefinite programming to achieve efficiency and scalability.
Abstract: The optimization of expensive-to-evaluate black-box functions over combinatorial structures is a ubiquitous task in machine learning, engineering and the natural sciences. The combinatorial explosion of the search space and costly evaluations pose challenges for current techniques in discrete optimization and machine learning, and critically require new algorithmic ideas. This article proposes, to the best of our knowledge, the first algorithm to overcome these challenges, based on an adaptive, scalable model that identifies useful combinatorial structure even when data is scarce. Our acquisition function pioneers the use of semidefinite programming to achieve efficiency and scalability. Experimental evaluations demonstrate that this algorithm consistently outperforms other methods from combinatorial and Bayesian optimization.

61 citations
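As a rough illustration of model-based search over binary structures (and only that; the paper's sparse Bayesian surrogate and SDP acquisition are not reproduced), the toy below fits a least-squares linear surrogate on observed bit strings and picks the next candidate by exhaustive acquisition, which is only feasible at tiny dimension.

```python
import itertools
import numpy as np

def fit_surrogate(X, y):
    # Least-squares weights with an intercept term.
    A = np.hstack([np.ones((len(X), 1)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def next_point(w, dim, seen):
    # Acquisition by enumeration: best predicted unseen bit string.
    cands = [np.array(b) for b in itertools.product([0, 1], repeat=dim)
             if b not in seen]
    scores = [w[0] + c @ w[1:] for c in cands]
    return cands[int(np.argmin(scores))]

f = lambda x: (x.sum() - 2) ** 2          # expensive black box (toy)
rng = np.random.default_rng(1)
X = [tuple(rng.integers(0, 2, 4)) for _ in range(3)]
y = np.array([f(np.array(x)) for x in X])
w = fit_surrogate(np.array(X), y)
print(next_point(w, 4, set(X)))
```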


Journal ArticleDOI
TL;DR: Numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization.
Abstract: In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient m...

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper presents a discrepancy minimizing model to address the discrete optimization problem in hashing learning and transforms the original binary optimization into a differentiable optimization problem over hash functions through series expansion.
Abstract: This paper presents a discrepancy minimizing model to address the discrete optimization problem in hashing learning. The discrete optimization introduced by the binary constraint is an NP-hard mixed integer programming problem. It is usually addressed by relaxing the binary variables into continuous variables to adapt to the gradient-based learning of hashing functions, especially the training of deep neural networks. To deal with the objective discrepancy caused by relaxation, we transform the original binary optimization into a differentiable optimization problem over hash functions through series expansion. This transformation decouples the binary constraint and the similarity-preserving hashing function optimization. The transformed objective is optimized in a tractable alternating optimization framework with gradual discrepancy minimization. Extensive experimental results on three benchmark datasets validate the efficacy of the proposed discrepancy minimizing hashing.
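A common concrete instance of the relaxation discussed above is annealing a smooth surrogate toward the sign function, e.g. tanh(beta * x) with growing beta; the sketch below shows the discrepancy to sign() shrinking as beta grows. This is an illustrative surrogate, not the paper's exact series expansion.

```python
import numpy as np

def soft_sign(x, beta):
    # Differentiable surrogate that approaches sign(x) as beta grows.
    return np.tanh(beta * x)

x = np.linspace(-1, 1, 5)
for beta in (1, 5, 50):               # gradual discrepancy minimization
    print(beta, np.abs(soft_sign(x, beta) - np.sign(x)).max())
```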

Journal ArticleDOI
TL;DR: A new four-instant finite difference (FIFD) formula is proposed that discretizes the continuous ZTND model with high accuracy; the formula is applied to robot motion planning, demonstrating its feasibility in practice.
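For context, a four-point backward-difference formula for x'(t) looks like the following; these are standard third-order textbook coefficients, not necessarily the paper's FIFD coefficients.

```python
import math

def backward_diff4(x_k, x_k1, x_k2, x_k3, tau):
    # Third-order backward difference over four sampled instants.
    return (11 * x_k - 18 * x_k1 + 9 * x_k2 - 2 * x_k3) / (6 * tau)

tau = 0.01
xs = [math.sin(t * tau) for t in range(4)]               # samples at t0..t3
print(backward_diff4(xs[3], xs[2], xs[1], xs[0], tau))   # ~ cos(0.03)
```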

Journal ArticleDOI
TL;DR: An improved binary particle swarm optimization (BPSO) algorithm is proposed for the design of high-dimensional, multifunctional, and compact fragment-type antennas (FTAs), together with a new transfer function with a time-variant transfer factor that mitigates the tendency of basic BPSO to fall into local optima.
Abstract: An improved binary particle swarm optimization (BPSO) algorithm is proposed for the design of high-dimensional, multifunctional, and compact fragment-type antennas (FTAs). First, orthogonal-array-based initialization, instead of randomized initialization, is employed to uniformly sample the design space for better population diversity. Then, a new transfer function with a time-variant transfer factor is proposed to mitigate the tendency of basic BPSO to fall into local optima. Experimental results of two miniaturized FTA designs show that the proposed BPSO exhibits better convergence performance than other published discrete optimization algorithms and can provide excellent candidates for internal miniaturized antenna designs in wireless and portable applications.
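In binary PSO, the transfer function maps a real-valued velocity to a bit-flip probability. The sketch below uses a sigmoid with a linearly growing transfer factor as an assumed form of the paper's time-variant idea; the schedule and constants k0, k1 are illustrative.

```python
import numpy as np

def transfer(v, t, T, k0=1.0, k1=4.0):
    k = k0 + (k1 - k0) * t / T        # time-variant transfer factor (assumed form)
    return 1.0 / (1.0 + np.exp(-k * v))

rng = np.random.default_rng(0)
v = rng.normal(size=8)                              # particle velocities
bits = (rng.random(8) < transfer(v, t=10, T=100)).astype(int)
print(bits)                                         # sampled binary position
```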

Journal ArticleDOI
TL;DR: A generalised version of a continuous-time analog solver for MaxSAT is presented and it is shown that the scaling of the escape rate, an invariant of the solver's dynamics, can predict the maximum number of satisfiable constraints, often well before finding the optimal assignment.
Abstract: Many real-life optimization problems can be formulated in Boolean logic as MaxSAT, a class of problems where the task is finding Boolean assignments to variables satisfying the maximum number of logical constraints. Since MaxSAT is NP-hard, no algorithm is known to efficiently solve these problems. Here we present a continuous-time analog solver for MaxSAT and show that the scaling of the escape rate, an invariant of the solver's dynamics, can predict the maximum number of satisfiable constraints, often well before finding the optimal assignment. Simulating the solver, we illustrate its performance on MaxSAT competition problems, then apply it to two-color Ramsey number R(m, m) problems. Although it finds colorings without monochromatic 5-cliques of complete graphs on N ≤ 42 vertices, the best coloring for N = 43 has two monochromatic 5-cliques, supporting the conjecture that R(5, 5) = 43. This approach shows the potential of continuous-time analog dynamical systems as algorithms for discrete optimization. The continuous-time computation paradigm could represent a viable alternative to the standard digital one when dealing with certain classes of problems; here, the authors propose a generalised version of a continuous-time solver and simulate its performance in solving MaxSAT and two-colour Ramsey problems.
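The discrete objective underlying all of this is simple to state: count the clauses a Boolean assignment satisfies. A minimal evaluator over a toy CNF instance (literals encoded as signed integers) follows; the analog dynamics themselves are beyond a short sketch.

```python
# Count satisfied clauses; a clause is a list of signed literals,
# where literal l is satisfied iff variable |l| equals (l > 0).
def satisfied(clauses, assign):
    return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

clauses = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]   # toy instance
assign = {1: True, 2: False, 3: True}
print(satisfied(clauses, assign), "of", len(clauses))   # 3 of 4
```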

Journal ArticleDOI
TL;DR: It is shown that FC-MOPSO can effectively find acceptable approximations of Pareto fronts for structural MOPs within a very limited number of function evaluations.
Abstract: This paper presents a new multi-objective optimization algorithm called FC-MOPSO for the optimal design of engineering problems with a small number of function evaluations. The proposed algorithm expands the main idea of the single-objective particle swarm optimization (PSO) algorithm to deal with constrained and unconstrained multi-objective problems (MOPs). FC-MOPSO employs an effective procedure in the selection of the leader for each particle to ensure both diversity and fast convergence. Fifteen benchmark problems with continuous design variables are used to validate the performance of the proposed algorithm. Finally, a modified version of FC-MOPSO is introduced for handling discrete optimization problems. Its performance is demonstrated by optimizing five space truss structures. It is shown that FC-MOPSO can effectively find acceptable approximations of Pareto fronts for structural MOPs within a very limited number of function evaluations.
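Leader selection and archive maintenance in multi-objective PSO rest on the Pareto dominance test, shown below for minimization; this is the generic primitive, not FC-MOPSO's specific leader rule.

```python
# a dominates b iff a is no worse in every objective and strictly
# better in at least one (minimization assumed).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((1.0, 2.0), (1.5, 2.0)))   # True
print(dominates((1.0, 3.0), (1.5, 2.0)))   # False (incomparable)
```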

Journal ArticleDOI
TL;DR: Comparisons demonstrate that the hybrid variant of the TSA is better than the other variants of the algorithm and state-of-the-art algorithms in terms of solution quality and robustness.

Journal ArticleDOI
TL;DR: In this paper, the discrete optimization design of TWB structures with top-hat thin-walled sections subjected to front dynamic impact is performed using Taguchi-based gray relational analysis; material grades and thicknesses with three levels are taken as discrete design variables.
Abstract: In order to further improve crashworthiness and reduce weight, tailor-welded blanks (TWBs) have been widely applied in auto-body design. In this paper, the discrete optimization design of TWB structures with top-hat thin-walled sections subjected to front dynamic impact is performed by using Taguchi-based gray relational analysis. Material grades and thicknesses with three levels are taken as discrete design variables. The total energy absorption (EA), the total weight (Mass), and the peak crashing force (Fmax) are chosen as optimization indicators. Considering the uncertain weight ratio of the responses, four different cases are analyzed. In order to determine the optimal parameter combination more accurately and eliminate errors from range analysis, an analysis of variance (ANOVA) is performed. The optimized results demonstrate that it is feasible to increase the crashworthiness of TWBs by increasing the gray correlation of the structure. Compared to the initial structure, case 1 (w(Fmax):w(EA):w(Mass) = 1/3:1/3:1/3) shows the largest improvement among the four cases, i.e., Fmax and Mass are reduced by 29.3% and 2.7%, respectively, while EA is increased by 3.5%. The discrete optimization method, with only 27 iterations, has low computational cost and provides guidance for similar structural designs. More comprehensive studies are essential to optimize the performance of multi-component structures with more discrete variables.
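For readers unfamiliar with gray relational analysis, the grade computation is short: normalize each response, measure each trial's distance to the ideal sequence, and average the resulting coefficients under the chosen weights. The response values below are hypothetical (columns are larger-is-better proxies), and zeta = 0.5 is the customary distinguishing coefficient.

```python
import numpy as np

def grey_relational_grade(X, weights, zeta=0.5):
    # Normalize each column so the ideal value maps to 1 (larger-is-better).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    delta = np.abs(1.0 - Xn)                        # distance to the ideal
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff @ weights                          # grade for each trial

X = np.array([[30.0, 12.0, 95.0],   # rows: trials; cols: EA, -Mass, -Fmax proxies
              [34.0, 10.0, 90.0],
              [32.0, 11.0, 97.0]])
w = np.array([1/3, 1/3, 1/3])       # equal weights, as in case 1
print(grey_relational_grade(X, w))
```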

Book ChapterDOI
08 Sep 2018
TL;DR: This approach formulates the non-smooth part of the hashing network as sampling with a stochastic policy, so that the retrieval performance degradation caused by the relaxation can be avoided and the differentiation challenge for discrete optimization can be naturally addressed.
Abstract: In this paper, we propose a simple yet effective relaxation-free method to learn more effective binary codes via policy gradient for scalable image search. While a variety of deep hashing methods have been proposed in recent years, most of them are confronted by the dilemma to obtain optimal binary codes in a truly end-to-end manner with non-smooth sign activations. Unlike existing methods which usually employ a general relaxation framework to adapt to the gradient-based algorithms, our approach formulates the non-smooth part of the hashing network as sampling with a stochastic policy, so that the retrieval performance degradation caused by the relaxation can be avoided. Specifically, our method directly generates the binary codes and maximizes the expectation of rewards for similarity preservation, where the network can be trained directly via policy gradient. Hence, the differentiation challenge for discrete optimization can be naturally addressed, which leads to effective gradients and binary codes. Extensive experimental results on three benchmark datasets validate the effectiveness of the proposed method.
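The core trick, stripped of the network, is to treat each bit as a Bernoulli policy and update its logit with a REINFORCE-style gradient weighted by a similarity reward. The toy below uses a fixed target code as a stand-in reward; the code length, learning rate, and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(16)                            # one stochastic policy per bit
target = np.sign(rng.normal(size=16))            # stand-in "similar" code

for step in range(200):
    p = 1.0 / (1.0 + np.exp(-logits))            # P(bit = +1)
    b = np.where(rng.random(16) < p, 1.0, -1.0)  # sample binary codes
    reward = float(b @ target) / 16.0            # similarity-preservation reward
    grad = reward * ((b + 1) / 2 - p)            # REINFORCE gradient estimate
    logits += 0.5 * grad

print((np.sign(logits) == target).mean())        # fraction of matched bits
```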

Journal ArticleDOI
05 Jul 2018-PLOS ONE
TL;DR: Experimental results showed that the SOS with LPT (SOS-LPT) heuristic has the best performance among the tested methods, closely followed by the standard SOS algorithm, indicating that the two proposed solution approaches are reasonable and effective for solving large-scale UPMSPs.
Abstract: This paper addresses the problem of makespan minimization on unrelated parallel machines with sequence-dependent setup times. The symbiotic organisms search (SOS) algorithm is a new and popular global optimization technique that has received wide acceptance in recent years from researchers in continuous and discrete optimization domains. An improved SOS algorithm is developed to solve the parallel machine scheduling problem. Since the standard SOS algorithm was originally developed to solve continuous optimization problems, a new solution representation and decoding procedure is designed to make the SOS algorithm suitable for the unrelated parallel machine scheduling problem (UPMSP). Similarly, to enhance the solution quality of the SOS algorithm, an iterated local search strategy based on combining variable numbers of insertion and swap moves is incorporated into the SOS algorithm. Moreover, to further improve the SOS optimization speed and performance, the longest processing time first (LPT) rule is used to design a machine assignment heuristic that assigns processing machines to jobs based on a machine dynamic load-balancing mechanism. Subsequently, the machine assignment scheme is incorporated into the SOS algorithms and used to solve the UPMSP. The performance of the proposed methods is evaluated by comparing their solutions with other existing techniques from the literature. A number of statistical tests were also conducted to determine the variations in performance for each of the techniques. The experimental results showed that the SOS with LPT (SOS-LPT) heuristic has the best performance among the tested methods, closely followed by the standard SOS algorithm, indicating that the two proposed solution approaches are reasonable and effective for solving large-scale UPMSPs.
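The LPT-based machine assignment is easy to isolate: sort jobs longest-first and always assign the next job to the least-loaded machine. The sketch below does this for identical machines; the setup times and unrelated machine speeds of the full UPMSP are omitted.

```python
import heapq

def lpt_makespan(jobs, n_machines):
    # Min-heap of (load, machine id, assigned jobs).
    loads = [(0.0, m, []) for m in range(n_machines)]
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):            # longest processing time first
        load, m, assigned = heapq.heappop(loads)    # least-loaded machine
        heapq.heappush(loads, (load + p, m, assigned + [p]))
    return max(load for load, _, _ in loads)        # makespan

print(lpt_makespan([7, 5, 4, 3, 3, 2], n_machines=2))   # -> 12
```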

Proceedings ArticleDOI
Thomas Weise, Zijun Wu
06 Jul 2018
TL;DR: The W-Model is put into the context of related model problems targeting ruggedness, neutrality, and epistasis, and suitable configurations of it that could be included in the BB-DOB benchmark suite are suggested.
Abstract: The first event of the Black-Box Discrete Optimization Benchmarking (BB-DOB) workshop series aims to establish a set of example problems for benchmarking black-box optimization algorithms for discrete or combinatorial domains. In this paper, we 1) discuss important features that should be embodied by these benchmark functions and 2) present the W-Model problem, which exhibits them. The W-Model follows a layered approach, where each layer can either be omitted or introduce a different characteristic feature such as neutrality via redundancy, ruggedness and deceptiveness, epistasis, and multi-objectivity, in a tunable way. The model problem is defined over bit string representations, which allows for extracting some of its layers and stacking them on top of existing problems that use this representation, such as OneMax, the Maximum Satisfiability or the Set Covering tasks, and the NK landscape. The ruggedness and deceptiveness layer can be stacked on top of any problem with integer-valued objectives. We put the W-Model into the context of related model problems targeting ruggedness, neutrality, and epistasis. We then present the results of a series of experiments to further substantiate the utility of the W-Model and to give an idea about suitable configurations of it that could be included in the BB-DOB benchmark suite.
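A tiny illustration of the layered idea: start from OneMax and stack a neutrality (redundancy) layer that collapses groups of mu bits by majority vote. The layer definition below is simplified relative to the W-Model's.

```python
def neutrality(bits, mu=2):
    # Redundancy layer: each group of mu bits collapses to its majority
    # (ties resolved to 1 in this simplified version).
    return [int(sum(bits[i:i + mu]) * 2 >= mu) for i in range(0, len(bits), mu)]

def onemax(bits):
    return sum(bits)

x = [1, 0, 1, 1, 0, 0, 1, 1]
print(onemax(neutrality(x, mu=2)))   # fitness after the neutrality layer
```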

Journal ArticleDOI
TL;DR: The performance of an emerging socio-inspired metaheuristic optimization technique referred to as the Cohort Intelligence (CI) algorithm is evaluated on discrete and mixed-variable nonlinear constrained optimization problems.
Abstract: In this study, the performance of an emerging socio-inspired metaheuristic optimization technique referred to as the Cohort Intelligence (CI) algorithm is evaluated on discrete and mixed-variable nonlinear constrained optimization problems. The investigated problems are mainly adopted from the discrete structural optimization and mixed-variable mechanical engineering design domains. For handling the discrete solution variables, a round-off integer sampling approach is proposed. Furthermore, in order to deal with the nonlinear constraints, a penalty function method is incorporated. The obtained results are promising and computationally more efficient than those of other existing optimization techniques, including a Multi Random Start Local Search algorithm. The associated advantages and disadvantages of the CI algorithm are also discussed, evaluating the effect of its two parameters, namely the number of candidates and the sampling-space reduction factor.
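Both handling devices mentioned above are simple to sketch: round-off sampling for discrete variables and a quadratic penalty for constraint violation. The objective, constraint, and penalty weight below are toy stand-ins, not the paper's test problems.

```python
import numpy as np

def penalized(x, r=1e3):
    f = (x ** 2).sum()                  # objective (toy)
    g = max(0.0, 3.0 - x.sum())         # violation of constraint x1 + x2 >= 3
    return f + r * g ** 2               # quadratic penalty method

rng = np.random.default_rng(0)
candidates = np.round(rng.uniform(0, 5, size=(10, 2)))   # round-off integer sampling
best = min(candidates, key=penalized)
print(best, penalized(best))
```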

Proceedings ArticleDOI
02 Jul 2018
TL;DR: This documentation adjusts the COCO software to pseudo-Boolean optimization problems, and obtains from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics.
Abstract: Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1 + λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the (1 + λ) EA on LeadingOnes.
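For reference, the (1 + λ) EA profiled in these experiments can be stated in a few lines: generate λ offspring by standard bit mutation with rate 1/n and keep the best if it is at least as good as the parent. The OneMax run below is a minimal version, not the adjusted COCO setup itself.

```python
import numpy as np

def one_plus_lambda_ea(n=50, lam=4, budget=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)                       # random initial bit string
    evals = 0
    while x.sum() < n and evals < budget:           # OneMax optimum is all ones
        masks = rng.random((lam, n)) < 1.0 / n      # standard bit mutation, rate 1/n
        offspring = np.where(masks, 1 - x, x)
        evals += lam
        best = offspring[offspring.sum(axis=1).argmax()]
        if best.sum() >= x.sum():                   # elitist: accept ties
            x = best
    return evals                                    # evaluations used

print(one_plus_lambda_ea())
```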

Journal ArticleDOI
TL;DR: Results demonstrated that the fuzzy CRS method is computationally more efficient and substantially more robust than HL-RF for fuzzy-based reliability analysis of nonlinear structural reliability problems.
Abstract: Fuzzy reliability analysis can be implemented using two discrete optimization maps in the processes of reliability and fuzzy analysis. The efficiency and robustness of the iterative reliability methods are two main factors in fuzzy-based reliability analysis, owing to the huge computational burden and potentially unstable results. In structural fuzzy reliability analysis, the first-order reliability method (FORM) using a discrete nonlinear map can provide a C membership function. In this paper, a discrete nonlinear conjugate map using a relaxed finite-step-size method is proposed for fuzzy structural reliability analysis, namely the fuzzy conjugate relaxed finite-step-size (fuzzy CRS) method. The discrete conjugate map is stabilized using two adaptive factors to compute the relaxation factor and step size in FORM. The framework of the proposed fuzzy structural reliability method is established using two linked iterative discrete maps: an outer loop, which constructs the membership function of the response using alpha-level-set optimization based on genetic operators, and an inner loop, which performs reliability analysis using the proposed conjugate relaxed finite-step-size method. The fuzzy CRS and fuzzy HL-RF methods are compared by evaluating the membership functions of five structural problems with highly nonlinear limit state functions. Results demonstrate that the fuzzy CRS method is computationally more efficient and substantially more robust than HL-RF for fuzzy-based reliability analysis of nonlinear structural reliability problems.
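The HL-RF baseline referenced above is the classical FORM iteration u_{k+1} = ((∇g·u_k - g(u_k)) / ||∇g||^2) ∇g in standard normal space; a minimal version on a toy linear limit state (where it converges in one step) follows.

```python
import numpy as np

def hlrf(g, grad, u0, iters=20):
    # Classical HL-RF fixed-point iteration of FORM.
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        gu, dg = g(u), grad(u)
        u = (dg @ u - gu) / (dg @ dg) * dg
    return np.linalg.norm(u)            # reliability index beta

g = lambda u: u[0] + u[1] + 3.0         # linear limit state (toy)
grad = lambda u: np.array([1.0, 1.0])
print(hlrf(g, grad, [0.0, 0.0]))        # beta = 3/sqrt(2) ≈ 2.121
```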

Journal ArticleDOI
TL;DR: A very general discrete covering location model that accounts for uncertainty and time-dependent aspects is introduced, and a Lagrangian relaxation-based heuristic is developed to tackle large instances of this problem.
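For orientation, a greedy heuristic for a basic set-covering location problem, a much simpler relative of the model in the paper (uncertainty and time dependence omitted), is sketched below; facility names and coverage sets are hypothetical.

```python
# Greedy covering: repeatedly open the facility covering the most
# still-uncovered demand points.
def greedy_cover(coverage, demands):
    opened, uncovered = [], set(demands)
    while uncovered:
        best = max(coverage, key=lambda f: len(coverage[f] & uncovered))
        if not coverage[best] & uncovered:
            break                         # remaining demands are uncoverable
        opened.append(best)
        uncovered -= coverage[best]
    return opened

coverage = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {4, 5}}
print(greedy_cover(coverage, demands={1, 2, 3, 4, 5}))   # -> ['f1', 'f3']
```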

Journal ArticleDOI
TL;DR: A technique for decomposing a discrete optimization system by projecting the original problem onto two-dimensional coordinate planes is proposed, which makes it possible to obtain a system of graphic solutions to a complex linear discrete optimization problem.
Abstract: Typically, the search for solutions to discrete optimization problems involves fundamental computational difficulties. The known methods for the exact or approximate solution of such problems are studied taking into consideration their membership in the so-called P and NP problem classes (algorithms with polynomial and exponential running times). Modern combinatorial methods for the practical solution of discrete optimization problems focus on algorithms that obtain an approximate solution with a guaranteed bound on the deviation from the optimum. Simplification algorithms are an effective technique in the search for solutions to an optimization problem. If we project a multi-dimensional process onto a two-dimensional plane, this technique makes it possible to clearly display the set of all solutions to the problem in graphical form. A method for simplifying the combinatorial solution of the discrete optimization problem is proposed in this research. It is based on decomposing the system that reflects the constraints of the original five-dimensional problem onto a two-dimensional plane. This method yields a simple system of graphic solutions to a complex linear discrete optimization problem. From a practical point of view, the proposed method reduces the computational complexity of optimization problems of this class. The applied aspect of the proposed approach is the use of the obtained scientific results to improve typical technological processes described by systems of linear equations under systems of linear constraints. This is a prerequisite for the subsequent development and improvement of similar systems. In this study, a technique for decomposing a discrete optimization system through projection of the original problem onto two-dimensional coordinate planes is proposed. The original problem is thereby transformed into a combinatorial family of subsystems, which makes it possible to obtain a system of graphic solutions to a complex linear discrete optimization problem.

Proceedings ArticleDOI
06 May 2018
TL;DR: In this paper, a generic feature-based recommendation model, called Discrete Factorization Machine (DFM), is proposed for fast and accurate recommendation, which binarizes the real-valued model parameters (e.g., float32) of every feature embedding into binary codes, and thus supports efficient storage and fast user-item score computation.
Abstract: User and item features of side information are crucial for accurate recommendation. However, the large number of feature dimensions, e.g., usually larger than 10^7, results in expensive storage and computational cost. This prohibits fast recommendation, especially on mobile applications where the computational resource is very limited. In this paper, we develop a generic feature-based recommendation model, called Discrete Factorization Machine (DFM), for fast and accurate recommendation. DFM binarizes the real-valued model parameters (e.g., float32) of every feature embedding into binary codes (e.g., boolean), and thus supports efficient storage and fast user-item score computation. To avoid the severe quantization loss of the binarization, we propose a convergent updating rule that resolves the challenging discrete optimization of DFM. Through extensive experiments on two real-world datasets, we show that 1) DFM consistently outperforms state-of-the-art binarized recommendation models, and 2) DFM shows very competitive performance compared to its real-valued version (FM), demonstrating the minimized quantization loss. This work was accepted by IJCAI 2018.
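The storage and scoring payoff of binarization can be shown in two lines: replace float32 embeddings with +/-1 codes so the user-item score becomes a code dot product (bit operations in practice). This is only the payoff, not the method; the convergent discrete updating rule that limits quantization loss is the paper's contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
user_emb = rng.normal(size=32)                   # real-valued embeddings (toy)
item_emb = rng.normal(size=32)
bu, bi = np.sign(user_emb), np.sign(item_emb)    # float32 -> +/-1 binary codes
print(float(bu @ bi))                            # fast user-item score
```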

Journal ArticleDOI
TL;DR: This study presents a novel supplementary technique implemented in optimization algorithms to significantly increase their speed in the Fast-SAGD process, and indicates that, among the various optimization algorithms tested, GA performed 6% better than the other techniques, while a linear discretization function yielded a better optimized point in a shorter time.