
Showing papers on "Discrete optimization published in 2016"


Journal ArticleDOI
TL;DR: Discrete Choice Methods with Simulation by Kenneth Train has been available in the second edition since 2009 and contains two additional chapters, one on endogenous regressors and one on the expectation–maximization (EM) algorithm.
Abstract: Discrete Choice Methods with Simulation by Kenneth Train has been available in the second edition since 2009. The book is published by Cambridge University Press and is also available for download ...

2,977 citations


Journal ArticleDOI
TL;DR: In this article, a discrete extension of modern first-order continuous optimization methods is proposed to find high quality feasible solutions that are used as warm starts to a MIO solver that finds provably optimal solutions.
Abstract: In the period 1991–2015, algorithmic advances in Mixed Integer Optimization (MIO) coupled with hardware improvements have resulted in an astonishing 450 billion factor speedup in solving MIO problems. We present a MIO approach for solving the classical best subset selection problem of choosing $k$ out of $p$ features in linear regression given $n$ observations. We develop a discrete extension of modern first-order continuous optimization methods to find high quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with $n$ in the 1000s and $p$ in the 100s in minutes to provable optimality, and finds near optimal solutions for $n$ in the 100s and $p$ in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.

441 citations
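
The "discrete extension of first-order methods" the abstract refers to is, at its core, projected gradient descent in which the projection keeps only the k largest-magnitude coefficients (iterative hard thresholding). Below is a minimal sketch of such a warm-start routine, assuming a least-squares loss and step size 1/L from the gradient's Lipschitz constant; the function name and all data are illustrative, not the paper's implementation:

```python
import numpy as np

def best_subset_warm_start(X, y, k, iters=500):
    """Iterative hard thresholding: keep the k largest-magnitude coefficients."""
    L = np.linalg.eigvalsh(X.T @ X).max()     # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - X.T @ (X @ beta - y) / L   # plain gradient step
        keep = np.argsort(-np.abs(z))[:k]     # "projection": k largest entries
        beta_new = np.zeros_like(beta)
        beta_new[keep] = z[keep]
        if np.allclose(beta_new, beta):
            break                             # fixed point: a usable warm start
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true_beta = np.zeros(20)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.1 * rng.standard_normal(100)
print(np.nonzero(best_subset_warm_start(X, y, k=3))[0])   # recovered support
```

In the paper's pipeline a solution of this kind is handed to the MIO solver as an incumbent; here it is only run on synthetic data.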


Journal ArticleDOI
TL;DR: A description of recent research advances in the design of B&B algorithms is presented, particularly with regard to the search strategy, the branching strategy, and the pruning rules.

340 citations
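
As a reference point for the three ingredients the review organizes itself around, here is a minimal, generic branch-and-bound sketch (not from the paper): best-first search via a heap (search strategy), fixing one item per level (branching strategy), and pruning with a fractional-relaxation bound (pruning rule), applied to a toy 0/1 knapsack:

```python
import heapq

def knapsack_bnb(values, weights, capacity):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, room):            # fractional-knapsack relaxation
        for i in order[level:]:
            if weights[i] <= room:
                room -= weights[i]; value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]   # (-bound, level, value, room)
    while heap:
        neg_ub, level, value, room = heapq.heappop(heap)
        if -neg_ub <= best:                   # prune: bound no better than incumbent
            continue
        if level == n:
            best = max(best, value); continue
        i = order[level]
        if weights[i] <= room:                # branch: take item i
            heapq.heappush(heap, (-bound(level + 1, value + values[i], room - weights[i]),
                                  level + 1, value + values[i], room - weights[i]))
        heapq.heappush(heap,                  # branch: skip item i
                       (-bound(level + 1, value, room), level + 1, value, room))
        best = max(best, value)               # partial selection is itself feasible
    return best

print(knapsack_bnb([60, 100, 120], [10, 20, 30], 50))   # 220
```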


Proceedings ArticleDOI
07 Jul 2016
TL;DR: This paper proposes a principled CF hashing framework called Discrete Collaborative Filtering (DCF), which directly tackles the challenging discrete optimization that should be treated adequately in hashing, and devises a computationally efficient algorithm for DCF with a rigorous convergence proof.
Abstract: We address the efficiency problem of Collaborative Filtering (CF) by hashing users and items as latent vectors in the form of binary codes, so that user-item affinity can be efficiently calculated in a Hamming space. However, existing hashing methods for CF employ binary code learning procedures, most of which suffer from the challenging discrete constraints. Hence, those methods generally adopt a two-stage learning scheme composed of relaxed optimization via discarding the discrete constraints, followed by binary quantization. We argue that such a scheme will result in a large quantization loss, which especially compromises the performance of large-scale CF that resorts to longer binary codes. In this paper, we propose a principled CF hashing framework called Discrete Collaborative Filtering (DCF), which directly tackles the challenging discrete optimization that should be treated adequately in hashing. The formulation of DCF has two advantages: 1) a Hamming-similarity-induced loss that preserves the intrinsic user-item similarity, and 2) balanced and uncorrelated code constraints that yield compact yet informative binary codes. We devise a computationally efficient algorithm for DCF with a rigorous convergence proof. Through extensive experiments on several real-world benchmarks, we show that DCF consistently outperforms state-of-the-art CF hashing techniques; e.g., using only 8 bits, DCF is even significantly better than other methods using 128 bits.

252 citations
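
The efficiency argument in the abstract rests on Hamming-space scoring: once codes are binary, user-item affinity reduces to an XOR plus a popcount over packed words. A small sketch with random stand-in codes (not DCF's learned codes):

```python
import numpy as np

bits = 64
rng = np.random.default_rng(1)
user_codes = rng.integers(0, 2, (1000, bits), dtype=np.uint8)    # stand-in codes
item_codes = rng.integers(0, 2, (50000, bits), dtype=np.uint8)

u = np.packbits(user_codes, axis=1)    # pack 64 bits into 8 bytes per code
v = np.packbits(item_codes, axis=1)

def hamming_topk(u_row, packed_items, k=10):
    # Hamming distance = popcount of the XOR; unpackbits counts the set bits.
    dist = np.unpackbits(u_row[None, :] ^ packed_items, axis=1).sum(axis=1)
    return np.argsort(dist)[:k]        # smallest distance = highest affinity

print(hamming_topk(u[0], v))           # top-10 items for user 0
```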


Journal ArticleDOI
TL;DR: This work presents a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function and introduces pairwise label costs to describe mutual interactions between targets in order to avoid collisions.
Abstract: The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.

167 citations
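
As a toy illustration of the energy structure described above, with made-up cost values and two hand-coded linear trajectories (nothing here reproduces the paper's actual terms): a discrete labeling assigns each detection to a trajectory or to clutter, and the energy sums data costs, per-track label costs, and pairwise label costs for colliding track pairs:

```python
import numpy as np

def energy(labels, detections, trajectories, outlier_cost=5.0,
           label_cost=2.0, collision_cost=10.0, collision_radius=1.0):
    E, used = 0.0, set()
    times = [t for t, _ in detections]
    # Data term: distance of each detection to its assigned trajectory,
    # or a fixed cost if the detection is labeled as clutter (None).
    for (t, pos), lab in zip(detections, labels):
        if lab is None:
            E += outlier_cost
        else:
            E += float(np.linalg.norm(pos - trajectories[lab](t)))
            used.add(lab)
    E += label_cost * len(used)            # label cost: pay once per active track
    # Pairwise label cost: penalize pairs of active tracks that collide.
    used = sorted(used)
    for i, a in enumerate(used):
        for b in used[i + 1:]:
            if any(np.linalg.norm(trajectories[a](t) - trajectories[b](t))
                   < collision_radius for t in times):
                E += collision_cost
    return E

trajs = {0: lambda t: np.array([t, 0.0]), 1: lambda t: np.array([t, 3.0])}
dets = [(0, np.array([0.1, 0.0])), (1, np.array([1.0, 2.9])), (2, np.array([9.0, 9.0]))]
print(energy([0, 1, None], dets, trajs))   # data + label costs, no collision
```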


Journal ArticleDOI
TL;DR: This paper proposes a novel binary code optimization method, dubbed discrete proximal linearized minimization (DPLM), which directly handles the discrete constraints during the learning process and encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer.
Abstract: Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision, and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this paper, we propose a novel binary code optimization method, dubbed discrete proximal linearized minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure in which each iteration admits an analytical discrete solution, and which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, instantiated in this paper by both a supervised and an unsupervised hashing loss, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised $\ell_{2}$ loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale data sets, and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

154 citations
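
The "analytical discrete solution" per iteration has a simple form in the binary case: after a gradient step on the smooth term, the proximal subproblem under b in {-1, +1}^n is solved by taking signs. A hedged sketch, with a toy quadratic loss standing in for the paper's hashing losses:

```python
import numpy as np

def dplm(grad, b0, L, iters=100):
    """Minimize a smooth loss over b in {-1, +1}^n by proximal linearization."""
    b = b0.copy()
    for _ in range(iters):
        z = b - grad(b) / L                    # gradient step on the smooth term
        b_new = np.where(z >= 0, 1.0, -1.0)    # analytical discrete prox: sign
        if np.array_equal(b_new, b):
            break                              # fixed point -> fast convergence
        b = b_new
    return b

# Toy smooth loss 0.5 * ||A b - y||^2; its gradient is Lipschitz with L = ||A||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 16))
target = np.sign(rng.standard_normal(16))
y = A @ target
L = np.linalg.norm(A, 2) ** 2
b = dplm(lambda b: A.T @ (A @ b - y), np.sign(rng.standard_normal(16)), L)
print((b == target).mean())                    # fraction of bits recovered
```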


Journal ArticleDOI
TL;DR: This review presents models, theory, and numerical methods developed for structural optimization of trusses with discrete design variables in the period 1968–2014, and collects, for the first time, the articles in the field presenting deterministic optimization methods and metaheuristics.
Abstract: This review presents developed models, theory, and numerical methods for structural optimization of trusses with discrete design variables in the period 1968–2014. The comprehensive reference list collects, for the first time, the articles in the field presenting deterministic optimization methods and metaheuristics. The field has experienced a shift in focus from deterministic methods to metaheuristics, i.e., stochastic search methods. Based on the reported numerical results it is, however, not possible to conclude that this shift has improved the ability to solve application-relevant problems. This and other observations lead to a set of recommended research tasks and objectives to bring the field forward. The development of a publicly available benchmark library is urgently needed to support the development and assessment of existing and new heuristics and methods. Combined with this effort, it is recommended that the field begin to use modern tools such as performance profiles for fair and accurate comparison of optimization methods. Finally, theoretical results are rare in this field, meaning that most recent methods and heuristics are not supported by mathematical theory. The field should therefore re-focus on theoretical issues such as problem analysis and convergence properties of new methods.

124 citations
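
The performance profiles recommended above (in the sense of Dolan and Moré) are easy to compute: for each solver, rho(tau) is the fraction of problems it solves within a factor tau of the best solver. A minimal sketch, with the runtimes and the failure convention (np.inf) purely illustrative:

```python
import numpy as np

def performance_profile(times, taus):
    """times: (problems x solvers) array of runtimes, np.inf marking failures."""
    ratios = times / times.min(axis=1, keepdims=True)
    # rho[s][tau] = fraction of problems solver s handles within factor tau
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(times.shape[1])])

times = np.array([[1.0, 2.0], [3.0, 1.5], [np.inf, 4.0]])   # 3 problems, 2 solvers
print(performance_profile(times, taus=[1.0, 2.0, 4.0]))
```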



Journal ArticleDOI
TL;DR: This work introduces a mixed-integer formulation whose standard relaxation still has the same solutions as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail.
Abstract: Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail. Since our reformulation is a minimization problem in continuous variables, it allows us to apply ideas from that field to cardinality-constrained problems. Here, in particular, we therefore also derive suitable stationarity conditions and suggest an appropriate regularization method for the solution of optimization problems with cardinality constraints. This regularization method is shown to be globally convergent to a Mordukhovich-stationary point. Extensive numerical results are given to illustrate the behavior of this method.

114 citations
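
A hedged reconstruction of the kind of reformulation described, using auxiliary variables y_i that mark zero entries of x; the notation is ours and the paper's exact formulation may differ. With e the all-ones vector and kappa the cardinality bound:

```latex
\begin{align*}
\min_{x,\,y \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & e^\top y \ \ge\ n - \kappa, \\
& x_i\, y_i = 0 \quad (i = 1, \dots, n), \\
& 0 \le y \le e.
\end{align*}
```

Whenever y_i > 0 the complementarity constraint forces x_i = 0, and e^T y >= n - kappa with 0 <= y <= e forces at least n - kappa such entries, so ||x||_0 <= kappa holds even after relaxing y from binary to continuous; this is why the relaxation can keep the same global minima in x.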


Book ChapterDOI
TL;DR: In this chapter a review of recent results on robust discrete optimization is presented, and the most popular discrete and interval uncertainty representations are discussed.
Abstract: In this chapter a review of recent results on robust discrete optimization is presented. The most popular discrete and interval uncertainty representations are discussed. Various robust concepts are presented, namely the traditional minmax (regret) approach with some of its recent extensions, and several two-stage concepts. Special attention is paid to the computational properties of the robust problems considered.

98 citations
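
The traditional minmax regret criterion mentioned above has a compact statement; for a feasible set X and an uncertainty set U of cost vectors:

```latex
\[
\min_{x \in X} \ \max_{c \in U} \Big( c^\top x \;-\; \min_{x' \in X} c^\top x' \Big)
\]
```

That is, one minimizes the worst-case gap between the chosen solution's cost and the best cost achievable in hindsight under the realized scenario.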


Journal ArticleDOI
TL;DR: A general branch-and-bound algorithm for discrete optimization in which binary decision diagrams play the role of the traditional linear programming relaxation, in which relaxed BDD representations of the problem provide bounds and guidance for branching, and restricted BDDs supply a primal heuristic.
Abstract: We propose a general branch-and-bound algorithm for discrete optimization in which binary decision diagrams (BDDs) play the role of the traditional linear programming relaxation. In particular, relaxed BDD representations of the problem provide bounds and guidance for branching, and restricted BDDs supply a primal heuristic. Each problem is given a dynamic programming model that allows one to exploit recursive structure, even though the problem is not solved by dynamic programming. A novel search scheme branches within relaxed BDDs rather than on values of variables. Preliminary testing shows that a rudimentary BDD-based solver is competitive with or superior to a leading commercial integer programming solver for the maximum stable set problem, the maximum cut problem on a graph, and the maximum 2-satisfiability problem. Specific to the maximum cut problem, we tested the BDD-based solver on a classical benchmark set and identified tighter relaxation bounds than have ever been found by any technique.
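
A toy illustration of how a relaxed decision diagram yields a bound (our simplification, not the paper's implementation): build the dynamic programming layers of a 0/1 knapsack, where a node's state is the remaining capacity, and whenever a layer exceeds a width limit, merge states by relaxing to the larger capacity. Merging never removes completions, so the resulting value is a valid upper bound:

```python
def relaxed_dd_bound(values, weights, capacity, max_width=4):
    layer = {capacity: 0}                     # state: remaining capacity -> best value
    for v, w in zip(values, weights):
        nxt = {}
        for cap, val in layer.items():
            for take in (0, 1):
                if take and w > cap:
                    continue
                c2 = cap - take * w
                nxt[c2] = max(nxt.get(c2, -1), val + take * v)
        # Width limit: merge the two smallest-capacity states into the larger
        # capacity; the merged node keeps every completion of both states,
        # so the final value can only over-estimate (a valid relaxation).
        while len(nxt) > max_width:
            a, b = sorted(nxt)[:2]
            nxt[b] = max(nxt[b], nxt.pop(a))
        layer = nxt
    return max(layer.values())

vals, wts = [6, 5, 4, 3, 2], [5, 4, 3, 2, 1]
print(relaxed_dd_bound(vals, wts, capacity=7, max_width=2))  # upper bound (exact optimum is 10)
```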

Journal ArticleDOI
TL;DR: A discrete bat-inspired algorithm is extended to solve the famous TSP and achieves significant improvements, not only over traditional algorithms but also over other metaheuristics.
Abstract: The travelling salesman problem (TSP) is one of the best-known NP-hard combinatorial optimization problems and among the most extensively studied problems in discrete optimization. The bat algorithm is a nature-inspired metaheuristic optimization algorithm introduced by Yang in 2010, based on the echolocation behavior of microbats when searching for their prey. Originally, this algorithm was used to solve various continuous optimization problems. In this paper we extend a discrete bat-inspired algorithm to solve the famous TSP. Although many algorithms have been used to solve the TSP, the main objective of this research is to investigate whether this discrete version achieves significant improvements, not only over traditional algorithms but also over other metaheuristics. Moreover, this study is based on a benchmark dataset of symmetric TSP instances from the TSPLIB library.
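
The paper's discrete bat operators are not reproduced here; as a generic stand-in for the kind of discrete improvement move such TSP metaheuristics typically embed, here is a plain 2-opt pass (a segment reversal accepted whenever it shortens the tour):

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 2, n + 1):          # reverse segment tour[i:j]
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best = two_opt([0, 1, 2, 3], dist)
print(best, tour_length(best, dist))               # [0, 1, 3, 2] 18
```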

Journal ArticleDOI
TL;DR: A new real-time optimization scheme that exploits the inherent smoothness of the plant mapping to enable reliable optimization, combining the quadratic approximation approach used in derivative-free optimization techniques with the iterative gradient-modification optimization scheme.

Book ChapterDOI
20 Nov 2016
TL;DR: This work presents an efficient way of training a context network with a large receptive field size on top of a local network using dilated convolutions on patches and provides an extensive empirical investigation of network architectures and model parameters.
Abstract: Motivated by the success of deep learning techniques in matching problems, we present a method for learning context-aware features for solving optical flow using discrete optimization. Towards this goal, we present an efficient way of training a context network with a large receptive field size on top of a local network using dilated convolutions on patches. We perform feature matching by comparing each pixel in the reference image to every pixel in the target image, utilizing fast GPU matrix multiplication. The matching cost volume from the network’s output forms the data term for discrete MAP inference in a pairwise Markov random field. We provide an extensive empirical investigation of network architectures and model parameters. At the time of submission, our method ranks second on the challenging MPI Sintel test set.
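
The matching step the abstract describes reduces to one matrix multiplication: with a feature vector per pixel in each image, all reference-target pixel pairs are scored at once. A sketch with random features standing in for the networks' outputs:

```python
import numpy as np

H, W, D = 32, 48, 64                      # image size and feature dimension
rng = np.random.default_rng(0)
feat_ref = rng.standard_normal((H * W, D)).astype(np.float32)
feat_tgt = rng.standard_normal((H * W, D)).astype(np.float32)

# Normalize so the dot product is a similarity score in [-1, 1].
feat_ref /= np.linalg.norm(feat_ref, axis=1, keepdims=True)
feat_tgt /= np.linalg.norm(feat_tgt, axis=1, keepdims=True)

# Every reference pixel against every target pixel in a single matmul,
# mirroring the GPU matrix-multiplication matching the abstract mentions.
cost_volume = -(feat_ref @ feat_tgt.T)    # negate: high similarity = low cost
print(cost_volume.shape)                  # (H*W, H*W), data term for MAP inference
```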

Journal ArticleDOI
TL;DR: The main contribution of the proposed particle swarm optimization method is that it provides high quality solutions for the time-cost optimization of large size projects within seconds, and enables optimal planning of real-life-size projects.
Abstract: A novel PSO method is presented for the discrete time-cost trade-off problem (DTCTP). The proposed discrete PSO outperforms the state-of-the-art methods. High quality solutions are achieved within seconds for large-scale instances. New large-scale benchmark DTCTP instances are generated and solved to optimality. Although many research studies have concentrated on designing heuristic and meta-heuristic methods for the DTCTP, very little success has been achieved in solving large-scale instances. This paper presents a discrete particle swarm optimization (DPSO) method for the large-scale DTCTP. The proposed DPSO is based on novel principles for representation, initialization and position-updating of the particles, and brings several benefits for solving the DTCTP, such as an adequate representation of the discrete search space and enhanced optimization capabilities due to improved quality of the initial swarm. The computational results reveal that the new method outperforms the state-of-the-art methods, both in terms of solution quality and computation time, especially for medium and large-scale problems. High quality solutions with minor deviations from the global optima are achieved within seconds, for the first time for instances including up to 630 activities. The main contribution of the proposed method is that it provides high quality solutions for the time-cost optimization of large projects within seconds, enabling optimal planning of real-life-size projects.

Proceedings Article
12 Feb 2016
TL;DR: This paper proposes the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm, for optimization in continuous and discrete domains, and proves that optimization problems with Local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework.
Abstract: Many randomized heuristic derivative-free optimization methods share a framework that iteratively learns a model of promising search areas and samples solutions from that model. This paper studies a particular setting of this framework, where the model is implemented by a classification model discriminating good solutions from bad ones. This setting allows a general theoretical characterization, in which factors critical to the optimization are discovered. We also prove that optimization problems with Local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework. Following the critical factors, we propose the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm, for optimization in continuous and discrete domains. Experiments on test functions as well as on machine learning tasks, including spectral clustering and classification with Ramp loss, demonstrate the effectiveness of RACOS.
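
A loose, hedged sketch of the classification-guided loop described above, deliberately not the exact RACOS procedure: keep an elite set of good solutions, fit a crude axis-aligned box that separates them from the rest, and sample new candidates mostly from that box, with occasional global exploration:

```python
import numpy as np

def classification_search(f, dim, lo, hi, budget=2000, elite=10, explore=0.1):
    rng = np.random.default_rng(0)
    pts = rng.uniform(lo, hi, (5 * elite, dim))
    vals = np.array([f(x) for x in pts])
    for _ in range(budget):
        good = pts[np.argsort(vals)[:elite]]            # "positive" class
        box_lo, box_hi = good.min(axis=0), good.max(axis=0)
        if rng.random() < explore:                      # occasional global sample
            x = rng.uniform(lo, hi, dim)
        else:
            x = rng.uniform(box_lo, box_hi)             # sample the learned region
        y = f(x)
        worst = np.argmax(vals)
        if y < vals[worst]:                             # replace the worst point
            pts[worst], vals[worst] = x, y
    return pts[np.argmin(vals)]

sphere = lambda x: float(np.sum(x ** 2))
print(classification_search(sphere, dim=5, lo=-5.0, hi=5.0).round(3))
```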

Journal ArticleDOI
TL;DR: A comparison between the proposed algorithm and other existing methods shows the effectiveness of the proposed method in reaching the global optimum and its rapid convergence to the optimal solution.

Journal ArticleDOI
15 Mar 2016
TL;DR: The proposed algorithms are proved to be effective for the heterogeneous nonlinear agents to achieve the optimal solution in the semi-global sense, even with an exponential convergence rate.
Abstract: In this paper, distributed optimization control for a group of autonomous Lagrangian systems is studied to achieve an optimization task with local cost functions. To solve the problem, two continuous-time distributed optimization algorithms are designed for multiple heterogeneous Lagrangian agents with uncertain parameters. The proposed algorithms are proved to be effective for these heterogeneous nonlinear agents to achieve the optimal solution in the semi-global sense, even with an exponential convergence rate. Moreover, simulations illustrate the effectiveness of our optimization algorithms.

Journal ArticleDOI
01 Jul 2016
TL;DR: A new algorithm called "Quantum-inspired Firefly Algorithm with Particle Swarm Optimization" (QIFAPSO) that, among other things, adapts the firefly approach to solve discrete optimization problems and uses quantum concepts to ensure better control of the solutions' diversity.
Abstract: The firefly algorithm is a recent meta-heuristic inspired by nature. It is based on the swarm intelligence of fireflies and is generally used for solving continuous optimization problems. This paper proposes a new algorithm called "Quantum-inspired Firefly Algorithm with Particle Swarm Optimization" (QIFAPSO) that, among other things, adapts the firefly approach to solve discrete optimization problems. The proposed algorithm uses basic concepts of quantum computing, such as the superposition states of Q-bits and quantum measure, to ensure better control of the solutions' diversity. Moreover, we use a discrete representation for fireflies and propose a variant of the well-known Hamming distance to compute the attractiveness between them. Finally, we combine two strategies that cooperate in exploring the search space: the first is the move of less bright fireflies towards brighter ones, and the second is the PSO movement, in which a firefly moves by taking into account its own best position as well as the best position of its neighborhood. Of course, these two movement strategies are adapted to the quantum representation used in the algorithm for potential solutions. In order to validate our idea and show the efficiency of the proposed algorithm, we have used the multidimensional knapsack problem, a well-known NP-complete problem, and have conducted various tests of our algorithm on different instances of this problem. The experimental results of our algorithm are competitive and in most cases better than those of existing methods.
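
A hedged fragment of the ingredients named in the abstract: Q-bit amplitudes collapse to a binary firefly via a quantum measure, and attractiveness decays with the distance between fireflies. The plain Hamming distance below is a stand-in for the paper's unnamed variant, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(qbits):
    """Collapse Q-bit probabilities (P[bit = 1]) into a binary solution."""
    return (rng.random(qbits.shape) < qbits).astype(int)

def attractiveness(x, y, beta0=1.0, gamma=0.1):
    d = int(np.sum(x != y))                 # Hamming distance between fireflies
    return beta0 * np.exp(-gamma * d * d)   # firefly-style exponential decay

q = np.full(8, 0.5)                         # uniform superposition over 8 bits
a, b = measure(q), measure(q)
print(a, b, attractiveness(a, b))
```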

Journal ArticleDOI
TL;DR: A recent intelligent optimization algorithm called the discrete state transition algorithm is considered for solving unconstrained integer optimization problems, and a dynamic adjustment strategy called "risk and restoration in probability" is proposed to capture global solutions with high probability.

Journal ArticleDOI
Mikio Sakai
TL;DR: In this article, the authors describe an industrial application of the discrete element method (DEM) and present a coarse-grain DEM model for large-scale simulations, a signed distance function-based wall boundary model for complexly shaped walls, and a DEM-moving particle semi-implicit method for solid-liquid flows involving a free surface.
Abstract: In this paper, we describe an industrial application of the discrete element method (DEM). The DEM has been applied to various powder systems thus far and therefore appears to be an established approach. However, it cannot be applied to many industrial systems because of several critical problems, such as the modeling of large-scale systems, complexly shaped wall boundaries, and free-surface fluid flow. To solve these problems, novel models were developed by our group. A coarse-grain DEM model was developed for large-scale simulations, a signed distance function-based wall boundary model was developed for complexly shaped walls, and a DEM-moving particle semi-implicit method was developed for solid-liquid flow involving a free surface. The adequacy of these models was demonstrated through verification and validation tests. Our approach shows promise in industrial applications.
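
A toy version of the signed-distance-function wall idea (our simplification, not the paper's model): a wall is any scalar field phi that is positive inside the vessel, a particle of radius r contacts the wall when phi(center) < r, and the contact normal comes from the numerical gradient of phi:

```python
import numpy as np

def phi(p):                                # SDF of a cylindrical vessel, radius 5
    return 5.0 - np.hypot(p[0], p[1])      # positive inside, negative outside

def wall_contact(p, r, eps=1e-6):
    d = phi(p)
    if d >= r:
        return None                        # no contact with the wall
    grad = np.array([(phi(p + dx) - phi(p - dx)) / (2 * eps)
                     for dx in np.eye(3) * eps])
    n = grad / np.linalg.norm(grad)        # inward normal from the SDF gradient
    return (r - d) * n                     # overlap vector for the contact force

print(wall_contact(np.array([4.9, 0.0, 1.0]), r=0.2))   # ~[-0.1, 0, 0]
```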

Journal ArticleDOI
TL;DR: This is the first evaluation that encompasses such a large set of related NP-complete optimization frameworks, despite their tight connections, and shows that a simple portfolio approach can be very effective.
Abstract: By representing the constraints and objective function in factorized form, graphical models can concisely define various NP-hard optimization problems. They are therefore extensively used in several areas of computer science and artificial intelligence. Graphical models can be deterministic or stochastic, optimize a sum or product of local functions, defining a joint cost or probability distribution. Simple transformations exist between these two types of models, but also with MaxSAT or linear programming. In this paper, we report on a large comparison of exact solvers which are all state-of-the-art for their own target language. These solvers are all evaluated on deterministic and probabilistic graphical models coming from the Probabilistic Inference Challenge 2011, the Computer Vision and Pattern Recognition OpenGM2 benchmark, the Weighted Partial MaxSAT Evaluation 2013, the MaxCSP 2008 Competition, the MiniZinc Challenge 2012 & 2013, and the CFLib (a library of Cost Function Networks). All 3026 instances are made publicly available in five different formats and seven formulations. To our knowledge, this is the first evaluation that encompasses such a large set of related NP-complete optimization frameworks, despite their tight connections. The results show that a small number of evaluated solvers are able to perform well on multiple areas. By exploiting the variability and complementarity of solver performances, we show that a simple portfolio approach can be very effective. This portfolio won the last UAI Evaluation 2014 (MAP task).

Journal ArticleDOI
TL;DR: In this article, a general formulation for hypergraph correlation clustering is proposed and a comparison of LP and ILP cutting plane methods and rounding procedures for the multicut problem is provided.

Journal ArticleDOI
TL;DR: It is demonstrated that CRN leads to improved optimization performance for VOI-based algorithms in sequential sampling environments with a combinatorial number of alternatives and costly samples.
Abstract: This paper addresses discrete optimization via simulation. We show that allowing for both a correlated prior distribution on the means (e.g., with discrete Kriging models) and sampling correlation (e.g., with common random numbers, or CRN) can significantly improve the ability to quickly identify the best alternative. These two correlations are brought together for the first time in a highly sequential knowledge-gradient sampling algorithm, which chooses points to sample using a Bayesian value of information (VOI) criterion. We provide almost sure convergence guarantees as the number of samples grows without bound when parameters are known and provide approximations that allow practical implementation including a novel use of the VOI’s gradient rather than the response surface’s gradient. We demonstrate that CRN leads to improved optimization performance for VOI-based algorithms in sequential sampling environments with a combinatorial number of alternatives and costly samples.
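
A hedged numerical illustration of why common random numbers help here: driving two alternatives from the same uniform stream (inverse-CDF sampling) makes the estimated difference of their means far less noisy than independent simulation, which is exactly the effect a value-of-information criterion can exploit when comparing alternatives. The exponential model is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.random(n)

a_crn = -np.log1p(-u) / 1.0              # Exp(rate 1.0) via inverse CDF
b_crn = -np.log1p(-u) / 1.2              # Exp(rate 1.2), same uniforms (CRN)
b_ind = -np.log1p(-rng.random(n)) / 1.2  # independent stream for comparison

print("mean difference:", (a_crn - b_crn).mean())   # ~ 1 - 1/1.2
print("CRN variance:   ", (a_crn - b_crn).var())    # ~ 0.03
print("indep variance: ", (a_crn - b_ind).var())    # ~ 1.7
```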

Journal ArticleDOI
TL;DR: A new measure of the convergence rate is proposed, called the average convergence rate, which is a normalized geometric mean of the reduction ratio of the fitness difference per generation, applicable for most evolutionary algorithms on both continuous and discrete optimization.
Abstract: In evolutionary optimization, it is important to understand how fast evolutionary algorithms converge to the optimum per generation, or their convergence rates. This letter proposes a new measure of the convergence rate, called the average convergence rate. It is a normalized geometric mean of the reduction ratio of the fitness difference per generation. The calculation of the average convergence rate is very simple and it is applicable for most evolutionary algorithms on both continuous and discrete optimization. A theoretical study of the average convergence rate is conducted for discrete optimization. Lower bounds on the average convergence rate are derived. The limit of the average convergence rate is analyzed and then the asymptotic average convergence rate is proposed.
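
A hedged reconstruction of the measure from the verbal definition above: writing e_t = |f_t - f*| for the fitness gap at generation t, the average convergence rate is the normalized geometric mean of the per-generation reduction ratios:

```latex
\[
R_t \;=\; 1 \;-\; \Bigl(\prod_{s=1}^{t} \frac{e_s}{e_{s-1}}\Bigr)^{1/t}
\;=\; 1 \;-\; \Bigl(\frac{e_t}{e_0}\Bigr)^{1/t},
\qquad e_t = \lvert f_t - f^\ast \rvert .
\]
```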

Proceedings ArticleDOI
20 Jul 2016
TL;DR: This paper presents a version of PSO that is able to optimize over discrete variables, which is called Integer and Categorical PSO (ICPSO), and incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions.
Abstract: Particle Swarm Optimization (PSO) has been shown to perform very well on a wide range of optimization problems. One of the drawbacks to PSO is that the base algorithm assumes continuous variables. In this paper, we present a version of PSO that is able to optimize over discrete variables. This new PSO algorithm, which we call Integer and Categorical PSO (ICPSO), incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions. In this paper, we describe our new algorithm and compare its performance against other discrete PSO algorithms. In our experiments, we demonstrate that our algorithm outperforms comparable methods on both discrete benchmark functions and NK landscapes, a mathematical framework that generates tunable fitness landscapes for evaluating EAs.
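
A hedged sketch of the ICPSO idea as described: each particle stores, per variable, a probability distribution over the categorical values; the velocity update nudges distributions toward the personal and global bests, and candidate solutions are sampled from the distributions for evaluation. The normalization and best-tracking details below are simplified guesses, not the paper's exact rules:

```python
import numpy as np

def icpso(f, n_vars, n_vals, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_vals), (swarm, n_vars))   # distribution particles
    V = np.zeros_like(P)
    samples = np.array([[rng.choice(n_vals, p=p) for p in part] for part in P])
    fit = np.array([f(s) for s in samples])
    pbest, pfit = P.copy(), fit.copy()
    g, gfit = P[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        r1, r2 = rng.random(P.shape), rng.random(P.shape)
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P)   # PSO update
        P = np.clip(P + V, 1e-6, None)
        P /= P.sum(axis=2, keepdims=True)                 # keep valid distributions
        samples = np.array([[rng.choice(n_vals, p=p) for p in part] for part in P])
        fit = np.array([f(s) for s in samples])
        better = fit < pfit
        pbest[better], pfit[better] = P[better], fit[better]
        if fit.min() < gfit:
            g, gfit = P[fit.argmin()].copy(), fit.min()
    return gfit

# Toy objective: count mismatches against a hidden categorical target string.
target = np.array([0, 2, 1, 3, 0, 1])
print(icpso(lambda s: int((s != target).sum()), n_vars=6, n_vals=4))
```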

Journal ArticleDOI
TL;DR: In this article, a structural optimization framework for the seismic design of multi-storey composite buildings, which have steel HEB-columns fully encased in concrete, steel IPE-beams and steel L-bracings, is presented.

Journal ArticleDOI
01 Mar 2016
TL;DR: An ILS approach, strengthened by a hyper-heuristic which generates heuristics based on a fixed number of add and delete operations, is introduced; it achieves generality across two variants of the timetabling problem.
Abstract: Highlights: Add and delete operations are encoded as a list/string of integers (ADL). An effective hyper-heuristic approach operating with ADLs is proposed. Low-level heuristics perform search over the space of feasible solutions. The proposed approach produces new best solutions to some instances and achieves generality across two variants of the timetabling problem. Hyper-heuristics are (meta-)heuristics that operate at a higher level to choose or generate a set of low-level (meta-)heuristics in an attempt to solve difficult optimization problems. Iterated local search (ILS) is a well-known approach for discrete optimization, combining perturbation and hill-climbing within an iterative framework. In this study, we introduce an ILS approach strengthened by a hyper-heuristic which generates heuristics based on a fixed number of add and delete operations. The performance of the proposed hyper-heuristic is tested across two different problem domains using real-world benchmark course timetabling instances from the second International Timetabling Competition, Tracks 2 and 3. The results show that mixing add and delete operations within an ILS framework yields an effective hyper-heuristic approach.

Journal ArticleDOI
TL;DR: A hybrid discrete optimization algorithm based on a teaching-probabilistic learning mechanism (HDTPL) is presented to solve the no-wait flow shop scheduling problem (NWFSSP) with minimization of makespan.
Abstract: Inspired by the phenomenon of teaching and learning introduced by the teaching-learning based optimization (TLBO) algorithm, this paper presents a hybrid discrete optimization algorithm based on teaching-probabilistic learning mechanism (HDTPL) to solve the no-wait flow shop scheduling (NWFSSP) with minimization of makespan. The HDTPL consists of four components, i.e. discrete teaching phase, discrete probabilistic learning phase, population reconstruction, neighborhood search. In the discrete teaching phase, Forward-insert and Backward-insert are adopted to imitate the teaching process. In the discrete probabilistic learning phase, an effective probabilistic model is established with consideration of both job orders in the sequence and similar job blocks of selected superior learners, and then each learner interacts with the probabilistic model by using the crossover operator to learn knowledge. The population reconstruction re-initializes the population every several generations to escape from a local optimum. Furthermore, three types of neighborhood search structures based on the speed-up methods, i.e. Referenced-insert-search, Insert-search and Swap-search, are designed to improve the quality of the current learner and the global best learner. Moreover, the main parameters of HDTPL are investigated by the Taguchi method to find appropriate values. The effectiveness of HDTPL components is analyzed by numerical comparisons, and the comparisons with some efficient algorithms demonstrate the effectiveness and robustness of the proposed HDTPL in solving the NWFSSP.

Journal ArticleDOI
TL;DR: A discrete variant of TLBO (DTLBO) is proposed to address discrete optimization problems, in particular community detection in complex networks; experimental results indicate that MODTLBO/D is effective compared with other algorithms used for community detection in complex networks.