
Showing papers on "Discrete optimization published in 2021"


Journal ArticleDOI
TL;DR: This article investigates the IRS-aided multiple-input–multiple-output (MIMO) simultaneous wireless information and power transfer (SWIPT) for Internet-of-Things (IoT) networks, where the active base station transmit beamforming and the passive IRS reflection coefficients are jointly optimized to maximize the minimum signal-to-interference-plus-noise ratio (SINR).
Abstract: Intelligent reflecting surface (IRS) is capable of constructing a favorable wireless propagation environment by leveraging massive low-cost reconfigurable reflect array elements. In this article, we investigate the IRS-aided multiple-input–multiple-output (MIMO) simultaneous wireless information and power transfer (SWIPT) for Internet-of-Things (IoT) networks, where the active base station (BS) transmit beamforming and the passive IRS reflection coefficients are jointly optimized for maximizing the minimum signal-to-interference-plus-noise ratio (SINR) among all information decoders (IDs), while maintaining the minimum total harvested energy at all energy receivers (ERs). Moreover, the IRS with practical discrete phase shifts is considered, and thereby the max–min SINR problem becomes an NP-hard combinatorial optimization problem with a strong coupling among optimization variables. To explore the insights and generality of this max–min design, both the single-ID single-ER (SISE) scenario and the multiple-IDs multiple-ERs (MIME) scenario are studied. In the SISE scenario, the classical combinatorial optimization techniques, namely, the special ordered set of type 1 (SOS1) and the reformulation-linearization (RL) technique, are applied to overcome the difficulty of this max–min design imposed by discrete optimization variables. Then, the optimal branch-and-bound algorithm and suboptimal alternating optimization algorithm are, respectively, proposed. We further extend the idea of alternating optimization to the MIME scenario. Moreover, to reduce the iteration complexity, a two-stage scheme is considered aiming to separately optimize the BS transmit beamforming and the IRS reflection coefficients. Finally, numerical simulations demonstrate the superior performance of the proposed algorithms over the benchmarks in both scenarios.
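The paper's branch-and-bound and alternating-optimization algorithms are involved; as a hedged illustration of the underlying discrete search space only, the toy sketch below brute-forces B-bit IRS phase shifts to maximize single-user received power. The function name and the simplified objective are illustrative assumptions, not the paper's max–min SINR formulation:

```python
import cmath
import itertools
import math

def best_discrete_phases(h_direct, h_cascade, bits=1):
    """Exhaustively search B-bit IRS phase shifts maximizing the received
    power |h_d + sum_n h_n * exp(j*theta_n)|^2 -- a toy single-user
    stand-in for the discrete phase-shift optimization in the paper."""
    levels = [2 * math.pi * k / 2 ** bits for k in range(2 ** bits)]
    best, best_power = None, -1.0
    for thetas in itertools.product(levels, repeat=len(h_cascade)):
        s = h_direct + sum(h * cmath.exp(1j * t) for h, t in zip(h_cascade, thetas))
        if abs(s) ** 2 > best_power:
            best, best_power = thetas, abs(s) ** 2
    return best, best_power

# Toy channel: one direct path and two reflecting elements, 1-bit phases.
thetas, power = best_discrete_phases(1 + 0j, [1j, -1 + 0j], bits=1)
```

The exponential cost of this enumeration (2^(B·N) candidates) is exactly why the paper resorts to branch-and-bound and alternating optimization for realistic IRS sizes.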

54 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed swarm reinforcement learning (SRL) can obtain a larger total benefit than the genetic algorithm (GA), particle swarm optimization (PSO), grasshopper optimization algorithm (GOA), Harris hawks optimizer (HHO), butterfly optimization algorithm (BOA), and Q-learning, with benefit increments ranging from 2.12% (against PSO) to 10.62% (against Q-learning).

45 citations


Journal ArticleDOI
TL;DR: BATCH leverages collective matrix factorization to learn a common latent space for the labels and different modalities, and embeds the labels into binary codes by minimizing a distance-distance difference problem and introduces a quantization minimization term and orthogonal constraints into the optimization problem.
Abstract: Supervised cross-modal hashing has attracted much attention. However, there are still some challenges, e.g., how to effectively embed the label information into binary codes, how to avoid using a large similarity matrix and make a model scalable to large-scale datasets, how to efficiently solve the binary optimization problem. To address these challenges, in this paper, we present a novel supervised cross-modal hashing method, i.e., scalaBle Asymmetric discreTe Cross-modal Hashing, BATCH for short. It leverages collective matrix factorization to learn a common latent space for the labels and different modalities, and embeds the labels into binary codes by minimizing a distance-distance difference problem. Furthermore, it builds a connection between the common latent space and the hash codes by an asymmetric strategy. In the light of this, it can perform cross-modal retrieval and embed more similarity information into the binary codes. In addition, it introduces a quantization minimization term and orthogonal constraints into the optimization problem, and generates the binary codes discretely. Therefore, the quantization error and redundancy may be much reduced. Moreover, it is a two-step method, making the optimization simple and scalable to large-scale datasets. Extensive experimental results on three benchmark datasets demonstrate that BATCH outperforms some state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.

41 citations


Journal ArticleDOI
TL;DR: The experimental results and comparisons show that the proposed Jaya algorithm, called DJAYA, is a highly competitive and robust optimizer for the problem dealt with, which is one of the well-known problems in discrete optimization.

30 citations


Journal ArticleDOI
TL;DR: A powerful optimization scheme based on tabu search, called discrete tabu search, has been proposed for sizing three stand-alone solar/wind/energy storage (battery) hybrid systems and leads to better outputs on the basis of mean, standard deviation and worst indexes.
Abstract: Renewable energy technologies have been developed in recent years due to the limited sources of fossil fuels, the possibility of depletion of fossil fuels and the related environmental issues. In these types of systems, it is crucial to reach optimum sizing in order to have an affordable system based on solar and wind energy and energy storage. In this study, a powerful optimization scheme based on tabu search, called discrete tabu search, has been proposed for sizing three stand-alone solar/wind/energy storage (battery) hybrid systems. To validate the effectiveness of the applied algorithm, the results are compared with the results found by the discrete harmony search. The obtained outcomes are compared on the basis of total annual cost. The components of the scheme are analyzed in different operating conditions by applying meteorological data in addition to real-time information from three typical regions of Iran. According to the obtained data, applying ‘discrete tabu search’ leads to better outputs on the basis of mean, standard deviation and worst indexes.
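The abstract gives no implementation details, so the following is a hedged generic sketch of discrete tabu search on a made-up integer sizing problem; the cost function, component prices, and demand penalty are all invented for illustration:

```python
def tabu_search(cost, neighbors, x0, iters=200, tabu_size=10):
    """Generic discrete tabu search: move to the best non-tabu neighbor,
    keeping a short-term memory of recent solutions to escape local minima."""
    best = current = x0
    tabu = [x0]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # may be worsening: that is the point
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# Toy sizing problem: integer counts of (panels, turbines, batteries)
# with a fictitious annual cost plus a penalty for unmet demand.
def cost(x):
    p, w, b = x
    supply = 3 * p + 5 * w + 2 * b
    return 100 * p + 180 * w + 60 * b + 50 * max(0, 40 - supply)

def neighbors(x):
    out = []
    for i in range(3):
        for d in (-1, 1):
            y = list(x)
            y[i] = max(0, y[i] + d)
            out.append(tuple(y))
    return out

best = tabu_search(cost, neighbors, (0, 0, 0))
```

Allowing non-improving moves while forbidding recently visited solutions is what distinguishes tabu search from plain hill climbing.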

27 citations


Journal ArticleDOI
TL;DR: In this article, a probabilistic model-based multi-objective transfer evolutionary optimization (TrEO) framework with solution representation learning is proposed, capable of activating positive transfers while simultaneously curbing the threat of negative transfers.
Abstract: This paper presents a first study on solution representation learning for inducing greater alignment and hence positive transfers between distinct multi-objective optimization tasks that bear discrepancies in their original search spaces. We first establish a novel probabilistic model-based multi-objective transfer evolutionary optimization (TrEO) framework with solution representation learning, capable of activating positive transfers while simultaneously curbing the threat of negative transfers. In particular, well-aligned solution representations are learned via spatial transformations to handle mismatches in search space dimensionalities between distinct multi-objective problems, as well as to increase the overlap between their optimized search distributions. We then showcase different algorithmic instantiations and case studies of the proposed framework in applications spanning continuous as well as discrete optimization; illustrative examples include multi-objective engineering design and route planning of unmanned aerial vehicles. The experimental results show that our framework helps induce positive transfers by unveiling useful but hidden inter-task relationships, thus bringing about faster search convergence to solutions of high quality in multi-objective TrEO.

23 citations


Journal ArticleDOI
TL;DR: A general framework for discrete matrix factorization based on discrete optimization, which can 1) optimize multiple loss functions; 2) handle both explicit and implicit feedback datasets; and 3) take auxiliary information into account without any hyperparameters is proposed.
Abstract: Binary representation of users and items can dramatically improve efficiency of recommendation and reduce size of recommendation models. However, learning optimal binary codes for them is challenging due to binary constraints, even if squared loss is optimized. In this article, we propose a general framework for discrete matrix factorization based on discrete optimization, which can 1) optimize multiple loss functions; 2) handle both explicit and implicit feedback datasets; and 3) take auxiliary information into account without any hyperparameters. To tackle the challenging discrete optimization problem, we propose block coordinate descent based on semidefinite relaxation of binary quadratic programming. We theoretically show that it is equivalent to discrete coordinate descent when only one coordinate is in each block. We extensively evaluate the proposed algorithms on eight real-world datasets. The results of evaluation show that they outperform the state-of-the-art baselines significantly and that auxiliary information of items improves recommendation performance. To better show the advantages of binary representation, we further propose a two-stage recommender system, consisting of an item-recalling stage and a subsequent fine-ranking stage. Its extensive evaluation shows hashing can dramatically accelerate item recommendation with little degradation of accuracy.
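The one-coordinate-per-block case the paper reduces to is plain discrete coordinate descent on a binary quadratic objective. A hedged minimal sketch (the quadratic form below is a generic stand-in, not the paper's matrix factorization loss):

```python
import numpy as np

def discrete_coordinate_descent(Q, c, x, sweeps=100):
    """Greedy single-coordinate updates for min x^T Q x + c^T x over
    x in {-1,+1}^n: flip one coordinate at a time whenever the flip
    lowers the objective, until a full sweep makes no change."""
    f = lambda v: v @ Q @ v + c @ v
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            y = x.copy()
            y[i] = -y[i]
            if f(y) < f(x):
                x, changed = y, True
        if not changed:
            break                      # local optimum w.r.t. single flips
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 6))
Q = (Q + Q.T) / 2                      # symmetric quadratic form
c = rng.standard_normal(6)
x = discrete_coordinate_descent(Q, c, np.ones(6))
```

Each accepted flip strictly decreases the objective over a finite domain, so the loop terminates at a point no single flip can improve.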

21 citations


Journal ArticleDOI
Yongxin Wang1, Zhen-Duo Chen1, Xin Luo1, Rui Li2, Xin-Shun Xu1 
TL;DR: Fast cross-modal hashing (FCMH) as discussed by the authors leverages not only global similarity information but also the local similarity within groups, together with an efficient discrete optimization algorithm, making it more efficient and scalable to large-scale datasets.
Abstract: Recently, supervised cross-modal hashing has attracted much attention and achieved promising performance. To learn hash functions and binary codes, most methods globally exploit the supervised information, for example, preserving an at-least-one pairwise similarity into hash codes or reconstructing the label matrix with binary codes. However, due to the hardness of the discrete optimization problem, they are usually time consuming on large-scale datasets. In addition, they neglect the class correlation in supervised information. From another point of view, they only explore the global similarity of data but overlook the local similarity hidden in the data distribution. To address these issues, we present an efficient supervised cross-modal hashing method, that is, fast cross-modal hashing (FCMH). It leverages not only global similarity information but also the local similarity in a group. Specifically, training samples are partitioned into groups; thereafter, the local similarity in each group is extracted. Moreover, the class correlation in labels is also exploited and embedded into the learning of binary codes. In addition, to solve the discrete optimization problem, we further propose an efficient discrete optimization algorithm with a well-designed group updating scheme, making its computational complexity linear in the size of the training set. In light of this, it is more efficient and scalable to large-scale datasets. Extensive experiments on three benchmark datasets demonstrate that FCMH outperforms some state-of-the-art cross-modal hashing approaches in terms of both retrieval accuracy and learning efficiency.

20 citations


Journal ArticleDOI
TL;DR: Simulated annealing, which can select inducing points that are not in the training set, can perform competitively with support vector machines and full Gaussian processes on synthetic data, as well as on challenging real-world DNA sequence data.
Abstract: Kernel methods on discrete domains have shown great promise for many challenging data types, for instance, biological sequence data and molecular structure data. Scalable kernel methods like Support Vector Machines may offer good predictive performances but do not intrinsically provide uncertainty estimates. In contrast, probabilistic kernel methods like Gaussian Processes offer uncertainty estimates in addition to good predictive performance but fall short in terms of scalability. While the scalability of Gaussian processes can be improved using sparse inducing point approximations, the selection of these inducing points remains challenging. We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing. We find that simulated annealing, which can select inducing points that are not in the training set, can perform competitively with support vector machines and full Gaussian processes on synthetic data, as well as on challenging real-world DNA sequence data.
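As a hedged sketch of the simulated-annealing selection the abstract describes: swap one inducing index at a time, always accept improvements, and occasionally accept worsening moves at high temperature. The subset score used here (log-determinant of the kernel submatrix, a diversity proxy) and the cooling schedule are stand-ins for the paper's actual criterion:

```python
import math
import random
import numpy as np

def anneal_inducing(K, k, candidates, iters=300, T0=1.0, seed=0):
    """Simulated-annealing selection of k inducing indices from
    `candidates`, scoring a subset by the log-determinant of its
    kernel submatrix (an illustrative diversity criterion)."""
    rng = random.Random(seed)
    score = lambda S: np.linalg.slogdet(K[np.ix_(S, S)])[1]
    S = rng.sample(candidates, k)
    best, best_score = list(S), score(S)
    for t in range(1, iters + 1):
        T = T0 / t                                      # cooling schedule
        S2 = list(S)
        S2[rng.randrange(k)] = rng.choice(candidates)   # swap move
        if len(set(S2)) < k:                            # skip duplicate picks
            continue
        d = score(S2) - score(S)
        if d > 0 or rng.random() < math.exp(d / T):     # Metropolis acceptance
            S = S2
            if score(S) > best_score:
                best, best_score = list(S), score(S)
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
K = A @ A.T + 0.1 * np.eye(30)       # a PSD "kernel" matrix for the demo
chosen = anneal_inducing(K, 4, list(range(30)))
```

Because the candidate pool is an argument, the same routine can propose inducing points outside the training set, which is the property the abstract highlights.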

19 citations


Proceedings Article
03 May 2021
TL;DR: Co-Mixup as discussed by the authors proposes a new perspective on batch mixup, formulating the optimal construction of a batch of mixup data as a discrete optimization problem minimizing the difference between submodular functions.
Abstract: While deep neural networks show great performance on fitting to the training distribution, improving the networks' generalization performance on the test distribution and their robustness to input perturbations still remains a challenge. Although a number of mixup based augmentation strategies have been proposed to partially address them, it remains unclear as to how to best utilize the supervisory signal within each input data for mixup from the optimization perspective. We propose a new perspective on batch mixup and formulate the optimal construction of a batch of mixup data maximizing the data saliency measure of each individual mixup data and encouraging the supermodular diversity among the constructed mixup data. This leads to a novel discrete optimization problem minimizing the difference between submodular functions. We also propose an efficient modular approximation based iterative submodular minimization algorithm for efficient mixup computation per each minibatch suitable for minibatch based neural network training. Our experiments show the proposed method achieves state-of-the-art generalization, calibration, and weakly supervised localization results compared to other mixup methods. The source code is available at https://github.com/snu-mllab/Co-Mixup.
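For context, the baseline Co-Mixup improves on is vanilla input mixup, which convex-combines each example with a randomly permuted partner. The sketch below shows only that baseline, not the paper's saliency-guided submodular batch construction:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, seed=0):
    """Vanilla input mixup: draw lambda ~ Beta(alpha, alpha) and mix
    each example (and its one-hot label) with a random partner.
    Co-Mixup replaces this random pairing with a discrete optimization
    over the whole batch."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    idx = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix

x = np.arange(12, dtype=float).reshape(4, 3)
y = np.eye(4)                             # one-hot labels for 4 classes
x_mix, y_mix = mixup_batch(x, y)
```

Note that the mixed labels remain valid probability vectors, since each row is a convex combination of two one-hot rows.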

18 citations


Journal ArticleDOI
TL;DR: The Discrete Jaya with Refraction Learning and Three Mutation Methods (DJRL3M) as mentioned in this paper is a new discrete optimization algorithm that extends DJaya, a discrete variation of the Jaya algorithm recently proposed for solving discrete real-world problems.
Abstract: The Permutation Flow Shop Scheduling Problem (PFSSP) is an interesting scheduling problem that has many real-world applications. It has been widely used as a benchmark to prove the efficiency of many discrete optimization algorithms. The DJaya algorithm is a discrete variation of the Jaya algorithm that has been recently proposed for solving discrete real-world problems. However, DJaya may get stuck in local optima because of some limitations in its optimization operators. In this paper, we propose a new discrete optimization algorithm called Discrete Jaya with Refraction Learning and Three Mutation Methods (DJRL3M) for solving the PFSSP. DJRL3M incorporates five modifications into DJaya. First, it utilizes Refraction Learning (RL), which is a special type of opposition learning, to generate a diverse initial population of solutions. Second, it uses three mutation methods to explore the search space of a problem: DJaya mutation, highly disruptive polynomial mutation and Pitch Adjustment mutation. Third, it employs RL at each iteration to generate the opposite solutions of the best and worst solutions in an attempt to jump out of local optima. Fourth, it uses the abandon method at the end of each iteration to discard a predefined percentage of the worst solutions and generate new random solutions. Finally, it uses the smallest position value to determine the correct values of the decision variables in a given candidate solution. The performance of DJRL3M was evaluated and compared with six well-recognized optimization algorithms [New Cuckoo Search (NCS) (Wang et al. in SC 21:4297–4307, 2017), DJaya (Gao et al. in ITC 49:1944–1955, 2018), Hybrid Harmony Search (HHS) (Zhao et al. in EAAI 65:178–199, 2017), Modified Genetic Algorithm (MGA) (Mumtaz et al. in: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, 2018), Generalised Accelerations for Insertion-based Heuristics (GAIbH) (Fernandez-Viagas et al. in EJOR 282:858–872, 2020), and a memetic algorithm with novel semi-constructive crossover and mutation operators (MASC) (Kurdi in ASC 94:106548, 2020)] using a set of Taillard's benchmark instances. The experimental and statistical results show that DJRL3M obtains better performance than NCS, DJaya, HHS and MGA, and exhibits competitive performance compared to MASC and GAIbH.

Journal ArticleDOI
TL;DR: A novel controller placement algorithm has been introduced using the advantages of nature-inspired optimization algorithms and network partitioning and has been compared with several other state-of-the-art algorithms regarding network propagation delay and convergence rate in experiments.
Abstract: Software defined network (SDN) has shown significant advantages in numerous real-life aspects with separating the control plane from the data plane that provides programmable management for networks. However, with the increase in the network size, a single controller of SDN imposes considerable limitations on various features. Therefore, in networks with immense scalability, multiple controllers are essential. Specifying the optimal number of controllers and their deployment locations is known as the controller placement problem (CPP), which affects the network's performance. In the present paper, a novel controller placement algorithm has been introduced using the advantages of nature-inspired optimization algorithms and network partitioning. Firstly, the Manta Ray Foraging Optimization (MRFO) and Salp Swarm Algorithm (SSA) have been discretized to solve CPP. Three new operators comprising a two-point swap, random insert, and half points crossover operators were introduced to discretize the algorithms. Afterward, the resulting discrete MRFO and SSA algorithms were hybridized in a promoting manner. Next, the proposed discrete algorithm has been evaluated on six well-known software-defined networks with a different number of controllers. In addition, the networks have been chosen from various sizes to evaluate the scalability of the proposed algorithm. The proposed algorithm has been compared with several other state-of-the-art algorithms regarding network propagation delay and convergence rate in experiments. The findings indicated the effectiveness of the contributions and the superiority of the proposed algorithm over the competitor algorithms.

Journal ArticleDOI
TL;DR: Simulation results demonstrate the effectiveness of the proposed method and indicate its better performance in extreme traffic scenarios compared to traditional discrete optimization methods, while keeping the computational burden balanced.
Abstract: This paper presents a hierarchical motion planning approach based on a discrete optimization method. Well-coupled longitudinal and lateral planning strategies with adaptability features are applied for better performance of on-road autonomous driving with avoidance of both static and moving obstacles. At the path planning level, the proposed method starts with the design of a speed profile to determine the longitudinal horizon; then a set of candidate paths is constructed with lateral offsets shifted from the base reference. Cost functions considering driving comfort and energy consumption are applied to evaluate each candidate path, and the optimal one is selected as the tracking reference afterwards. Re-determination of the longitudinal horizon in terms of the relative distance between the ego vehicle and surrounding obstacles, together with an update of the speed profile, is triggered for re-planning if the candidate paths ahead fail the safety check. At the path tracking level, a pure-pursuit-based tracking controller is implemented to obtain the corresponding control sequence and further smooth the trajectory of the autonomous vehicle. Simulation results demonstrate the effectiveness of the proposed method and indicate its better performance in extreme traffic scenarios compared to traditional discrete optimization methods, while keeping the computational burden balanced.
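The candidate-path scoring step can be sketched minimally; the offset grid, cost weights, and safety predicate below are illustrative stand-ins, not the paper's actual formulation:

```python
def select_path(offsets, comfort_cost, energy_cost, safe):
    """Pick the lowest-cost lateral-offset candidate that passes the
    safety check -- a toy version of candidate-path evaluation. A None
    return stands in for triggering re-planning with a new speed profile."""
    feasible = [o for o in offsets if safe(o)]
    if not feasible:
        return None
    return min(feasible, key=lambda o: comfort_cost(o) + energy_cost(o))

# Candidates shifted from the reference line; an obstacle blocks offset 0.
offsets = [-1.0, -0.5, 0.0, 0.5, 1.0]
path = select_path(
    offsets,
    comfort_cost=lambda o: o * o,       # prefer small lateral deviation
    energy_cost=lambda o: 0.1 * abs(o),
    safe=lambda o: abs(o) > 0.4,        # offsets near 0 fail the check
)
```

With these toy costs the two offsets ±0.5 tie, and `min` keeps the first one scanned.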

Journal ArticleDOI
Zhan Yang1, Liu Yang1, Wenti Huang1, Longzhi Sun1, Jun Long1 
TL;DR: A fast discrete optimization algorithm is developed, which can directly generate discrete binary codes in a single step and introduces an intermediate term before the iterations to avoid the problems caused by directly using the large semantic-visual similarity matrix, resulting in a significant reduction in computational overhead.
Abstract: Hashing has been shown to be successful in a number of Approximate Nearest Neighbor (ANN) domains, ranging from medicine and computer vision to information retrieval. However, current deep hashing methods either ignore both rich information of labels and visual linkages of image pairs, or leverage relaxation-based algorithms to address discrete problems, resulting in a large information loss. To address the aforementioned problems, in this paper, we propose an Enhanced Deep Discrete Hashing (EDDH) method to leverage both label embedding and semantic-visual similarity to learn the compact hash codes. In EDDH, the discriminative capability of hash codes is enhanced by a distribution-based continuous semantic-visual similarity matrix, where not only the margin between the positive pairs and negative pairs is expanded, but also the visual linkages between image pairs are considered. Specifically, the semantic-visual continuous similarity matrix is constructed by analyzing the asymmetric generalized Gaussian distribution of the visual linkages between pairs with label consideration. Besides, in order to achieve an efficient hash learning framework, EDDH employs an asymmetric real-valued learning structure to learn the compact hash codes. In addition, we develop a fast discrete optimization algorithm, which can directly generate discrete binary codes in a single step, and introduce an intermediate term before the iterations to avoid the problems caused by directly using the large semantic-visual similarity matrix, which results in a significant reduction in the computational overhead. Finally, we conducted extensive experiments on three datasets to show that EDDH has significantly enhanced performance compared to state-of-the-art baselines.

Journal ArticleDOI
TL;DR: The experimental results indicate that scheduling using the DMFO-DE algorithm outperforms other approaches on metrics such as the number of applied VMs and energy consumption.

Journal ArticleDOI
TL;DR: Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been exploited in many areas of science and engineering.
Abstract: Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been exploited in many areas of science and engineering.

02 Mar 2021
TL;DR: A variant of the Heavy Ball algorithm is proposed which has the best state-of-the-art convergence rate among first-order methods for minimizing strongly convex, composite, nonsmooth functions.
Abstract: In this paper, we study the behavior of solutions of the ODE associated to the Heavy Ball method. Since the pioneering work of B.T. Polyak [25], it is well known that such a scheme is very efficient for C2 strongly convex functions with Lipschitz gradient. But much less is known when the C2 assumption is dropped. Depending on the geometry of the function to minimize, we obtain optimal convergence rates for the class of convex functions with some additional regularity such as quasi-strong convexity or strong convexity. We perform this analysis in continuous time for the ODE, and then we transpose these results for discrete optimization schemes. In particular, we propose a variant of the Heavy Ball algorithm which has the best state-of-the-art convergence rate among first-order methods for minimizing strongly convex, composite, nonsmooth functions.
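The discrete scheme whose continuous-time ODE the paper analyzes is Polyak's classical Heavy Ball iteration. A minimal sketch on a strongly convex quadratic (the step size and momentum values are illustrative, not the paper's tuned variant):

```python
import numpy as np

def heavy_ball(grad, x0, step=0.1, momentum=0.9, iters=500):
    """Polyak's Heavy Ball iteration:
       x_{k+1} = x_k - step * grad(x_k) + momentum * (x_k - x_{k-1})."""
    x_prev, x = x0, x0
    for _ in range(iters):
        x, x_prev = x - step * grad(x) + momentum * (x - x_prev), x
    return x

# Strongly convex quadratic f(x) = 0.5 * x^T A x with A positive definite,
# whose unique minimizer is the origin.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
x_star = heavy_ball(lambda x: A @ x, np.array([5.0, -3.0]))
```

On this quadratic the iteration contracts at roughly the rate sqrt(momentum) per step, so a few hundred iterations drive the iterate essentially to the minimizer.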

Journal ArticleDOI
TL;DR: Experimental results show that the performance of the proposed DSSA is especially good for low- and middle-scale TSP datasets, and DSSA can be used as an alternative discrete algorithm for discrete optimization tasks.
Abstract: Heuristic algorithms are often used to find solutions to complex real-world problems. These algorithms can provide solutions close to the global optimum at an acceptable time for optimization problems. The Social Spider Algorithm (SSA) is one of the newly proposed heuristic algorithms and is based on the behavior of spiders. It was first proposed to solve continuous optimization problems. In this paper, SSA is rearranged to solve discrete optimization problems. The Discrete Social Spider Algorithm (DSSA) is developed by adding explorer spiders and novice spiders in a discrete search space. Thus, DSSA's exploration and exploitation capabilities are increased. The performance of the proposed DSSA is investigated on traveling salesman benchmark problems. The Traveling Salesman Problem (TSP) is one of the standard test problems used in the performance analysis of discrete optimization algorithms. DSSA has been tested on thirty-eight low-, middle-, and large-scale TSP benchmark datasets. Also, DSSA is compared to eighteen well-known algorithms in the literature. Experimental results show that the performance of the proposed DSSA is especially good for low- and middle-scale TSP datasets. DSSA can be used as an alternative discrete algorithm for discrete optimization tasks.
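The DSSA operators are not spelled out in the abstract, so the sketch below illustrates the TSP search space with the standard 2-opt neighborhood instead: repeatedly reverse a tour segment while doing so shortens the tour. This is the classic move structure discrete TSP heuristics explore, not DSSA itself:

```python
import itertools
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """First-improvement 2-opt local search for the TSP: reverse a segment
    whenever the reversal strictly shortens the tour, until no reversal helps."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(n), 2):
            if i == 0 and j == n - 1:
                continue               # reversing the whole tour changes nothing
            new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(new, dist) < tour_length(tour, dist) - 1e-12:
                tour, improved = new, True
    return tour

# Eight points on a circle: the optimal tour visits them in angular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
dist = [[math.hypot(ax - bx, ay - by) for (bx, by) in pts] for (ax, ay) in pts]
tour = two_opt([0, 3, 6, 1, 4, 7, 2, 5], dist)
```

For points in convex position, every 2-opt local optimum is the crossing-free hull tour, so the search recovers the optimal angular ordering here.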

Journal ArticleDOI
TL;DR: In recent years the use of decision diagrams within the context of discrete optimization has proliferated, and this paper continues this expansion by introducing decision diagrams for modeling discrete optimization problems.
Abstract: In recent years the use of decision diagrams within the context of discrete optimization has proliferated. This paper continues this expansion by proposing the use of decision diagrams for modeling discrete optimization problems.

Journal ArticleDOI
TL;DR: A novel multi-objective teaching–learning-based optimization (MOTLBO) algorithm, based on the framework of non-dominated sorting and solution storage in an external archive, is proposed; it shows promise in producing coherent and diverse solutions along the desired Pareto fronts.
Abstract: Teaching–learning-based optimization is a specific parameter-free and powerful algorithm. However, in large and diverse spaces it often gets trapped in local optima and faces criticism of premature convergence.

Journal ArticleDOI
TL;DR: It is observed that the proposed meta-heuristic algorithm performs remarkably well in solving NP-hard problems and is applied to solve some large-size benchmark LP and Internet-of-vehicles problems efficiently.
Abstract: Meta-heuristic algorithms have been proposed to solve several optimization problems in different research areas due to their unique attractive features. Traditionally, heuristic approaches are designed separately for discrete and continuous problems. This paper leverages the meta-heuristic algorithm for solving NP-hard problems in both continuous and discrete optimization fields, such as nonlinear and multi-level programming problems, through extensive simulations of the volcano eruption process. In particular, a new optimization solution named the volcano eruption algorithm is proposed in this paper, which is inspired by the nature of volcano eruptions. The feasibility and efficiency of the algorithm are evaluated using numerical results obtained through several test problems reported in the state-of-the-art literature. Based on the solutions and the number of required iterations, we observed that the proposed meta-heuristic algorithm performs remarkably well in solving NP-hard problems. Furthermore, the proposed algorithm is applied to solve some large-size benchmark LP and Internet-of-vehicles problems efficiently.

Journal Article
TL;DR: In this article, the authors consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features, and propose two classes of scalable algorithms: an exact algorithm that can handle $p\approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$ in times comparable to the fast $\ell_1$-based algorithms.
Abstract: We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower compared to the relatively mature algorithms for $\ell_1$-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for $\ell_0$-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle $p\approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p\approx 10^6$ in times comparable to the fast $\ell_1$-based algorithms. Our exact algorithm is based on the novel idea of integrality generation, which solves the original problem (with $p$ binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of $\ell_0$-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially, variable selection) when compared to competing methods.
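A simple cousin of the coordinate-descent and local combinatorial search heuristics discussed here is iterative hard thresholding: a gradient step followed by keeping only the k largest-magnitude coefficients. The sketch below is a hedged illustration for squared loss, not the authors' classification algorithm:

```python
import numpy as np

def iterative_hard_thresholding(X, y, k, iters=200):
    """IHT for min ||y - X b||^2 subject to ||b||_0 <= k: alternate a
    gradient step with projection onto the set of k-sparse vectors."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # safe step for the quadratic loss
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = b + step * X.T @ (y - X @ b)        # gradient step
        small = np.argsort(np.abs(b))[:-k]      # indices outside the top-k
        b[small] = 0.0                          # hard-threshold to k-sparse
    return b

# Noiseless synthetic instance with a 2-sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[0], beta[1] = 3.0, -2.0
y = X @ beta
b = iterative_hard_thresholding(X, y, k=2)
```

With the step size at most the inverse Lipschitz constant, each iteration is non-increasing in the objective, so the fitted residual never exceeds the initial one.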

Book
31 Jul 2021
TL;DR: In the last few years, algorithms for convex optimization have revolutionized algorithm design, both for discrete and continuous optimization problems, as discussed by the authors.
Abstract: In the last few years, Algorithms for Convex Optimization have revolutionized algorithm design, both for discrete and continuous optimization problems. For problems like maximum flow, maximum matching, and submodular function minimization, the fastest algorithms involve essential methods such as gradient descent, mirror descent, interior point methods, and ellipsoid methods. The goal of this self-contained book is to enable researchers and professionals in computer science, data science, and machine learning to gain an in-depth understanding of these algorithms. The text emphasizes how to derive key algorithms for convex optimization from first principles and how to establish precise running time bounds. This modern text explains the success of these algorithms in problems of discrete optimization, as well as how these methods have significantly pushed the state of the art of convex optimization itself.
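As a minimal concrete instance of the first-principles derivations the book emphasizes, plain gradient descent on a smooth strongly convex function contracts the distance to the minimizer by a fixed factor per step:

```python
def gradient_descent(grad, x0, step, iters):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize f(x) = (x - 2)^2, whose gradient is 2(x - 2) and minimizer x* = 2.
x = gradient_descent(lambda x: 2 * (x - 2), 0.0, 0.1, 200)
```

Here the error obeys x_{k+1} - 2 = 0.8 (x_k - 2), giving the linear convergence rate the book derives for strongly convex objectives.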

Journal ArticleDOI
TL;DR: This paper argues that multistage robust discrete problems can be seen through the lens of quantified integer programs, for which powerful tools to reduce the search-tree size have been developed, and compares the performance of state-of-the-art solvers from both worlds.

Journal ArticleDOI
TL;DR: An alternative and simpler proof of Lehre's (2010) negative-drift-in-populations method is obtained, and an exponential lower bound on the runtime of the mutation-only simple genetic algorithm on OneMax is shown for arbitrary population sizes.
Abstract: A decent number of lower bounds for non-elitist population-based evolutionary algorithms have been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems – general results which translate an expected progress away from the target into a high hitting time.
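For concreteness, the mutation-only simple genetic algorithm analysed in such lower-bound results can be sketched as follows: fitness-proportional selection plus standard bit mutation, with no crossover. Parameter values here are illustrative only; per the paper, this algorithm needs exponential time to optimize OneMax, so the sketch is not expected to find the optimum.

```python
import random

def onemax(x):
    # OneMax fitness: the number of one-bits in the string
    return sum(x)

def mutation_only_sga(n=20, pop_size=10, generations=50, seed=0):
    # Sketch of the mutation-only simple GA: fitness-proportional
    # (roulette-wheel) selection followed by standard bit mutation.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [onemax(x) for x in pop]
        # small epsilon guards against the all-zero-fitness edge case
        parents = rng.choices(pop, weights=[f + 1e-9 for f in fits], k=pop_size)
        # standard bit mutation: flip each bit independently with prob 1/n
        pop = [[1 - b if rng.random() < 1.0 / n else b for b in x]
               for x in parents]
    return max(onemax(x) for x in pop)

best = mutation_only_sga()
```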

Journal ArticleDOI
TL;DR: The results showed that not only did the algorithm generate an optimal design efficiently, but the robustness of the Flex-PLI impact response was also significantly enhanced; the proposed algorithm can potentially be applied to other engineering design problems of similar complexity.
Abstract: Pedestrian lower-leg protection and lower-speed crashworthiness often present two important yet competing criteria in the design of front-bumper structures. Conventional design optimization is largely focused on a single loading condition without considering multiple impact cases. Furthermore, the design of front-bumper structures is usually discrete in engineering practice, and impact conditions are commonly random. To cope with such a sophisticated nondeterministic design problem, this study aimed to develop a successive multiple attribute decision making (MADM) algorithm for optimizing a functionally graded thickness (FGT) front-bumper structure subject to multiple impact loading cases. The finite element (FE) model of the vehicle front end was constructed and validated against in-house experimental tests under both Flexible Pedestrian Legform Impactor (Flex-PLI) impact and lower-speed impact loads. In the proposed successive MADM algorithm, the technique for order preference by similarity to ideal solution (TOPSIS), based upon relative entropy, was coupled with the analytic hierarchy process (AHP) to develop a MADM model for converting multiple conflicting objectives into a unified single cost function. The optimization procedure is iterated algorithmically using the successive Taguchi method to deal with a large number of design variables and design levels. The results showed that not only did the algorithm generate an optimal design efficiently, but the robustness of the Flex-PLI impact response was also significantly enhanced. The proposed algorithm can potentially be applied to other engineering design problems of similar complexity.
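A bare-bones TOPSIS ranking, one ingredient of the MADM model above, can be sketched as follows. This uses the classic Euclidean-distance closeness rather than the paper's relative-entropy variant, and the design scores and weights are invented for illustration:

```python
import numpy as np

def topsis(scores, weights):
    # Classic TOPSIS: rank alternatives by relative closeness to the
    # ideal solution. `scores` is alternatives x criteria; all criteria
    # are treated as benefit-type for brevity.
    M = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    V = M * weights                               # apply criteria weights
    ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal points
    d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
    return d_minus / (d_plus + d_minus)           # closeness in [0, 1]

# three hypothetical bumper designs scored on two benefit criteria
designs = np.array([[0.8, 0.6],
                    [0.5, 0.9],
                    [0.9, 0.8]])
closeness = topsis(designs, weights=np.array([0.6, 0.4]))
best = int(np.argmax(closeness))
```

Ranking by the closeness score converts the two conflicting criteria into a single figure of merit, which is the role the unified cost function plays in the paper.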

Journal ArticleDOI
TL;DR: A Multi-modal Discrete Collaborative Filtering (MDCF) method for efficient cold-start recommendation is proposed, which maps the multi-modal features of users and items to a consensus Hamming space based on the matrix factorization framework to support large-scale recommendation.
Abstract: Hashing is an effective technique for improving the efficiency of large-scale recommender systems by representing both users and items as binary codes. However, existing hashing-based recommendation methods still suffer from two important problems: 1) Cold-start: they employ the user-item interactions and a single source of auxiliary information to learn the binary hash codes, but the full interaction history is not always available and the single auxiliary source may be missing. 2) Inefficient optimization: they learn the hash codes with two-step relaxed optimization or one-step discrete hash optimization based on cyclic coordinate descent, which results in significant quantization loss or still consumes considerable computation time. In this paper, we propose Multi-modal Discrete Collaborative Filtering (MDCF) for efficient cold-start recommendation. We map the multi-modal features of users and items to a consensus Hamming space based on the matrix factorization framework. Specifically, a low-rank self-weighted multi-modal fusion module is designed to adaptively fuse the multi-modal features into binary hash codes. Additionally, to support large-scale recommendation, a fast discrete optimization method based on the augmented Lagrangian multiplier is developed to directly compute the binary hash codes with simple operations. Experiments show the superior performance of the proposed method over state-of-the-art baselines.
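The Hamming-space idea can be illustrated with a toy sketch: sign-quantized codes and inner-product ranking. This is a generic stand-in with hypothetical names, not the MDCF optimization itself:

```python
import numpy as np

def to_hash_codes(factors):
    # sign-quantize real-valued latent factors into +/-1 binary codes
    return np.where(factors >= 0, 1, -1)

def recommend(user_code, item_codes, k=2):
    # For +/-1 codes of length d, inner product = d - 2 * Hamming distance,
    # so ranking by dot product equals ranking by Hamming similarity.
    sims = item_codes @ user_code
    return np.argsort(-sims)[:k]

# random latent factors stand in for learned user/item representations
rng = np.random.default_rng(1)
user = to_hash_codes(rng.standard_normal(16))
items = to_hash_codes(rng.standard_normal((5, 16)))
top = recommend(user, items)
```

Because scoring reduces to bitwise operations and integer comparisons, retrieval over millions of items becomes cheap, which is the efficiency motivation behind hashing-based recommendation.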

Journal ArticleDOI
TL;DR: In this paper, a novel simulated annealing framework for graph and sequence optimization is proposed, which integrates powerful neural networks into the metaheuristic to restrict the search space in discrete optimization.
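A plain simulated-annealing loop over a discrete sequence looks as follows; in the paper a learned neural proposal would replace the uniform neighbor move, and the toy energy function here is invented for illustration:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    # Vanilla simulated annealing with geometric cooling; accepts every
    # improvement and uphill moves with probability exp(-delta / t).
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = energy(y) - e
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, e = y, e + delta
        t *= cooling                      # geometric cooling schedule
    return x, e

def flip_one_bit(x, rng):
    # neighbor move: flip one randomly chosen position
    y = list(x)
    y[rng.randrange(len(y))] ^= 1
    return y

# toy discrete problem: minimize the number of ones in a 12-bit string
x, e = simulated_annealing(lambda s: sum(s), flip_one_bit, [1] * 12)
```

Early on, the high temperature lets the search escape local structure; as the temperature decays, the loop degenerates into hill descent on the discrete neighborhood.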

Journal ArticleDOI
TL;DR: The indexed abstract for this paper consists only of acknowledgments, thanking the anonymous reviewers and crediting the supporting grants, including the ANR AI Chair AIGRETTE.
Abstract: The authors thank the anonymous reviewers for their valuable comments. Parts of this work were supported by the KAUST OSR Award No. CRG-2017-3426, the ERC Starting Grants No. 758800 (EXPROTEA) and No. 802554 (SPECGEO), and the ANR AI Chair AIGRETTE.