Discrete optimization

About: Discrete optimization is a research topic. Over its lifetime, 4,598 publications on this topic have been published, receiving 158,297 citations in total. The topic is also known as: discrete optimisation.


Papers
Proceedings ArticleDOI
Thomas Weise, Zijun Wu
06 Jul 2018
TL;DR: The W-Model is put into the context of related model problems targeting ruggedness, neutrality, and epistasis, and a series of experiments gives an idea of suitable configurations that could be included in the BB-DOB benchmark suite.
Abstract: The first event of the Black-Box Discrete Optimization Benchmarking (BB-DOB) workshop series aims to establish a set of example problems for benchmarking black-box optimization algorithms for discrete or combinatorial domains. In this paper, we 1) discuss important features that should be embodied by these benchmark functions and 2) present the W-Model problem, which exhibits them. The W-Model follows a layered approach, where each layer can either be omitted or introduce a different characteristic feature, such as neutrality via redundancy, ruggedness and deceptiveness, epistasis, and multi-objectivity, in a tunable way. The model problem is defined over bit-string representations, which allows some of its layers to be extracted and stacked on top of existing problems that use this representation, such as OneMax, the Maximum Satisfiability or Set Covering tasks, and the NK landscape. The ruggedness and deceptiveness layer can be stacked on top of any problem with integer-valued objectives. We put the W-Model into the context of related model problems targeting ruggedness, neutrality, and epistasis. We then present the results of a series of experiments to further substantiate the utility of the W-Model and to give an idea of suitable configurations that could be included in the BB-DOB benchmark suite.
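To make the layered construction concrete, here is a minimal Python sketch of two W-Model-style layers (neutrality via redundancy, and a ruggedness permutation of objective values) stacked on OneMax. It only illustrates the layered idea and is not the authors' reference implementation; the block size and the particular ruggedness map are simplified choices made here.

```python
# Minimal sketch of two W-Model-style layers stacked on OneMax.
# Illustrative only: block size and the ruggedness map below are
# simplified choices, not the paper's exact definitions.

def neutrality_layer(bits, mu=2):
    """Reduce each block of mu bits to one bit by majority vote,
    introducing neutrality via redundancy (ties collapse to 0)."""
    n = len(bits) // mu
    return [1 if sum(bits[i * mu:(i + 1) * mu]) * 2 > mu else 0
            for i in range(n)]

def onemax(bits):
    """Base problem: maximize the number of ones."""
    return sum(bits)

def ruggedness_layer(value, n):
    """Permute integer objective values 0..n to roughen the landscape:
    swap neighboring non-optimal values (a toy permutation)."""
    if value == n:
        return n  # keep the global optimum in place
    return value + 1 if value % 2 == 0 else value - 1

def w_model(bits, mu=2):
    reduced = neutrality_layer(bits, mu)
    return ruggedness_layer(onemax(reduced), len(reduced))

print(w_model([1, 1, 0, 1, 1, 0, 0, 0]))  # evaluate one 8-bit candidate
```

Because each layer is an independent transformation, individual layers can be dropped or recombined, which is what allows them to be stacked on other bit-string problems such as Maximum Satisfiability or the NK landscape.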

33 citations

Journal ArticleDOI
19 Nov 1978
TL;DR: This paper describes a technique for the global optimization of microprograms, including loops and recursive subroutines; its effectiveness is evaluated and confirmed by applying it to an existing microprogrammable computer composed of LSI processor modules.
Abstract: This paper describes a technique for the global optimization of microprograms, including loops and recursive subroutines. The technique can be applied to a wide variety of microprogrammable machines. The principle of global optimization, four basic types of global optimization, and extended types are discussed, and the optimization algorithm is presented. Its effectiveness is evaluated and confirmed by applying it to an existing microprogrammable computer composed of LSI processor modules.

33 citations

Posted Content
TL;DR: This letter proposes an alternative information-maximization clustering method based on a squared-loss variant of mutual information that gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition and provides a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function.
Abstract: Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it only involves continuous optimization of model parameters, which is substantially easier to solve than discrete optimization of cluster assignments. However, existing methods still involve non-convex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this paper, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.
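The computational point of this abstract, that the clustering solution falls out of a kernel eigenvalue decomposition rather than a discrete search over assignments, can be illustrated with a short NumPy sketch. This follows only the general recipe (kernel matrix, leading eigenvectors, cluster readout); the actual SMIC estimator derives its eigenproblem from squared-loss mutual information and includes the paper's model selection procedure for the kernel parameters, both of which are simplified away here.

```python
import numpy as np

def kernel_eigen_clustering(X, n_clusters, sigma=1.0):
    """Illustrative recipe: build an RBF kernel matrix over the data,
    take its leading eigenvectors, and read cluster assignments off
    them. Not the exact SMIC estimator; sigma is fixed by hand here
    instead of being tuned by model selection."""
    # Pairwise squared distances and the RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-d2 / (2 * sigma**2))
    # Leading eigenvectors of the symmetric kernel matrix
    vals, vecs = np.linalg.eigh(K)      # eigenvalues sorted ascending
    top = vecs[:, -n_clusters:]
    # Simplest possible readout: each point joins the cluster whose
    # eigenvector has the largest magnitude at that point
    return np.argmax(np.abs(top), axis=1)

# Toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(kernel_eigen_clustering(X, n_clusters=2)[:5])
```

The appeal of this family of methods, as the abstract notes, is that the expensive step is a single eigendecomposition rather than a non-convex search, which is what makes the solution analytic and computationally efficient.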

33 citations

BookDOI
01 Jan 1994

33 citations

Proceedings ArticleDOI
13 May 2019
TL;DR: This paper is the first to analyze the vulnerability of a deep fraud detector to slight perturbations of input transactions, which is very challenging since the sparsity and discreteness of transaction data result in a non-convex discrete optimization problem.
Abstract: Fraud transactions are one of the major threats faced by online e-commerce platforms. Recently, deep learning based classifiers have been deployed to detect fraud transactions. Inspired by findings on adversarial examples, this paper is the first to analyze the vulnerability of a deep fraud detector to slight perturbations of input transactions, which is very challenging since the sparsity and discreteness of transaction data result in a non-convex discrete optimization problem. Inspired by the iterative Fast Gradient Sign Method (FGSM) for the L∞ attack, we first propose the Iterative Fast Coordinate Method (IFCM) for discrete L1 and L2 attacks, which efficiently generates large numbers of instances with satisfactory effectiveness. We then provide two novel attack algorithms to solve the discrete optimization. The first is the Augmented Iterative Search (AIS) algorithm, which repeatedly searches for an effective “simple” perturbation. The second is called Rounded Relaxation with Reparameterization (R3), which rounds the solution obtained by solving a relaxed and unconstrained optimization problem with reparameterization tricks. Finally, we conduct an extensive experimental evaluation on the deployed fraud detector of TaoBao, one of the largest e-commerce platforms in the world, with millions of real-world transactions. The results show that (i) the deployed detector is highly vulnerable to attacks, as its average precision is decreased from nearly 90% to as low as 20% by small perturbations; (ii) our proposed attacks significantly outperform adaptations of state-of-the-art attacks; and (iii) a model trained with an adversarial training process is significantly more robust against the attacks while still performing well on unperturbed data.
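As a point of reference for the gradient-based starting point the authors adapt, here is a minimal PyTorch sketch of iterative FGSM for an L∞ attack. The model, data, and parameters below are generic placeholders; the paper's discrete IFCM, AIS, and R3 attacks for sparse transaction data are not reproduced here.

```python
import torch

def iterative_fgsm(model, x, y, eps=0.1, alpha=0.01, steps=10):
    """Minimal sketch of iterative FGSM for an L-infinity attack:
    repeatedly step in the sign of the loss gradient, then project
    back into the eps-ball around the original input."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project to eps-ball
    return x_adv.detach()

# Toy usage with a linear classifier on random data (placeholders)
model = torch.nn.Linear(16, 2)
x = torch.randn(4, 16)
y = torch.tensor([0, 1, 0, 1])
x_adv = iterative_fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays within eps
```

The per-coordinate sign step is also why the method adapts naturally to discrete coordinate updates, which is the direction the paper's IFCM takes for L1 and L2 attacks.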

33 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 90% related
Optimal control: 68K papers, 1.2M citations, 84% related
Robustness (computer science): 94.7K papers, 1.6M citations, 84% related
Scheduling (computing): 78.6K papers, 1.3M citations, 83% related
Linear system: 59.5K papers, 1.4M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    13
2022    36
2021    104
2020    128
2019    113
2018    140