
Showing papers by Reuven Y. Rubinstein published in 2005


Journal ArticleDOI
TL;DR: This tutorial presents the CE methodology, the basic algorithm and its modifications, and discusses applications in combinatorial optimization and machine learning.
Abstract: The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
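For intuition, the basic CE iteration can be sketched in a few lines of Python. This is a minimal illustration rather than the tutorial's own code: the multi-extremal objective S, the Gaussian sampling family, and the sample size and elite fraction are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def S(x):
    # Illustrative multi-extremal objective: global maximum near x = 2,
    # local maximum near x = -2
    return np.exp(-(x - 2)**2) + 0.8 * np.exp(-(x + 2)**2)

mu, sigma = 0.0, 10.0            # initial sampling distribution N(mu, sigma^2)
N, rho = 100, 0.1                # sample size and elite fraction
for t in range(50):
    x = rng.normal(mu, sigma, N)                 # 1. draw candidate solutions
    elite = x[np.argsort(S(x))[-int(rho * N):]]  # 2. keep the best rho*N
    mu, sigma = elite.mean(), elite.std()        # 3. CE update of parameters
    if sigma < 1e-6:             # sampling distribution has degenerated
        break
print(f"estimated maximizer: {mu:.4f}")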

2,367 citations


Proceedings ArticleDOI
07 Aug 2005
TL;DR: This work considers support vector machines for binary classification, using the number of support vectors as a regularizing term instead of the L1 or L2 norms, and uses the cross-entropy method to solve the resulting optimization problem.
Abstract: We consider support vector machines for binary classification. As opposed to most approaches we use the number of support vectors (the "L0 norm") as a regularizing term instead of the L1 or L2 norms. In order to solve the optimization problem we use the cross entropy method to search over the possible sets of support vectors. The algorithm consists of solving a sequence of efficient linear programs. We report experiments where our method produces generalization errors that are similar to support vector machines, while using a considerably smaller number of support vectors.
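The outer CE search over candidate support-vector sets can be sketched as follows. This is only an illustration of the idea: the score function below is a hypothetical stand-in (nearest-selected-prototype error plus an L0 penalty) for the paper's inner linear programs, and the toy data, smoothing constant, and penalty weight are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] + X[:, 1])          # toy labels, illustrative only

def score(mask):
    # Hypothetical stand-in for the paper's inner linear program: classify
    # each point by the label of its nearest selected vector, then add an
    # L0 penalty proportional to the number of selected vectors.
    if not mask.any():
        return 1e9
    sel = np.flatnonzero(mask)
    d = ((X[:, None, :] - X[sel][None, :, :])**2).sum(-1)
    err = (y[sel][d.argmin(1)] != y).mean()
    return err + 0.01 * mask.sum()

n, N, rho = len(X), 200, 0.1
p = np.full(n, 0.5)                      # Bernoulli inclusion probabilities
for t in range(40):
    masks = rng.random((N, n)) < p       # (a) sample candidate subsets
    s = np.array([score(m) for m in masks])
    elite = masks[np.argsort(s)[:int(rho * N)]]   # lowest scores are best
    p = 0.7 * elite.mean(0) + 0.3 * p    # (b) smoothed CE update
print("selected support vectors:", np.flatnonzero(p > 0.5))
```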

216 citations


Journal ArticleDOI
TL;DR: The efficiency of the proposed stochastic algorithm is demonstrated and it is shown that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
Abstract: The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
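Steps (a) and (b) can be sketched for a toy instance. Everything here is illustrative: the throughput function is a made-up surrogate standing in for the paper's simulation-based performance estimator, and the placement mechanism (each buffer unit assigned to a slot drawn from a probability vector p) is just one plausible random mechanism of the kind the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)
m, B = 5, 12                  # m buffer slots, B buffer units to allocate

def throughput(alloc):
    # Hypothetical surrogate for the line throughput; the paper estimates
    # this quantity by discrete-event simulation of the production line.
    target = np.array([1, 3, 4, 3, 1], float)
    target = B * target / target.sum()
    return -((alloc - target)**2).sum()

N, rho, p = 500, 0.1, np.full(m, 1.0 / m)
for t in range(60):
    # (a) place each of the B units in a slot drawn from p, independently
    slots = rng.choice(m, size=(N, B), p=p)
    allocs = np.stack([np.bincount(s, minlength=m) for s in slots])
    perf = np.array([throughput(a) for a in allocs])
    elite = slots[np.argsort(perf)[-int(rho * N):]]
    # (b) CE update: smoothed empirical slot frequencies of the elite samples
    p = 0.8 * np.bincount(elite.ravel(), minlength=m) / elite.size + 0.2 * p
print("near-optimal allocation:", np.bincount(slots[perf.argmax()], minlength=m))
```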

137 citations


Journal ArticleDOI
TL;DR: In this paper, the authors estimate ℙ(Y1 + … + Yn > x) by importance sampling when the Yi are i.i.d. and heavy-tailed, using the cross-entropy method to choose good parameters for the importance sampling distribution.
Abstract: We consider the problem of estimating ℙ(Y1 + … + Yn > x) by importance sampling when the Yi are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters in the importance sampling distribution; in doing so, we use the asymptotic description that, given Y1 + … + Yn > x, n − 1 of the Yi have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
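A simplified sketch for the Pareto case: all n components are tilted to a common Pareto index v chosen by the standard closed-form CE update (the paper's estimator, built on the "n − 1 from F, one from the conditional tail" asymptotics, is more refined than this). The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, x, alpha = 5, 100.0, 2.0   # estimate P(Y1 + ... + Yn > x), Yi ~ Pareto(alpha)

def sample(a, size):
    # Pareto(a) on [1, inf) with density a * y**(-(a + 1))
    return rng.pareto(a, size) + 1.0

def log_w(Y, v):
    # log likelihood ratio of the nominal density (index alpha) to the
    # importance sampling density (index v), per i.i.d. row of Y
    return n * (np.log(alpha) - np.log(v)) + (v - alpha) * np.log(Y).sum(1)

v = alpha                     # CE pilot runs to choose the tilted index v
for _ in range(5):
    Y = sample(v, (10_000, n))
    W = np.exp(log_w(Y, v)) * (Y.sum(1) > x)
    if W.sum() > 0:           # closed-form CE update for the Pareto index
        v = n * W.sum() / (W * np.log(Y).sum(1)).sum()

Y = sample(v, (100_000, n))   # final importance-sampling estimate
est = (np.exp(log_w(Y, v)) * (Y.sum(1) > x)).mean()
print(f"tilted index v = {v:.3f}, estimated probability = {est:.3e}")
```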

55 citations


Journal ArticleDOI
TL;DR: The MCE method is presented, which can be viewed as an alternative to the standard cross-entropy (CE) method, and it is shown numerically that MCE is a little more accurate than CE, but at the same time a little slower than CE.
Abstract: We present a new method, called the minimum cross-entropy (MCE) method, for approximating the optimal solution of NP-hard combinatorial optimization problems and for rare-event probability estimation, which can be viewed as an alternative to the standard cross-entropy (CE) method. The MCE method presents a generic adaptive stochastic version of Kullback's classic MinxEnt method. We discuss its similarities and differences with the standard CE method and prove its convergence. We show numerically that MCE is a little more accurate than CE, but at the same time a little slower than CE. We also present a new method for trajectory generation for the TSP and some related problems. We finally give some numerical results using MCE for rare-event probability estimation for simple static models, the maximal cut problem and the TSP, and point out some new areas of possible applications.
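For reference, Kullback's classic MinxEnt program, of which the abstract says MCE is a generic adaptive stochastic version, takes the following standard form (a textbook statement, not quoted from this paper):

```latex
% Kullback's MinxEnt program: find the density f closest to a prior h
% in cross-entropy, subject to moment constraints.
\min_{f}\; D(f \,\|\, h) = \int f(x) \ln\frac{f(x)}{h(x)}\, dx
\quad \text{s.t.} \quad
\mathbb{E}_f[S_j(X)] = \gamma_j,\ j = 1,\dots,k, \qquad \int f(x)\, dx = 1.
% Its solution is the exponentially tilted density
f^*(x) = \frac{h(x) \exp\bigl(\sum_{j=1}^{k} \lambda_j S_j(x)\bigr)}
              {\int h(y) \exp\bigl(\sum_{j=1}^{k} \lambda_j S_j(y)\bigr)\, dy},
% with the multipliers \lambda_j chosen so that the constraints hold.
```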

52 citations


Book ChapterDOI
01 Jan 2005
TL;DR: It is shown that Polyak’s (1990) stochastic approximation algorithm with averaging, originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise, can be naturally modified to solve convex-concave stochastic saddle point problems.
Abstract: We show that Polyak’s (1990) stochastic approximation algorithm with averaging originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise can be naturally modified to solve convex-concave stochastic saddle point problems. We also show that the extended algorithm, considered on general families of stochastic convex-concave saddle point problems, possesses a rate of convergence unimprovable in order in the minimax sense. We finally present supporting numerical results for the proposed algorithm.
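A toy sketch of the averaged stochastic approximation idea on a convex-concave problem with noisy gradients. The test function, noise model, and step-size schedule are illustrative choices, not the chapter's; the sketch only shows why averaging the iterates stabilizes the noisy saddle point dynamics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical test problem f(x, y) = x^2/2 + x*y - y^2/2, convex in x and
# concave in y, with unique saddle point (0, 0); gradients observed with noise.
x, y = 5.0, -5.0
x_bar = y_bar = 0.0
T = 20_000
for t in range(1, T + 1):
    gx = x + y + rng.normal()          # noisy partial gradient in x (descent)
    gy = x - y + rng.normal()          # noisy partial gradient in y (ascent)
    gamma = 1.0 / np.sqrt(t)           # slowly decaying step size
    x -= gamma * gx
    y += gamma * gy
    x_bar += (x - x_bar) / t           # running Polyak averages of the iterates
    y_bar += (y - y_bar) / t
print(f"averaged iterate: ({x_bar:.3f}, {y_bar:.3f}), saddle point at (0, 0)")
```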

25 citations



01 Jan 2005
TL;DR: This paper describes a new idea for finding the importance sampling density in rare-event simulation: the MinxEnt method (shorthand for minimum cross-entropy).
Abstract: This paper describes a new idea for finding the importance sampling density in rare-event simulation: the MinxEnt method (shorthand for minimum cross-entropy). Some preliminary results show that the method might be very promising.

1 The MinxEnt program

Assume:
• X = (X1, . . . , Xn) is a random vector (with values denoted by x);
• h is the joint density function of X;
• Sj(·) (j = 1, . . . , k) are functions of x.

Recall the Kullback-Leibler distance between any two density functions f, h of X: D(f|h) = Ef[ln(f(X)/h(X))].
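As a concrete check of the definition just recalled, D(f|h) = Ef[ln(f(X)/h(X))] can be estimated by Monte Carlo; the two Gaussian densities below are hypothetical examples, chosen only because the Gaussian case has a closed form to compare against.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_pdf(x, mu, s):
    # log density of N(mu, s^2)
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)

# D(f|h) = E_f[ln(f(X)/h(X))] for hypothetical f = N(1, 1), h = N(0, 4)
x = rng.normal(1.0, 1.0, 100_000)                      # draw X ~ f
mc = (log_pdf(x, 1.0, 1.0) - log_pdf(x, 0.0, 2.0)).mean()
exact = np.log(2.0 / 1.0) + (1.0 + 1.0) / (2 * 4.0) - 0.5  # Gaussian closed form
print(f"Monte Carlo: {mc:.4f}   exact: {exact:.4f}")
```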