Author

Sidney Yakowitz

Bio: Sidney Yakowitz is an academic researcher. The author has contributed to research in topics: Random search & Beam search. The author has an h-index of 1 and has co-authored 1 publication receiving 55 citations.

Papers
Journal ArticleDOI
TL;DR: A search for the global minimum of a function based on sequential noisy measurements is proposed; the search plan is shown to converge in probability to a set of minimizers.
Abstract: A search for the global minimum of a function is proposed; the search is based on sequential noisy measurements. Because no unimodality assumptions are made, stochastic approximation and other well-known methods are not directly applicable. The search plan is shown to be convergent in probability to a set of minimizers. This study was motivated by investigations into machine learning. This setting is explained, and the methodology is applied to create an adaptively improving strategy for 8-puzzle problems.
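The idea in the abstract can be illustrated with a short sketch: draw candidate points at random and average several noisy measurements at each candidate so the estimates concentrate around the true values. This is a minimal illustration under my own assumptions, not the paper's algorithm; `f_noisy` and `sample_point` are placeholder callables.

```python
import random

def noisy_random_search(f_noisy, sample_point, n_iters=2000, revisits=5):
    """Global random search from noisy measurements (illustrative sketch).

    f_noisy(x)     -- returns f(x) plus zero-mean noise
    sample_point() -- draws a candidate at random from the search domain
    Repeated measurements at each candidate are averaged so that estimates
    concentrate around the true objective values as sampling continues.
    """
    best_x, best_est = None, float("inf")
    for _ in range(n_iters):
        x = sample_point()
        # Average several noisy measurements to damp the noise.
        est = sum(f_noisy(x) for _ in range(revisits)) / revisits
        if est < best_est:
            best_x, best_est = x, est
    return best_x, best_est
```

With enough iterations and revisits, the returned point tends to lie near a true minimizer even though every individual measurement is noisy.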

55 citations


Cited by
Journal ArticleDOI
TL;DR: A modification of the simulated annealing algorithm for solving discrete stochastic optimization problems that uses a constant (rather than decreasing) temperature; two approaches for estimating the optimal solution are considered, and both variants of the method are shown to converge almost surely to the set of global optimal solutions.
Abstract: We present a modification of the simulated annealing algorithm designed for solving discrete stochastic optimization problems. Like the original simulated annealing algorithm, our method has the hill climbing feature, so it can find global optimal solutions to discrete stochastic optimization problems with many local solutions. However, our method differs from the original simulated annealing algorithm in that it uses a constant (rather than decreasing) temperature. We consider two approaches for estimating the optimal solution. The first approach uses the number of visits the algorithm makes to the different states (divided by a normalizer) to estimate the optimal solution. The second approach uses the state that has the best average estimated objective function value as estimate of the optimal solution. We show that both variants of our method are guaranteed to converge almost surely to the set of global optimal solutions, and discuss how our work applies in the discrete deterministic optimization setting. We also show how both variants can be applied for solving discrete optimization problems when the objective function values are estimated using either transient or steady-state simulation. Finally, we include some encouraging numerical results documenting the behavior of the two variants of our algorithm when applied for solving two versions of a particular discrete stochastic optimization problem, and compare their performance with that of other variants of the simulated annealing algorithm designed for solving discrete stochastic optimization problems.
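A minimal sketch of a constant-temperature annealing loop that tracks both estimators the abstract describes: the most-visited state and the state with the best average observed objective value. This is an illustration under my own assumptions, not the authors' code; all names are mine.

```python
import math
import random

def constant_temp_sa(est_f, neighbors, x0, temp=1.0, n_iters=5000, rng=random):
    """Constant-temperature simulated annealing for discrete stochastic
    minimization (illustrative sketch).

    est_f(x)     -- one noisy estimate of the objective at state x
    neighbors(x) -- list of neighboring states of x
    Returns two estimates of the optimum: the most-visited state and the
    state with the best average observed objective value.
    """
    visits, sums, counts = {}, {}, {}
    x = x0
    for _ in range(n_iters):
        visits[x] = visits.get(x, 0) + 1
        fx = est_f(x)
        sums[x] = sums.get(x, 0.0) + fx
        counts[x] = counts.get(x, 0) + 1
        y = rng.choice(neighbors(x))
        fy = est_f(y)
        sums[y] = sums.get(y, 0.0) + fy
        counts[y] = counts.get(y, 0) + 1
        # Accept uphill moves with Boltzmann probability at a FIXED temperature,
        # which preserves the hill-climbing feature without a cooling schedule.
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x = y
    most_visited = max(visits, key=visits.get)
    best_average = min(sums, key=lambda s: sums[s] / counts[s])
    return most_visited, best_average
```

Keeping the temperature constant means the chain keeps exploring forever; the convergence burden is shifted onto the estimators, which is why both the visit counts and the running averages are maintained.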

213 citations

Journal ArticleDOI
TL;DR: A method for selecting training exemplars is derived and its use during network training is demonstrated; experimental results indicate that training on exemplars selected in this fashion can save computation in general-purpose use as well.
Abstract: The authors derive a method for selecting exemplars for training a multilayer feedforward network architecture to estimate an unknown (deterministic) mapping from clean data, i.e., data measured either without error or with negligible error. The objective is to minimize the data requirement of learning. The authors choose a criterion for selecting training examples that works well in conjunction with the criterion used for learning, here, least squares. They proceed sequentially, selecting an example that, when added to the previous set of training examples and learned, maximizes the decrement of network squared error over the input space. When dealing with clean data and deterministic relationships, concise training sets that minimize the integrated squared bias (ISB) are desired. The ISB is used to derive a selection criterion for evaluating individual training examples, the DISB, that is maximized to select new exemplars. They conclude with graphical illustrations of the method, and demonstrate its use during network training. Experimental results indicate that training upon exemplars selected in this fashion can save computation in general-purpose use as well.
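The sequential selection loop can be illustrated with a much-simplified greedy stand-in. Instead of the paper's DISB criterion, this sketch adds, at each step, the candidate input where the current least-squares fit's squared error is largest, then refits; the polynomial model, the seeding of the fit, and all names are my own assumptions, not the authors' method.

```python
import numpy as np

def greedy_exemplar_selection(candidates, target, degree=3, n_select=5):
    """Sequential exemplar selection for least-squares learning (sketch).

    A simplified stand-in for a DISB-style criterion: at each step, add
    the candidate input where the current fit's squared error is largest,
    then refit on the enlarged exemplar set.
    """
    xs = np.asarray(candidates, dtype=float)
    ys = target(xs)  # clean (noise-free) labels, as in the paper's setting
    chosen = [int(np.argmin(xs)), int(np.argmax(xs))]  # seed with the endpoints
    for _ in range(n_select - len(chosen)):
        deg = min(degree, len(chosen) - 1)
        coef = np.polyfit(xs[chosen], ys[chosen], deg)
        resid = (np.polyval(coef, xs) - ys) ** 2
        resid[chosen] = -1.0  # never re-select an existing exemplar
        chosen.append(int(np.argmax(resid)))
    return chosen
```

Because the data are clean and the mapping deterministic, each newly chosen exemplar sits where the current model is worst, which is the intuition behind selecting examples that maximally decrement the squared error over the input space.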

164 citations

Journal ArticleDOI
TL;DR: In this article, the authors present two versions of a new iterative method for solving discrete stochastic optimization problems where the objective function is evaluated using transient or steady-state simulation.
Abstract: This paper addresses the problem of optimizing a function over a finite or countably infinite set of alternatives, in situations where this objective function cannot be evaluated exactly, but has to be estimated or measured. A special focus is on situations where simulation is used to evaluate the objective function. We present two versions of a new iterative method for solving such discrete stochastic optimization problems. In each iteration of the proposed method, a neighbor of the “current” alternative is selected, and estimates of the objective function evaluated at the current and neighboring alternatives are compared. The alternative that has a better observed function value becomes the next current alternative. We show how one version of the proposed method can be used to solve discrete optimization problems where the objective function is evaluated using transient or steady-state simulation, and we show how the other version can be applied to solve a special class of discrete stochastic optimization problems…
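A hedged sketch of the neighbor-comparison iteration the abstract outlines: compare averaged noisy estimates at the current alternative and a random neighbor, and move to whichever looks better. The sample sizes and the final estimator are my own choices for illustration, not details taken from the paper.

```python
import random

def neighbor_comparison_search(est_f, neighbors, x0, n_iters=3000,
                               samples_per_iter=5, rng=random):
    """Iterative neighbor-comparison method for discrete stochastic
    minimization (illustrative sketch).

    Each iteration compares noisy estimates at the current alternative and
    a randomly chosen neighbor; the better-looking alternative becomes the
    next current alternative.
    """
    x = x0
    visits = {x0: 1}
    for _ in range(n_iters):
        y = rng.choice(neighbors(x))
        fx = sum(est_f(x) for _ in range(samples_per_iter)) / samples_per_iter
        fy = sum(est_f(y) for _ in range(samples_per_iter)) / samples_per_iter
        if fy < fx:
            x = y
        visits[x] = visits.get(x, 0) + 1
    # Estimate the optimum by the most frequently visited alternative.
    return max(visits, key=visits.get)
```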

133 citations

Journal ArticleDOI
TL;DR: A new variant of the stochastic comparison method is proved to be guaranteed to converge almost surely to the set of global optimal solutions, and a result is presented demonstrating that this method is likely to perform well in practice.
Abstract: We discuss the choice of the estimate of the optimal solution when random search methods are applied to solve discrete stochastic optimization problems. At present, such optimization methods usually estimate the optimal solution using either the feasible solution the method is currently exploring or the feasible solution visited most often so far by the method. We propose using all the observed objective function values generated as the random search method moves around the feasible region seeking an optimal solution, to obtain increasingly precise estimates of the objective function values at the different points in the feasible region. At any given time, the feasible solution that has the best estimated objective function value (the largest for maximization problems; the smallest for minimization problems) is used as the estimate of the optimal solution. We discuss the advantages of using this approach for estimating the optimal solution and present numerical results showing that modifying an existing random search method to use this approach for estimating the optimal solution appears to yield improved performance. We also present several rate-of-convergence results for random search methods using our approach for estimating the optimal solution. One of these random search methods is a new variant of the stochastic comparison method; in addition to specifying the rate of convergence of this method, we prove that it is guaranteed to converge almost surely to the set of global optimal solutions and present a result that demonstrates that this method is likely to perform well in practice.
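The estimation scheme the abstract proposes (accumulate every observed objective value and report the solution with the best running average) can be sketched as a small tracker that any random search method could feed its observations into; the class and method names are mine, not the paper's.

```python
class AllObservationsEstimator:
    """Track every noisy objective observation a random search generates,
    and estimate the optimum by the best per-solution average (sketch of
    the estimation scheme described in the abstract)."""

    def __init__(self, minimize=True):
        self.sums, self.counts = {}, {}
        self.minimize = minimize

    def record(self, solution, observed_value):
        """Fold one noisy observation into the running average."""
        self.sums[solution] = self.sums.get(solution, 0.0) + observed_value
        self.counts[solution] = self.counts.get(solution, 0) + 1

    def estimate(self):
        """Return the solution with the best average observed value."""
        avg = lambda s: self.sums[s] / self.counts[s]
        pick = min if self.minimize else max
        return pick(self.sums, key=avg)
```

Because every observation is retained in the averages, the estimates at frequently revisited solutions sharpen over time, rather than being discarded as the search moves on.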

128 citations

Journal ArticleDOI
TL;DR: In this article, a learning system searches a space of possible heuristic methods for one well suited to the eccentricities of the given domain and problem distribution, and identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.
Abstract: Although most scheduling problems are NP-hard, domain-specific techniques perform well in practice but are quite expensive to construct. In adaptive problem solving, domain-specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system searches a space of possible heuristic methods for one well suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.

60 citations