
An Overview of the Simultaneous Perturbation Method for Efficient Optimization

01 Jan 1999, pp. 141-154
TL;DR: Simultaneous perturbation stochastic approximation (SPSA) as mentioned in this paper is a widely used method for multivariate optimization problems that requires only two measurements of the objective function regardless of the dimension of the optimization problem.
Abstract: Multivariate stochastic optimization plays a major role in the analysis and control of many engineering systems. In almost all real-world optimization problems, it is necessary to use a mathematical algorithm that iteratively seeks out the solution because an analytical (closed-form) solution is rarely available. In this spirit, the “simultaneous perturbation stochastic approximation (SPSA)” method for difficult multivariate optimization problems has been developed. SPSA has recently attracted considerable international attention in areas such as statistical parameter estimation, feedback control, simulation-based optimization, signal and image processing, and experimental design. The essential feature of SPSA—which accounts for its power and relative ease of implementation—is the underlying gradient approximation that requires only two measurements of the objective function regardless of the dimension of the optimization problem. This feature allows for a significant decrease in the cost of optimization, especially in problems with a large number of variables to be optimized.
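
That gradient approximation is simple enough to state in a few lines. Below is a minimal sketch in Python (an illustration, not code from the paper): all p coordinates are perturbed at once by a random ±1 (Rademacher) vector, so the estimate costs exactly two function measurements however large p is; the function name spsa_gradient and the toy noisy objective are assumptions for the example.

    import numpy as np

    def spsa_gradient(y, theta, c, rng):
        # Two-measurement SPSA gradient estimate (a sketch, not the paper's code).
        # y: noisy objective, theta: parameter vector, c: perturbation size c_k.
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Rademacher perturbation
        y_plus = y(theta + c * delta)                      # measurement 1
        y_minus = y(theta - c * delta)                     # measurement 2
        # One shared numerator; only the divisor delta_i varies per coordinate.
        return (y_plus - y_minus) / (2.0 * c * delta)

    # Toy usage: a noisy quadratic in p = 50 dimensions still costs two measurements.
    rng = np.random.default_rng(0)
    noisy = lambda t: float(np.sum(t ** 2) + 0.01 * rng.normal())
    g_hat = spsa_gradient(noisy, theta=np.ones(50), c=0.1, rng=rng)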
Citations
Journal ArticleDOI
TL;DR: A Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
Abstract: This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.

1,436 citations

Journal ArticleDOI
TL;DR: The main focus will be on the different approaches to performing robust optimization in practice, including the methods of mathematical programming, deterministic nonlinear optimization, and direct search methods such as stochastic approximation and evolutionary computation.

1,435 citations

Journal ArticleDOI
TL;DR: It is shown that, on average, PSO outperforms GA in all cases considered, though the relative advantages of PSO vary from case to case.
Abstract: Determining the optimum type and location of new wells is an essential component in the efficient development of oil and gas fields. The optimization problem is, however, demanding due to the potentially high dimension of the search space and the computational requirements associated with function evaluations, which, in this case, entail full reservoir simulations. In this paper, the particle swarm optimization (PSO) algorithm is applied for the determination of optimal well type and location. The PSO algorithm is a stochastic procedure that uses a population of solutions, called particles, which move in the search space. Particle positions are updated iteratively according to particle fitness (objective function value) and position relative to other particles. The general PSO procedure is first discussed, and then the particular variant implemented for well optimization is described. Four example cases are considered. These involve vertical, deviated, and dual-lateral wells and optimization over single and multiple reservoir realizations. For each case, both the PSO algorithm and the widely used genetic algorithm (GA) are applied to maximize net present value. Multiple runs of both algorithms are performed and the results are averaged in order to achieve meaningful comparisons. It is shown that, on average, PSO outperforms GA in all cases considered, though the relative advantages of PSO vary from case to case. Taken in total, these findings are very promising and demonstrate the applicability of PSO for this challenging problem.
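
To make the particle update just described concrete, here is a minimal global-best PSO sketch in Python. It is a generic textbook form, not the specific variant the authors implement for well placement: the inertia weight w, acceleration coefficients c1 and c2, the bounds, and the toy sphere objective are all illustrative assumptions, and a maximization problem such as net present value would simply negate the objective.

    import numpy as np

    rng = np.random.default_rng(0)

    def pso_minimize(f, dim, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        # Generic global-best PSO (a sketch, not the paper's well-placement variant).
        lo, hi = bounds
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
        v = np.zeros_like(x)                               # particle velocities
        pbest = x.copy()                                   # each particle's best position
        pbest_f = np.array([f(p) for p in x])              # ...and its objective value
        gbest = pbest[np.argmin(pbest_f)].copy()           # swarm-wide best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            # Velocity blends inertia with pulls toward the particle's own best
            # and the swarm's best, i.e. "position relative to other particles".
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            improved = fx < pbest_f
            pbest[improved] = x[improved]
            pbest_f[improved] = fx[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, float(pbest_f.min())

    # Toy usage: minimize a 10-dimensional sphere function.
    best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=10)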

402 citations

Journal ArticleDOI
TL;DR: This work proposes a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models, and focuses on finding sets of experiments that provide the most information about targeted sets of parameters.

372 citations

Journal ArticleDOI
TL;DR: The first compelling evidence for a neuronal dissociation between the different phases of precision grasping in human premotor cortex is provided, based on healthy subjects performing a grip-lift task with their right, dominant hand.
Abstract: Small-object manipulation is essential in numerous human activities, although its neural bases are still essentially unknown. Recent functional imaging studies have shown that precision grasping activates a large bilateral frontoparietal network, including ventral (PMv) and dorsal (PMd) premotor areas. To dissociate the role of PMv and PMd in the control of hand and finger movements, we produced, by means of transcranial magnetic stimulation (TMS), transient virtual lesions of these two areas in both hemispheres, in healthy subjects performing a grip-lift task with their right, dominant hand. We found that a virtual lesion of PMv specifically impaired the grasping component of these movements: a lesion of either the left or right PMv altered the correct positioning of fingers on the object, a prerequisite for an efficient grasping, whereas lesioning the left, contralateral PMv disturbed the sequential recruitment of intrinsic hand muscles, all other movement parameters being unaffected by PMv lesions. Conversely, we found that a virtual lesion of the left PMd impaired the proper coupling between the grasping and lifting phases, as evidenced by the TMS-induced delay in the recruitment of proximal muscles responsible for the lifting phase; lesioning the right PMd failed to affect dominant hand movements. Finally, an analysis of the time course of these effects allowed us to demonstrate the sequential involvement of PMv and PMd in movement preparation. These results provide the first compelling evidence for a neuronal dissociation between the different phases of precision grasping in human premotor cortex.

331 citations

References
Journal ArticleDOI
TL;DR: The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures, and that can be significantly more efficient than the standard algorithms in large-dimensional problems.
Abstract: The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
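
To see the difference concretely, the two estimators can be written side by side in the standard notation of the SPSA literature (notation assumed here, not transcribed from the paper). With iterate $\hat{\theta}_k$, gain $c_k$, and a random perturbation vector $\Delta_k$ whose components $\Delta_{ki}$ are, e.g., independent $\pm 1$ random variables, the simultaneous perturbation estimate is

$\hat{g}_{ki}(\hat{\theta}_k) = \dfrac{y(\hat{\theta}_k + c_k \Delta_k) - y(\hat{\theta}_k - c_k \Delta_k)}{2 c_k \Delta_{ki}}, \quad i = 1, \ldots, p,$

which reuses the same two measurements of $y$ for every component, whereas the finite-difference (Kiefer-Wolfowitz) estimate

$\hat{g}_{ki}(\hat{\theta}_k) = \dfrac{y(\hat{\theta}_k + c_k e_i) - y(\hat{\theta}_k - c_k e_i)}{2 c_k}$

perturbs one coordinate at a time along the unit vectors $e_i$ and therefore needs $2p$ measurements per iteration.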

2,149 citations

Journal ArticleDOI
TL;DR: In this article, the authors give a scheme whereby, starting from an arbitrary point $x_1$, one obtains successively $x_2, x_3, \cdots$ such that $x_n$ converges in probability, as $n \rightarrow \infty$, to the unknown point $\theta$ at which the regression function attains its maximum.
Abstract: Let $M(x)$ be a regression function which has a maximum at the unknown point $\theta$. $M(x)$ is itself unknown to the statistician, who, however, can take observations at any level $x$. This paper gives a scheme whereby, starting from an arbitrary point $x_1$, one obtains successively $x_2, x_3, \cdots$ such that $x_n$ converges to $\theta$ in probability as $n \rightarrow \infty$.
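
Written out, the scheme is the now-classic Kiefer-Wolfowitz recursion, stated here in its standard one-dimensional form from later surveys rather than quoted from the abstract:

$x_{n+1} = x_n + a_n \, \dfrac{y(x_n + c_n) - y(x_n - c_n)}{c_n},$

where $y(\cdot)$ denotes a noisy observation of $M(\cdot)$ and the positive gain sequences satisfy $c_n \rightarrow 0$, $\sum a_n = \infty$, $\sum a_n c_n < \infty$, and $\sum a_n^2 c_n^{-2} < \infty$.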

2,141 citations

Book
01 Jan 1997
TL;DR: A monograph on stochastic approximation that covers applications to learning, state-dependent noise and queueing, signal processing, and adaptive control, along with convergence with probability one, weak convergence methods, rates of convergence, iterate averaging, and distributed/decentralized and asynchronous algorithms.
Abstract: Contents: introduction - applications and issues; applications to learning, state-dependent noise, and queueing; applications in signal processing and adaptive control; mathematical background; convergence with probability one - martingale difference noise; convergence with probability one - correlated noise; weak convergence - introduction; weak convergence methods for general algorithms; applications - proofs of convergence; rate of convergence; averaging of the iterates; distributed/decentralized and asynchronous algorithms.

1,172 citations

Journal ArticleDOI
TL;DR: This paper presents a simple step-by-step guide to implementation of SPSA in generic optimization problems and offers some practical suggestions for choosing certain algorithm coefficients.
Abstract: The need for solving multivariate optimization problems is pervasive in engineering and the physical and social sciences. The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for challenging optimization problems where it is difficult or impossible to directly obtain a gradient of the objective function with respect to the parameters being optimized. SPSA is based on an easily implemented and highly efficient gradient approximation that relies on measurements of the objective function, not on measurements of the gradient of the objective function. The gradient approximation is based on only two function measurements (regardless of the dimension of the gradient vector). This contrasts with standard finite-difference approaches, which require a number of function measurements proportional to the dimension of the gradient vector. This paper presents a simple step-by-step guide to implementation of SPSA in generic optimization problems and offers some practical suggestions for choosing certain algorithm coefficients.
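
Those coefficient guidelines are easy to act on in code. The sketch below is an illustration following Spall's published rules of thumb, not code from the paper: it uses the commonly recommended gain sequences $a_k = a/(A + k + 1)^{0.602}$ and $c_k = c/(k + 1)^{0.101}$, with the stability constant $A$ set to roughly 10% of the iteration budget and $c$ chosen near the standard deviation of the measurement noise; the function names and the toy objective are assumptions for the example.

    import numpy as np

    def spsa_minimize(y, theta0, iters=1000, a=0.1, c=0.1,
                      alpha=0.602, gamma=0.101, seed=0):
        # Basic SPSA loop with the gain sequences suggested in Spall's guide.
        # y: noisy objective; theta0: initial parameter vector;
        # a, c: gain numerators (problem-dependent; tune on a few trial runs).
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float).copy()
        A = 0.1 * iters                       # stability constant, ~10% of budget
        for k in range(iters):
            a_k = a / (A + k + 1) ** alpha    # step-size sequence a_k
            c_k = c / (k + 1) ** gamma        # perturbation-size sequence c_k
            delta = rng.choice([-1.0, 1.0], size=theta.size)
            g_hat = (y(theta + c_k * delta) - y(theta - c_k * delta)) / (2.0 * c_k * delta)
            theta -= a_k * g_hat              # gradient-descent-style update
        return theta

    # Toy usage: minimize a noisy quadratic in 20 dimensions.
    rng = np.random.default_rng(1)
    noisy_quadratic = lambda t: float(np.sum(t ** 2) + 0.01 * rng.normal())
    theta_star = spsa_minimize(noisy_quadratic, np.ones(20))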

759 citations

Journal ArticleDOI
TL;DR: This note presents a form of SPSA that requires only one function measurement (for any dimension), together with theory identifying the class of problems for which this one-measurement form will be asymptotically superior to the standard two-measurement form.
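
In the notation used above (standard SPSA notation, assumed here rather than quoted from the note), the one-measurement form replaces the two-sided difference with a single evaluation at the perturbed point:

$\hat{g}_{ki}(\hat{\theta}_k) = \dfrac{y(\hat{\theta}_k + c_k \Delta_k)}{c_k \Delta_{ki}}.$

Because the level term $y(\hat{\theta}_k)$ no longer cancels from the numerator, this estimate is noisier than the two-measurement form; the note's theory characterizes the problems for which halving the measurement cost nonetheless pays off asymptotically.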

263 citations