Topic
Stochastic programming
About: Stochastic programming is a research topic. Over the lifetime, 12343 publications have been published within this topic receiving 421049 citations.
Papers published on a yearly basis
Papers
01 Jan 2007
TL;DR: A stochastic programming based approach to the design of sustainable logistics networks under uncertainty, together with a solution approach integrating the sample average approximation scheme with an importance sampling strategy.
Abstract: The design of sustainable logistics networks has attracted growing attention under stringent environmental and social requirements. This paper proposes a stochastic programming based approach to the design of sustainable logistics networks under uncertainty. A solution approach integrating the sample average approximation scheme with an importance sampling strategy is developed. A case study involving a large-scale sustainable logistics network in the Asia Pacific region is presented to demonstrate the significance of the developed stochastic model as well as the efficiency of the proposed solution approach.
130 citations
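The sample average approximation (SAA) scheme mentioned above replaces the expectation in a stochastic program with an average over sampled scenarios, then optimizes that deterministic surrogate. A minimal sketch on a toy newsvendor problem (not the paper's logistics model, and without its importance-sampling enhancement; the function name and parameters are illustrative):

```python
import random

def saa_newsvendor(n_samples=2000, price=5.0, cost=2.0, seed=0):
    """Sample average approximation (SAA) for a toy newsvendor problem:
    choose an order quantity q to maximize E[price*min(q, D) - cost*q],
    where demand D ~ Uniform(50, 150).  The expectation is replaced by
    an average over n_samples sampled demand scenarios."""
    rng = random.Random(seed)
    demands = [rng.uniform(50, 150) for _ in range(n_samples)]

    def sample_avg_profit(q):
        return sum(price * min(q, d) - cost * q for d in demands) / n_samples

    # The sample-average problem is deterministic; here it is one-dimensional,
    # so a grid search over integer order quantities suffices.
    return max(range(0, 201), key=sample_avg_profit)

q_hat = saa_newsvendor()
# The classic critical-fractile optimum is q* = 50 + 100*(price-cost)/price = 110,
# so q_hat should land close to 110 for a moderate sample size.
```

In large-scale models like the paper's, the inner deterministic problem is a mixed-integer program rather than a grid search, and importance sampling reduces the number of scenarios needed for a given accuracy.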
TL;DR: In this article, the sufficiency tests are applied to the necessary conditions to determine when solutions of the stochastic optimization problems also solve the deterministic robust stability problems, and the modified Riccati equation approach of Petersen and Hollot is generalized in the static case and extended to dynamic compensation.
Abstract: Three parallel gaps in robust feedback control theory are examined: sufficiency versus necessity, deterministic versus stochastic uncertainty modeling, and stability versus performance. Deterministic and stochastic output-feedback control problems are considered with both static and dynamic controllers. The static and dynamic robust stabilization problems involve deterministically modeled bounded but unknown measurable time-varying parameter variations, while the static and dynamic stochastic optimal control problems feature state-, control-, and measurement-dependent white noise. General sufficiency conditions for the deterministic problems are obtained using Lyapunov's direct method, while necessary conditions for the stochastic problems are derived as a consequence of minimizing a quadratic performance criterion. The sufficiency tests are then applied to the necessary conditions to determine when solutions of the stochastic optimization problems also solve the deterministic robust stability problems. As an additional application of the deterministic result, the modified Riccati equation approach of Petersen and Hollot is generalized in the static case and extended to dynamic compensation.
130 citations
TL;DR: This document emphasizes the difficulties in simulation optimization as compared to algebraic model-based mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.
Abstract: Simulation optimization refers to the optimization of an objective function subject to constraints, both of which can be evaluated through a stochastic simulation. To address specific features of a particular simulation—discrete or continuous decisions, expensive or cheap simulations, single or multiple outputs, homogeneous or heterogeneous noise—various algorithms have been proposed in the literature. As one can imagine, there exist several competing algorithms for each of these classes of problems. This document emphasizes the difficulties in simulation optimization as compared to algebraic model-based mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.
130 citations
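The defining feature of simulation optimization, as the abstract notes, is that the objective can only be observed through noisy simulation runs. A minimal sketch of one of the simplest approaches for discrete decisions, ranking and selection by replication (a toy example with invented names, not from the survey itself):

```python
import random

def simulate(x, rng):
    """Stand-in stochastic simulation: a noisy observation of the
    (unknown to the optimizer) objective f(x) = (x - 3)^2."""
    return (x - 3) ** 2 + rng.gauss(0, 1)

def select_best(candidates, replications=200, seed=0):
    """Crude simulation optimization by ranking and selection:
    replicate each candidate decision many times and pick the one
    with the best (lowest) sample-mean objective."""
    rng = random.Random(seed)
    means = {x: sum(simulate(x, rng) for _ in range(replications)) / replications
             for x in candidates}
    return min(means, key=means.get)

best = select_best(range(0, 7))
```

The survey's point is visible even here: with noise of unit standard deviation, roughly `replications` simulation runs per candidate are needed just to separate alternatives whose true values differ by a fraction of that noise, which is why expensive simulations demand more sophisticated algorithms.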
TL;DR: It is speculated that the stochastic training method implemented in this study for training recurrent perceptrons can be used to train perceptron networks that have radically recurrent architectures.
Abstract: Evolutionary programming, a systematic multi-agent stochastic search technique, is used to generate recurrent perceptrons (nonlinear IIR filters). A hybrid optimization scheme is proposed that embeds a single-agent stochastic search technique, the method of Solis and Wets, into the evolutionary programming paradigm. The proposed hybrid optimization approach is further augmented by "blending" randomly selected parent vectors to create additional offspring. The first part of this work investigates the performance of the suggested hybrid stochastic search method. After demonstration on the Bohachevsky and Rosenbrock response surfaces, the hybrid stochastic optimization approach is applied in determining both the model order and the coefficients of recurrent perceptron time-series models. An information criterion is used to evaluate each recurrent perceptron structure as a candidate solution. It is speculated that the stochastic training method implemented in this study for training recurrent perceptrons can be used to train perceptron networks that have radically recurrent architectures.
130 citations
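The hybrid described above combines population-level Gaussian mutation (evolutionary programming) with a per-individual local random search in the style of Solis and Wets. A minimal sketch on a simple sphere surface (the paper uses Bohachevsky and Rosenbrock surfaces and recurrent perceptron models; all names and parameter values here are illustrative, and the "blending" of parent vectors is omitted):

```python
import random

def sphere(x):
    """Simple test surface; stands in for the paper's response surfaces."""
    return sum(v * v for v in x)

def solis_wets_step(x, f, rng, sigma=0.3):
    """One Solis-Wets-style local move: sample a Gaussian perturbation d;
    accept x+d if it improves f, otherwise try the reflected point x-d."""
    d = [rng.gauss(0, sigma) for _ in x]
    for cand in ([xi + di for xi, di in zip(x, d)],
                 [xi - di for xi, di in zip(x, d)]):
        if f(cand) < f(x):
            return cand
    return x

def hybrid_ep(f, dim=2, pop=20, gens=200, seed=0):
    """Evolutionary programming (Gaussian-mutation offspring, truncation
    selection) hybridized with a Solis-Wets-style local search applied to
    each individual."""
    rng = random.Random(seed)
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        offspring = [[xi + rng.gauss(0, 0.5) for xi in ind] for ind in population]
        refined = [solis_wets_step(ind, f, rng) for ind in population + offspring]
        refined.sort(key=f)          # keep the best pop individuals
        population = refined[:pop]
    return population[0]

best = hybrid_ep(sphere)
```

The division of labor is the point of the hybrid: mutation across the population provides global exploration, while the embedded single-agent Solis-Wets moves refine each candidate locally between generations.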
01 Dec 2012
TL;DR: In this paper, the authors analyzed the convergence of gradient-based distributed optimization algorithms that base their updates on delayed stochastic gradient information and showed that the delays are asymptotically negligible.
Abstract: We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to gradient-based distributed optimization algorithms where a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. We show n-node architectures whose optimization error in stochastic problems—in spite of asynchronous delays—scales asymptotically as O(1/√nT) after T iterations. This rate is known to be optimal for a distributed system with n nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a logistic regression task.
130 citations
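The master-worker pattern in the abstract can be simulated on a single machine: the master applies a stochastic gradient that a worker computed from a parameter copy that is several updates stale. A minimal sketch on a toy quadratic objective (the paper's experiments use logistic regression; the function name, objective, and delay model here are illustrative):

```python
import random
from collections import deque

def delayed_sgd(tau=3, steps=500, lr=0.05, seed=0):
    """SGD where each applied gradient was computed at a tau-step-stale
    iterate, mimicking asynchronous worker delay.  Toy objective:
    minimize E[(w - Z)^2 / 2] with Z ~ N(1, 0.1), so the optimum is w* = 1."""
    rng = random.Random(seed)
    w = 0.0
    stale = deque([0.0] * tau, maxlen=tau)   # iterates as the workers saw them
    for _ in range(steps):
        w_old = stale[0]                     # parameter from tau steps ago
        z = rng.gauss(1.0, 0.1)
        g = w_old - z                        # stochastic gradient of (w-Z)^2/2 at w_old
        w -= lr * g                          # master applies the delayed gradient
        stale.append(w)                      # newest iterate enters the pipeline
    return w

w_hat = delayed_sgd()
```

For a small enough step size relative to the delay, the iterates still converge near the optimum, which is the qualitative content of the paper's result that delays are asymptotically negligible for smooth stochastic problems.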