
Showing papers on "Stochastic programming published in 1977"


Journal ArticleDOI
01 Oct 1977

1,016 citations


Journal ArticleDOI
TL;DR: The paper indicates how still sharper bounds may be generated, based on the simple idea of sequentially applying the classic bounds to smaller and smaller subintervals of the range of the random variable.
Abstract: This paper is concerned with the determination of tight lower and upper bounds on the expectation of a convex function of a random variable. The classic bounds are those of Jensen and Edmundson-Madansky and were recently generalized by Ben-Tal and Hochman. This paper indicates how still sharper bounds may be generated based on the simple idea of sequentially applying the classic bounds to smaller and smaller subintervals of the range of the random variable. The bounds are applicable in the multivariate case if the random variables are independent. In the dependent case bounds based on the Edmundson-Madansky inequality are not available; however, bounds may be developed using the conditional form of Jensen's inequality. We give some examples to illustrate the geometrical interpretation and the calculations involved in the numerical determination of the new bounds. Special attention is given to the problem of maximizing a nonlinear program that has a stochastic objective function.
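The refinement idea admits a compact sketch for a discrete random variable. The following is a minimal illustration (the convex function, support, probabilities, and cut point are invented for the example, not taken from the paper): Jensen gives a lower bound, Edmundson-Madansky an upper bound, and applying both on subintervals of the range tightens them.

```python
def jensen_lower(f, xs, ps):
    # Jensen's inequality: f(E[X]) <= E[f(X)] for convex f
    mu = sum(x * p for x, p in zip(xs, ps))
    return f(mu)

def em_upper(f, xs, ps, a, b):
    # Edmundson-Madansky: the chord of f on [a, b] lies above f,
    # so E[f(X)] <= ((b - mu) f(a) + (mu - a) f(b)) / (b - a)
    mu = sum(x * p for x, p in zip(xs, ps))
    return ((b - mu) * f(a) + (mu - a) * f(b)) / (b - a)

def refined_bounds(f, xs, ps, cuts):
    # Apply both classic bounds on each subinterval [cuts[k], cuts[k+1]],
    # weighting by the probability mass falling in that subinterval.
    lo = hi = 0.0
    for a, b in zip(cuts, cuts[1:]):
        sub = [(x, p) for x, p in zip(xs, ps)
               if a <= x < b or (b == cuts[-1] and x == b)]
        mass = sum(p for _, p in sub)
        if mass == 0:
            continue
        sx = [x for x, _ in sub]
        sp = [p / mass for _, p in sub]     # conditional distribution
        lo += mass * jensen_lower(f, sx, sp)
        hi += mass * em_upper(f, sx, sp, a, b)
    return lo, hi

f = lambda x: x * x                          # convex test function
xs = [0.0, 1.0, 2.0, 3.0]                    # support of X
ps = [0.25, 0.25, 0.25, 0.25]
lo0 = jensen_lower(f, xs, ps)                # coarse lower bound
hi0 = em_upper(f, xs, ps, 0.0, 3.0)          # coarse upper bound
lo1, hi1 = refined_bounds(f, xs, ps, [0.0, 1.5, 3.0])
```

Here the true expectation is 3.5; the coarse bounds [2.25, 4.5] tighten to [3.25, 3.75] after one split, mirroring the paper's observation that the bounds sharpen as the partition is refined.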

107 citations


Journal ArticleDOI
TL;DR: It is found that goal programming can be used as a means of generating a range of possible solutions to the planning problem.
Abstract: Goal programming is similar in structure to linear programming, but offers a more flexible approach to planning problems by allowing a number of goals which are not necessarily compatible to be taken into account simultaneously. The use of linear programming in farm planning is reviewed briefly. Consideration is given to published evidence of the goals of farmers, and ways in which these goals can be represented. A goal programming model of a 600-acre mixed farm is described and evaluated. Advantages and shortcomings of goal programming in relation to linear programming are considered. It is found that goal programming can be used as a means of generating a range of possible solutions to the planning problem.
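The structural difference from linear programming is the use of deviation variables around each goal: goals become equalities that can be under- or over-shot, and the objective minimizes the unwanted deviations. A toy sketch (the crops, coefficients, and goal levels are invented, not the paper's 600-acre model), using scipy's LP solver:

```python
from scipy.optimize import linprog

# Variables: [x1, x2, d_profit_minus, d_profit_plus, d_labor_minus, d_labor_plus]
# x1, x2 = acres of two hypothetical crops; d_* = goal deviation variables.

# Goals, written as equalities with deviation variables:
#   40*x1 + 30*x2 + d_profit_minus - d_profit_plus = 3500   (profit goal)
#    2*x1 +  1*x2 + d_labor_minus  - d_labor_plus  = 150    (labor goal)
A_eq = [[40, 30, 1, -1, 0, 0],
        [ 2,  1, 0,  0, 1, -1]]
b_eq = [3500, 150]

# Hard constraint: total land limited to 100 acres
A_ub = [[1, 1, 0, 0, 0, 0]]
b_ub = [100]

# Penalize only profit underachievement and labor overuse
c = [0, 0, 1, 0, 0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
```

With these numbers both goals happen to be attainable (e.g. 50 acres of each crop), so the optimal deviation total is zero; with incompatible goals the solution instead shows which goals are sacrificed and by how much, which is how the model generates a range of candidate plans.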

62 citations


Book
30 Jun 1977

59 citations


Journal ArticleDOI
TL;DR: Sharp bounds on the value of perfect information for static and dynamic simple recourse stochastic programming problems are presented and some recent extensions of Jensen's upper bound and the Edmundson-Madansky lower bound are used.
Abstract: We present sharp bounds on the value of perfect information for static and dynamic simple recourse stochastic programming problems. The bounds are sharper than the available bounds based on Jensen's inequality. The new bounds use some recent extensions of Jensen's upper bound and the Edmundson-Madansky lower bound on the expectation of a concave function of several random variables. Bounds are obtained for nonlinear return functions and linear and strictly increasing concave utility functions for static and dynamic problems. When the random variables are jointly dependent, the Edmundson-Madansky type bound must be replaced by a less sharp "feasible point" bound. Bounds that use constructs from mean-variance analysis are also presented. With independent random variables the calculation of the bounds generally involves several simple univariate numerical integrations and the solution of several similar nonlinear programs. These bounds may be made as sharp as desired with increasing computational effort. The bounds are illustrated on a well-known problem in the literature and on a portfolio selection problem.
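The quantity being bounded, the expected value of perfect information (EVPI), is the gap between the optimal "here-and-now" expected cost and the expected "wait-and-see" cost. A tiny discrete-recourse sketch with invented numbers:

```python
# Cost of ordering x units when demand turns out to be d (illustrative numbers)
def cost(x, d, unit=1.0, penalty=3.0):
    return unit * x + penalty * max(d - x, 0)

scenarios = [(2, 0.5), (8, 0.5)]   # (demand, probability)
X = range(11)                      # candidate order quantities

# Here-and-now: choose x before seeing demand, minimizing expected cost
here_now = min(sum(p * cost(x, d) for d, p in scenarios) for x in X)

# Wait-and-see: choose x after seeing demand, then take the expectation
wait_see = sum(p * min(cost(x, d) for x in X) for d, p in scenarios)

evpi = here_now - wait_see         # expected value of perfect information
```

In this toy instance EVPI = 8 - 5 = 3; the paper's contribution is bounding this quantity without enumerating scenarios, via the Jensen and Edmundson-Madansky extensions.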

49 citations


01 Jan 1977
TL;DR: This work considers a stochastic linear programming problem with random RHS elements and obtains a minimax solution of the problem as an optimal solution of an equivalent deterministic convex separable programming problem.
Abstract: We consider a linear programming problem with random aij and bi elements that have known (finite) means and variances, but whose distribution functions are otherwise unspecified. A minimax solution of the stochastic programming model is obtained by solving an equivalent deterministic convex programming problem. We derive these deterministic equivalents under different assumptions regarding the stochastic nature of the random parameters. In formulating a stochastic linear programming model, we generally assume a definite probability distribution for the parameters (A, b, c) of the model. In this note we avoid making the assumption that the precise form of the probability distribution of the parameters is known. What we assume, however, is that the random aij and bi elements have known (finite) means and variances. The problem is then to obtain a minimax solution that minimizes the maximum of the objective function over all distributions with the given mean and standard deviation. The situation of a decision maker facing an unknown probability distribution can be viewed as a zero-sum game against nature. Zackova [3] proves that the general min-max theorem holds in this case if the set P of all possible distributions is assumed to be convex and compact (in the sense of Levy's distance). However, neither the above result nor its proof presents any effective method for finding the value of the game or determining explicit solutions. In Section 1 we obtain some results that are used later in determining minimax solutions under different assumptions regarding the stochastic nature of the random parameters of a linear programming model. Section 2 considers a stochastic linear programming problem with random RHS elements and obtains a minimax solution of the problem as an optimal solution of an equivalent deterministic convex separable programming problem. Section 3 presents similar results for the case of random aij elements.

46 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a linear programming problem with random aij and bi elements that have known finite mean and variance, but whose distribution functions are otherwise unspecified, and derive deterministic equivalents under different assumptions regarding the stochastic nature of the random parameters.
Abstract: We consider a linear programming problem with random aij and bi elements that have known finite mean and variance, but whose distribution functions are otherwise unspecified. A minimax solution of the stochastic programming model is obtained by solving an equivalent deterministic convex programming problem. We derive these deterministic equivalents under different assumptions regarding the stochastic nature of the random parameters.

45 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of minimizing the number of passes required to remove a given total depth of cut from a workpiece, considering both the probabilistic nature of the objective function and the constraints in the machining processes.
Abstract: This paper deals with the problem of optimizing the number of passes required together with the cutting speed, the feed, and the depth of cut at each pass for a given total depth of cut to be removed from a workpiece, considering both the probabilistic nature of the objective function and the constraints in the machining processes. Applying the concept of dynamic programming and stochastic programming, the problem is formulated in an analytically tractable form and a new algorithm is developed for determining the optimum value of the cutting speed, feed, depth of cut, and number of passes, simultaneously. For illustration, a typical example is solved to obtain the cost-minimizing cutting conditions in a turning operation, and the effect on the optimum cutting conditions of the various factors such as total depth of cut, uncertainty of the tool life, and constraints are discussed.

43 citations


Journal ArticleDOI
TL;DR: In this article, the mean and variance of a linear function with arbitrary multivariate randomness in its components are estimated using Tchebycheff-type probability statements, which can be used to accommodate and exploit stochastic dependence.
Abstract: Applications in operations research often employ models which contain linear functions. These linear functions may have some components (coefficients and variables) which are random. For instance, linear functions in mathematical programming often represent models of processes which exhibit randomness in resource availability, consumption rates, and activity levels. Even when the linearity assumptions of these models are unquestioned, the effects of the randomness in the functions are of concern. Methods to accommodate, or at least estimate, for a linear function the implications of randomness in its components typically make several simplifying assumptions. Unfortunately, when components are known to be random in a general, multivariate dependent fashion, concise specification of the randomness exhibited by the linear function is, at best, extremely complicated, usually requiring severe, unrealistic restrictions on the density functions of the random components. Frequent stipulations include assertions of normality or independence; yet observed data, accepted collateral theory, and common sense may dictate that a symmetric distribution with infinite domain limits is inappropriate, or that a dependent structure is definitely present. For example, random resource levels may be highly correlated due to economic conditions, and non-negative for physical reasons. Often, an investigation is performed by discretizing the random components at point quantile levels, or by replacing the random components by their means; such methods give a deterministic "equivalent" model with constant terms, but possibly very misleading results. Outright simulation can be used, but requires considerable time investment for setup and debugging, especially for generation of dependent sequences of pseudorandom variates, and gives results with high parametric specificity and computation cost.
This paper shows how to use elementary methods to estimate the mean and variance of a linear function with arbitrary multivariate randomness in its components. Expressions are given for the mean and variance and are used to make Tchebycheff-type probability statements which can accommodate and exploit stochastic dependence. Simple estimation examples are given which lead to illustrative applications with dependent-stochastic programming models.
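The elementary expressions in question are the standard mean and variance of a linear combination, which need only the component means and the covariance matrix (so dependence is exploited rather than assumed away), combined with a distribution-free Tchebycheff tail bound. A sketch with invented numbers:

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])            # coefficients of the linear function
mu = np.array([5.0, 4.0, 1.0])            # component means
Sigma = np.array([[4.0, 1.0, 0.5],        # covariance matrix; off-diagonal
                  [1.0, 2.0, 0.0],        # entries encode the dependence
                  [0.5, 0.0, 1.0]])

mean_y = a @ mu                            # E[a'X]
var_y = a @ Sigma @ a                      # Var[a'X] = a' Sigma a
sd_y = np.sqrt(var_y)

def chebyshev_tail(k):
    # Distribution-free: P(|Y - E[Y]| >= k * sd) <= 1/k^2
    return 1.0 / k ** 2
```

For instance, `chebyshev_tail(2)` guarantees that Y = a'X falls within two standard deviations of its mean with probability at least 0.75, whatever the joint distribution of the components.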

31 citations


01 May 1977
TL;DR: In this paper, the model specifies annual accessions plus minimum allocations to formal and on-the-job training needed to maintain future inventories within specified limits of manpower requirements, and plans are derived simultaneously for many skill categories over several years.
Abstract: The model specifies annual accessions plus minimum allocations to formal and on-the-job training needed to maintain future inventories within specified limits of manpower requirements. Plans are derived simultaneously for many skill categories over several years. Restrictions are imposed on the size of annual inventories, flows between skill categories and smoothness of flows into formal training. Experience levels within skill category are explicitly accounted for by allowing specification of up to 3 length-of-service groups. The methodology is linear programming, which can be extended to stochastic programming to account for uncertainty in projections of future requirements. Plans derived using actual Navy data are presented.

29 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the concrete application of stochastic programming to a multi-period production planning problem in which the demand for the products during the various periods is assumed stochastic with known probability distribution.
Abstract: In modeling real-world planning problems as optimization programs, the assumption that all parameters are known with certainty is often more seriously violated than the assumption that the objective function and the constraints can be approximated sufficiently accurately by linear functions. In this paper we discuss the concrete application of stochastic programming to a multiperiod production planning problem in which the demand for the products during the various periods is assumed stochastic with known probability distribution. Since the resulting stochastic program does not possess the property of "simple" recourse, no direct use can be made of existing methods that have been proposed in the literature for solving problems of this type.

Journal ArticleDOI
TL;DR: In this article, a duality theory is developed for multistage convex stochastic programming problems whose decision (or recourse) functions can be approximated by continuous functions satisfying the same constraints.

Journal ArticleDOI
TL;DR: It is shown how the design engineer can extract a considerable amount of information from the solution of merely one small optimization problem; if tighter bounds on the expected cost are desired, knowledge of discrete probability distributions for the individual random parameters is required and additional optimization problems must be solved.
Abstract: This paper presents an application of stochastic posynomial geometric programming to an optimal engineering design problem. A theory developed by Avriel and Wilde for calculating and bounding the expected value of the objective function is summarized. Moreover, a method known as the statistical error propagation method is used to calculate approximate confidence intervals for the cost function. Stochastic geometric programming is applied to the design of a conventional "once-through" condensing system for a steam power plant in the presence of uncertainty (e.g., fuel costs can vary with market conditions). It is shown how the design engineer can extract a considerable amount of information from the solution of merely one small optimization problem. If tighter bounds on the expected cost value are desired, knowledge of discrete probability distributions for the individual random parameters is required and additional optimization problems must be solved.

Book ChapterDOI
01 Jan 1977
TL;DR: In this paper, the authors focus on stochastic control and decision processes that occur in a variety of theoretical and applied contexts, such as statistical decision problems, stochastic dynamic programming problems, gambling processes, optimal stopping problems, and so on, and present some general conditions under which optimal policies are guaranteed to exist.
Abstract: Publisher Summary This chapter focuses on stochastic control and decision processes that occur in a variety of theoretical and applied contexts, such as statistical decision problems, stochastic dynamic programming problems, gambling processes, optimal stopping problems, stochastic adaptive control processes, and so on. It has long been recognized that these are all mathematically closely related. That being the case, all of these decision processes can be viewed as variations on a single theoretical formulation. The chapter presents some general conditions under which optimal policies are guaranteed to exist. The given theoretical formulation is flexible enough to include most variants of the types of processes. In statistical problems, the distribution of the observed variables depends on the true value of the parameter. The parameter space has no topological or other structure here; it is merely a set indexing the possible distributions. Hence, the formulation is not restricted to those problems known in the statistical literature as parametric problems. In nonstatistical contexts, the distribution does not depend on an unknown parameter. All such problems may be included in the formulation by the device of choosing the parameter space to consist of only one point, corresponding to the given distribution.

Proceedings ArticleDOI
05 Dec 1977
TL;DR: This paper examines several procedures for optimizing simulation models having controllable input variables and yielding responses and applies mathematical programming techniques to a set of second-order response surfaces.
Abstract: This paper examines several procedures for optimizing simulation models having controllable input variables xi, i = 1, ..., n and yielding responses ηj, j = 1, ..., m. This problem is often formulated as a constrained optimization problem, or it can be formulated in one of several multiple-objective formats, including goal programming. Whatever the mode of problem formulation, the optimization of multiple-response simulations can be approached through direct search methods, a sequence of first-order response-surface experiments, or by applying mathematical programming techniques to a set of second-order response surfaces.
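The last approach, fitting a second-order response surface to observed responses and then optimizing the fitted surface analytically, can be sketched as follows (the design points and response values are invented; a real study would use noisy simulation output):

```python
import numpy as np

# Hypothetical responses observed at six design points of one input variable
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = (x - 3.0) ** 2 + 2.0                  # stand-in for simulation output

# Fit a second-order (quadratic) response surface by least squares:
# y ~ b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface (a minimizer when b2 > 0)
x_star = -beta[1] / (2.0 * beta[2])
```

With several inputs the quadratic surface gains cross terms and the final step becomes a small mathematical programming problem rather than a closed-form vertex, which is the setting the paper discusses.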

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the general problem of designing building subsystems which include probabilistic demands thus producing a stochastic problem and show that dynamic programming is the more powerful method both in terms of obtaining the solution and in its post-optimality analysis.

Proceedings ArticleDOI
04 May 1977
TL;DR: It is shown that nonserial dynamic programming is optimal among one class of algorithms for an important class of discrete optimization problems; the results have strong implications for choosing deterministic, adaptive, and nondeterministic algorithms for the optimization problem.
Abstract: We show that nonserial dynamic programming is optimal among one class of algorithms for an important class of discrete optimization problems. We consider discrete, multivariate, optimization problems in which the objective function is given as a sum of terms. Each term is a function of only a subset of the variables. We first consider a class of optimization algorithms which eliminate suboptimal solutions by comparing the objective function on “comparable” partial solutions. A large, natural subclass of comparison algorithms in which the subproblems considered are either nested or nonadjacent (i.e., noninteracting) is then defined. It is shown that a variable-elimination procedure, nonserial dynamic programming, is optimal in an extremely strong sense among all algorithms in the subclass. The results' strong implications for choosing deterministic, adaptive, and nondeterministic algorithms for the optimization problem, for defining a complexity measure for a pattern of interactions, and for describing general classes of decomposition procedures are discussed. Several possible extensions and unsolved problems are mentioned.
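The variable-elimination idea can be sketched on a toy objective that is a sum of terms, each depending on only a subset of the variables (the functions and domains below are invented for illustration):

```python
from itertools import product

# Objective: f1(x1, x2) + f2(x2, x3), each variable in a small discrete domain
D = range(4)
f1 = lambda x1, x2: (x1 - x2) ** 2 + x1
f2 = lambda x2, x3: abs(x2 - 2 * x3) + x3

# Nonserial DP / variable elimination: x1 appears only in f1,
# so fold it out first, leaving a smaller problem in (x2, x3).
h = {x2: min(f1(x1, x2) for x1 in D) for x2 in D}
best_dp = min(h[x2] + f2(x2, x3) for x2, x3 in product(D, D))

# Brute force over the full grid, for comparison
best_bf = min(f1(x1, x2) + f2(x2, x3)
              for x1, x2, x3 in product(D, D, D))
```

Elimination touches |D|^2 partial solutions per step instead of |D|^3 full assignments; the interaction pattern among the terms dictates the elimination order and hence the cost, which is the complexity measure the paper develops.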

Journal ArticleDOI
TL;DR: In this paper, the authors show that several highly flexible capacity planning models for nonstationary demands can be formulated, are computationally feasible, and produce excellent approximations to known solutions; however, these models should only be used after sensitivity testing in conjunction with simpler approaches.

Journal ArticleDOI
TL;DR: The purpose of this note is to work out some specific instances of stochastic programming problems with simple recourse and develop closed-form expressions for the objective functions.
Abstract: The purpose of this note is to work out in detail some specific instances of stochastic programming problems with simple recourse. We assume specific structure and specific distribution functions for the random elements and develop closed-form expressions for the objective functions.
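As one illustration of the kind of closed form involved (assuming a uniform demand distribution, an invented choice rather than necessarily one of the note's cases), the expected shortage term of a simple-recourse objective integrates in closed form:

```python
def expected_shortage_uniform(x, b):
    # Closed form for E[max(d - x, 0)] with d ~ Uniform(0, b), 0 <= x <= b:
    # integral of (d - x)/b over [x, b] = (b - x)^2 / (2b)
    return (b - x) ** 2 / (2.0 * b)

def recourse_objective(x, c=1.0, q=3.0, b=10.0):
    # Simple-recourse objective: order cost plus expected shortage penalty
    return c * x + q * expected_shortage_uniform(x, b)
```

Having the expectation in closed form turns the stochastic program into an ordinary deterministic convex minimization in x, which is the point of working out such instances explicitly.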

Journal ArticleDOI
TL;DR: In this paper, the authors make use of a general selection theorem to answer the first question positively, and obtain an answer to the second question at least theoretically at the same time.
Abstract: The “distribution problem” of stochastic linear programming consists in answering the following two questions: Is the optimal value of a given stochastic linear program—regarded as a function—measurable, and if so, what is its distribution? In the present note we make use of a general selection theorem to answer the first question positively. By this approach, an answer to the second question is obtained—at least theoretically—at the same time.

Journal ArticleDOI
TL;DR: This is a critical review of recent results in stochastic approximation methods for optimization problems and deals with constrained optimization.
Abstract: This is a critical review of recent results in stochastic approximation methods for optimization problems. Part II deals with constrained optimization; here, the stochastic variants of the penalty function method, the method of feasible directions and the method of Lagrange functions are discussed.
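A minimal sketch of the stochastic penalty-function variant (objective, constraint, penalty weight, and step sizes are all invented for illustration): minimize f(x) = x^2 subject to x >= 1 by descending noisy gradients of a penalized objective with Robbins-Monro step sizes. With a finite penalty weight mu the iterates settle near mu/(1 + mu), which approaches the constrained optimum x = 1 as mu grows.

```python
import random

def grad_penalized(x, mu=10.0):
    # Gradient of f(x) = x^2 plus the quadratic penalty mu * max(1 - x, 0)^2
    g = 2.0 * x
    if x < 1.0:
        g -= 2.0 * mu * (1.0 - x)
    return g

def stochastic_penalty_descent(steps=5000, noise=0.05, seed=0):
    # Robbins-Monro iteration driven by noisy gradient observations
    rng = random.Random(seed)
    x = 3.0                                  # arbitrary starting point
    for k in range(steps):
        a_k = 1.0 / (k + 20)                 # diminishing step sizes
        x -= a_k * (grad_penalized(x) + rng.gauss(0.0, noise))
    return x

x_final = stochastic_penalty_descent()
```

The step sizes satisfy the usual conditions (sum divergent, sum of squares finite), which is what the convergence results surveyed in the review rely on.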


Journal ArticleDOI
TL;DR: This paper describes how a water resources system, including both water quantity and water quality, can be modeled to form a non-linear programming problem, which is solved by two techniques: a Generalized Reduced Gradient method, and a conjugate gradient projection method.
Abstract: Optimization of a water resources system necessarily must appropriately mesh the modeling of the system with the optimization technique used. If the system model is linear, many effective optimization techniques exist. But if the model is non-linear in the objective function and/or constraints, very few effective optimization methods exist. This paper describes how a water resources system, including both water quantity and water quality, can be modeled to form a non-linear programming problem. The latter is solved by two techniques: (a) a Generalized Reduced Gradient method, and (b) a conjugate gradient projection method. The relationship between the model and the formulation of the non-linear programming problem is discussed, and computational experience with each of the algorithms is described.

Book
01 Jan 1977