Topic

Stochastic programming

About: Stochastic programming is a research topic. Over its lifetime, 12,343 publications have been published within this topic, receiving 421,049 citations.


Papers
Journal ArticleDOI
Yaochu Jin, Bernhard Sendhoff
01 May 2008
TL;DR: An overview of the existing research on multiobjective machine learning, focusing on supervised learning, is provided, and a number of case studies illustrate the major benefits of the Pareto-based approach to machine learning.
Abstract: Machine learning is inherently a multiobjective task. Traditionally, however, either only one of the objectives is adopted as the cost function or multiple objectives are aggregated to a scalar cost function. This can be mainly attributed to the fact that most conventional learning algorithms can only deal with a scalar cost function. Over the last decade, efforts on solving machine learning problems using the Pareto-based multiobjective optimization methodology have gained increasing impetus, particularly due to the great success of multiobjective optimization using evolutionary algorithms and other population-based stochastic search methods. It has been shown that Pareto-based multiobjective learning approaches are more powerful compared to learning algorithms with a scalar cost function in addressing various topics of machine learning, such as clustering, feature selection, improvement of generalization ability, knowledge extraction, and ensemble generation. One common benefit of the different multiobjective learning approaches is that a deeper insight into the learning problem can be gained by analyzing the Pareto front composed of multiple Pareto-optimal solutions. This paper provides an overview of the existing research on multiobjective machine learning, focusing on supervised learning. In addition, a number of case studies are provided to illustrate the major benefits of the Pareto-based approach to machine learning, e.g., how to identify interpretable models and models that can generalize on unseen data from the obtained Pareto-optimal solutions. Three approaches to Pareto-based multiobjective ensemble generation are compared and discussed in detail. Finally, potentially interesting topics in multiobjective machine learning are suggested.
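To make the central selection step concrete, here is a minimal, illustrative sketch (not taken from the paper) of the Pareto-dominance filter that underlies Pareto-based multiobjective learning: given candidate models scored on several objectives to be minimized, for example training error and model complexity, keep exactly the non-dominated ones. The objective values below are invented for illustration.

import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows.

    `objectives` is an (n_models, n_objectives) array in which every
    objective is to be minimized.  A row is Pareto-optimal if no other
    row is at least as good in every objective and strictly better in one.
    """
    n = objectives.shape[0]
    is_optimal = np.ones(n, dtype=bool)
    for i in range(n):
        if not is_optimal[i]:
            continue
        # Point j dominates i if it is <= in all objectives and < in at least one.
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        if dominates_i.any():
            is_optimal[i] = False
    return is_optimal

# Toy example: (training error, model complexity) for five candidate models.
models = np.array([[0.10, 8.0],
                   [0.12, 3.0],
                   [0.30, 1.0],
                   [0.11, 9.0],   # dominated by the first model
                   [0.25, 1.5]])
print(np.where(pareto_front(models))[0])  # indices of Pareto-optimal models

The set returned by such a filter is the Pareto front whose analysis the abstract credits with giving deeper insight into the learning problem.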

399 citations

Journal ArticleDOI
TL;DR: This paper discusses statistical properties and convergence of the stochastic dual dynamic programming (SDDP) method applied to multistage linear stochastic programming problems, and argues that the computational complexity of the corresponding SDDP algorithm is almost the same as in the risk-neutral case.
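For readers unfamiliar with the setting, the nested risk-neutral multistage linear program that SDDP targets, and the cutting-plane approximation it builds, can be sketched as follows; the notation here is generic and assumed, not taken from the paper:

$$\min_{x_1 \in X_1} c_1^\top x_1 \;+\; \mathbb{E}\Big[\min_{x_2 \in X_2(x_1,\xi_2)} c_2^\top x_2 \;+\; \mathbb{E}\big[\,\cdots\,+\,\mathbb{E}\big[\min_{x_T \in X_T(x_{T-1},\xi_T)} c_T^\top x_T\big]\big]\Big]$$

SDDP approximates each expected cost-to-go function $\mathcal{Q}_{t+1}(x_t)$ from below by a growing collection of cuts $\mathcal{Q}_{t+1}(x_t) \ge \alpha_j + \beta_j^\top x_t$, adding one cut per trial point in each backward pass; in risk-averse variants the expectation is typically replaced by a coherent risk measure.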

399 citations

Journal ArticleDOI
TL;DR: This paper determines the optimal bidding strategy of an electric vehicle (EV) aggregator participating in day-ahead energy and regulation markets using stochastic optimization, and proposes a new battery model for a better approximation of the battery charging characteristic.
Abstract: This paper determines the optimal bidding strategy of an electric vehicle (EV) aggregator participating in day-ahead energy and regulation markets using stochastic optimization. Key sources of uncertainty affecting the bidding strategy are identified and incorporated in the stochastic optimization model. The aggregator portfolio optimization model should include inevitable deviations between day-ahead cleared bids and actual real-time energy purchases as well as uncertainty for the energy content of regulation signals in order to ensure profit maximization and reliable reserve provision. Energy deviations are characterized as “uninstructed” or “instructed” depending on whether or not the responsibility resides with the aggregator. Price deviations and statistical characteristics of regulation signals are also investigated. Finally, a new battery model is proposed for better approximation of the battery charging characteristic. Test results with an EV aggregator representing one thousand EVs are presented and discussed.
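As a rough illustration of the kind of scenario-based formulation the abstract describes, the following minimal two-stage sketch (with made-up prices and demand, and omitting regulation bids, instructed/uninstructed deviations, and the battery model) chooses a single day-ahead purchase before real-time prices are known and covers any shortfall per scenario:

import numpy as np
from scipy.optimize import linprog

# Toy two-stage stochastic bid for one delivery hour (illustrative numbers,
# not from the paper): decide the day-ahead purchase x before real-time
# prices are known, then buy any shortfall y_s in real time per scenario.
demand = 5.0                              # MWh the fleet must charge
c_da = 42.0                               # day-ahead price, $/MWh
c_rt = np.array([30.0, 45.0, 80.0])       # real-time price scenarios, $/MWh
prob = np.array([0.3, 0.5, 0.2])          # scenario probabilities

S = len(c_rt)
# Decision vector z = [x, y_1, ..., y_S]; minimize expected procurement cost.
c = np.concatenate(([c_da], prob * c_rt))
# Coverage constraint per scenario: x + y_s >= demand  <=>  -x - y_s <= -demand
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
b_ub = -demand * np.ones(S)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + S))
print("day-ahead bid:", res.x[0], "expected cost:", res.fun)

The paper's aggregator model adds regulation-market bids, deviation handling, and the proposed battery charging model to a scenario-based core of this kind.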

399 citations

Journal ArticleDOI
TL;DR: It is proved that, if constraints in the SP problem are optimally removed, i.e., one deletes the constraints leading to the largest possible cost improvement, then a precise optimality link to the original chance-constrained problem (CCP) also holds.
Abstract: In this paper, we study the link between a Chance-Constrained optimization Problem (CCP) and its sample counterpart (SP). SP has a finite number, say N, of sampled constraints. Further, some of these sampled constraints, say k, are discarded, and the final solution is indicated by $x^{\ast}_{N,k}$. Extending previous results on the feasibility of sample convex optimization programs, we establish the feasibility of $x^{\ast}_{N,k}$ for the initial CCP problem. Constraint removal allows one to improve the cost function at the price of decreased feasibility. The cost improvement can be inspected directly from the optimization result, while the theory developed here makes it possible to keep control of the other side of the coin, the feasibility of the obtained solution. In this way, trading feasibility for performance is put on solid mathematical grounds in this paper. The feasibility result obtained here applies to a vast class of chance-constrained optimization problems, and has the distinctive feature that it holds true irrespective of the algorithm used to discard k constraints in the SP problem. For constraint discarding, one can thus, e.g., resort to one of the many methods introduced in the literature to solve chance-constrained problems with discrete distribution, or even use a greedy algorithm, which is computationally very undemanding, and the feasibility result remains intact. We further prove that, if constraints in the SP problem are optimally removed, i.e., one deletes those constraints leading to the largest possible cost improvement, then a precise optimality link to the original chance-constrained problem CCP also holds.
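The sampling-and-discarding scheme is easy to see on a toy one-dimensional problem. The sketch below (with an assumed distribution and sample sizes, not taken from the paper) builds the sampled program SP, greedily removes the k constraints whose removal most improves the cost, and checks the empirical violation probability of the resulting $x^{\ast}_{N,k}$:

import numpy as np

# Toy sampled program (SP) for the chance-constrained problem
#   min x  s.t.  P(x >= a) >= 1 - epsilon,  with a ~ N(0, 1).
# Sample N constraints x >= a_i, then discard the k constraints whose
# removal most improves the cost; here that means the k largest a_i.
rng = np.random.default_rng(0)
N, k = 1000, 50
a = rng.standard_normal(N)

kept = np.sort(a)            # sampled constraints x >= a_i, ascending
x_star_Nk = kept[-(k + 1)]   # optimal value after removing the k largest a_i
print("x*_{N,k} =", x_star_Nk)

# Empirical violation probability of the returned solution on fresh samples.
fresh = rng.standard_normal(200_000)
print("violation probability ~", np.mean(fresh > x_star_Nk))

For this one-dimensional example the greedy removal of binding constraints coincides with deleting the k largest samples; the point of the paper is that the feasibility guarantee holds no matter which removal algorithm is used.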

397 citations

Journal ArticleDOI
TL;DR: A novel parallel decomposition algorithm is developed for large, multistage stochastic optimization problems; it decomposes the problem into subproblems that correspond to scenarios and has promise for solving stochastic programs that lie outside current capabilities.
Abstract: A novel parallel decomposition algorithm is developed for large, multistage stochastic optimization problems. The method decomposes the problem into subproblems that correspond to scenarios. The subproblems are modified by separable quadratic terms to coordinate the scenario solutions. Convergence of the coordination procedure is proven for linear programs. Subproblems are solved using a nonlinear interior point algorithm. The approach adjusts the degree of decomposition to fit the available hardware environment. Initial testing on a distributed network of workstations shows that an optimal number of computers depends upon the work per subproblem and its relation to the communication capacities. The algorithm has promise for solving stochastic programs that lie outside current capabilities.
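For intuition, the coordination idea (scenario subproblems augmented with separable quadratic terms that pull the scenario copies of the first-stage decision together) can be sketched with a progressive-hedging-style loop. This is an illustrative stand-in with an invented toy problem, not the paper's exact algorithm or test case:

import numpy as np

# Progressive-hedging-style sketch of scenario decomposition: each scenario
# subproblem is solved separately with a quadratic term that pulls its copy
# of the first-stage decision toward the scenario average, followed by a
# multiplier update enforcing nonanticipativity.
prob   = np.array([0.25, 0.5, 0.25])   # scenario probabilities
target = np.array([2.0, 5.0, 9.0])     # scenario-wise ideal first-stage level
rho = 1.0                              # coordination (penalty) weight

x = target.copy()                      # scenario copies of the decision
x_bar = prob @ x                       # implementable (averaged) decision
lam = np.zeros_like(x)                 # nonanticipativity multipliers
for _ in range(100):
    # Scenario subproblem: min_x 0.5*(x - target_s)^2 + lam_s*x + 0.5*rho*(x - x_bar)^2
    # has the closed-form minimizer used below.
    x = (target - lam + rho * x_bar) / (1.0 + rho)
    x_bar = prob @ x
    lam += rho * (x - x_bar)

print("coordinated first-stage decision:", x_bar)  # approx. prob-weighted mean of targets

In the paper the subproblems are solved with a nonlinear interior point algorithm and distributed across a network of workstations; the separable quadratic terms play the coordinating role described in the abstract.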

397 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 86% related
Scheduling (computing): 78.6K papers, 1.3M citations, 85% related
Optimal control: 68K papers, 1.2M citations, 84% related
Supply chain: 84.1K papers, 1.7M citations, 83% related
Markov chain: 51.9K papers, 1.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    175
2022    423
2021    526
2020    598
2019    578
2018    532