
Showing papers in "Operations Research in 2010"


Journal ArticleDOI
TL;DR: This paper proposes a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix) and demonstrates that for a wide range of cost functions the associated distributionally robust stochastic program can be solved efficiently.
Abstract: Stochastic programming can effectively describe many decision-making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix). We demonstrate that for a wide range of cost functions the associated distributionally robust (or min-max) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a practical example of portfolio selection, where our framework leads to better-performing policies on the “true” distribution underlying the daily returns of financial assets.

1,569 citations
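The min-max structure is easy to see in a toy version where the ambiguity set is a small finite family of discrete demand distributions rather than the paper's moment-based set. Everything below (support, candidate distributions, newsvendor costs) is invented for illustration:

```python
import numpy as np

# Toy distributionally robust newsvendor: nature picks the worst distribution
# from a finite ambiguity set over a common support; we pick the order
# quantity minimizing that worst-case expected cost. (Illustrative only --
# the paper handles moment-based ambiguity sets via convex optimization,
# not enumeration.)

demand_support = np.array([0, 10, 20, 30])
ambiguity_set = [  # candidate probability vectors over the support (assumed)
    np.array([0.10, 0.40, 0.40, 0.10]),
    np.array([0.25, 0.25, 0.25, 0.25]),
    np.array([0.00, 0.30, 0.50, 0.20]),
]
c_over, c_under = 1.0, 3.0  # overage and underage unit costs (assumed)

def expected_cost(q, probs):
    over = np.maximum(q - demand_support, 0)
    under = np.maximum(demand_support - q, 0)
    return float(probs @ (c_over * over + c_under * under))

def worst_case_cost(q):
    return max(expected_cost(q, p) for p in ambiguity_set)

q_star = min(range(31), key=worst_case_cost)
```

With a moment-based ambiguity set, the inner `max` becomes a convex program rather than an enumeration; showing that this program is tractable for a wide class of cost functions is the step the paper contributes.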


Journal ArticleDOI
TL;DR: A modular framework is presented to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules.
Abstract: In this paper we focus on a linear optimization problem with uncertainties, having expectations in the objective and in the set of constraints. We present a modular framework to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules. Our framework first affinely extends the set of primitive uncertainties, generating new linear decision rules of larger dimension and hence greater flexibility. Next, we develop new piecewise-linear decision rules that allow a more flexible reformulation of the original problem. The reformulated problem will generally contain terms with expectations on the positive parts of the recourse variables. Finally, we convert the uncertain linear program into a deterministic convex program by constructing distributionally robust bounds on these expectations. These bounds are constructed by first using different pieces of information on the distribution of the underlying uncertainties to develop separate bounds and next integrating them into a combined bound that is better than each of the individual bounds.

660 citations


Journal ArticleDOI
TL;DR: The basic theory of kriging is extended, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables.
Abstract: We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.

576 citations
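A minimal sketch of the stochastic-kriging idea, with hand-picked hyperparameters and an invented sine response: the sample variance of each design point's replication mean supplies the intrinsic (simulation) noise on the covariance diagonal, while a Gaussian kernel models the extrinsic uncertainty about the unknown surface:

```python
import numpy as np

# Minimal stochastic-kriging-flavored sketch. All hyperparameters (tau2, ell)
# and the sine test surface are assumed for illustration, not fitted.

rng = np.random.default_rng(0)
x_design = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
reps = 50                                    # simulation replications per point
true_surface = np.sin(x_design)
sims = true_surface[:, None] + rng.normal(0, 0.5, (x_design.size, reps))
y_bar = sims.mean(axis=1)
intrinsic_var = sims.var(axis=1, ddof=1) / reps   # variance of the sample mean

def k(a, b, tau2=1.0, ell=1.0):
    # Gaussian covariance modeling extrinsic uncertainty about the surface
    return tau2 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

K = k(x_design, x_design) + np.diag(intrinsic_var)  # intrinsic noise on diagonal
x_new = np.array([2.5])
w = np.linalg.solve(K, k(x_design, x_new))          # kriging weights
pred = float(w.ravel() @ y_bar)
```

Dropping `intrinsic_var` recovers deterministic kriging, which would interpolate the noisy replication means exactly; the diagonal term is what lets the metamodel smooth over simulation noise.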


Journal ArticleDOI
TL;DR: This paper presents a unified model representation scheme, classifies existing models into several classes, and for each class gives an overview of the optimality properties, computational tractability, and solution algorithms for the various problems studied in the literature.
Abstract: In many applications involving make-to-order or time-sensitive (e.g., perishable, seasonal) products, finished orders are often delivered to customers immediately or shortly after the production. Consequently, there is little or no finished product inventory in the supply chain such that production and outbound distribution are very intimately linked and must be scheduled jointly to achieve a desired on-time delivery performance at minimum total cost. Research on integrated scheduling models of production and outbound distribution is relatively recent but is growing very rapidly. In this paper, we provide a survey of such existing models. We present a unified model representation scheme, classify existing models into several different classes, and for each class of the models give an overview of the optimality properties, computational tractability, and solution algorithms for the various problems studied in the literature. We clarify the tractability of some open problems left in the literature and some new problems by providing intractability proofs or polynomial-time exact algorithms. We also identify several problem areas and issues for future research.

477 citations


Journal ArticleDOI
TL;DR: A compact mixed integer program (MIP) formulation and a continuum approximation (CA) model are proposed to study the reliable uncapacitated fixed charge location problem (RUFL) which seeks to minimize initial setup costs and expected transportation costs in normal and failure scenarios.
Abstract: Reliable facility location models consider unexpected failures with site-dependent probabilities, as well as possible customer reassignment. This paper proposes a compact mixed integer program (MIP) formulation and a continuum approximation (CA) model to study the reliable uncapacitated fixed charge location problem (RUFL), which seeks to minimize initial setup costs and expected transportation costs in normal and failure scenarios. The MIP determines the optimal facility locations as well as the optimal customer assignments and is solved using a custom-designed Lagrangian relaxation (LR) algorithm. The CA model predicts the total system cost without details about facility locations and customer assignments, and it provides a fast heuristic to find near-optimum solutions. Our computational results show that the LR algorithm is efficient for mid-sized RUFL problems and that the CA solutions are close to optimal in most of the test instances. For large-scale problems, the CA method is a good alternative to the LR algorithm that avoids prohibitively long running times.

431 citations
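The reliability ingredient — expected cost under independent site failures with customer reassignment — can be evaluated in closed form for a single customer and a fixed backup order. The probabilities, distances, and penalty below are invented, and this small enumeration stands in for the paper's MIP/Lagrangian and continuum-approximation machinery:

```python
from itertools import permutations

# One customer, three facilities with independent failure probabilities.
# The customer is served by the first surviving facility in a backup order;
# a penalty is paid if all facilities fail. All numbers are assumed.

fail_prob = {"A": 0.10, "B": 0.20, "C": 0.05}
dist = {"A": 4.0, "B": 6.0, "C": 9.0}   # customer-to-facility distances
penalty = 100.0                          # cost when every facility is down

def expected_cost(order):
    cost, p_reach = 0.0, 1.0            # p_reach = P(all earlier sites failed)
    for f in order:
        cost += p_reach * (1 - fail_prob[f]) * dist[f]
        p_reach *= fail_prob[f]
    return cost + p_reach * penalty

best_order = min(permutations(fail_prob), key=expected_cost)
```

In the full RUFL these expected costs appear for every customer simultaneously, coupled through the facility-opening decisions, which is what makes the MIP and its Lagrangian relaxation necessary.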


Journal ArticleDOI
TL;DR: This work develops a simple algorithm, based on the geometry of lines in the plane, for computing a profit-maximizing assortment, and an adaptive policy that learns the unknown parameters from past data while simultaneously optimizing the profit.
Abstract: We consider an assortment optimization problem where a retailer chooses an assortment of products that maximizes the profit subject to a capacity constraint. The demand is represented by a multinomial logit choice model. We consider both the static and dynamic optimization problems. In the static problem, we assume that the parameters of the logit model are known in advance; we then develop a simple algorithm for computing a profit-maximizing assortment based on the geometry of lines in the plane and derive structural properties of the optimal assortment. For the dynamic problem, the parameters of the logit model are unknown and must be estimated from data. By exploiting the structural properties found for the static problem, we develop an adaptive policy that learns the unknown parameters from past data and at the same time optimizes the profit. Numerical experiments based on sales data from an online retailer indicate that our policy performs well.

429 citations
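For small instances the static problem can be checked by brute force, which makes the MNL profit expression concrete (the paper's geometric algorithm avoids this enumeration). The margins, attraction weights, and capacity below are arbitrary:

```python
import numpy as np
from itertools import combinations

# Capacitated MNL assortment by enumeration. Product i has margin w[i] and
# MNL attraction weight v[i]; the outside (no-purchase) option has weight 1.
# Expected profit of assortment S:
#   sum_{i in S} w_i * v_i / (1 + sum_{i in S} v_i)

w = np.array([8.0, 6.0, 5.0, 3.0])   # profit margins (assumed)
v = np.array([1.0, 2.0, 3.0, 4.0])   # attraction weights (assumed)
capacity = 2

def profit(S):
    idx = list(S)
    return float(w[idx] @ v[idx] / (1.0 + v[idx].sum()))

best = max((S for r in range(1, capacity + 1)
            for S in combinations(range(len(w)), r)), key=profit)
```

Here the optimum keeps the two highest-margin products even though they have the smallest attraction weights, the kind of structural property the paper characterizes in general.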


Journal ArticleDOI
TL;DR: In this article, a continuous-time stochastic model for the dynamics of a limit order book is proposed, which can be estimated easily from data, and its analytical tractability allows for fast computation of various quantities of interest without resorting to simulation.
Abstract: We propose a continuous-time stochastic model for the dynamics of a limit order book. The model strikes a balance between three desirable features: it can be estimated easily from data, it captures key empirical properties of order book dynamics, and its analytical tractability allows for fast computation of various quantities of interest without resorting to simulation. We describe a simple parameter estimation procedure based on high-frequency observations of the order book and illustrate the results on data from the Tokyo Stock Exchange. Using simple matrix computations and Laplace transform methods, we are able to efficiently compute probabilities of various events, conditional on the state of the order book: an increase in the midprice, execution of an order at the bid before the ask quote moves, and execution of both a buy and a sell order at the best quotes before the price moves. Using high-frequency data, we show that our model can effectively capture the short-term dynamics of a limit order book. We also evaluate the performance of a simple trading strategy based on our results.

396 citations


Journal ArticleDOI
TL;DR: This work reviews several new and existing MIP formulations for continuous piecewise-linear functions with special attention paid to multivariate nonseparable functions.
Abstract: We study the modeling of nonconvex piecewise-linear functions as mixed-integer programming (MIP) problems. We review several new and existing MIP formulations for continuous piecewise-linear functions with special attention paid to multivariate nonseparable functions. We compare these formulations with respect to their theoretical properties and their relative computational performance. In addition, we study the extension of these formulations to lower semicontinuous piecewise-linear functions.

343 citations
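One building block shared by several of these formulations is the convex-combination ("lambda") encoding of a piecewise-linear function, in which a MIP solver would enforce the SOS2 condition — at most two adjacent weights nonzero — with binary variables. This sketch only constructs such an encoding for a given point, on an invented nonconvex example; no solver is involved:

```python
import numpy as np

# Lambda encoding: f(x) = sum_k lam_k * f(x_k), sum_k lam_k = 1, lam >= 0,
# with only two *adjacent* breakpoints active (the SOS2 condition).
# Breakpoints and values are assumed; f is nonconvex on purpose.

xk = np.array([0.0, 1.0, 3.0, 4.0])        # breakpoints
fk = np.array([0.0, 2.0, 1.0, 5.0])        # piecewise-linear values

def lambda_encoding(x):
    j = np.searchsorted(xk, x, side="right") - 1
    j = min(j, len(xk) - 2)                # clamp x == last breakpoint
    t = (x - xk[j]) / (xk[j + 1] - xk[j])
    lam = np.zeros_like(xk)
    lam[j], lam[j + 1] = 1 - t, t          # SOS2: two adjacent nonzeros
    return lam

lam = lambda_encoding(2.0)
value = float(lam @ fk)                    # f(2.0) on the segment [1, 3]
```

In a MIP the solver chooses which segment is active; the formulations the paper compares differ mainly in how many binaries and constraints that choice costs, especially for multivariate nonseparable functions.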


Journal ArticleDOI
TL;DR: A fast and easy-to-implement heuristic works fairly well, on average, across many instances, and the robust method performs approximately as well as the heuristic, is much faster than solving the stochastic recourse model, and has the benefit of limiting the worst-case outcome of the recourse problem.
Abstract: The allocation of surgeries to operating rooms (ORs) is a challenging combinatorial optimization problem. There is also significant uncertainty in the duration of surgical procedures, which further complicates assignment decisions. In this paper, we present stochastic optimization models for the assignment of surgeries to ORs on a given day of surgery. The objective includes a fixed cost of opening ORs and a variable cost of overtime relative to a fixed length-of-day. We describe two types of models. The first is a two-stage stochastic linear program with binary decisions in the first stage and simple recourse in the second stage. The second is its robust counterpart, in which the objective is to minimize the maximum cost associated with an uncertainty set for surgery durations. We describe the mathematical models, bounds on the optimal solution, and solution methodologies, including an easy-to-implement heuristic. Numerical experiments based on real data from a large health-care provider are used to contrast the results for the two models and illustrate the potential for impact in practice. Based on our numerical experimentation, we find that a fast and easy-to-implement heuristic works fairly well, on average, across many instances. We also find that the robust method performs approximately as well as the heuristic, is much faster than solving the stochastic recourse model, and has the benefit of limiting the worst-case outcome of the recourse problem.

337 citations
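A rough sketch of the flavor of such a heuristic (not the paper's): pack surgeries first-fit on mean durations, then price the plan by fixed opening costs plus expected overtime over sampled duration scenarios. The costs, durations, and gamma noise model are all assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
day_length, open_cost, ot_cost = 480.0, 1000.0, 5.0   # minutes, $, $/min (assumed)
mean_durations = [120, 200, 90, 150, 60]

# First fit on mean durations: put each surgery in the first OR that fits.
rooms = []
for d in mean_durations:
    for room in rooms:
        if sum(room) + d <= day_length:
            room.append(d)
            break
    else:
        rooms.append([d])

def expected_cost(assignment, n_scen=2000):
    total = len(assignment) * open_cost
    for room in assignment:
        # gamma duration uncertainty around each mean (assumed model)
        sims = np.array([rng.gamma(16.0, m / 16.0, n_scen) for m in room])
        overtime = np.maximum(sims.sum(axis=0) - day_length, 0.0)
        total += ot_cost * overtime.mean()
    return total
```

The stochastic program in the paper makes the packing decision and the overtime recourse jointly; the heuristic separates them, which is why its near-optimal average performance in the authors' experiments is notable.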


Journal ArticleDOI
TL;DR: The approach builds on a classical worst-case bound for order statistics problems and is applicable even if the constraints are correlated, and provides an application of the model on a network resource allocation problem with uncertain demand.
Abstract: We review and develop different tractable approximations to individual chance-constrained problems in robust optimization on a variety of uncertainty sets and show their interesting connections with bounds on the conditional-value-at-risk (CVaR) measure. We extend the idea to joint chance-constrained problems and provide a new formulation that improves upon the standard approach. Our approach builds on a classical worst-case bound for order statistics problems and is applicable even if the constraints are correlated. We provide an application of the model on a network resource allocation problem with uncertain demand.

296 citations
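The CVaR measure these approximations connect to has a simple empirical form — the mean of the worst (1 − α) fraction of losses — which is worth keeping in mind when reading the bounds:

```python
import numpy as np

# Empirical conditional value-at-risk: average of the worst
# ceil((1 - alpha) * n) losses in a sample.

def cvar(losses, alpha):
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil((1 - alpha) * losses.size)))
    return float(losses[:k].mean())
```

The link to chance constraints: if CVaR_α(L) ≤ 0 then P(L > 0) ≤ 1 − α, so bounding CVaR gives a conservative (safe) approximation of the chance constraint.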


Journal ArticleDOI
TL;DR: A set of percentile criteria that are conceptually natural and representative of the trade-off between optimistic and pessimistic views of the question are presented and the use of these criteria under different forms of uncertainty for both the rewards and the transitions is studied.
Abstract: Markov decision processes are an effective tool in modeling decision making in uncertain dynamic environments. Because the parameters of these models typically are estimated from data or learned from experience, it is not surprising that the actual performance of a chosen strategy often differs significantly from the designer's initial expectations due to unavoidable modeling ambiguity. In this paper, we present a set of percentile criteria that are conceptually natural and representative of the trade-off between optimistic and pessimistic views of the question. We study the use of these criteria under different forms of uncertainty for both the rewards and the transitions. Some forms are shown to be efficiently solvable and others highly intractable. In each case, we outline solution concepts that take parametric uncertainty into account in the process of decision making.
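The percentile idea can be illustrated on a two-state chain with a fixed policy: sample the ambiguous parameters (here only the rewards, from an assumed distribution), evaluate the policy value for each draw, and report a low percentile instead of the mean:

```python
import numpy as np

# Percentile criterion on a 2-state chain with known transitions under a
# fixed policy and ambiguous rewards. The reward distribution, discount
# factor, and transition matrix are all invented for illustration.

rng = np.random.default_rng(2)
gamma = 0.9
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def policy_value(r):
    # v = (I - gamma * P)^{-1} r; return the value from state 0
    v = np.linalg.solve(np.eye(2) - gamma * P, r)
    return float(v[0])

samples = [policy_value(rng.normal([1.0, 0.0], 0.2)) for _ in range(4000)]
guarantee = float(np.percentile(samples, 10))  # met with ~90% confidence
mean_value = float(np.mean(samples))
```

The paper's criteria optimize such percentile levels directly over policies rather than scoring one fixed policy, and also treat ambiguity in the transitions, which is where the hardness results arise.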

Journal ArticleDOI
TL;DR: This paper utilizes concepts from mathematical programming and game theory to design a mechanism that guides the carriers in an alliance toward an optimal collaborative strategy; the results suggest that the mechanism can be used to help carriers form sustainable alliances.
Abstract: Many real-world systems operate in a decentralized manner, where individual operators interact with varying degrees of cooperation and self motive. In this paper, we study transportation networks that operate as an alliance among different carriers. In particular, we study alliance formation among carriers in liner shipping. We address tactical problems such as the design of large-scale networks (that result from integrating the service networks of different carriers in an alliance) and operational problems such as the allocation of limited capacity on a transportation network among the carriers in the alliance. We utilize concepts from mathematical programming and game theory and design a mechanism to guide the carriers in an alliance to pursue an optimal collaborative strategy. The mechanism provides side payments to the carriers, as an added incentive, to motivate them to act in the best interest of the alliance while maximizing their own profits. Our computational results suggest that the mechanism can be used to help carriers form sustainable alliances.

Journal ArticleDOI
TL;DR: This approach relaxes the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and imposes a “penalty” that punishes violations of nonanticipativity.
Abstract: We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a “penalty” that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values.
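The crudest instance of this duality uses a zero penalty, i.e., the perfect-information relaxation: for an optimal stopping toy, E[max_t payoff_t] upper-bounds any nonanticipative rule, while simulating a threshold heuristic gives the complementary lower bound. The threshold and payoff distribution below are invented; the paper's point is that well-chosen penalties shrink this gap:

```python
import numpy as np

# Lower bound: simulate a simple threshold stopping rule (nonanticipative).
# Upper bound: relax nonanticipativity with zero penalty, i.e. let the
# decision maker see the whole path and take its maximum.

rng = np.random.default_rng(3)
n_paths, T = 20000, 5
paths = np.abs(rng.normal(0, 1, (n_paths, T)))    # payoff if we stop at t

threshold = 1.2                                   # heuristic parameter (assumed)
def stopped_payoff(path):
    for x in path[:-1]:
        if x >= threshold:                        # stop at first high payoff
            return x
    return path[-1]                               # forced stop at T

lower = float(np.mean([stopped_payoff(p) for p in paths]))
upper = float(paths.max(axis=1).mean())           # perfect-information bound
```

Since the stopped payoff never exceeds the path maximum, weak duality here holds pathwise; adding a good penalty would pull `upper` down toward the true optimal value without losing validity.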

Journal ArticleDOI
TL;DR: The results of that experiment suggest that the new allocation process increases sales by 3% to 4%, which is equivalent to $275 M in additional revenues for 2007, reduces transshipments, and increases the proportion of time that Zara's products spend on display within their life cycle.
Abstract: Working in collaboration with Spain-based retailer Zara, we address the problem of distributing, over time, a limited amount of inventory across all the stores in a fast-fashion retail network. Challenges specific to that environment include very short product life cycles, and store policies whereby an article is removed from display whenever one of its key sizes stocks out. To solve this problem, we first formulate and analyze a stochastic model predicting the sales of an article in a single store during a replenishment period as a function of demand forecasts, the inventory of each size initially available, and the store inventory management policy just stated. We then formulate a mixed-integer program embedding a piecewise-linear approximation of the first model applied to every store in the network, allowing us to compute store shipment quantities maximizing overall predicted sales, subject to inventory availability and other constraints. We report the implementation of this optimization model by Zara to support its inventory distribution process, and the ensuing controlled pilot experiment performed to assess the model's impact relative to the prior procedure used to determine weekly shipment quantities. The results of that experiment suggest that the new allocation process increases sales by 3% to 4%, which is equivalent to $275 M in additional revenues for 2007, reduces transshipments, and increases the proportion of time that Zara's products spend on display within their life cycle. Zara is currently using this process for all of its products worldwide.

Journal ArticleDOI
TL;DR: Computational results demonstrate that decay balancing offers significant revenue gains over recently studied certainty equivalent and greedy heuristics, and establish that changes in inventory and uncertainty in the arrival rate bear appropriate directional impacts on decay balancing prices in contrast to these alternatives.
Abstract: We study a problem of dynamic pricing faced by a vendor with limited inventory, uncertain about demand, and aiming to maximize expected discounted revenue over an infinite time horizon. The vendor learns from purchase data, so his strategy must take into account the impact of price on both revenue and future observations. We focus on a model in which customers arrive according to a Poisson process of uncertain rate, each with an independent, identically distributed reservation price. Upon arrival, a customer purchases a unit of inventory if and only if his reservation price equals or exceeds the vendor's prevailing price. We propose a simple heuristic approach to pricing in this context, which we refer to as decay balancing. Computational results demonstrate that decay balancing offers significant revenue gains over recently studied certainty equivalent and greedy heuristics. We also establish that changes in inventory and uncertainty in the arrival rate bear appropriate directional impacts on decay balancing prices in contrast to these alternatives, and we derive worst-case bounds on performance loss. We extend the three aforementioned heuristics to address a model involving multiple customer segments and stores, and provide experimental results demonstrating similar relative merits in this context.
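The learning component rests on Gamma-Poisson conjugacy: a Gamma prior on the unknown arrival rate updates in closed form from observed counts. This sketch shows only that belief update, ignoring the thinning of arrivals by the purchase probability at the posted price; all numbers are invented:

```python
# Gamma prior on the Poisson arrival rate; the posterior after observing
# count n in a unit-length period is Gamma(alpha + n, beta + 1).
# Simplified: real purchase data is a price-thinned arrival stream.

alpha, beta = 2.0, 1.0          # Gamma prior parameters (assumed)
observations = [3, 1, 4, 2]     # purchases seen in unit-length periods

for n in observations:
    alpha, beta = alpha + n, beta + 1.0   # conjugate Gamma-Poisson update

posterior_mean = alpha / beta
```

Decay balancing then maps such a posterior into a price; certainty-equivalent pricing would instead plug in `posterior_mean` and ignore the remaining rate uncertainty.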

Journal ArticleDOI
TL;DR: This paper presents a class of scalable dynamic programming algorithms for runway scheduling under constrained position shifting and other system constraints, and the results from a prototype implementation, which is fast enough to be used in real time.
Abstract: The efficient operation of airports, and runways in particular, is critical to the throughput of the air transportation system as a whole. Scheduling arrivals and departures at runways is a complex problem that needs to address diverse and often competing considerations of efficiency, safety, and equity among airlines. One approach to runway scheduling that arises from operational and fairness considerations is that of constrained position shifting (CPS), which requires that an aircraft's position in the optimized sequence not deviate significantly from its position in the first-come-first-served sequence. This paper presents a class of scalable dynamic programming algorithms for runway scheduling under constrained position shifting and other system constraints. The results from a prototype implementation, which is fast enough to be used in real time, are also presented.
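CPS is easy to state by enumeration on a tiny instance: only sequences in which no aircraft shifts more than `max_shift` positions from the first-come-first-served order are feasible. The arrival times and single separation value are invented, and the paper's dynamic program replaces this factorial search:

```python
from itertools import permutations

eta = [0, 2, 3, 4]           # FCFS estimated arrival times, minutes (assumed)
sep = 3                      # uniform required separation (assumed)
max_shift = 1                # CPS limit on position deviation from FCFS

def cost(seq):
    t, total = -sep, 0
    for i in seq:
        t = max(eta[i], t + sep)   # land no earlier than ETA or separation
        total += t - eta[i]        # delay incurred by aircraft i
    return total

feasible = [s for s in permutations(range(len(eta)))
            if all(abs(pos - i) <= max_shift for pos, i in enumerate(s))]
best = min(feasible, key=cost)
```

The number of CPS-feasible sequences grows far slower than n!, which is the structure the paper's dynamic program exploits to run in real time.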

Journal ArticleDOI
TL;DR: A novel and tractable approximate dynamic programming method is developed that, coupled with Monte Carlo simulation, computes lower and upper bounds on the value of storage and finds that these heuristics are extremely fast to execute but significantly suboptimal compared to the upper bound.
Abstract: The valuation of the real option to store natural gas is a practically important problem that entails dynamic optimization of inventory trading decisions with capacity constraints in the face of uncertain natural gas price dynamics. Stochastic dynamic programming is a natural approach to this valuation problem, but it does not seem to be widely used in practice because it is at odds with the high-dimensional natural gas price evolution models that are widespread among traders. According to the practice-based literature, practitioners typically value natural gas storage heuristically. The effectiveness of the heuristics discussed in this literature is currently unknown because good upper bounds on the value of storage are not available. We develop a novel and tractable approximate dynamic programming method that, coupled with Monte Carlo simulation, computes lower and upper bounds on the value of storage, which we use to benchmark these heuristics on a set of realistic instances. We find that these heuristics are extremely fast to execute but significantly suboptimal compared to our upper bound, which appears to be fairly tight and much tighter than a simpler perfect information upper bound; computing our lower bound takes more time than using these heuristics, but our lower bound substantially outperforms them in terms of valuation. Moreover, with periodic reoptimizations embedded in Monte Carlo simulation, the practice-based heuristics become nearly optimal, with one exception, at the expense of higher computational effort. Our lower bound with reoptimization is also nearly optimal, but exhibits a higher computational requirement than these heuristics. Besides natural gas storage, our results are potentially relevant for the valuation of the real option to store other commodities, such as metals, oil, and petroleum products.

Journal ArticleDOI
TL;DR: The framework for the two-agent scheduling problem is enlarged by including the total tardiness objective, allowing for preemptions, and considering jobs with different release dates; the relationships between two-agent scheduling problems and other areas within the scheduling field, namely rescheduling and scheduling subject to availability constraints, are established.
Abstract: We consider a scheduling environment with m (m ≥ 1) identical machines in parallel and two agents. Agent A is responsible for n1 jobs and has a given objective function with regard to these jobs; agent B is responsible for n2 jobs and has an objective function that may be either the same or different from the one of agent A. The problem is to find a schedule for the n1 + n2 jobs that minimizes the objective of agent A (with regard to his n1 jobs) while keeping the objective of agent B (with regard to his n2 jobs) below or at a fixed level Q. The special case with a single machine has recently been considered in the literature, and a variety of results have been obtained for two-agent models with objectives such as fmax, Σ wjCj, and Σ Uj. In this paper, we generalize these results and solve one of the problems that had remained open. Furthermore, we enlarge the framework for the two-agent scheduling problem by including the total tardiness objective, allowing for preemptions, and considering jobs with different release dates; we also consider identical machines in parallel. We furthermore establish the relationships between two-agent scheduling problems and other areas within the scheduling field, namely rescheduling and scheduling subject to availability constraints.
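A three-job, single-machine instance makes the constrained bicriteria structure concrete; enumeration suffices at this size, whereas the paper derives structural results and algorithms. Job data and the bound Q are invented:

```python
from itertools import permutations

# Minimize agent A's total completion time subject to agent B's maximum
# completion time staying at or below Q, on a single machine.

jobs = {                     # job -> (agent, processing time), assumed
    "a1": ("A", 2), "a2": ("A", 4),
    "b1": ("B", 3),
}
Q = 5                        # cap on agent B's max completion time

def schedule_stats(order):
    t, sumC_A, maxC_B = 0, 0, 0
    for j in order:
        agent, p = jobs[j]
        t += p
        if agent == "A":
            sumC_A += t
        else:
            maxC_B = max(maxC_B, t)
    return sumC_A, maxC_B

feasible = [(schedule_stats(s), s) for s in permutations(jobs)
            if schedule_stats(s)[1] <= Q]
(best_obj, _), best_seq = min(feasible)
```

Note that the unconstrained SPT order (a1, a2, b1) gives agent A a total completion time of 8, but puts b1's completion at 9 > Q; the optimum must wedge b1 between A's jobs, which is exactly the trade-off the Q-constraint encodes.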

Journal ArticleDOI
TL;DR: An ad hoc label-setting algorithm is developed for solving the split-delivery vehicle routing problem with time windows, where the column generation subproblem is a resource-constrained elementary shortest-path problem combined with the linear relaxation of a bounded knapsack problem.
Abstract: This paper addresses the split-delivery vehicle routing problem with time windows (SDVRPTW) that consists of determining least-cost vehicle routes to service a set of customer demands while respect...

Journal ArticleDOI
TL;DR: In this article, a nonparametric variant of the corrected ordinary least-squares (COLS) method, referred to as corrected concave nonparametric least squares (C2NLS), is presented.
Abstract: Data envelopment analysis (DEA) is known as a nonparametric mathematical programming approach to productive efficiency analysis. In this paper, we show that DEA can be alternatively interpreted as nonparametric least-squares regression subject to shape constraints on the frontier and sign constraints on residuals. This reinterpretation reveals the classic parametric programming model by Aigner and Chu [Aigner, D., S. Chu. 1968. On estimating the industry production function. Amer. Econom. Rev.58 826--839] as a constrained special case of DEA. Applying these insights, we develop a nonparametric variant of the corrected ordinary least-squares (COLS) method. We show that this new method, referred to as corrected concave nonparametric least squares (C2NLS), is consistent and asymptotically unbiased. The linkages established in this paper contribute to further integration of the econometric and axiomatic approaches to efficiency analysis.
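The "DEA as constrained optimization" reading is visible already in the one-input, one-output envelopment LP, where input-oriented CCR efficiency reduces to a ratio comparison. The data below are invented, and the LP is solved with SciPy purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR envelopment LP for unit j0:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_{j0},
#                    sum_j lam_j * y_j >= y_{j0},  lam >= 0.
# With one input and one output this equals (y0/x0) / max_j (y_j/x_j).

x = np.array([2.0, 4.0, 6.0])   # inputs of 3 decision-making units (assumed)
y = np.array([2.0, 6.0, 6.0])   # outputs (assumed)

def ccr_efficiency(j0):
    # decision variables: [theta, lam_1, lam_2, lam_3]
    c = np.array([1.0, 0.0, 0.0, 0.0])
    A_ub = np.vstack([
        np.concatenate(([-x[j0]], x)),    #  sum lam x - theta * x0 <= 0
        np.concatenate(([0.0], -y)),      # -sum lam y <= -y0
    ])
    b_ub = np.array([0.0, -y[j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * 3)
    return float(res.x[0])

effs = [ccr_efficiency(j) for j in range(3)]
```

The paper's reinterpretation replaces this frontier-envelopment view with a shape-constrained least-squares regression view, which is what allows the COLS-style correction.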

Journal ArticleDOI
TL;DR: The proposed truncated linear replenishment policy (TLRP) is piecewise linear with respect to demand history, improves upon static and linear policies, and achieves objective values that are reasonably close to optimal.
Abstract: We propose a robust optimization approach to address a multiperiod inventory control problem under ambiguous demands, that is, only limited information of the demand distributions such as mean, support, and some measures of deviations. Our framework extends to correlated demands and is developed around a factor-based model, which has the ability to incorporate business factors as well as time-series forecast effects of trend, seasonality, and cyclic variations. We can obtain the parameters of the replenishment policies by solving a tractable deterministic optimization problem in the form of a second-order cone optimization problem (SOCP), with a solution time that, unlike that of dynamic programming approaches, is polynomial and independent of parameters such as replenishment lead time, demand variability, and correlations. The proposed truncated linear replenishment policy (TLRP), which is piecewise linear with respect to demand history, improves upon static and linear policies, and achieves objective values that are reasonably close to optimal.

Journal ArticleDOI
TL;DR: An efficient dynamic programming algorithm is presented to determine the optimal assortment and inventory levels in a single-period problem with stockout-based substitution; structural properties of the value function of the dynamic program are established that, in particular, help to characterize multiple local maxima.
Abstract: We present an efficient dynamic programming algorithm to determine the optimal assortment and inventory levels in a single-period problem with stockout-based substitution. In our model, total customer demand is random and comprises fixed proportion of customers of different types. Customer preferences are modeled through the definition of these types. Each customer type corresponds to a specific preference ordering among products. A customer purchases the highest-ranked product, according to his type (if any), that is available at the time of his visit to the store (stockout-based substitution). We solve the optimal assortment problem using a dynamic programming formulation. We establish structural properties of the value function of the dynamic program that, in particular, help to characterize multiple local maxima. We use the properties of the optima to solve the problem in pseudopolynomial time. Our algorithm also gives a heuristic for the general case, i.e., when the proportion of customers of each type is random. In numerical tests, this heuristic performs better and faster than previously known methods, especially when the mean demand is large, the degree of substitutability is high, the population is homogeneous, or prices and/or costs vary across products.

Journal ArticleDOI
TL;DR: It is shown that a more efficient, truth-sharing outcome can emerge as an equilibrium from a long-term relationship; in this equilibrium, forecast information is transmitted truthfully and trusted by the supplier, who in turn allocates the system-optimal capacity.
Abstract: In this paper, we study the practice of forecast sharing and supply chain coordination with a game-theoretical model. We find that in a one-shot version of the game, forecasts are not shared truthf...

Journal ArticleDOI
TL;DR: A framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set is proposed and connected closely to the theory of convex risk measures.
Abstract: In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare to the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance.

Journal ArticleDOI
TL;DR: This paper examines consensus building in AHP-group decision making from a Bayesian perspective and integrates the attitudes of the actors implicated in the decision-making process and puts forward a number of semiautomatic initiatives for establishing consensus.
Abstract: This paper examines consensus building in AHP-group decision making from a Bayesian perspective. In accordance with the multicriteria procedural rationality paradigm, the methodology employed in this study permits the automatic identification, in a local context, of “agreement” and “disagreement” zones among the actors involved. This approach is based on the analysis of the pairwise comparison matrices provided by the actors themselves. In addition, the study integrates the attitudes of the actors implicated in the decision-making process and puts forward a number of semiautomatic initiatives for establishing consensus. This information is given to the actors as the first step in the negotiation processes. The knowledge obtained will be incorporated into the system via the learning process developed during the resolution of the problem. The proposed methodology, valid for the analysis of incomplete or imprecise pairwise comparison matrices, is illustrated by an example.
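As a concrete anchor for the pairwise-comparison machinery, here is a minimal sketch that derives priority weights and Saaty's consistency ratio from one actor's matrix. The matrix values are hypothetical, and the row-geometric-mean prioritization is a standard AHP approximation, not this paper's Bayesian procedure:

```python
import math

# Hypothetical 3x3 pairwise comparison matrix from a single actor:
# A[i][j] encodes how strongly criterion i is preferred over criterion j.
A = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]

def priorities(mat):
    """Approximate the principal eigenvector by normalized row geometric means."""
    n = len(mat)
    gm = [math.prod(row) ** (1.0 / n) for row in mat]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(mat, w, random_index=0.58):
    """Saaty's CR = CI / RI, with RI = 0.58 the random index for n = 3."""
    n = len(mat)
    lam = sum(
        sum(mat[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)
    ) / n                                  # estimate of lambda_max
    ci = (lam - n) / (n - 1)               # consistency index
    return ci / random_index

w = priorities(A)
cr = consistency_ratio(A, w)               # cr < 0.1 is the usual acceptance rule
```

In a group setting, zones of agreement and disagreement are diagnosed by comparing such matrices across actors; the sketch above covers only the single-actor ingredients.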

Journal ArticleDOI
TL;DR: This work proposes a uniform-price divisible-good auction-based contracting scheme that can achieve full coordination when forecast investments are observable, and demonstrates that the proposed coordinating contracting schemes have desirable implementability properties, including incentive-compatible and reliable revelation of demand forecast information by the retailers and regret-freeness.
Abstract: We study the effect of downstream competition on incentives for demand forecast investments in supply chains. We show that with common pricing schemes, such as wholesale price or two-part tariffs, downstream firms under Cournot competition overinvest in demand forecasting. Analyzing the determinants of overinvestment, we demonstrate that under wholesale price contracts and two-part tariffs, total demand forecast investment can be very significant, and as a result, the supply chain can suffer substantial losses. We show that an increased number of competing retailers and uncertainty in consumer demand tend to increase inefficiency, whereas increased consumer market size and demand forecast costs reduce the loss in supply chain surplus. We identify the causes of inefficiency, and to coordinate the channel with forecast investments, we explore contracts in the general class of market-based contracts used in practice. When retailers' forecast investments are not observable, such a contract that employs an index-price can fully coordinate the supply chain. When forecast investments are observable to others, however, the retailers engage in an “arms race” for forecast investment, which can result in a significant increase in overinvestment and reduction in supply chain surplus. Furthermore, in that case, simple market-based contracts cannot coordinate the supply chain. To solve this problem, we propose a uniform-price divisible-good auction-based contracting scheme, which can achieve full coordination when forecast investments are observable. We also demonstrate that our proposed coordinating contracting schemes have desirable implementability properties: they induce incentive-compatible and reliable revelation of demand forecast information by the retailers, and they are regret-free.

Journal ArticleDOI
TL;DR: The stochastic-programming-based method for scheduling electric power generation subject to uncertainty gives a system of locational marginal prices that reflect the uncertainty, and these may be used in a market settlement scheme in which payment is for energy only.
Abstract: We discuss a stochastic-programming-based method for scheduling electric power generation subject to uncertainty. Such uncertainty may arise from either imperfect forecasting or moment-to-moment fluctuations, and on either the supply or the demand side. The method gives a system of locational marginal prices that reflect the uncertainty, and these may be used in a market settlement scheme in which payment is for energy only. We show that this scheme is revenue adequate in expectation.

Journal ArticleDOI
TL;DR: A capacity planning strategy that collects commitments to purchase before the capacity decision and uses the acquired advance sales information to decide on the capacity is investigated and it is shown that advance selling can improve the manufacturer's profit significantly.
Abstract: This paper investigates a capacity planning strategy that collects commitments to purchase before the capacity decision and uses the acquired advance sales information to decide on the capacity. In particular, we study a profit-maximization model in which a manufacturer collects advance sales information periodically prior to the regular sales season for a capacity decision. Customer demand is stochastic and price sensitive. Once the capacity is set, the manufacturer produces and satisfies customer demand (to the extent possible) from the installed capacity during the regular sales period. We study scenarios in which the advance sales and regular sales season prices are set exogenously and optimally. For both scenarios, we establish the optimality of a control band or a threshold policy that determines when to stop acquiring advance sales information and how much capacity to build. We show that advance selling can improve the manufacturer's profit significantly. We generate insights into how operating conditions (such as the capacity building cost) and market characteristics (such as demand variability) affect the value of information acquired through advance selling. From this analysis, we identify the conditions under which advance selling for capacity planning is most valuable. Finally, we study the joint benefits of acquiring information for capacity planning through advance selling and revenue management of installed capacity through dynamic pricing.
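The value of deciding capacity after observing advance sales can be seen even in a bare newsvendor caricature. The Monte Carlo sketch below is not the paper's periodic-information or threshold-policy model, and every number in it (prices, demand ranges, noise level) is hypothetical; it only illustrates why a capacity tailored to an advance-sales signal beats one chosen from the prior alone:

```python
import random
from statistics import NormalDist

random.seed(0)
PRICE, COST, SIGMA = 10.0, 6.0, 10.0        # hypothetical unit economics
CRIT = (PRICE - COST) / PRICE               # newsvendor critical ratio
Z = NormalDist().inv_cdf(CRIT)              # noise quantile at that ratio

def profit(capacity, demand):
    return PRICE * min(capacity, demand) - COST * capacity

# Capacity for a manufacturer with no advance-sales signal:
# the critical quantile of the unconditional demand distribution.
pool = sorted(max(0.0, random.uniform(50, 150) + random.gauss(0, SIGMA))
              for _ in range(20000))
CAP_UNINFORMED = pool[int(CRIT * (len(pool) - 1))]

def simulate(n=20000, informed=True):
    """Mean profit when demand = advance-sales signal + Gaussian noise."""
    total = 0.0
    for _ in range(n):
        signal = random.uniform(50, 150)    # learned via advance selling
        demand = max(0.0, signal + random.gauss(0, SIGMA))
        cap = signal + Z * SIGMA if informed else CAP_UNINFORMED
        total += profit(cap, demand)
    return total / n
```

Running `simulate(informed=True)` against `simulate(informed=False)` shows a substantial mean-profit gap: the informed capacity only has to hedge the residual noise, while the uninformed capacity must hedge the full spread of the signal as well.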

Journal ArticleDOI
TL;DR: A robust optimization method is presented that is suited for unconstrained problems with a nonconvex cost function as well as for problems based on simulations, such as large partial differential equation (PDE) solvers, response surfaces, and Kriging metamodels.
Abstract: In engineering design, an optimized solution often turns out to be suboptimal when errors are encountered. Although the theory of robust convex optimization has taken significant strides over the past decade, all approaches fail if the underlying cost function is not explicitly given; it is even worse if the cost function is nonconvex. In this work, we present a robust optimization method that is suited for unconstrained problems with a nonconvex cost function as well as for problems based on simulations, such as large partial differential equation (PDE) solvers, response surfaces, and Kriging metamodels. Moreover, this technique can be employed for most real-world problems because it operates directly on the response surface and does not assume any specific structure of the problem. We present this algorithm along with the application to an actual engineering problem in electromagnetic multiple scattering of aperiodically arranged dielectrics, relevant to nanophotonic design. The corresponding objective function is highly nonconvex and resides in a 100-dimensional design space. Starting from an “optimized” design, we report a robust solution with a significantly lower worst-case cost, while maintaining optimality. We further generalize this algorithm to address a nonconvex optimization problem under both implementation errors and parameter uncertainties.
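To make the idea concrete, here is a minimal sketch of robust local search on a surrogate response surface. The two-dimensional cost function, error radius, and step size are all hypothetical stand-ins (the paper's actual algorithm and 100-dimensional application are far richer): the surrogate has a sharp penalty spike next to the nominal optimum, so the nominally optimal design is fragile, and the search retreats to a design whose worst case over the implementation-error disc is much lower:

```python
import math

def cost(x, y):
    # Hypothetical surrogate response: a smooth bowl plus a sharp penalty
    # spike just right of the origin, so the nominal optimum is fragile.
    return x * x + y * y + 2.0 * math.exp(-((x - 0.4) ** 2 + y * y) / 0.02)

def worst_case(x, y, radius, n_dirs=32):
    """Worst cost over the center and n_dirs boundary points of the error disc."""
    cands = [(x, y)] + [
        (x + radius * math.cos(2 * math.pi * k / n_dirs),
         y + radius * math.sin(2 * math.pi * k / n_dirs))
        for k in range(n_dirs)
    ]
    return max(cost(px, py) for px, py in cands)

def robust_descent(x, y, radius=0.3, step=0.05, iters=100, n_try=16):
    """Pattern search on the sampled worst-case cost: try steps in n_try
    directions and accept only strict worst-case improvements."""
    w = worst_case(x, y, radius)
    for _ in range(iters):
        best = (w, x, y)
        for k in range(n_try):
            a = 2 * math.pi * k / n_try
            cx, cy = x + step * math.cos(a), y + step * math.sin(a)
            cw = worst_case(cx, cy, radius)
            if cw < best[0]:
                best = (cw, cx, cy)
        if best[0] >= w:
            break                       # robust local minimum at this step size
        w, x, y = best
    return x, y, w

x0, y0 = 0.0, 0.0                       # near-optimal nominal design
w0 = worst_case(x0, y0, 0.3)            # fragile: the spike sits inside the disc
xr, yr, wr = robust_descent(x0, y0)     # retreats away from the spike
```

By construction the search only accepts moves that lower the sampled worst case, so the worst-case cost at the returned design never exceeds the one at the starting design.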

Journal ArticleDOI
TL;DR: The approach provides a theoretical justification for the widely held maxim that adding a small number of links to the process flexibility structure can significantly enhance the ability of the system to match (fixed) production capacity with (random) demand.
Abstract: The concept of chaining, or in more general terms, sparse process structure, has been extremely influential in the process flexibility area, with many large automakers already making this the cornerstone of their business strategies to remain competitive in the industry. The effectiveness of the process strategy, using chains or other sparse structures, has been validated in numerous empirical studies. However, to the best of our knowledge, there have been relatively few concrete analytical results on the performance of such strategies vis-à-vis the full flexibility system, especially when the system size is large or when the demand and supply are asymmetrical. This paper is an attempt to bridge this gap. We study the problem from two angles: (1) For the symmetrical system where the (mean) demand and plant capacity are balanced and identical, we utilize the concept of a generalized random walk to evaluate the asymptotic performance of the chaining structure in this environment. We show that a simple chaining structure performs surprisingly well for a variety of realistic demand distributions, even when the system size is large. (2) For the more general problem, we identify a class of conditions under which only a sparse flexible structure is needed so that the expected performance is already within ε-optimality of the full-flexibility system. Our approach provides a theoretical justification for the widely held maxim: In many practical situations, adding a small number of links to the process flexibility structure can significantly enhance the ability of the system to match (fixed) production capacity with (random) demand.
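The chaining-versus-full-flexibility comparison can be reproduced in miniature. In the sketch below, the system size, capacities, and demand distribution are arbitrary choices, and realized sales are computed by max flow, which is one standard way to model capacity-demand matching (it is not this paper's random-walk analysis):

```python
import random
from collections import deque

random.seed(42)
n = 6                                   # plants and products (hypothetical size)
CAPACITY = 100.0                        # identical plant capacities

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix."""
    N = len(cap)
    res = [row[:] for row in cap]
    flow = 0.0
    while True:
        parent = [-1] * N               # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(N):
                if res[u][v] > 1e-9 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        push, v = float("inf"), t       # bottleneck along the path
        while v != s:
            push = min(push, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                   # augment along the path
            res[parent[v]][v] -= push
            res[v][parent[v]] += push
            v = parent[v]
        flow += push

# Fixed demand scenarios so every structure faces identical demand draws
scenarios = [[random.uniform(50, 150) for _ in range(n)] for _ in range(200)]

def expected_sales(links):
    """Mean sales: source -> plants -> products -> sink, over all scenarios."""
    N = 2 * n + 2                       # node 0 = source, node N-1 = sink
    total = 0.0
    for demand in scenarios:
        cap = [[0.0] * N for _ in range(N)]
        for i in range(n):
            cap[0][1 + i] = CAPACITY            # source -> plant i
            cap[1 + n + i][N - 1] = demand[i]   # product i -> sink
        for i, j in links:
            cap[1 + i][1 + n + j] = CAPACITY    # plant i can build product j
        total += max_flow(cap, 0, N - 1)
    return total / len(scenarios)

dedicated = [(i, i) for i in range(n)]
chain = dedicated + [(i, (i + 1) % n) for i in range(n)]   # the 2-chain
full = [(i, j) for i in range(n) for j in range(n)]
```

Because the three link sets are nested, realized sales are monotone scenario by scenario, and in runs like this one the single extra link per plant in the 2-chain recovers most of the gap between dedicated capacity and full flexibility, which is the maxim the paper justifies analytically.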