
Showing papers in "Operations Research in 2009"


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the reader to the field of closed-loop supply chains with a strong business perspective, i.e., they focus on profitable value recovery from returned products.
Abstract: The purpose of this paper is to introduce the reader to the field of closed-loop supply chains with a strong business perspective, i.e., we focus on profitable value recovery from returned products. It recounts the evolution of research in this growing area over the past 15 years, during which it developed from a narrow, technically focused niche area to a fully recognized subfield of supply chain management. We use five phases to paint an encompassing view of this evolutionary process for the reader to understand past achievements and potential future operations research opportunities.

1,201 citations


Journal ArticleDOI
TL;DR: The application of the worst-case CVaR to robust portfolio optimization is proposed, and the corresponding problems are cast as linear programs and second-order cone programs that can be solved efficiently.
Abstract: This paper considers the worst-case Conditional Value-at-Risk (CVaR) in the situation where only partial information on the underlying probability distribution is available. The minimization of the worst-case CVaR under mixture distribution uncertainty, box uncertainty, and ellipsoidal uncertainty are investigated. The application of the worst-case CVaR to robust portfolio optimization is proposed, and the corresponding problems are cast as linear programs and second-order cone programs that can be solved efficiently. Market data simulation and Monte Carlo simulation examples are presented to illustrate the proposed approach.

454 citations
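
To make the reduction to linear programming concrete, here is a minimal sketch of the sample-based CVaR-minimization LP of Rockafellar and Uryasev that formulations like this build on; the worst-case and uncertainty-set variants studied in the paper add further structure not shown here. The return scenarios, CVaR level, and long-only constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, alpha = 500, 4, 0.95           # scenarios, assets, CVaR level
R = rng.normal(0.001, 0.02, (m, n))  # simulated asset returns (illustrative)

# Variables: [w (n), t (1), u (m)].  Minimize t + 1/((1-alpha)m) * sum(u).
c = np.concatenate([np.zeros(n), [1.0], np.full(m, 1.0 / ((1 - alpha) * m))])

# Scenario-j loss is -R[j] @ w; constraints are u_j >= loss_j - t, u_j >= 0.
A_ub = np.hstack([-R, -np.ones((m, 1)), -np.eye(m)])
b_ub = np.zeros(m)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)])[None, :]  # weights sum to 1
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m       # long-only, t free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
print("weights:", res.x[:n].round(3), " sample CVaR of loss:", round(res.fun, 5))
```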


Journal ArticleDOI
TL;DR: In this article, the authors consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues.
Abstract: We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.

382 citations
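
As a toy illustration of the parametric "learning on the fly" idea, the sketch below posts prices, fits a linear demand model by least squares, prices myopically against the fit, and tracks regret against the clairvoyant price. The linear-Poisson demand form, exploration schedule, and all numbers are assumptions for illustration, not the paper's policies or bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 50.0, 4.0           # unknown linear demand rate: lambda(p) = a - b*p
T, p_min, p_max = 200, 1.0, 10.0
p_star = a_true / (2 * b_true)       # clairvoyant revenue-maximizing price
prices, sales = [], []

for t in range(T):
    if t < 10:                       # forced exploration at spread-out prices
        p = p_min + (p_max - p_min) * t / 9
    else:                            # least-squares fit, then myopic pricing
        X = np.column_stack([np.ones(len(prices)), prices])
        coef, *_ = np.linalg.lstsq(X, np.array(sales), rcond=None)
        a_hat, b_hat = coef[0], max(-coef[1], 1e-6)
        p = float(np.clip(a_hat / (2 * b_hat), p_min, p_max))
    d = rng.poisson(max(a_true - b_true * p, 0.0))   # realized Poisson demand
    prices.append(p); sales.append(d)

revenue = sum(p * d for p, d in zip(prices, sales))
regret = T * p_star * (a_true - b_true * p_star) - revenue
print(f"regret vs clairvoyant pricing: {regret:.1f} over {T} periods")
```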


Journal ArticleDOI
TL;DR: This paper utilizes the theory of coherent risk measures initiated by Artzner et al. (1999) to show that such risk measures, in conjunction with the support of the uncertain parameters, are equivalent to explicit uncertainty sets for robust optimization.
Abstract: In this paper, we propose a methodology for constructing uncertainty sets within the framework of robust optimization for linear optimization problems with uncertain parameters. Our approach relies on decision maker risk preferences. Specifically, we utilize the theory of coherent risk measures initiated by Artzner et al. (1999) [Artzner, P., F. Delbaen, J. Eber, D. Heath. 1999. Coherent measures of risk. Math. Finance 9 203--228.], and show that such risk measures, in conjunction with the support of the uncertain parameters, are equivalent to explicit uncertainty sets for robust optimization. We explore the structure of these sets in detail. In particular, we study a class of coherent risk measures, called distortion risk measures, which give rise to polyhedral uncertainty sets of a special structure that is tractable in the context of robust optimization. In the case of discrete distributions with rational probabilities, which is useful in practical settings when we are sampling from data, we show that the class of all distortion risk measures (and their corresponding polyhedral sets) are generated by a finite number of conditional value-at-risk (CVaR) measures. A subclass of the distortion risk measures corresponds to polyhedral uncertainty sets symmetric through the sample mean. We show that this subclass is also finitely generated and can be used to find inner approximations to arbitrary, polyhedral uncertainty sets.

350 citations


Journal ArticleDOI
TL;DR: To solve the CDLP for real-size networks, a column generation algorithm is needed; the associated subproblem is proved to be NP-hard, and a simple, greedy heuristic is proposed to overcome the complexity of an exact algorithm.
Abstract: During the past few years, there has been a trend to enrich traditional revenue management models built upon the independent demand paradigm by accounting for customer choice behavior. This extension involves both modeling and computational challenges. One way to describe choice behavior is to assume that each customer belongs to a segment, which is characterized by a consideration set, i.e., a subset of the products provided by the firm that a customer views as options. Customers choose a particular product according to a multinomial-logit criterion, a model widely used in the marketing literature. In this paper, we consider the choice-based, deterministic, linear programming model (CDLP) of Gallego et al. (2004) [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. Technical Report CORC TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York], and the follow-up dynamic programming decomposition heuristic of van Ryzin and Liu (2008) [van Ryzin, G. J., Q. Liu. 2008. On the choice-based linear programming model for network revenue management. Manufacturing Service Oper. Management 10(2) 288--310]. We focus on the more general version of these models, where customers belong to overlapping segments. To solve the CDLP for real-size networks, we need to develop a column generation algorithm. We prove that the associated column generation subproblem is indeed NP-hard and propose a simple, greedy heuristic to overcome the complexity of an exact algorithm. Our computational results show that the heuristic is quite effective and that the overall approach leads to high-quality, practical solutions.

303 citations
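
The sketch below shows, for a single segment, how multinomial-logit choice probabilities determine the expected revenue of an offer set, and how a greedy heuristic of the kind the paper proposes might build an offer set; the preference weights and fares are invented, and the network and column-generation machinery are omitted.

```python
import numpy as np

v = np.array([1.0, 0.8, 0.5, 0.3])    # MNL preference weights (illustrative)
fares = np.array([100, 140, 220, 300.0])
v0 = 1.0                              # no-purchase weight

def expected_revenue(offer):          # offer: list of product indices
    w = v[offer]
    probs = w / (v0 + w.sum())        # MNL purchase probabilities
    return float(probs @ fares[offer])

# Greedy heuristic: repeatedly add the product that most improves revenue.
offer, best = [], 0.0
while True:
    candidates = [(expected_revenue(offer + [j]), j)
                  for j in range(len(v)) if j not in offer]
    val, j = max(candidates)
    if val <= best:
        break
    offer, best = offer + [j], val
print("greedy offer set:", offer, " expected revenue per arrival:", round(best, 2))
```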


Journal ArticleDOI
TL;DR: A risk-averse newsvendor with stochastic price-dependent demand is considered, with Conditional Value-at-Risk (CVaR) as the decision criterion, to investigate the optimal pricing and ordering decisions in such a setting.
Abstract: The classical risk-neutral newsvendor problem is to decide the order quantity that maximizes the one-period expected profit. In this note, we consider a risk-averse newsvendor with stochastic price-dependent demand. We adopt Conditional Value-at-Risk (CVaR), a risk measure commonly used in finance, as the decision criterion. The aim of our study is to investigate the optimal pricing and ordering decisions in such a setting. For both additive and multiplicative demand models, we provide sufficient conditions for the uniqueness and existence of the optimal policy. Comparative statics show the monotonicity properties and other characteristics of the optimal pricing and ordering decisions. We also compare our results with those of the newsvendor with a risk-neutral attitude and a general utility function.

282 citations
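
A minimal simulation-based sketch of the CVaR newsvendor decision, assuming an additive price-dependent demand model and a grid search over price and quantity; the demand form, cost, and CVaR level are illustrative assumptions, and the paper instead characterizes the optimal policy analytically.

```python
import numpy as np

rng = np.random.default_rng(2)
c, alpha = 3.0, 0.9                    # unit cost, CVaR level
eps = rng.normal(0, 8, 20_000)         # demand noise (additive model, illustrative)

def cvar_profit(p, q):
    d = np.maximum(60 - 4 * p + eps, 0)          # additive price-dependent demand
    profit = p * np.minimum(d, q) - c * q        # sales revenue minus procurement
    tail = np.sort(profit)[: int((1 - alpha) * len(profit))]
    return tail.mean()                           # CVaR_alpha: mean of the worst tail

grid_p = np.linspace(4, 12, 33)
grid_q = np.linspace(1, 50, 50)
best = max((cvar_profit(p, q), p, q) for p in grid_p for q in grid_q)
print(f"CVaR-optimal price {best[1]:.2f}, quantity {best[2]:.1f}, CVaR {best[0]:.2f}")
```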


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a class of portfolios that have better stability properties than the traditional minimum-variance portfolios, which can be computed by solving a single nonlinear program, where robust estimation and portfolio optimization are performed in a single step.
Abstract: Mean-variance portfolios constructed using the sample mean and covariance matrix of asset returns perform poorly out of sample due to estimation error. Moreover, it is commonly accepted that estimation error in the sample mean is much larger than in the sample covariance matrix. For this reason, researchers have recently focused on the minimum-variance portfolio, which relies solely on estimates of the covariance matrix, and thus usually performs better out of sample. However, even the minimum-variance portfolios are quite sensitive to estimation error and have unstable weights that fluctuate substantially over time. In this paper, we propose a class of portfolios that have better stability properties than the traditional minimum-variance portfolios. The proposed portfolios are constructed using certain robust estimators and can be computed by solving a single nonlinear program, where robust estimation and portfolio optimization are performed in a single step. We show analytically that the resulting portfolio weights are less sensitive to changes in the asset-return distribution than those of the traditional portfolios. Moreover, our numerical results on simulated and empirical data confirm that the proposed portfolios are more stable than the traditional minimum-variance portfolios, while preserving (or slightly improving) their relatively good out-of-sample performance.

224 citations
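
To fix ideas, the sketch below computes the closed-form minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1) from a sample covariance and from a simple shrinkage estimator; the shrinkage toward a scaled identity is a stand-in assumption, not the robust estimators or the one-step nonlinear program the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 120, 8
true_cov = 0.04 * (0.3 * np.ones((n, n)) + 0.7 * np.eye(n))
X = rng.multivariate_normal(np.zeros(n), true_cov, size=T)   # simulated returns

def min_var_weights(S):
    w = np.linalg.solve(S, np.ones(len(S)))
    return w / w.sum()                 # w = S^{-1} 1 / (1' S^{-1} 1)

S_sample = np.cov(X, rowvar=False)
delta = 0.3                            # shrinkage intensity (illustrative)
S_shrunk = (1 - delta) * S_sample + delta * np.trace(S_sample) / n * np.eye(n)

for name, S in [("sample", S_sample), ("shrunk", S_shrunk)]:
    w = min_var_weights(S)
    print(name, "weights range:", w.min().round(3), "to", w.max().round(3))
```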


Journal ArticleDOI
TL;DR: A planning model for a firm or public organization that needs to cover uncertain demand for a given item by procuring supplies from multiple sources is analyzed, and a highly efficient procedure is developed that generates the optimal set of suppliers as well as the optimal orders to be assigned to each.
Abstract: We analyze a planning model for a firm or public organization that needs to cover uncertain demand for a given item by procuring supplies from multiple sources. The necessity to employ multiple suppliers arises from the fact that when an order is placed with any of the suppliers, only a random fraction of the order size is usable. The model considers a single demand season with a given demand distribution, where all supplies need to be ordered simultaneously before the start of the season. The suppliers differ from one another in terms of their yield distributions, their procurement costs, and capacity levels. The planning model determines which of the potential suppliers are to be retained and what size order is to be placed with each. We consider two versions of the planning model: in the first, the service constraint model (SCM), the orders must be such that the available supply of usable units covers the random demand during the season with (at least) a given probability. In the second version of the model, the total cost model (TCM), the orders are determined so as to minimize the aggregate of procurement costs and end-of-the-season inventory and shortage costs. In the classical inventory model with a single, fully reliable supplier, these two models are known to be equivalent, but the equivalency breaks down under multiple suppliers with unreliable yields. For both the service constraint and total cost models, we develop a highly efficient procedure that generates the optimal set of suppliers as well as the optimal orders to be assigned to each. Most importantly, these procedures generate a variety of important qualitative insights, for example, regarding which sets of suppliers allow for a feasible solution, both when they have ample supply and when they are capacitated, and how various model parameters influence the selected set of suppliers, the aggregate order size, and the optimal cost values.

189 citations
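
A small Monte Carlo sketch of the service constraint model: given a tentative order split across suppliers with random usable fractions, bisection finds the common scaling of the orders that meets the service probability. The Beta yield distributions, normal demand, and proportional scaling rule are illustrative assumptions, not the paper's optimal supplier-selection procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
demand = rng.normal(1000, 150, N)                    # seasonal demand (illustrative)
yields = [rng.beta(8, 2, N), rng.beta(5, 5, N)]      # usable fractions per supplier
base_orders = np.array([800.0, 600.0])               # tentative order split
target = 0.95                                        # required service probability

def service(scale):                                  # P(usable supply >= demand)
    supply = sum(y * q * scale for y, q in zip(yields, base_orders))
    return (supply >= demand).mean()

lo, hi = 0.5, 3.0                                    # bisection on the scale factor
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if service(mid) < target else (lo, mid)
print(f"scale factor {hi:.3f} meets {target:.0%} service; orders:", base_orders * hi)
```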


Journal ArticleDOI
TL;DR: In this article, a retailer is endowed with a finite inventory of a nonperishable product, and demand for this product is driven by a price-sensitive Poisson process that depends on an unknown parameter that is a proxy for the market size.
Abstract: A retailer is endowed with a finite inventory of a nonperishable product. Demand for this product is driven by a price-sensitive Poisson process that depends on an unknown parameter that is a proxy for the market size. The retailer has a prior belief on the value of this parameter that he updates as time and available information (prices and sales) evolve. The retailer's objective is to maximize the discounted long-term average profits of his operation using dynamic pricing policies. We consider two cases. In the first case, the retailer is constrained to sell the entire initial stock of the nonperishable product before a different assortment is considered. In the second case, the retailer is able to stop selling the nonperishable product at any time and switch to a different menu of products. For both cases, we formulate the retailer's problem as a (Poisson) intensity control problem, derive structural properties of an optimal solution, and suggest a simple and efficient approximate solution. We use numerical computations, together with asymptotic analysis, to evaluate the performance of our proposed policy.

188 citations
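
A minimal sketch of the Bayesian updating at the heart of such models, assuming a conjugate Gamma prior on the market-size parameter and Poisson sales with a linear base demand rate; the greedy posterior-mean pricing rule and all numbers are illustrative, not the paper's intensity-control solution.

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = 2.0                      # unknown market-size multiplier
a, b = 1.0, 1.0                       # Gamma(a, b) prior on theta
price_grid = np.linspace(1, 9, 9)

for t in range(50):
    theta_hat = a / b                 # posterior mean
    # Greedy price: maximize p * estimated rate, with rate(p) = theta * (10 - p).
    p = max(price_grid, key=lambda p: p * theta_hat * (10 - p))
    dt = 1.0
    sales = rng.poisson(theta_true * (10 - p) * dt)
    a += sales                        # conjugate Gamma-Poisson update:
    b += (10 - p) * dt                # exposure is the integrated base rate
print(f"posterior mean {a / b:.3f} (true theta = {theta_true})")
```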


Journal ArticleDOI
TL;DR: This work studies an oligopoly consisting of M leaders and N followers that supply a homogeneous product (or service) noncooperatively and proposes a computational approach to find the equilibrium based on the sample average approximation method and analyze its rate of convergence.
Abstract: We study an oligopoly consisting of M leaders and N followers that supply a homogeneous product (or service) noncooperatively. Leaders choose their supply levels first, knowing the demand function only in distribution. Followers make their decisions after observing the leader supply levels and the realized demand function. We term the resulting equilibrium a stochastic multiple-leader Stackelberg-Nash-Cournot (SMS) equilibrium. We show the existence and uniqueness of SMS equilibrium under mild assumptions. We also propose a computational approach to find the equilibrium based on the sample average approximation method and analyze its rate of convergence. Finally, we apply this framework to model competition in the telecommunication industry.

185 citations


Journal ArticleDOI
TL;DR: The correspondence between uncertainty sets in robust optimization and some popular risk measures in finance is illustrated, and it is shown how robust optimization can be used to generalize the concepts of these risk measures.
Abstract: We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures and address the issue of the computational tractability of the resulting formulations. Our results have implications for efficient portfolio optimization under different measures of risk.

Journal ArticleDOI
TL;DR: Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that the best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances.
Abstract: We consider the vehicle-routing problem with stochastic demands (VRPSD) under reoptimization. We develop and analyze a finite-horizon Markov decision process (MDP) formulation for the single-vehicle case and establish a partial characterization of the optimal policy. We also propose a heuristic solution methodology for our MDP, named partial reoptimization, based on the idea of restricting attention to a subset of all the possible states and computing an optimal policy on this restricted set of states. We discuss two families of computationally efficient partial reoptimization heuristics and illustrate their performance on a set of instances with up to and including 100 customers. Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that our best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances.

Journal ArticleDOI
TL;DR: This paper introduces the extended affinely adjustable robust counterpart for modeling and solving multistage uncertain linear programs with fixed recourse, and ends up with deterministic optimization formulations that are tractable and scalable to multistage problems.
Abstract: In this paper, we introduce the extended affinely adjustable robust counterpart for modeling and solving multistage uncertain linear programs with fixed recourse. Our approach first reparameterizes the primitive uncertainties and then applies the affinely adjustable robust counterpart proposed in the literature, in which recourse decisions are restricted to be linear in terms of the primitive uncertainties. We propose a special case of the extended affinely adjustable robust counterpart---the splitting-based extended affinely adjustable robust counterpart---and illustrate both theoretically and computationally that the potential of the affinely adjustable robust counterpart method is well beyond the one presented in the literature. Similar to the affinely adjustable robust counterpart, our approach ends up with deterministic optimization formulations that are tractable and scalable to multistage problems.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the performance impact of delay announcements to arriving customers who must wait before starting service in a many-server queue with customer abandonment, where the queue is assumed to be invisible to waiting customers, as in most customer contact centers.
Abstract: This paper studies the performance impact of making delay announcements to arriving customers who must wait before starting service in a many-server queue with customer abandonment. The queue is assumed to be invisible to waiting customers, as in most customer contact centers, when contact is made by telephone, e-mail, or instant messaging. Customers who must wait are told upon arrival either the delay of the last customer to enter service or an appropriate average delay. Models for the customer response are proposed. For a rough-cut performance analysis, prior to detailed simulation, two approximations are proposed: (1) the equilibrium delay in a deterministic fluid model, and (2) the equilibrium steady-state delay in a stochastic model with fixed delay announcements. These approximations are shown to be effective in overloaded regimes, where delay announcements are important, by making comparisons with simulations. Within the fluid model framework, conditions are established for the existence and uniqueness of an equilibrium delay, where the actual delay coincides with the announced delay. Multiple equilibria can occur if a key monotonicity condition is violated.
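
The equilibrium-delay idea can be sketched numerically: announcing a longer delay drives more balking, which lowers the actual delay, so an equilibrium is a fixed point of a decreasing map (which is also why a monotonicity condition matters for uniqueness, as the paper establishes). The balking function, stylized fluid delay formula, and parameters below are illustrative assumptions.

```python
import math

# Customers told delay d stay with probability exp(-theta * d); the
# overloaded fluid queue's actual delay increases with effective overload.
lam, mu, n, theta = 120.0, 1.0, 100, 0.5

def actual_delay(d_announced):
    lam_eff = lam * math.exp(-theta * d_announced)       # arrivals who stay
    return max(lam_eff - n * mu, 0.0) / (n * mu) * 10.0  # stylized fluid delay

lo, hi = 0.0, 50.0                 # actual_delay(d) - d is decreasing in d,
for _ in range(60):                # so bisection finds the unique fixed point
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if actual_delay(mid) > mid else (lo, mid)
print(f"equilibrium announced delay ~ {hi:.3f} (actual {actual_delay(hi):.3f})")
```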

Journal ArticleDOI
TL;DR: This paper addresses the two-fare problem in greatest detail, but also treats the general multifare problem and the bid-price control problem.
Abstract: In this paper, we consider the revenue management problem from the perspective of online algorithms. This approach eliminates the need for both demand forecasts and a risk-neutrality assumption. The competitive ratio of a policy relative to a given input sequence is the ratio of the policy's performance to the offline optimal. Under the online algorithm approach, revenue management policies are evaluated based on the highest competitive ratio they can guarantee. We are able to define lower bounds on the best-possible performance and describe policies that achieve these lower bounds. We address the two-fare problem in greatest detail, but also treat the general multifare problem and the bid-price control problem.
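
A small brute-force sketch of the competitive-ratio viewpoint for the two-fare problem: fix a protection-level policy, evaluate it against a family of low-fare-first request sequences, and compare with the offline optimum. The fares, capacity, and sequence family are illustrative assumptions, not the paper's exact worst-case construction.

```python
C, f_low, f_high = 10, 100.0, 250.0    # capacity and fares (illustrative)

def policy_revenue(y, n_low, n_high):  # protect y seats for the high fare
    low_taken = min(n_low, C - y)
    high_taken = min(n_high, C - low_taken)
    return low_taken * f_low + high_taken * f_high

def offline_optimal(n_low, n_high):    # clairvoyant accepts high fares first
    high_taken = min(n_high, C)
    return high_taken * f_high + min(n_low, C - high_taken) * f_low

def competitive_ratio(y):              # worst case over low-then-high sequences
    return min(policy_revenue(y, nl, nh) / offline_optimal(nl, nh)
               for nl in range(3 * C + 1) for nh in range(3 * C + 1)
               if nl + nh > 0)

best_y = max(range(C + 1), key=competitive_ratio)
print(f"best protection level {best_y}, guaranteed ratio "
      f"{competitive_ratio(best_y):.3f}")
```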

Journal ArticleDOI
TL;DR: This work considers a constraint satisfaction problem, where one chooses the minimal staffing level n that adheres to a given cost constraint, and proposes a new ED + QED operational regime that enables QED tuning of the ED regime.
Abstract: Motivated by call center practice, we study asymptotically optimal staffing of many-server queues with abandonment. A call center is modelled as an M/M/n + G queue, which is characterized by Poisson arrivals, exponential service times, n servers, and generally distributed patience times of customers. Our asymptotic analysis is performed as the arrival rate, and hence the number of servers n, increases indefinitely. We consider a constraint satisfaction problem, where one chooses the minimal staffing level n that adheres to a given cost constraint. The cost can incorporate the fraction abandoning, average wait, and tail probabilities of wait. Depending on the cost, several operational regimes arise as asymptotically optimal: Efficiency-Driven (ED), Quality and Efficiency-Driven (QED), and also a new ED + QED operational regime that enables QED tuning of the ED regime. Numerical experiments demonstrate that, over a wide range of system parameters, our approximations provide useful insight as well as excellent fit to exact optimal solutions. It turns out that the QED regime is preferable either for small-to-moderate call centers or for large call centers with relatively tight performance constraints. The other two regimes are more appropriate for large call centers with loose constraints. We consider two versions of the constraint satisfaction problem. The first one is constraint satisfaction on a single time interval, say one hour, which is common in practice. Of special interest is a constraint on the tail probability, in which case our new ED + QED staffing turns out to be asymptotically optimal. We also address a global constraint problem, say over a full day. Here several time intervals, say 24 hours, are considered, with interval-dependent staffing levels allowed; one seeks to minimize staffing levels, or more generally costs, given the overall performance constraint. In this case, there is the added flexibility of trading service levels among time intervals, but we demonstrate that little gain is associated with this flexibility if one is concerned with the fraction abandoning.
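
As a simplified companion to the staffing question, the sketch below specializes patience to the exponential case (the M/M/n+M, or Erlang-A, queue), computes the stationary fraction abandoning from the birth-death recursion, and searches for the minimal n meeting an abandonment constraint; the parameters are illustrative and the paper's M/M/n+G analysis is more general.

```python
def erlang_a_abandon_fraction(lam, mu, theta, n, cap=2000):
    """Fraction abandoning in the M/M/n+M (Erlang-A) queue."""
    probs = [1.0]                     # unnormalized birth-death stationary probs
    for k in range(1, cap):
        rate_out = min(k, n) * mu + max(k - n, 0) * theta
        probs.append(probs[-1] * lam / rate_out)
    z = sum(probs)
    mean_queue = sum(max(k - n, 0) * p for k, p in enumerate(probs)) / z
    return theta * mean_queue / lam   # abandonment throughput over arrival rate

lam, mu, theta, target = 100.0, 1.0, 2.0, 0.03
n = 1
while erlang_a_abandon_fraction(lam, mu, theta, n) > target:
    n += 1
print(f"minimal staffing n = {n} for abandonment fraction <= {target:.0%}")
```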

Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound algorithm is proposed to solve a portfolio optimization problem with a probabilistic constraint, where the expected return of the constructed portfolio must exceed a prescribed return threshold with a high confidence level.
Abstract: In this paper, we study extensions of the classical Markowitz mean-variance portfolio optimization model. First, we consider that the expected asset returns are stochastic by introducing a probabilistic constraint, which imposes that the expected return of the constructed portfolio must exceed a prescribed return threshold with a high confidence level. We study the deterministic equivalents of these models. In particular, we define under which types of probability distributions the deterministic equivalents are second-order cone programs and give closed-form formulations. Second, we account for real-world trading constraints (such as the need to diversify the investments in a number of industrial sectors, the nonprofitability of holding small positions, and the constraint of buying stocks by lots) modeled with integer variables. To solve the resulting problems, we propose an exact solution approach in which the uncertainty in the estimate of the expected returns and the integer trading restrictions are simultaneously considered. The proposed algorithmic approach rests on a nonlinear branch-and-bound algorithm that features two new branching rules. The first one is a static rule, called idiosyncratic risk branching, while the second one is dynamic and is called portfolio risk branching. The two branching rules are implemented and tested using the open-source Bonmin framework. The comparison of the computational results obtained with state-of-the-art MINLP solvers (MINLP_BB and CPLEX) and with our approach shows the effectiveness of the latter, which makes it possible to solve problems with up to 200 assets to optimality in a reasonable amount of time. The practicality of the approach is illustrated through its use for the construction of four fund-of-funds now available on the major trading markets.
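
The Gaussian deterministic equivalent the paper exploits can be checked numerically: P(r'x >= R) >= 1 - eps reduces to the second-order-cone condition mu'x - z_{1-eps} * sqrt(x' Sigma x) >= R. The sketch below verifies that both sides agree on an invented three-asset example for a fixed candidate portfolio.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu = np.array([0.08, 0.12, 0.10])                 # expected returns (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
x = np.array([0.5, 0.2, 0.3])                     # a fixed candidate portfolio
R, eps = 0.0, 0.30                                # return threshold, risk tolerance

# Deterministic equivalent under Gaussian returns.
lhs = mu @ x - norm.ppf(1 - eps) * np.sqrt(x @ Sigma @ x)
print("second-order-cone condition holds:", bool(lhs >= R))

r = rng.multivariate_normal(mu, Sigma, 200_000)   # Monte Carlo cross-check
print("estimated P(r'x >= R):", round(((r @ x) >= R).mean(), 4), ">=", 1 - eps)
```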

Journal ArticleDOI
TL;DR: A robust optimization framework for dynamic empty repositioning problems modeled using time-space networks is developed and it is shown that the resulting problem is polynomially solvable.
Abstract: We develop a robust optimization framework for dynamic empty repositioning problems modeled using time-space networks. In such problems, uncertainty arises primarily from forecasts of future supplies and demands for assets at different time epochs. The proposed approach models such uncertainty using intervals about nominal forecast values and a limit on the systemwide scaled deviation from the nominal forecast values. A robust repositioning plan is defined as one in which the typical flow balance constraints and flow bounds are satisfied for the nominal forecast values, and the plan is recoverable under a limited set of recovery actions. A plan is recoverable when feasibility can be reestablished for any outcome in a defined uncertainty set. We develop necessary and sufficient conditions for flows to be robust under this definition for three types of allowable recovery actions. When recovery actions allow only flow changes on inventory arcs, we show that the resulting problem is polynomially solvable. When recovery actions allow limited reactive repositioning flows, we develop feasibility conditions that are independent of the size of the uncertainty set. A computational study establishes the practical viability of the proposed framework.

Journal ArticleDOI
TL;DR: A new method to compute bid prices in network revenue management problems that explicitly considers the temporal dynamics of the arrivals of the itinerary requests and generates bid prices that depend on the remaining leg capacities.
Abstract: We propose a new method to compute bid prices in network revenue management problems. The novel aspect of our method is that it explicitly considers the temporal dynamics of the arrivals of the itinerary requests and generates bid prices that depend on the remaining leg capacities. Our method is based on relaxing certain constraints that link the decisions for different flight legs by associating Lagrange multipliers with them. In this case, the network revenue management problem decomposes by the flight legs, and we can concentrate on one flight leg at a time. When compared with the so-called deterministic linear program, we show that our method provides a tighter upper bound on the optimal objective value of the network revenue management problem. Computational experiments indicate that the bid prices obtained by our method perform significantly better than the ones obtained by standard benchmark methods.

Journal ArticleDOI
TL;DR: This paper proposes an infinitesimal-perturbation-analysis (IPA) estimator and shows that the quantile sensitivities can be written in the form of conditional expectations, and obtains a consistent estimator by dividing data into batches and averaging the IPA estimates of all batches.
Abstract: Quantiles of a random performance serve as important alternatives to the usual expected value. They are used in the financial industry as measures of risk and in the service industry as measures of service quality. To manage the quantile of a performance, we need to know how changes in the input parameters affect the output quantiles, which are called quantile sensitivities. In this paper, we show that the quantile sensitivities can be written in the form of conditional expectations. Based on the conditional-expectation form, we first propose an infinitesimal-perturbation-analysis (IPA) estimator. The IPA estimator is asymptotically unbiased, but it is not consistent. We then obtain a consistent estimator by dividing data into batches and averaging the IPA estimates of all batches. The estimator satisfies a central limit theorem for the i.i.d. data, and the rate of convergence is strictly slower than n^{-1/3}. The numerical results show that the estimator works well for practical problems.
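
A minimal sketch of the batching idea for quantile sensitivities, using a model where the truth is known: with Y = theta*X and X exponential, the derivative of the alpha-quantile with respect to theta equals the alpha-quantile of X, and the per-batch IPA estimate is the pathwise derivative evaluated at the batch's sample quantile. The batch size and count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, alpha = 2.0, 0.9
n_batches, batch = 200, 500

# Model: Y = theta * X with X ~ Exp(1); pathwise derivative dY/dtheta = X.
# True quantile q(theta) = theta * q_X, so dq/dtheta = q_X = -ln(1 - alpha).
estimates = []
for _ in range(n_batches):
    x = rng.exponential(1.0, batch)
    y = theta * x
    k = int(np.ceil(alpha * batch)) - 1        # index of the sample quantile
    idx = np.argsort(y)[k]
    estimates.append(x[idx])                   # IPA derivative at that order statistic
print(f"batched IPA estimate {np.mean(estimates):.4f}, "
      f"true {-np.log(1 - alpha):.4f}")
```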

Journal ArticleDOI
TL;DR: The performance of a stylized supply chain where two firms, a retailer and a producer, compete in a Stackelberg game is studied and it is found that the producer always prefers the flexible contract with hedging to the flexiblecontract without hedging.
Abstract: We study the performance of a stylized supply chain where two firms, a retailer and a producer, compete in a Stackelberg game. The retailer purchases a single product from the producer and afterward sells it in the retail market at a stochastic clearance price. The retailer, however, is budget constrained and is therefore limited in the number of units that he may purchase from the producer. We also assume that the retailer's profit depends in part on the realized path or terminal value of some observable stochastic process. We interpret this process as a financial process such as a foreign exchange rate or interest rate. More generally, the process can be interpreted as any relevant economic index. We consider a variation (the flexible contract) of the traditional wholesale price contract that is offered by the producer to the retailer. Under this flexible contract, at t = 0 the producer offers a menu of wholesale prices to the retailer, one for each realization of the financial process up to a future time τ. The retailer then commits to purchasing at time τ a variable number of units, with the specific quantity depending on the realization of the process up to time τ. Because of the retailer's budget constraint, the supply chain might be more profitable if the retailer was able to shift some of the budget from states where the constraint is not binding to states where it is binding. We therefore consider a variation of the flexible contract, where we assume that the retailer is able to trade dynamically between zero and τ in the financial market. We refer to this variation as the flexible contract with hedging. We compare the decentralized competitive solution for the two contracts with the solutions obtained by a central planner. We also compare the supply chain's performance across the two contracts. We find, for example, that the producer always prefers the flexible contract with hedging to the flexible contract without hedging. Depending on model parameters, however, the retailer might or might not prefer the flexible contract with hedging.

Journal ArticleDOI
TL;DR: An efficient approximation scheme for the difficult multistage stochastic integer program is developed and it is proved that the proposed scheme is asymptotically optimal.
Abstract: This paper addresses a general class of capacity planning problems under uncertainty, which arises, for example, in semiconductor tool purchase planning. Using a scenario tree to model the evolution of the uncertainties, we develop a multistage stochastic integer programming formulation for the problem. In contrast to earlier two-stage approaches, the multistage model allows for revision of the capacity expansion plan as more information regarding the uncertainties is revealed. We provide analytical bounds for the value of multistage stochastic programming (VMS) afforded over the two-stage approach. By exploiting a special substructure inherent in the problem, we develop an efficient approximation scheme for the difficult multistage stochastic integer program and prove that the proposed scheme is asymptotically optimal. Computational experiments with realistic-scale problem instances suggest that the VMS for this class of problems is quite high; moreover, the quality and performance of the approximation scheme are very satisfactory. Fortunately, this is especially so for instances for which the VMS is high.

Journal ArticleDOI
TL;DR: This work analytically calculates the expected cost incurred by the manufacturer and uses simulation to obtain expected costs for the distributor and the retailer, and constructs a three-person cooperative game in characteristic-function form and derives necessary conditions for the stability of each of five possible coalitions.
Abstract: We analyze the problem of allocating cost savings from sharing demand information in a three-level supply chain with a manufacturer, a distributor, and a retailer. To find a unique allocation scheme, we use concepts from cooperative game theory. First, we analytically compute the expected cost incurred by the manufacturer and then use simulation to obtain expected costs for the distributor and the retailer. We construct a three-person cooperative game in characteristic-function form and derive necessary conditions for the stability of each of five possible coalitions. To divide the cost savings between two members, or among three supply chain members, we use various allocation schemes. We present numerical analyses to investigate the impacts of the demand autocorrelation coefficient, ρ, and the unit holding and shortage costs on the allocation scheme.
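
For the allocation step, a standard option is the Shapley value, computed by averaging marginal contributions over player orderings; the sketch below does this for an invented three-player characteristic function (the savings numbers are assumptions, and the paper also considers other allocation schemes and stability conditions).

```python
from itertools import permutations

players = ("manufacturer", "distributor", "retailer")
# Illustrative cost savings v(S) from information sharing for each coalition.
v = {frozenset(): 0.0,
     frozenset({"manufacturer"}): 0.0,
     frozenset({"distributor"}): 0.0,
     frozenset({"retailer"}): 0.0,
     frozenset({"manufacturer", "distributor"}): 40.0,
     frozenset({"manufacturer", "retailer"}): 55.0,
     frozenset({"distributor", "retailer"}): 30.0,
     frozenset(players): 100.0}

shapley = {p: 0.0 for p in players}
for order in permutations(players):             # average marginal contributions
    seen = set()
    for p in order:
        shapley[p] += (v[frozenset(seen | {p})] - v[frozenset(seen)]) / 6
        seen.add(p)
print({p: round(x, 2) for p, x in shapley.items()})   # sums to v(grand coalition)
```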

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of allocating a fixed amount of an infinitely divisible resource among multiple competing, fully rational users, and study the efficiency guarantees that are possible when the mechanism is not allowed to price differentiate.
Abstract: We consider the problem of allocating a fixed amount of an infinitely divisible resource among multiple competing, fully rational users. We study the efficiency guarantees that are possible when we restrict to mechanisms that satisfy certain scalability constraints motivated by large-scale communication networks; in particular, we restrict attention to mechanisms where users are restricted to one-dimensional strategy spaces. We first study the efficiency guarantees possible when the mechanism is not allowed to price differentiate. We study the worst-case efficiency loss (ratio of the utility associated with a Nash equilibrium to the maximum possible utility), and show that Kelly's proportional allocation mechanism minimizes the efficiency loss when users are price anticipating. We then turn our attention to mechanisms where price differentiation is permitted; using an adaptation of the Vickrey-Clarke-Groves class of mechanisms, we construct a class of mechanisms with one-dimensional strategy spaces where Nash equilibria are fully efficient. These mechanisms are shown to be fully efficient even in general convex environments, under reasonable assumptions. Our results highlight a fundamental insight in mechanism design: when the pricing flexibility available to the mechanism designer is limited, restricting the strategic flexibility of bidders may actually improve the efficiency guarantee.
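
The proportional (Kelly) allocation mechanism and its equilibrium efficiency can be sketched directly: price-anticipating users with linear utilities best-respond until bids converge, and the realized welfare is compared with the efficient allocation; the 3/4 lower bound on this ratio is the known worst case. The utility slopes and iteration scheme are illustrative assumptions.

```python
import math

a = [2.0, 1.5, 1.0]                    # linear utilities u_i(x) = a_i * x (illustrative)
w = [0.1] * len(a)                     # bids for one unit of divisible capacity

for _ in range(500):                   # best-response iteration (price-anticipating)
    for i in range(len(a)):
        W_other = sum(w) - w[i]
        # maximize a_i * w_i / (w_i + W_other) - w_i  =>  w_i = sqrt(a_i * W) - W
        w[i] = max(math.sqrt(a[i] * W_other) - W_other, 0.0)

x = [wi / sum(w) for wi in w]          # proportional allocation
welfare = sum(ai * xi for ai, xi in zip(a, x))
print(f"Nash welfare {welfare:.3f} vs optimum {max(a):.3f} "
      f"(ratio {welfare / max(a):.3f}; never below 3/4)")
```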

Journal ArticleDOI
TL;DR: Bounds for the optimal protection limits that take little effort to compute and can be used to effectively solve large problems are described.
Abstract: We examine a multiperiod capacity allocation model with upgrading. There are multiple product types, corresponding to multiple classes of demand, and the firm purchases capacity of each product before the first period. Within each period, after demand arrives, products are allocated to customers. Customers who arrive to find that their product has been depleted can be upgraded by at most one level. We show that the optimal allocation policy is a simple two-step algorithm: First, use any available capacity to satisfy same-class demand, and then upgrade customers until capacity reaches a protection limit, so that in the second step the higher-level capacity is rationed. We show that these results hold both when all capacity is salvaged at the end of the last demand period as well as when capacity can be replenished (in the latter case, an order-up-to policy is optimal for replenishment). Although finding the optimal protection limits is computationally intensive, we describe bounds for the optimal protection limits that take little effort to compute and can be used to effectively solve large problems. Using these heuristics, we examine numerically the relative value of strictly optimal capacity and dynamic rationing, the value of perfect demand information, and the impact of demand and economic parameters on the value of optimal substitution.

Journal ArticleDOI
TL;DR: A max-min model is developed and solved that identifies resource-limited interdiction actions that maximally delay completion time of the proliferator's weapons project, given that the proliferator will observe any such actions and adjust his plans to minimize that time.
Abstract: A “proliferator” seeks to complete a first small batch of fission weapons as quickly as possible, whereas an “interdictor” wishes to delay that completion for as long as possible. We develop and solve a max-min model that identifies resource-limited interdiction actions that maximally delay completion time of the proliferator's weapons project, given that the proliferator will observe any such actions and adjust his plans to minimize that time. The model incorporates a detailed project-management (critical path method) submodel, and standard optimization software solves the model in a few minutes on a personal computer. We exploit off-the-shelf project-management software to manage a database, control the optimization, and display results. Using a range of levels for interdiction effort, we analyze a published case study that models three alternate uranium-enrichment technologies. The task of “cascade loading” appears in all technologies and turns out to be an inherent fragility for the proliferator at all levels of interdiction effort. Such insights enable policy makers to quantify the effects of interdiction options at their disposal, be they diplomatic, economic, or military.
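
A toy max-min interdiction sketch in the spirit of the model: the interdictor picks a budget-limited set of tasks to delay, anticipating that the proliferator then selects the fastest remaining technology. Task names, durations, and delays are invented, but the example shows why a task shared by all technologies (here "cascade loading") is a natural interdiction target, echoing the paper's fragility insight.

```python
from itertools import combinations

# Each technology is modeled as a simple chain of tasks with durations in
# months; interdicting a task adds a fixed delay wherever the task appears.
technologies = {
    "centrifuge": {"procure rotors": 10, "cascade loading": 6, "enrich": 8},
    "diffusion":  {"build barriers": 14, "cascade loading": 6, "enrich": 5},
}
delay, k = 6, 2                                   # added months per task, budget
tasks = sorted({t for tech in technologies.values() for t in tech})

def completion(tech, interdicted):
    return sum(d + (delay if t in interdicted else 0)
               for t, d in technologies[tech].items())

best = max(combinations(tasks, k),                # interdictor maximizes the
           key=lambda s: min(completion(tech, s)  # proliferator's best (minimal)
                             for tech in technologies))   # completion time
print("interdict:", best, "-> proliferator's best completion:",
      min(completion(tech, best) for tech in technologies), "months")
```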

Journal ArticleDOI
TL;DR: A multistage, stochastic, mixed-integer programming model for planning capacity expansion of production facilities is described; “variable splitting” is applied to two model variants, which are solved using Dantzig-Wolfe decomposition.
Abstract: We describe a multistage, stochastic, mixed-integer programming model for planning capacity expansion of production facilities. A scenario tree represents uncertainty in the model; a general mixed-integer program defines the operational submodel at each scenario-tree node, and capacity-expansion decisions link the stages. We apply “variable splitting” to two model variants, and solve those variants using Dantzig-Wolfe decomposition. The Dantzig-Wolfe master problem can have a much stronger linear programming relaxation than is possible without variable splitting, over 700% stronger in one case. The master problem solves easily and tends to yield integer solutions, obviating the need for a full branch-and-price solution procedure. For each scenario-tree node, the decomposition defines a subproblem that may be viewed as a single-period, deterministic, capacity-planning problem. An effective solution procedure results as long as the subproblems solve efficiently, and the procedure incorporates a good “duals stabilization method.” We present computational results for a model to plan the capacity expansion of an electricity distribution network in New Zealand, given uncertain future demand. The largest problem we solve to optimality has six stages and 243 scenarios, and corresponds to a deterministic equivalent with a quarter of a million binary variables.

Journal ArticleDOI
TL;DR: In this article, the worst-case inefficiency of Nash equilibria is studied in a generalization of the traffic assignment problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network.
Abstract: In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. Games of this type, usually referred to as atomic games, readily apply to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallottino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria.
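
The worst-case inefficiency benchmark is easiest to see in Pigou's two-link example, where the Wardrop equilibrium routes all flow on the congestible link and total cost is 4/3 of the system optimum (the known worst case for linear latencies); the sketch below computes both. The atomic, market-power setting of the paper changes these bounds.

```python
from scipy.optimize import minimize_scalar

# Pigou's example: unit demand splits between a constant-cost link,
# c1(x) = 1, and a congestible link, c2(x) = x.  Wardrop equilibrium
# puts all flow on link 2 (cost 1); the system optimum splits the flow.
def total_cost(x2):                     # x2 = flow on the congestible link
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2

opt = minimize_scalar(total_cost, bounds=(0, 1), method="bounded")
eq_cost = total_cost(1.0)               # equilibrium: everyone takes link 2
print(f"equilibrium cost {eq_cost:.3f}, optimal cost {opt.fun:.3f}, "
      f"price of anarchy {eq_cost / opt.fun:.3f}")   # 4/3 for linear costs
```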

Journal ArticleDOI
TL;DR: In this article, the authors consider a manufacturer using a reverse auction in combination with supplier qualification screening to determine which qualified supplier will be awarded a contract, and analytically explore the trade-offs between varying levels of pre- and postqualification.
Abstract: We consider a manufacturer using a request-for-quotes (RFQ) reverse auction in combination with supplier qualification screening to determine which qualified supplier will be awarded a contract. Supplier qualification screening is costly for the manufacturer---for example, involving reference checks, financial audits, and on-site visits. The manufacturer seeks to minimize its total procurement costs, i.e., the contract payment plus qualification costs. Although suppliers can be qualified prior to the auction (prequalification), we allow the manufacturer to delay all or part of the qualification until after the auction (postqualification). Using an optimal mechanism analysis, we analytically explore the trade-offs between varying levels of pre- and postqualification. Although using postqualification causes the expected contract payment to increase (bids from unqualified suppliers are discarded), we find that standard industrial practices of prequalification can be improved upon by judicious use of postqualification, particularly when supplier qualification screening is moderately expensive relative to the value of the contract to the manufacturer.

Journal ArticleDOI
TL;DR: In this article, two market designs for electricity trade and transmission in Europe are discussed, and it is argued that their performance in the presence of market power can be represented by two models from the literature.
Abstract: In Europe, two market designs are discussed for electricity trade and transmission. We argue that their performance in the presence of market power can be represented by two models from the literature. In contrast to examples for simple two-node networks, we show that in more complex networks a general ranking of both designs is not possible. Hence, computational models are required to evaluate the designs for realistic parameter assumptions. We extend existing formulations of both models to represent them as equilibrium problems with equilibrium constraints (EPEC) with equivalent representation of demand, fringe generation, and strategic generators. In a numerical simulation for the Northwestern European network, the integrated market design performs better. This difference illustrates the impact of seemingly small modeling assumptions on the outcome of strategic models.