
Showing papers in "Operations Research in 2008"


Journal ArticleDOI
TL;DR: It is shown that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations.
Abstract: We show that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations. In the simplest case of a Lipschitz payoff and an Euler discretisation, the computational cost to achieve an accuracy of O(ε) is reduced from O(ε^-3) to O(ε^-2 (log ε)^2). The analysis is supported by numerical results showing significant computational savings.

1,619 citations
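
Below is a minimal sketch of the multilevel idea for a European call payoff under geometric Brownian motion with an Euler discretisation: the expectation at the finest level is written as a coarse estimate plus correction terms computed from coupled fine/coarse paths that share the same Brownian increments. The payoff, parameters, and fixed per-level sample sizes are illustrative assumptions rather than the paper's optimised allocation.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0   # illustrative model parameters

def euler_payoffs(n_paths, n_steps):
    """Euler paths with n_steps; also return the coupled coarse payoff built from the
    same Brownian increments (n_steps // 2 coarse steps), or None at the coarsest level."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    S_fine = np.full(n_paths, S0)
    for k in range(n_steps):
        S_fine = S_fine + r * S_fine * dt + sigma * S_fine * dW[:, k]
    payoff_fine = np.maximum(S_fine - K, 0.0)
    if n_steps == 1:
        return payoff_fine, None
    dWc = dW[:, 0::2] + dW[:, 1::2]          # coarse path reuses the same Brownian motion
    dtc = 2.0 * dt
    S_coarse = np.full(n_paths, S0)
    for k in range(n_steps // 2):
        S_coarse = S_coarse + r * S_coarse * dtc + sigma * S_coarse * dWc[:, k]
    return payoff_fine, np.maximum(S_coarse - K, 0.0)

# Level l uses 2^l time steps; the sample sizes are fixed here for simplicity, whereas
# the method chooses them to balance bias and variance across levels.
samples = [200_000, 100_000, 50_000, 25_000, 12_000, 6_000]
estimate = 0.0
for level, n in enumerate(samples):
    fine, coarse = euler_payoffs(n, 2 ** level)
    correction = fine if coarse is None else fine - coarse
    estimate += correction.mean()            # telescoping sum E[P_0] + sum of E[P_l - P_{l-1}]

print("multilevel estimate of E[max(S_T - K, 0)]:", estimate)
```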


Journal ArticleDOI
TL;DR: Simulation results indicate that DEA-based two-stage procedures perform as well as the best of the parametric methods in estimating the impact of contextual variables on productivity; overall, the results establish DEA as a nonparametric stochastic frontier estimation (SFE) methodology.
Abstract: A DEA-based stochastic frontier estimation framework is presented to evaluate contextual variables affecting productivity that allows for both one-sided inefficiency deviations as well as two-sided random noise. Conditions are identified under which a two-stage procedure consisting of DEA followed by ordinary least squares (OLS) regression analysis yields consistent estimators of the impact of contextual variables. Conditions are also identified under which DEA in the first stage followed by maximum likelihood estimation (MLE) in the second stage yields consistent estimators of the impact of contextual variables. This requires the contextual variables to be independent of the input variables, but the contextual variables may be correlated with each other. Monte Carlo simulations are carried out to compare the performance of our two-stage approach with one-stage and two-stage parametric approaches. Simulation results indicate that DEA-based procedures with OLS, maximum likelihood, or even Tobit estimation in the second stage perform as well as the best of the parametric methods in the estimation of the impact of contextual variables on productivity. Simulation results also indicate that DEA-based procedures perform better than parametric methods in the estimation of individual decision-making unit (DMU) productivity. Overall, the results establish DEA as a nonparametric stochastic frontier estimation (SFE) methodology.

700 citations
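
As a rough illustration of the two-stage idea (not the paper's exact estimators), the sketch below computes input-oriented CCR efficiency scores by linear programming on synthetic single-input, single-output data and then regresses the log efficiency estimates on a contextual variable by OLS. The data-generating process and model specification are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(1.0, 10.0, n)                    # single input
z = rng.uniform(0.0, 2.0, n)                     # contextual variable
u = np.abs(rng.normal(0.0, 0.2, n))              # one-sided inefficiency
y = x * np.exp(-0.3 * z - u)                     # single output; z hurts productivity (assumed DGP)

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: min theta such that the reference
    technology can produce y[o] using at most theta * x[o]."""
    c = np.r_[1.0, np.zeros(n)]                  # decision vector: [theta, lambda_1..lambda_n]
    A_ub = np.vstack([
        np.r_[-x[o], x],                         # sum_j lambda_j x_j <= theta * x_o
        np.r_[0.0, -y],                          # sum_j lambda_j y_j >= y_o
    ])
    b_ub = np.array([0.0, -y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

theta = np.array([ccr_efficiency(o) for o in range(n)])

# Stage 2: OLS of log efficiency on the contextual variable (with intercept).
X = np.column_stack([np.ones(n), z])
coef, *_ = np.linalg.lstsq(X, np.log(theta), rcond=None)
print("estimated impact of z on log efficiency:", coef[1])   # close to -0.3 under this DGP
```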


Journal ArticleDOI
TL;DR: A method is presented to dynamically schedule patients with different priorities to a diagnostic facility in a public health-care setting, together with analytical results giving the form of the optimal linear value function approximation and the resulting policy.
Abstract: We present a method to dynamically schedule patients with different priorities to a diagnostic facility in a public health-care setting. Rather than maximizing revenue, the challenge facing the resource manager is to dynamically allocate available capacity to incoming demand to achieve wait-time targets in a cost-effective manner. We model the scheduling process as a Markov decision process. Because the state space is too large for a direct solution, we solve the equivalent linear program through approximate dynamic programming. For a broad range of cost parameter values, we present analytical results that give the form of the optimal linear value function approximation and the resulting policy. We investigate the practical implications and the quality of the policy through simulation.

361 citations


Journal ArticleDOI
TL;DR: The results show that applying subset-row inequalities in the master problem significantly improves the lower bound and, in many cases, makes it possible to prove optimality in the root node.
Abstract: This paper presents a branch-and-cut-and-price algorithm for the vehicle-routing problem with time windows. The standard Dantzig-Wolfe decomposition of the arc flow formulation leads to a set-partitioning problem as the master problem and an elementary shortest-path problem with resource constraints as the pricing problem. We introduce the subset-row inequalities, which are Chvatal-Gomory rank-1 cuts based on a subset of the constraints in the master problem. Applying a subset-row inequality in the master problem increases the complexity of the label-setting algorithm used to solve the pricing problem because an additional resource is added for each inequality. We propose a modified dominance criterion that makes it possible to dominate more labels by exploiting the step-like structure of the objective function of the pricing problem. Computational experiments have been performed on the Solomon benchmarks where we were able to close several instances. The results show that applying subset-row inequalities in the master problem significantly improves the lower bound and, in many cases, makes it possible to prove optimality in the root node.

351 citations


Journal ArticleDOI
TL;DR: This paper derives the order quantities that minimize the newsvendor's maximum regret of not acting optimally; the approach can be extended to a variety of problems that require a robust but not conservative solution.
Abstract: Traditional stochastic inventory models assume full knowledge of the demand probability distribution. However, in practice, it is often difficult to completely characterize the demand distribution, especially in fast-changing markets. In this paper, we study the newsvendor problem with partial information about the demand distribution (e.g., mean, variance, symmetry, unimodality). In particular, we derive the order quantities that minimize the newsvendor's maximum regret of not acting optimally. Most of our solutions are tractable, which makes them attractive for practical application. Our analysis also generates insights into the choice of the demand distribution as an input to the newsvendor model. In particular, the distributions that maximize the entropy perform well under the regret criterion. Our approach can be extended to a variety of problems that require a robust but not conservative solution.

329 citations
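
The paper derives closed-form minimax-regret order quantities under partial demand information; the sketch below only illustrates the regret criterion numerically, searching a grid of order quantities against an assumed, hypothetical set of candidate demand distributions that share the same mean.

```python
import numpy as np

rng = np.random.default_rng(2)
price, cost = 10.0, 6.0
n_samples = 100_000

# Candidate demand scenarios, each with mean 100 (purely hypothetical choices).
candidates = {
    "uniform(50,150)":  rng.uniform(50, 150, n_samples),
    "exponential(100)": rng.exponential(100, n_samples),
    "normal(100,30)":   np.maximum(rng.normal(100, 30, n_samples), 0.0),
}
grid = np.linspace(0, 300, 301)

def expected_profit(q, demand):
    return price * np.minimum(demand, q).mean() - cost * q

profits = {name: np.array([expected_profit(q, d) for q in grid])
           for name, d in candidates.items()}

# Regret of ordering q under F = best achievable expected profit under F minus profit(q, F).
regret = np.array([profits[name].max() - profits[name] for name in candidates])
worst_case_regret = regret.max(axis=0)
q_minimax = grid[worst_case_regret.argmin()]
print("minimax-regret order quantity over this candidate set:", q_minimax)
```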


Journal ArticleDOI
TL;DR: The original DEA cross-efficiency concept is generalized to game cross efficiency, where each DMU is viewed as a player that seeks to maximize its own efficiency, under the condition that the cross efficiency of each of the other DMUs does not deteriorate.
Abstract: In this paper, we examine the cross-efficiency concept in data envelopment analysis (DEA). Cross efficiency links one decision-making unit's (DMU) performance with others and has the appeal that scores arise from peer evaluation. However, a number of the current cross-efficiency approaches are flawed because they use scores that are arbitrary in that they depend on a particular set of optimal DEA weights generated by the computer code in use at the time. One set of optimal DEA weights (possibly out of many alternate optima) may improve the cross efficiency of some DMUs, but at the expense of others. While models have been developed that incorporate secondary goals aimed at being more selective in the choice of optimal multipliers, the alternate optima issue remains. In cases where there is competition among DMUs, this situation may be seen as undesirable and unfair. To address this issue, this paper generalizes the original DEA cross-efficiency concept to game cross efficiency. Specifically, each DMU is viewed as a player that seeks to maximize its own efficiency, under the condition that the cross efficiency of each of the other DMUs does not deteriorate. The average game cross-efficiency score is obtained when the DMU's own maximized efficiency scores are averaged. To implement the DEA game cross-efficiency model, an algorithm for deriving the best (game cross-efficiency) scores is presented. We show that the optimal game cross-efficiency scores constitute a Nash equilibrium point.

295 citations


Journal ArticleDOI
TL;DR: This paper conceptualizes an appointment system as a single-server queueing system in which customers who are about to enter service have a state-dependent probability of not being served and may rejoin the queue.
Abstract: Many primary care offices and other medical practices regularly experience long backlogs for appointments. These backlogs are exacerbated by a significant level of last-minute cancellations or “no-shows,” which have the effect of wasting capacity. In this paper, we conceptualize such an appointment system as a single-server queueing system in which customers who are about to enter service have a state-dependent probability of not being served and may rejoin the queue. We derive stationary distributions of the queue size, assuming both deterministic as well as exponential service times, and compare the performance metrics to the results of a simulation of the appointment system. Our results demonstrate the usefulness of the queueing models in providing guidance on identifying patient panel sizes for medical practices that are trying to implement a policy of “advanced access”.

255 citations
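
A heavily simplified simulation in the spirit of this model is sketched below: one service slot per period, Poisson appointment requests, a no-show probability that grows with the current backlog, and no-shows that rebook with some probability. The functional form of the no-show probability and all parameters are assumptions for illustration, not the paper's model primitives.

```python
import numpy as np

rng = np.random.default_rng(3)
arrival_rate = 0.6          # mean appointment requests per service slot
rejoin_prob = 0.5           # chance that a no-show rebooks
periods = 200_000

def no_show_prob(backlog):
    # assumed state-dependent no-show probability, capped below 1
    return min(0.05 + 0.02 * backlog, 0.5)

backlog = 0
history = np.empty(periods, dtype=np.int64)
for t in range(periods):
    backlog += rng.poisson(arrival_rate)            # new appointment requests
    if backlog > 0:
        if rng.random() < no_show_prob(backlog):    # patient at the head fails to show
            backlog -= 1
            if rng.random() < rejoin_prob:
                backlog += 1                        # the no-show rebooks
        else:
            backlog -= 1                            # patient is served this slot
    history[t] = backlog

print("mean backlog:", history.mean())
print("P(backlog > 5):", (history > 5).mean())
```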


Journal ArticleDOI
TL;DR: This paper proposes tractable methods for a general class of multistage stochastic optimization problems that assume only limited information about the distributions of the underlying uncertainties, such as known mean, support, and covariance, and introduces several new decision rules that improve upon linear decision rules while keeping the approximate models computationally tractable.
Abstract: Stochastic optimization, especially multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of the underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods of addressing a general class of multistage stochastic optimization problems, which assume only limited information of the distributions of the underlying uncertainties, such as known mean, support, and covariance. One basic idea of our methods is to approximate the recourse decisions via decision rules. We first examine linear decision rules in detail and show that even for problems with complete recourse, linear decision rules can be inadequate and even lead to infeasible instances. Hence, we propose several new decision rules that improve upon linear decision rules, while keeping the approximate models computationally tractable. Specifically, our approximate models are in the forms of the so-called second-order cone (SOC) programs, which could be solved efficiently both in theory and in practice. We also present computational evidence indicating that our approach is a viable alternative, and possibly advantageous, to existing stochastic optimization solution techniques in solving a two-stage stochastic optimization problem with complete recourse.

248 citations


Journal ArticleDOI
TL;DR: A Markov decision process model is developed for the appointment-booking problem in which the patients' choice behavior is modeled explicitly and it is proved that the optimal policy is a threshold-type policy as long as the choice probabilities satisfy a weak condition.
Abstract: In addition to having uncertain patient arrivals, primary-care clinics also face uncertainty arising from patient choices. Patients have different perceptions of the acuity of their need, different time-of-day preferences, as well as different degrees of loyalty toward their designated primary-care provider (PCP). Advanced access systems are designed to reduce wait and increase satisfaction by allowing patients to choose either a same-day or a scheduled future appointment. However, the clinic must carefully manage patients' access to physicians' slots to balance the needs of those who book in advance and those who require a same-day appointment. On the one hand, scheduling too many appointments in advance can lead to capacity shortages when same-day requests arrive. On the other hand, scheduling too few appointments increases patients' wait time, patient-PCP mismatch, and the possibility of clinic slots going unused. The capacity management problem facing the clinic is to decide which appointment requests to accept to maximize revenue. We develop a Markov decision process model for the appointment-booking problem in which the patients' choice behavior is modeled explicitly. When the clinic is served by a single physician, we prove that the optimal policy is a threshold-type policy as long as the choice probabilities satisfy a weak condition. For a multiple-doctor clinic, we partially characterize the structure of the optimal policy. We propose several heuristics and an upper bound. Numerical tests show that the two heuristics based on the partial characterization of the optimal policy are quite accurate. We also study the effect on the clinic's optimal profit of patients' loyalty to their PCPs, total clinic load, and load imbalance among physicians.

228 citations


Journal ArticleDOI
TL;DR: It is shown that production risks, currently taken by the vaccine manufacturer, lead to an insufficient supply of vaccine, and a variant of the cost-sharing contract is designed that provides incentives to both parties so that the supply chain achieves global optimization and hence improves the supply of vaccines.
Abstract: Annual influenza outbreaks incur great expenses in both human and monetary terms, and billions of dollars are being allocated for influenza pandemic preparedness in an attempt to avert even greater potential losses. Vaccination is a primary weapon for fighting influenza outbreaks. The influenza vaccine supply chain has characteristics that resemble the newsvendor problem but possesses several characteristics that distinguish it from many other supply chains. Differences include a nonlinear value of sales (caused by the nonlinear health benefits of vaccination that are due to infection dynamics) and vaccine production yield issues. We show that production risks, taken currently by the vaccine manufacturer, lead to an insufficient supply of vaccine. Several supply contracts that coordinate buyer (governmental public health service) and supplier (vaccine manufacturer) incentives in many other industrial supply chains cannot fully coordinate the influenza vaccine supply chain. We design a variant of the cost-sharing contract and show that it provides incentives to both parties so that the supply chain achieves global optimization and hence improves the supply of vaccines.

222 citations


Journal ArticleDOI
TL;DR: Two approximations for the shortfall probability of a firm or public organization that needs to cover uncertain demand for a given item by procuring supplies from multiple sources are developed, one based on a central limit theorem (CLT) and the other on a large-deviations technique (LDT).
Abstract: We analyze a planning model for a firm or public organization that needs to cover uncertain demand for a given item by procuring supplies from multiple sources. Each source faces a random yield factor with a general probability distribution. The model considers a single demand season. All supplies need to be ordered before the start of the season. The planning problem amounts to selecting which of the given set of suppliers to retain, and how much to order from each, so as to minimize total procurement costs while ensuring that the uncertain demand is met with a given probability. The total procurement costs consist of variable costs that are proportional to the total quantity delivered by the suppliers, and a fixed cost for each participating supplier, incurred irrespective of his supply level. Each potential supplier is characterized by a given fixed cost and a given distribution of his random yield factor. The yield factors at different suppliers are assumed to be independent of the season's demand, which is described by a general probability distribution. Determining the optimal set of suppliers, the aggregate order and its allocation among the suppliers, on the basis of the exact shortfall probability, is prohibitively difficult. We have therefore developed two approximations for the shortfall probability. Although both approximations are shown to be highly accurate, the first, based on a large-deviations technique (LDT), has the advantage of resulting in a rigorous upper bound for the required total order and associated costs. The second approximation is based on a central limit theorem (CLT) and is shown to be asymptotically accurate, whereas the order quantities determined by this method are asymptotically optimal as the number of suppliers grows. Most importantly, this CLT-based approximation permits many important qualitative insights.
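
The CLT idea can be illustrated directly: treat the total delivered quantity minus demand as approximately normal and read off the shortfall probability, then check against plain Monte Carlo. The yield and demand distributions and the order quantities below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

q = np.array([120.0, 80.0, 60.0, 40.0])          # order placed with each supplier
yield_mean = np.array([0.85, 0.90, 0.80, 0.95])
yield_std = np.array([0.10, 0.08, 0.15, 0.05])
demand_mean, demand_std = 230.0, 25.0

# CLT approximation: total delivered quantity minus demand treated as normal.
slack_mean = (q * yield_mean).sum() - demand_mean
slack_std = np.sqrt((q ** 2 * yield_std ** 2).sum() + demand_std ** 2)
p_clt = norm.cdf(-slack_mean / slack_std)

# Monte Carlo check, with yields modelled as independent Beta variables matching the
# assumed means and standard deviations, and normally distributed demand.
n = 1_000_000
k = yield_mean * (1.0 - yield_mean) / yield_std ** 2 - 1.0
yields = rng.beta(yield_mean * k, (1.0 - yield_mean) * k, size=(n, 4))
demand = rng.normal(demand_mean, demand_std, n)
p_mc = ((yields * q).sum(axis=1) < demand).mean()

print("CLT approximation of shortfall probability:", p_clt)
print("Monte Carlo estimate:", p_mc)
```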

Journal ArticleDOI
Paul Zipkin1
TL;DR: A new approach to the structural analysis of the standard lost-sales inventory system is provided, which is easier to work with than the original one and derives new bounds on the optimal policy.
Abstract: We provide a new approach to the structural analysis of the standard lost-sales inventory system. This approach is, we think, easier to work with than the original one. We also derive new bounds on the optimal policy. Then, we show that more variable demand leads to higher cost. Finally, we extend the analysis to several important variations of the basic model.

Journal ArticleDOI
TL;DR: In this article, a model of two-settlement electricity markets is introduced, which accounts for flow congestion, demand uncertainty, system contingencies, and market power, and the subgame perfect Nash equilibrium for this model is formulated as an equilibrium problem with equilibrium constraints (EPEC).
Abstract: A model of two-settlement electricity markets is introduced, which accounts for flow congestion, demand uncertainty, system contingencies, and market power. We formulate the subgame perfect Nash equilibrium for this model as an equilibrium problem with equilibrium constraints (EPEC), in which each firm solves a mathematical program with equilibrium constraints (MPEC). The model assumes linear demand functions, quadratic generation cost functions, and a lossless DC network, resulting in equilibrium constraints as a parametric linear complementarity problem (LCP). We introduce an iterative procedure for solving this EPEC through repeated application of an MPEC algorithm. This MPEC algorithm is based on solving quadratic programming subproblems and on parametric LCP pivoting. Numerical examples demonstrate the effectiveness of the MPEC and EPEC algorithms and the tractability of the model for realistic-size power systems.

Journal ArticleDOI
TL;DR: The optimal dual-index policy mimics the behavior of the complex, globally optimal state-dependent policy found via dynamic programming: the dual- index policy is nearly optimal for the majority of cases, and significantly outperforms single sourcing.
Abstract: We examine a possibly capacitated, periodically reviewed, single-stage inventory system where replenishment can be obtained either through a regular fixed lead time channel, or, for a premium, via a channel with a smaller fixed lead time. We consider the case when the unsatisfied demands are backordered over an infinite horizon, introducing the easily implementable, yet informationally rich dual-index policy. We show very general separability results for the optimal parameter values, providing a simulation-based optimization procedure that exploits these separability properties to calculate the optimal inventory parameters within seconds. We explore the performance of the dual-index policy under stationary demands as well as capacitated production environments, demonstrating when the dual-sourcing option is most valuable. We find that the optimal dual-index policy mimics the behavior of the complex, globally optimal state-dependent policy found via dynamic programming: the dual-index policy is nearly optimal (within 1% or 2%) for the majority of cases, and significantly outperforms single sourcing (up to 50% better). Our results on optimal dual-index parameters are generic, extending to a variety of complex and realistic scenarios such as nonstationary demand, random yields, demand spikes, and supply disruptions.
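
A simplified simulation of a dual-index base-stock policy is sketched below: an expedited inventory position counts only orders arriving within the short lead time, a regular inventory position counts all outstanding orders, and each is raised to its own order-up-to level every period. The lead times, cost rates, and (unoptimised) order-up-to levels are illustrative assumptions; the paper's separability results are what make searching for the optimal levels fast.

```python
import numpy as np

rng = np.random.default_rng(6)
le, lr = 1, 3                      # expedited and regular lead times (periods)
ze, zr = 4, 18                     # expedited and regular order-up-to levels (not optimized)
h, b, premium = 1.0, 9.0, 2.0      # holding, backorder, and expediting cost per unit
demand_mean, periods, warmup = 5.0, 200_000, 1_000

net = float(zr)                    # net inventory = on hand minus backorders
pipe = [0.0] * lr                  # pipe[k]: quantity arriving k+1 periods from now
cost = 0.0
for t in range(periods):
    net += pipe.pop(0)                       # receive today's deliveries
    pipe.append(0.0)
    # dual-index policy: two inventory positions with different pipeline horizons
    ip_exp = net + sum(pipe[:le])            # counts only orders arriving within le periods
    ip_reg = net + sum(pipe)                 # counts all outstanding orders
    q_exp = max(ze - ip_exp, 0.0)
    pipe[le - 1] += q_exp
    q_reg = max(zr - (ip_reg + q_exp), 0.0)
    pipe[lr - 1] += q_reg
    demand = rng.poisson(demand_mean)
    net -= demand                            # unmet demand is backordered
    if t >= warmup:
        cost += h * max(net, 0.0) + b * max(-net, 0.0) + premium * q_exp

print("average cost per period:", cost / (periods - warmup))
```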

Journal ArticleDOI
TL;DR: A broad class of stochastic dynamic programming problems that are amenable to relaxation via decomposition is considered, and an additively separable value function approximation is fit using two techniques: Lagrangian relaxation and the linear programming (LP) approach to approximate dynamic programming.
Abstract: We consider a broad class of stochastic dynamic programming problems that are amenable to relaxation via decomposition. These problems comprise multiple subproblems that are independent of each other except for a collection of coupling constraints on the action space. We fit an additively separable value function approximation using two techniques, namely, Lagrangian relaxation and the linear programming (LP) approach to approximate dynamic programming. We prove various results comparing the relaxations to each other and to the optimal problem value. We also provide a column generation algorithm for solving the LP-based relaxation to any desired optimality tolerance, and we report on numerical experiments on bandit-like problems. Our results provide insight into the complexity versus quality trade-off when choosing which of these relaxations to implement.

Journal ArticleDOI
TL;DR: Modern data-mining methods are utilized, specifically classification trees and clustering algorithms, along with claims data from over 800,000 insured individuals over three years, to provide rigorously validated predictions of health-care costs in the third year, based on medical and cost data from the first two years.
Abstract: The rising cost of health care is one of the world's most important problems. Accordingly, predicting such costs with accuracy is a significant first step in addressing this problem. Since the 1980s, there has been research on the predictive modeling of medical costs based on (health insurance) claims data using heuristic rules and regression methods. These methods, however, have not been appropriately validated using populations that the methods have not seen. We utilize modern data-mining methods, specifically classification trees and clustering algorithms, along with claims data from over 800,000 insured individuals over three years, to provide rigorously validated predictions of health-care costs in the third year, based on medical and cost data from the first two years. We quantify the accuracy of our predictions using unseen (out-of-sample) data from over 200,000 members. The key findings are: (a) our data-mining methods provide accurate predictions of medical costs and represent a powerful tool for prediction of health-care costs, (b) the pattern of past cost data is a strong predictor of future costs, and (c) medical information only contributes to accurate prediction of medical costs of high-cost members.
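
To convey the flavour of the approach (not the authors' algorithm or data), the sketch below buckets synthetic year-3 costs and trains a scikit-learn classification tree on year-1/2 cost features and a crude medical flag, reporting out-of-sample bucket accuracy. The features, bucket edges, and data are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 20_000
cost_y1 = rng.lognormal(7.0, 1.2, n)                  # past annual costs (synthetic)
cost_y2 = 0.6 * cost_y1 + rng.lognormal(6.5, 1.2, n)
chronic = rng.integers(0, 2, n)                       # crude stand-in for medical information
cost_y3 = 0.5 * cost_y2 + 3000 * chronic + rng.lognormal(6.0, 1.0, n)

buckets = [0, 2000, 5000, 10000, 25000, np.inf]       # illustrative cost buckets
X = np.column_stack([cost_y1, cost_y2, chronic])
y = np.digitize(cost_y3, buckets)                     # year-3 cost bucket label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=200)
tree.fit(X_tr, y_tr)
print("out-of-sample bucket accuracy:", tree.score(X_te, y_te))
```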

Journal ArticleDOI
TL;DR: The pseudoflow algorithm for the maximum-flow problem is introduced, which employs only pseudoflows and does not generate flows explicitly, and is shown to solve the maximum-flow problem on s, t-tree networks in linear time.
Abstract: We introduce the pseudoflow algorithm for the maximum-flow problem that employs only pseudoflows and does not generate flows explicitly. The algorithm directly solves a problem equivalent to the minimum-cut problem---the maximum blocking-cut problem. Once the maximum blocking-cut solution is available, the additional complexity required to find the respective maximum flow is O(m log n). A variant of the algorithm is a new parametric maximum-flow algorithm generating all breakpoints in the same complexity required to solve the constant capacities maximum-flow problem. The pseudoflow algorithm also has a simplex variant, pseudoflow-simplex, that can be implemented to solve the maximum-flow problem. One feature of the pseudoflow algorithm is that it can initialize with any pseudoflow. This feature allows it to reach an optimal solution quickly when the initial pseudoflow is “close” to an optimal solution. The complexities of the pseudoflow algorithm, the pseudoflow-simplex, and the parametric variants of pseudoflow and pseudoflow-simplex algorithms are all O(mn log n) on a graph with n nodes and m arcs. Therefore, the pseudoflow-simplex algorithm is the fastest simplex algorithm known for the parametric maximum-flow problem. The pseudoflow algorithm is also shown to solve the maximum-flow problem on s, t-tree networks in linear time, where s, t-tree networks are formed by joining a forest of capacitated arcs, with nodes s and t adjacent to any subset of the nodes.

Journal ArticleDOI
TL;DR: This work proposes a new high-order time discretization scheme for the PIDE based on the extrapolation approach to the solution of ODEs; the scheme treats the diffusion term implicitly and the jump term explicitly and is remarkably fast and accurate.
Abstract: We propose a new computational method for the valuation of options in jump-diffusion models. The option value function for European and barrier options satisfies a partial integrodifferential equation (PIDE). This PIDE is commonly integrated in time by implicit-explicit (IMEX) time discretization schemes, where the differential (diffusion) term is treated implicitly, while the integral (jump) term is treated explicitly. In particular, the popular IMEX Euler scheme is first-order accurate in time. Second-order accuracy in time can be achieved by using the IMEX midpoint scheme. In contrast to the above approaches, we propose a new high-order time discretization scheme for the PIDE based on the extrapolation approach to the solution of ODEs that also treats the diffusion term implicitly and the jump term explicitly. The scheme is simple to implement, can be added to any PIDE solver based on the IMEX Euler scheme, and is remarkably fast and accurate. We demonstrate our approach on the examples of Merton's and Kou's jump-diffusion models, the diffusion-extended variance gamma model, as well as the two-dimensional Duffie-Pan-Singleton model with correlated and contemporaneous jumps in the stock price and its volatility. By way of example, pricing a one-year double-barrier option in Kou's jump-diffusion model, our scheme attains accuracy of 10^-5 in 72 time steps (in 0.05 seconds). In contrast, it takes the first-order IMEX Euler scheme more than 1.3 million time steps (in 873 seconds) and the second-order IMEX midpoint scheme 768 time steps (in 0.49 seconds) to attain the same accuracy. Our scheme is also well suited for Bermudan options. Combining simplicity of implementation and remarkable gains in computational efficiency, we expect this method to be very attractive to financial engineering modelers.
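
The time-stepping idea can be seen on a toy 1-D problem u_t = u_xx + u: treat the diffusion term implicitly, treat the remaining term explicitly (standing in for the jump integral), and combine two IMEX Euler runs by Richardson extrapolation to cancel the leading O(Δt) error. This is only a schematic analogue of the paper's PIDE scheme, with an assumed toy equation and grid, not its implementation.

```python
import numpy as np

nx, T, c = 200, 0.1, 1.0
x = np.linspace(0.0, 1.0, nx + 1)
h = x[1] - x[0]
u0 = np.sin(np.pi * x)
u_exact = np.exp((c - np.pi ** 2) * T) * u0        # exact solution of the toy problem

# discrete Laplacian on the interior nodes (homogeneous Dirichlet boundaries)
lap = (np.diag(-2.0 * np.ones(nx - 1)) +
       np.diag(np.ones(nx - 2), 1) + np.diag(np.ones(nx - 2), -1)) / h ** 2

def imex_euler(n_steps):
    """IMEX Euler: (I - dt*Lap) u^{n+1} = (1 + c*dt) u^n (implicit diffusion, explicit reaction)."""
    dt = T / n_steps
    M = np.eye(nx - 1) - dt * lap
    u = u0[1:-1].copy()
    for _ in range(n_steps):
        u = np.linalg.solve(M, (1.0 + c * dt) * u)
    return np.concatenate([[0.0], u, [0.0]])

u_coarse, u_fine = imex_euler(20), imex_euler(40)
u_extrap = 2.0 * u_fine - u_coarse                 # cancels the leading O(dt) error term
for name, u in [("IMEX Euler, 20 steps", u_coarse),
                ("IMEX Euler, 40 steps", u_fine),
                ("extrapolated", u_extrap)]:
    print(f"{name:22s} max error = {np.abs(u - u_exact).max():.2e}")
```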

Journal ArticleDOI
TL;DR: This work addresses a demand model involving multiplicative uncertainty, motivated by market share models often used in marketing, and finds that the objective function is still reasonably well behaved: there is a unique solution to the first-order conditions, and this solution is optimal for the problem.
Abstract: We seek optimal inventory levels and prices of multiple products in a given assortment in a newsvendor model (single period, stochastic demand) under price-based substitution, but not stockout-based substitution. We address a demand model involving multiplicative uncertainty, motivated by market share models often used in marketing. The pricing problem that arises is known not to be well behaved in the sense that, in its deterministic version, the objective function is not jointly quasi-concave in prices. However, we find that the objective function is still reasonably well behaved in the sense that there is a unique solution to the first-order conditions, and this solution is optimal for our problem.

Journal ArticleDOI
TL;DR: It is concluded that, in general, to efficiently achieve a lifetime risk comparable to the current risk among U.S. women, screening should start relatively early in life and continue relatively late in life regardless of the screening interval(s) adopted.
Abstract: Questions regarding the relative value and frequency of mammography screening for premenopausal women versus postmenopausal women remain open due to the conflicting age-based dynamics of both the disease (increasing incidence, decreasing aggression) and the accuracy of the test results (increasing sensitivity and specificity). To investigate these questions, we formulate a partially observed Markov chain model that captures several of these age-based dynamics not previously considered simultaneously. Using sample-path enumeration, we evaluate a broad range of policies to generate the set of “efficient” policies, as measured by a lifetime breast cancer mortality risk metric and an expected mammogram count, from which a patient may select a policy based on individual circumstance. We demonstrate robustness with respect to small changes in the input data and conclude that, in general, to efficiently achieve a lifetime risk comparable to the current risk among U.S. women, screening should start relatively early in life and continue relatively late in life regardless of the screening interval(s) adopted. The frontier also exhibits interesting patterns with respect to policy type, where policy type is defined by the relationship between the screening interval prescribed in younger years and that prescribed later in life.

Journal ArticleDOI
TL;DR: The first HIV optimization models that aim to maximize the expected lifetime or quality-adjusted lifetime of a patient are developed; model solutions based on clinical data support a strategy of treating HIV earlier in its course, as opposed to recent trends toward treating it later.
Abstract: The question of when to initiate HIV treatment is considered the most important question in HIV care today. Benefits of delaying therapy include avoiding the negative side effects and toxicities associated with the drugs, delaying selective pressures that induce the development of resistant strains of the virus, and preserving a limited number of treatment options. On the other hand, the risks of delayed therapy include the possibility of irreversible damage to the immune system, development of AIDS-related complications, and death. We use Markov decision processes to develop the first HIV optimization models that aim to maximize the expected lifetime or quality-adjusted lifetime of a patient. We prove conditions that establish structural properties of the optimal solution and compare them to our data and results. Model solutions, based on clinical data, support a strategy of treating HIV earlier in its course as opposed to recent trends toward treating it later.

Journal ArticleDOI
Paul Zipkin1
TL;DR: This work considers the notoriously difficult discrete-time inventory model with stochastic demands, a constant lead time, and lost sales, and shows that the effective state space is a relatively manageable compact set.
Abstract: We consider the notoriously difficult discrete-time inventory model with stochastic demands, a constant lead time, and lost sales. We show that the effective state space is a relatively manageable compact set. Then, we test various plausible heuristics. We find that several perform reasonably well, although none is perfect. However, the standard base-stock policy (a direct analogue of the optimal policy for a backlog system) performs badly. We also show that the optimal cost is increasing in the lead time.
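
The kind of simulation used to evaluate heuristics in this setting is easy to sketch: a periodic-review, lost-sales system with a constant lead time, controlled here by a plain base-stock (order-up-to) policy, the policy the paper finds performs badly. The parameters and the policy choice are illustrative; the paper's more refined heuristics are not implemented.

```python
import numpy as np

rng = np.random.default_rng(7)
lead_time, base_stock = 3, 24
holding_cost, lost_sale_penalty = 1.0, 9.0
demand_mean, periods, warmup = 5.0, 200_000, 1_000

on_hand = float(base_stock)
pipeline = [0.0] * lead_time            # pipeline[k]: order arriving k+1 periods from now
total_cost, lost, total_demand = 0.0, 0.0, 0.0
for t in range(periods):
    on_hand += pipeline.pop(0)                           # receive today's delivery
    # base-stock rule applied to the inventory position (on hand + pipeline)
    order = max(base_stock - (on_hand + sum(pipeline)), 0.0)
    pipeline.append(order)
    demand = rng.poisson(demand_mean)
    sales = min(on_hand, demand)
    lost_now = demand - sales                            # unmet demand is lost, not backordered
    on_hand -= sales
    if t >= warmup:
        total_cost += holding_cost * on_hand + lost_sale_penalty * lost_now
        lost += lost_now
        total_demand += demand

n = periods - warmup
print("average cost per period:", total_cost / n)
print("fill rate:", 1.0 - lost / total_demand)
```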

Journal ArticleDOI
TL;DR: A continuous model of the virtual nesting problem is analyzed that retains most of the desirable features of the Bertsimas-de Boer method yet avoids many of its pitfalls; because the model is smooth, stochastic gradient methods are proved to be at least locally convergent.
Abstract: Virtual nesting is a popular capacity control strategy in network revenue management. In virtual nesting, products (itinerary-fare-class combinations) are mapped (“indexed”) into a relatively small number of “virtual classes” on each resource (flight leg) of the network. Nested protection levels are then used to control the availability of these virtual classes; specifically, a product request is accepted if and only if its corresponding virtual class is available on each resource required. Bertsimas and de Boer proposed an innovative simulation-based optimization method for computing protection levels in a virtual nesting control scheme [Bertsimas, D., S. de Boer. 2005. Simulation-based booking limits for airline revenue management. Oper. Res. 53 90-106]. In contrast to traditional heuristic methods, this simulation approach captures the true network revenues generated by virtual nesting controls. However, because it is based on a discrete model of capacity and demand, the method has both computational and theoretical limitations. In particular, it uses first-difference estimates, which are computationally complex to calculate exactly. These gradient estimates are then used in a steepest-ascent-type algorithm, which, for discrete problems, has no guarantee of convergence. In this paper, we analyze a continuous model of the problem that retains most of the desirable features of the Bertsimas-de Boer method, yet avoids many of its pitfalls. Because our model is continuous, we are able to compute gradients exactly using a simple and efficient recursion. Indeed, our gradient estimates are often an order of magnitude faster to compute than first-difference estimates, which is an important practical feature given that simulation-based optimization is computationally intensive. In addition, because our model results in a smooth optimization problem, we are able to prove that stochastic gradient methods are at least locally convergent. On several test problems using realistic networks, the method is fast and produces significant performance improvements relative to the protection levels produced by heuristic virtual nesting schemes. These results suggest it has good practical potential.

Journal ArticleDOI
TL;DR: The financial planning model InnoALM, developed at Innovest for the Austrian pension fund of the electronics firm Siemens, uses a multiperiod stochastic linear programming framework with a flexible number of time periods of varying length to improve pension fund performance.
Abstract: This paper describes the financial planning model InnoALM we developed at Innovest for the Austrian pension fund of the electronics firm Siemens. The model uses a multiperiod stochastic linear programming framework with a flexible number of time periods of varying length. Uncertainty is modeled using multiperiod discrete probability scenarios for random return and other model parameters. The correlations across asset classes, of bonds, stocks, cash, and other financial instruments, are state dependent using multiple correlation matrices that correspond to differing market conditions. This feature allows InnoALM to anticipate and react to severe as well as normal market conditions. Austrian pension law and policy considerations can be modeled as constraints in the optimization. The concave risk-averse preference function is to maximize the expected present value of terminal wealth at the specified horizon net of expected discounted convex (piecewise-linear) penalty costs for wealth and benchmark targets in each decision period. InnoALM has a user interface that provides visualization of key model outputs, the effect of input changes, growing pension benefits from increased deterministic wealth target violations, stochastic benchmark targets, security reserves, policy changes, etc. The solution process using the IBM OSL stochastic programming code is fast enough to generate virtually online decisions and results and allows for easy interaction of the user with the model to improve pension fund performance. The model has been used since 2000 for Siemens Austria, Siemens worldwide, and to evaluate possible pension fund regulation changes in Austria.

Journal ArticleDOI
TL;DR: This work shows the optimality of (s, S)-type policies under both the backordering and lost-sales assumptions in a stationary, single-stage inventory system under periodic review with fixed ordering costs and multiple sales levers.
Abstract: We study a stationary, single-stage inventory system, under periodic review, with fixed ordering costs and multiple sales levers (such as pricing, advertising, etc.). We show the optimality of (s, S)-type policies in these settings under both the backordering and lost-sales assumptions. Our analysis is constructive and is based on a condition that we identify as being key to proving the (s, S) structure. This condition is entirely based on the single-period profit function and the demand model. Our optimality results complement the existing results in this area.

Journal ArticleDOI
TL;DR: A 0-1 quadratic programming model consisting of only O(n^2) 0-1 variables is proposed for the one-dimensional facility layout problem, and it is shown that the resulting mixed 0-1 linear program is more efficient than previously published mixed-integer formulations.
Abstract: The one-dimensional facility layout problem is concerned with arranging n departments of given lengths on a line, while minimizing the weighted sum of the distances between all pairs of departments. The problem is NP-hard because it is a generalization of the minimum linear arrangement problem. In this paper, a 0-1 quadratic programming model consisting of only O(n^2) 0-1 variables is proposed for the problem. Subsequently, this model is cast as an equivalent mixed-integer program and then reduced by preprocessing. Next, additional redundant constraints are introduced and linearized in a higher space to achieve an equivalent mixed 0-1 linear program, whose continuous relaxation provides an approximation of the convex hull of solutions to the quadratic program. It is shown that the resulting mixed 0-1 linear program is more efficient than previously published mixed-integer formulations. In the computational results, several problem instances taken from the literature were efficiently solved to optimality. Moreover, it is now possible to efficiently solve problems of a larger size.
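
The objective being modelled is simple to state in code: place departments of given lengths on a line and minimise the weighted sum of centre-to-centre distances. The brute-force enumeration below only works for toy sizes and uses invented data; the paper's 0-1 quadratic and mixed 0-1 linear formulations are what make realistic instances tractable.

```python
from itertools import permutations
import numpy as np

lengths = np.array([4.0, 6.0, 3.0, 5.0, 2.0])          # department lengths (illustrative)
n = len(lengths)
weights = np.array([[0, 2, 4, 1, 0],                    # symmetric interaction weights
                    [2, 0, 3, 2, 1],
                    [4, 3, 0, 1, 2],
                    [1, 2, 1, 0, 3],
                    [0, 1, 2, 3, 0]], dtype=float)

def layout_cost(order):
    """Weighted sum of centre-to-centre distances when departments are placed
    left-to-right in the given order."""
    pos = np.empty(n)
    left = 0.0
    for dept in order:
        pos[dept] = left + lengths[dept] / 2.0           # centre of this department
        left += lengths[dept]
    cost = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            cost += weights[i, j] * abs(pos[i] - pos[j])
    return cost

best = min(permutations(range(n)), key=layout_cost)      # exhaustive search, toy size only
print("best order:", best, "cost:", layout_cost(best))
```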

Journal ArticleDOI
TL;DR: In this article, the authors show that the optimal risk exposure of a trader subject to a value at risk (VaR) limit is always lower than that of an unconstrained trader and that the probability of extreme losses is also lower.
Abstract: Value at Risk (VaR) has emerged in recent years as a standard tool to measure and control the risk of trading portfolios. Yet, existing theoretical analysis of the optimal behavior of a trader subject to VaR limits has produced a negative view of VaR as a risk-control tool. In particular, VaR limits have been found to induce increased risk exposure in some states and an increased probability of extreme losses. However, these conclusions are based on models that are either static or dynamically inconsistent. In this paper, we formulate a dynamically consistent model of optimal portfolio choice subject to VaR limits and show that the concerns expressed in earlier papers do not apply if, consistently with common practice, the VaR limit is reevaluated dynamically. In particular, we find that the optimal risk exposure of a trader subject to a VaR limit is always lower than that of an unconstrained trader and that the probability of extreme losses is also lower. We also consider risk limits formulated in terms of tail conditional expectation (TCE), a coherent risk measure often advocated as an alternative to VaR, and show that in our dynamic setting it is always possible to transform a TCE limit into an equivalent VaR limit, and conversely.
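
The two risk measures compared in the paper can be computed from simulated losses with their usual empirical definitions, as sketched below on an assumed, purely illustrative loss distribution: VaR is a loss quantile, and TCE is the expected loss beyond that quantile.

```python
import numpy as np

rng = np.random.default_rng(8)
# purely illustrative heavy-tailed daily loss distribution for a trading book
losses = 0.02 * rng.standard_t(df=4, size=1_000_000) - 0.001

alpha = 0.99
var = np.quantile(losses, alpha)          # Value at Risk: the alpha-quantile of the loss
tce = losses[losses >= var].mean()        # tail conditional expectation beyond that quantile

print(f"{alpha:.0%} VaR: {var:.4f}")
print(f"{alpha:.0%} TCE: {tce:.4f}  (TCE >= VaR by construction)")
```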

Journal ArticleDOI
TL;DR: Importance-sampling algorithms are developed that are shown to be asymptotically optimal and can be used to efficiently compute portfolio credit risk via Monte Carlo simulation; sharp asymptotics illustrate the implications of extremal dependence among obligors.
Abstract: We consider the risk of a portfolio comprising loans, bonds, and financial instruments that are subject to possible default. In particular, we are interested in performance measures such as the probability that the portfolio incurs large losses over a fixed time horizon, and the expected excess loss given that large losses are incurred during this horizon. Contrary to the normal copula that is commonly used in practice (e.g., in the CreditMetrics system), we assume a portfolio dependence structure that is semiparametric, does not hinge solely on correlation, and supports extremal dependence among obligors. A particular instance within the proposed class of models is the so-called t-copula model that is derived from the multivariate Student t distribution and hence generalizes the normal copula model. The size of the portfolio, the heterogeneous mix of obligors, and the fact that default events are rare and mutually dependent make it quite complicated to calculate portfolio credit risk either by means of exact analysis or naive Monte Carlo simulation. The main contributions of this paper are twofold. We first derive sharp asymptotics for portfolio credit risk that illustrate the implications of extremal dependence among obligors. Using this as a stepping stone, we develop importance-sampling algorithms that are shown to be asymptotically optimal and can be used to efficiently compute portfolio credit risk via Monte Carlo simulation.
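
A plain Monte Carlo sketch of the t-copula default model is given below: obligor i defaults when a Student-t latent variable, built from a common normal factor and a shared chi-square shock, falls below a threshold calibrated to its marginal default probability. The portfolio parameters are illustrative, and the paper's actual contribution, the asymptotically optimal importance-sampling estimators, is not implemented here.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(9)
n_obligors, nu, rho = 200, 4, 0.4
p_default = np.full(n_obligors, 0.01)            # marginal one-period default probabilities
exposure = rng.uniform(0.5, 1.5, n_obligors)     # loss given default per obligor
threshold = student_t.ppf(p_default, df=nu)      # default thresholds for the latent variables
loss_level = 15.0                                # "large loss" level of interest

n_sims = 50_000
z = rng.standard_normal(n_sims)                              # common factor
eps = rng.standard_normal((n_sims, n_obligors))              # idiosyncratic shocks
s = rng.chisquare(nu, n_sims)                                # shared chi-square shock
latent = (rho * z[:, None] + np.sqrt(1.0 - rho ** 2) * eps) / np.sqrt(s / nu)[:, None]
losses = (latent < threshold) @ exposure                     # portfolio loss in each scenario

p_large = (losses > loss_level).mean()
excess = (losses[losses > loss_level] - loss_level).mean() if p_large > 0 else float("nan")
print("P(loss > 15):", p_large)
print("E[loss - 15 | loss > 15]:", excess)
```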

Journal ArticleDOI
TL;DR: This work presents a new integer programming model with a polynomial number of variables and constraints and shows that the proposed model's solutions were within or slightly above 1% of the optimal solution and were obtained in a short computation time.
Abstract: Forest ecosystem management often requires spatially explicit planning because the spatial arrangement of harvests has become a critical economic and environmental concern. Recent research on exact methods has addressed both the design and the solution of forest management problems with constraints on the clearcut size, but where simultaneously harvesting two adjacent stands in the same period does not necessarily exceed the maximum opening size. Two main integer programming approaches have been proposed for this area restriction model. However, both encompass an exponential number of variables or constraints. In this work, we present a new integer programming model with a polynomial number of variables and constraints. Branch and bound is used to solve it. The model was tested with both real and hypothetical forests ranging from 45 to 1,363 polygons. Results show that the proposed model's solutions were within or slightly above 1% of the optimal solution and were obtained in a short computation time.

Journal ArticleDOI
TL;DR: It is shown that there can be at most one supply function equilibrium with this property, and a new numerical method to find asymmetric supply function equilibria is proposed, using piecewise-linear approximations and a discretization of the demand distribution.
Abstract: Firms compete in supply functions when they offer a schedule of prices and quantities into a market; for example, this occurs in many wholesale electricity markets. We study the equilibrium behaviour when firms differ, both with regard to their costs and their capacities. We characterize strong equilibrium solutions in which, given the other players' supply functions, optimal profits are achieved for every demand realisation. If the demand can be low enough for it to be met economically with supply from just one firm, then the supply function equilibria are ordered in a natural way. We consider equilibria in which, for the highest levels of demand, all but one of the firms have reached their capacity limit. We show that there can be at most one supply function equilibrium with this property. We also propose a new numerical method to find asymmetric supply function equilibria, using piecewise-linear approximations and a discretization of the demand distribution. We show that this approach has good theoretical convergence behaviour. Finally, we present numerical results from an implementation using GAMS to demonstrate that the approach is effective in practice.